  • Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains. Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate that a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi argues that a prescriber who (...)
  • Why we should talk about institutional (dis)trustworthiness and medical machine learning. Michiel De Proost & Giorgia Pozzi - forthcoming - Medicine, Health Care and Philosophy:1-10.
    The principle of trust has been placed at the centre as an attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural level ex negativo as we aim to analyse the concept of “institutional distrustworthiness” to achieve a proper diagnosis of how we should not engage with medical machine learning. First, we begin with several examples that hint at (...)
  • What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Sabine Salloch & Andreas Eriksen - 2024 - American Journal of Bioethics 24 (9):67-78.
    Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as “human in the loop” or “meaningful human control” are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the (...)
  • Detecting your depression with your smartphone? – An ethical analysis of epistemic injustice in passive self-tracking apps. Mirjam Faissner, Eva Kuhn, Regina Müller & Sebastian Laacke - 2024 - Ethics and Information Technology 26 (2):1-14.
    Smartphone apps might offer a low-threshold approach to the detection of mental health conditions, such as depression. Based on the gathering of ‘passive data,’ some apps generate a user’s ‘digital phenotype,’ compare it to those of users with clinically confirmed depression and issue a warning if a depressive episode is likely. These apps can, thus, serve as epistemic tools for affected users. From an ethical perspective, it is crucial to consider epistemic injustice to promote socially responsible innovations within digital mental (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Conversational Artificial Intelligence and the Potential for Epistemic Injustice. Michiel De Proost & Giorgia Pozzi - 2023 - American Journal of Bioethics 23 (5):51-53.
    In their article, Sedlakova and Trachsel (2023) propose a holistic, ethical, and epistemic analysis of conversational artificial intelligence (CAI) in psychotherapeutic settings. They mainly descri...
  • Toward an Ethics of AI Belief. Winnie Ma & Vincent Valton - 2024 - Philosophy and Technology 37 (3):1-28.
    In this paper we, an epistemologist and a machine learning scientist, argue that we need to pursue a novel area of philosophical research in AI – the ethics of belief for AI. Here we take the ethics of belief to refer to a field at the intersection of epistemology and ethics concerned with possible moral, practical, and other non-truth-related dimensions of belief. In this paper we will primarily be concerned with the normative question within the ethics of belief regarding what (...)
  • First-person disavowals of digital phenotyping and epistemic injustice in psychiatry. Stephanie K. Slack & Linda Barclay - 2023 - Medicine, Health Care and Philosophy 26 (4):605-614.
    Digital phenotyping will potentially enable earlier detection and prediction of mental illness by monitoring human interaction with and through digital devices. Notwithstanding its promises, it is certain that a person’s digital phenotype will at times be at odds with their first-person testimony of their psychological states. In this paper, we argue that there are features of digital phenotyping in the context of psychiatry which have the potential to exacerbate the tendency to dismiss patients’ testimony and treatment preferences, which can be (...)
  • Testimonial injustice in medical machine learning: a perspective from psychiatry. George Gillett - 2023 - Journal of Medical Ethics 49 (8):541-542.
    Pozzi provides a thought-provoking account of how machine-learning clinical prediction models (such as Prediction Drug Monitoring Programmes (PDMPs)) may exacerbate testimonial injustice. In this response, I generalise Pozzi’s concerns about PDMPs to traditional models of clinical practice and question the claim that inaccurate clinicians are necessarily preferable to inaccurate machine-learning models. I then explore Pozzi’s concern that such models may deprive patients of a right to ‘convey information’. I suggest that machine-learning tools may be used to enhance, rather than frustrate, (...)
  • Dirty data labeled dirt cheap: epistemic injustice in machine learning systems. Gordon Hull - 2023 - Ethics and Information Technology 25 (3):1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their (...)
  • Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  • Clinicians’ roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students. F. Funer, S. Tinnemeyer, W. Liedtke & S. Salloch - 2024 - BMC Medical Ethics 25 (1):1-13.
    Background: Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed for ensuring responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders’ viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating the requirements (...)
  • “That’s just Future Medicine” - a qualitative study on users’ experiences of symptom checker apps. Regina Müller, Malte Klemmt, Roland Koch, Hans-Jörg Ehni, Tanja Henking, Elisabeth Langmann, Urban Wiesing & Robert Ranisch - 2024 - BMC Medical Ethics 25 (1):1-19.
    Background: Symptom checker apps (SCAs) are mobile or online applications for lay people that usually have two main functions: symptom analysis and recommendations. SCAs ask users questions about their symptoms via a chatbot, give a list of possible causes, and provide a recommendation, such as seeing a physician. However, it is unclear whether the actual performance of an SCA corresponds to the users’ experiences. This qualitative study investigates the subjective perspectives of SCA users to close the empirical gap identified in (...)
  • PDMP causes more than just testimonial injustice. Tina Nguyen - 2023 - Journal of Medical Ethics 49 (8):549-550.
    In the article ‘Testimonial injustice in medical machine learning’, Pozzi argues that the prescription drug monitoring programme (PDMP) leads to testimonial injustice as physicians are more inclined to trust the PDMP’s risk scores over the patient’s own account of their medication history. Pozzi further develops this argument by discussing how credibility shifts from patients to machine learning (ML) systems that are supposedly neutral. As a result, a sense of distrust is now formed between patients and physicians. While there are merits (...)
  • Physicians’ Professional Role in Clinical Care: AI as a Change Agent. Giorgia Pozzi & Jeroen van den Hoven - 2023 - American Journal of Bioethics 23 (12):57-59.
    Doernberg and Truog (2023) provide an insightful analysis of the role of medical professionals in what they call spheres of morality. While their framework is useful for inquiring into the moral de...
  • Further remarks on testimonial injustice in medical machine learning: a response to commentaries. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):551-552.
    In my paper entitled ‘Testimonial injustice in medical machine learning’, I argued that machine learning (ML)-based Prediction Drug Monitoring Programmes (PDMPs) could infringe on patients’ epistemic and moral standing, inflicting a testimonial injustice. I am very grateful for all the comments the paper received, some of which expand on it while others take a more critical view. This response addresses two objections raised to my consideration of ML-induced testimonial injustice in order to clarify the position taken in the paper. The (...)
  • Ubuntu as a complementary perspective for addressing epistemic (in)justice in medical machine learning. Brandon Ferlito & Michiel De Proost - 2023 - Journal of Medical Ethics 49 (8):545-546.
    Pozzi has thoroughly analysed testimonial injustices in the automated Prediction Drug Monitoring Programmes (PDMPs) case. Although Pozzi suggests that ‘the shift from an interpersonal to a structural dimension … bears a significant moral component’, her topical investigation does not further conceptualise the type of collective knowledge practices necessary to achieve epistemic justice. As Pozzi concludes: ‘this paper shows the limitations of systems such as automated PDMPs, it does not provide possible solutions’. In this commentary, we propose that an Ubuntu perspective—which, (...)
  • ‘Can I trust my patient?’ Machine Learning support for predicting patient behaviour. Florian Funer & Sabine Salloch - 2023 - Journal of Medical Ethics 49 (8):543-544.
    Giorgia Pozzi’s feature article on the risks of testimonial injustice when using automated prediction drug monitoring programmes (PDMPs) turns the spotlight on a pressing and well-known clinical problem: physicians’ challenge of predicting patient behaviour, so that treatment decisions can be made based on this information, despite any fallibility. Currently, as one possible way to improve prognostic assessments of patient behaviour, Machine Learning-driven clinical decision support systems (ML-CDSS) are being developed and deployed. To make her point, Pozzi discusses ML-CDSSs that are (...)