  • Algorithms Advise, Humans Decide: the Evidential Role of the Patient Preference Predictor. Nicholas Makins - forthcoming - Journal of Medical Ethics.
    An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP, namely that between algorithmic prediction and decision-making, and argue that much of the recent philosophical disagreement stems from this (...)
  • Commentary on ‘Autonomy-based criticisms of the patient preference predictor’. Collin O'Neil - 2022 - Journal of Medical Ethics 48 (5):315-316.
    When a patient lacks sufficient capacity to make a certain treatment decision, whether because of deficits in their ability to make a judgement that reflects their values, in their ability to make a decision that reflects their judgement, or both, the decision must be made by a surrogate. Often the best way to respect the patient’s autonomy in such cases is for the surrogate to make a ‘substituted’ judgement on behalf of the patient, which is the decision that best reflects the patient’s (...)
  • Health Care in Contexts of Risk, Uncertainty, and Hybridity. Daniel Messelken & David Winkler (eds.) - 2021 - Springer.
    This book sheds light on various ethical challenges military and humanitarian health care personnel face while working in adverse conditions. Contexts of armed conflict, hybrid wars or other forms of violence short of war, as well as natural disasters, all have in common that ordinary circumstances can no longer be taken for granted. Hence, the provision of health care has to adapt, for example, to a different level of risk, to scarce resources, or uncommon approaches due to external incentives or (...)
  • Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable. Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  • Predicting and Preferring. Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  • The ethics of biomedical military research: Therapy, prevention, enhancement, and risk. Alexandre Erler & Vincent C. Müller - 2021 - In Daniel Messelken & David Winkler (eds.), Health Care in Contexts of Risk, Uncertainty, and Hybridity. Springer. pp. 235-252.
    What proper role should considerations of risk, particularly to research subjects, play when it comes to conducting research on human enhancement in the military context? We introduce the currently visible military enhancement techniques (1) and the standard discussion of risk for these (2), in particular what we refer to as the ‘Assumption’, which states that the demands for risk-avoidance are higher for enhancement than for therapy. We challenge the Assumption through the introduction of three categories of enhancements (3): therapeutic, preventive, (...)
  • A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - 2024 - American Journal of Bioethics 24 (7):13-26.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)
  • For the sake of multifacetedness. Why artificial intelligence patient preference prediction systems shouldn’t be for next of kin. Max Tretter & David Samhammer - 2023 - Journal of Medical Ethics 49 (3):175-176.
    In their contribution ‘Ethics of the algorithmic prediction of goal of care preferences’, Ferrario et al elaborate a ‘from theory to practice’ contribution concerning the realisation of artificial intelligence (AI)-based patient preference prediction (PPP) systems. Such systems are intended to help find the treatment that the patient would have chosen in clinical situations—especially in the intensive care or emergency units—where the patient is no longer capable of making that decision herself. The authors identify several challenges that complicate their effective development, (...)
  • Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
  • Meta-surrogate decision making and artificial intelligence. Brian D. Earp - 2022 - Journal of Medical Ethics 48 (5):287-289.
    How shall we decide for others who cannot decide for themselves? And who—or what, in the case of artificial intelligence—should make the decision? The present issue of the journal tackles several interrelated topics, many of them having to do with surrogate decision making. For example, the feature article by Jardas et al explores the potential use of artificial intelligence to predict incapacitated patients’ likely treatment preferences based on their sociodemographic characteristics, raising questions about the means by which (...)
  • Autonomy-based criticisms of the patient preference predictor. E. J. Jardas, David Wasserman & David Wendler - 2022 - Journal of Medical Ethics 48 (5):304-310.
    The patient preference predictor (PPP) is a proposed computer-based algorithm that would predict the treatment preferences of decisionally incapacitated patients. Incorporation of a PPP into the decision-making process has the potential to improve implementation of the substituted judgement standard by providing more accurate predictions of patients’ treatment preferences than reliance on surrogates alone. Yet, critics argue that methods for making treatment decisions for incapacitated patients should be judged on a number of factors beyond simply providing them with the treatments they would (...)
  • Machine Learning Algorithms in the Personalized Modeling of Incapacitated Patients’ Decision Making—Is It a Viable Concept? Tomasz Rzepiński, Ewa Deskur-Śmielecka & Michał Chojnicki - 2024 - American Journal of Bioethics 24 (7):51-53.
    New informatics technologies are becoming increasingly important in medical practice. Machine learning (ML) and deep learning (DL) systems enable data analysis and the formulation of medical recomm...
  • Messy autonomy: Commentary on Patient preference predictors and the problem of naked statistical evidence. Stephen David John - 2018 - Journal of Medical Ethics 44 (12):864-864.
    Like many, I find the idea of relying on patient preference predictors (PPPs) in life-or-death cases ethically troubling. As part of his stimulating discussion, Sharadin diagnoses such unease as a worry that using PPPs disrespects patients’ autonomy, by treating their most intimate and significant desires as if they were caused by their demographic traits. I agree entirely with Sharadin’s ‘debunking’ response to this concern: we can use statistical correlations to predict others’ preferences without thereby assuming any causal claim. However, I suspect (...)