• Algorithms Advise, Humans Decide: The Evidential Role of the Patient Preference Predictor. Nicholas Makins - forthcoming - Journal of Medical Ethics.
    An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP–that between algorithmic prediction and decision-making–and argue that much of the recent philosophical disagreement stems from this (...)
• The Patient Preference Predictor and the Objection from Higher-Order Preferences. Jakob Thrane Mainz - 2023 - Journal of Medical Ethics 49 (3):221-222.
    Recently, Jardas _et al_ have convincingly defended the patient preference predictor (PPP) against a range of autonomy-based objections. In this response, I propose a new autonomy-based objection to the PPP that is not explicitly discussed by Jardas _et al_. I call it the ‘objection from higher-order preferences’. Even if this objection is not sufficient reason to reject the PPP, it constitutes a pro tanto reason that is at least as powerful as the ones discussed by Jardas _et al_.
• Do patients want their families or their doctors to make treatment decisions in the event of incapacity, and why? David Wendler, Robert Wesley, Mark Pavlick & Annette Rid - 2016 - AJOB Empirical Bioethics 7 (4):251-259.
    Background: Current practice relies on patient-designated and next-of-kin surrogates, in consultation with clinicians, to make treatment decisions for patients who lose the ability to make their own decisions. Yet there is a paucity of data on whether this approach is consistent with patients' preferences regarding who they want to make treatment decisions for them in the event of decisional incapacity. Methods: Self-administered survey of patients at a tertiary care center. Results: Overall, 1169 respondents completed the survey (response rate = 59.8%). (...)
• Clarifying the best interests standard: the elaborative and enumerative strategies in public policy-making. Chong Ming Lim, Michael C. Dunn & Jacqueline J. Chin - 2016 - Journal of Medical Ethics 42 (8):542-549.
    One recurring criticism of the best interests standard concerns its vagueness, and thus the inadequate guidance it offers to care providers. The lack of an agreed definition of ‘best interests’, together with the fact that several suggested considerations adopted in legislation or professional guidelines for doctors do not obviously apply across different groups of persons, results in decisions being made in murky waters. In response, bioethicists have attempted to specify the best interests standard, to reduce the indeterminacy surrounding medical decisions. (...)
• Predicting and Preferring. Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
• A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - 2024 - American Journal of Bioethics 24 (7):13-26.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)
• Patients’ Priorities for Surrogate Decision-Making: Possible Influence of Misinformed Beliefs. E. J. Jardas, Robert Wesley, Mark Pavlick, David Wendler & Annette Rid - 2022 - AJOB Empirical Bioethics 13 (3):137-151.
• Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is arguably due to the (...)
• Surrogate Perspectives on Patient Preference Predictors: Good Idea, but I Should Decide How They Are Used. Dana Howard, Allan Rivlin, Philip Candilis, Neal W. Dickert, Claire Drolen, Benjamin Krohmal, Mark Pavlick & David Wendler - 2022 - AJOB Empirical Bioethics 13 (2):125-135.
    Background: Current practice frequently fails to provide care consistent with the preferences of decisionally incapacitated patients. It also imposes a significant emotional burden on their surrogates. Algorithm-based patient preference predictors (PPPs) have been proposed as a possible way to address these two concerns. While previous research found that patients strongly support the use of PPPs, the views of surrogates are unknown. The present study thus assessed the views of experienced surrogates regarding the possible use of PPPs as a means to help make (...)
• Autonomy-based criticisms of the patient preference predictor. E. J. Jardas, David Wasserman & David Wendler - 2022 - Journal of Medical Ethics 48 (5):304-310.
    The patient preference predictor (PPP) is a proposed computer-based algorithm that would predict the treatment preferences of decisionally incapacitated patients. Incorporation of a PPP into the decision-making process has the potential to improve implementation of the substituted judgement standard by providing more accurate predictions of patients’ treatment preferences than reliance on surrogates alone. Yet critics argue that methods for making treatment decisions for incapacitated patients should be judged on a number of factors beyond simply providing them with the treatments they would (...)
• Surrogates and Artificial Intelligence: Why AI Trumps Family. Ryan Hubbard & Jake Greenblum - 2020 - Science and Engineering Ethics 26 (6):3217-3227.
    The increasing accuracy of algorithms in predicting values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm (AA). Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as (...)
• The Problematic “Existence” of Digital Twins: Human Intention and Moral Decision. Jeffrey P. Bishop - 2024 - American Journal of Bioethics 24 (7):45-47.
    Since surrogates are not good at predicting patient preferences, and since these decisions can cause surrogates distress, some have claimed we need an alternative way to make decisions for incapaci...
• Administration of pro re nata medications by the nurse to incapacitated patients: An ethical perspective. Mojtaba Vaismoradi, Cathrine Fredriksen Moe, M. Flores Vizcaya-Moreno & Piret Paal - 2022 - Clinical Ethics 17 (1):5-13.
    The administration of pro re nata medications is the responsibility of the nurse. However, ethical uncertainties often arise because incapacitated patients are unable to collaborate with the nurse in the process of decision-making for pro re nata medication administration. There is a lack of integrative knowledge and insufficient understanding regarding the ethical considerations surrounding the administration of pro re nata medications to incapacitated patients. These are therefore discussed in this paper, along with practical strategies to avoid unethical practices (...)
• Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists. Lasse Benzinger, Jelena Epping, Frank Ursin & Sabine Salloch - 2024 - BMC Medical Ethics 25 (1):1-10.
    Background: Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. Methods: A questionnaire was developed and pretested among medical students, then distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who (...)