References
  • Algorithms Advise, Humans Decide: the Evidential Role of the Patient Preference Predictor. Nicholas Makins - forthcoming - Journal of Medical Ethics.
    An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP, that between algorithmic prediction and decision-making, and argue that much of the recent philosophical disagreement stems from this (...)
  • A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - 2024 - American Journal of Bioethics 24 (7):13-26.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)
  • Predicting and Preferring. Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  • The Patient Preference Predictor: A Timely Boost for Personalized Medicine. Nikola Biller-Andorno, Andrea Ferrario & Armin Biller - 2024 - American Journal of Bioethics 24 (7):35-38.
    The future of medicine will be predictive, preventive, personalized, and participatory. Recent technological advancements bolster the realization of this vision, particularly through innovations in...
  • Broadening the debate: the future of JME feature articles. Lucy Frith & John McMillan - 2023 - Journal of Medical Ethics 49 (3):155-155.
    The JME editorial team selects its feature articles from the best papers accepted for publication based on their quality, novelty and capacity to move debate forward on a specific issue. Feature articles are made freely available and are published alongside reviewed and submitted commentaries. We do this partly to promote and acknowledge excellent work in medical ethics, but also to encourage authors to submit their best papers to the JME. JME feature articles have deepened the analysis of some central issues (...)
  • Artificial intelligence paternalism. Ricardo Diaz Milian & Anirban Bhattacharyya - 2023 - Journal of Medical Ethics 49 (3):183-184.
    In response to Ferrario et al’s work entitled ‘Ethics of the algorithmic prediction of goal of care preferences: from theory to practice’, we would like to point out an area of concern: the risk of artificial intelligence (AI) paternalism in their proposed framework. Accordingly, in this commentary, we underscore the importance of implementing safeguards for AI algorithms before they are deployed in clinical practice. The goal of documenting a living will and advance directives is to convey personal (...)
  • Artificial Intelligence algorithms cannot recommend a best interests decision but could help by improving prognostication. Derick Wade - 2023 - Journal of Medical Ethics 49 (3):179-180.
    Most jurisdictions require a patient to consent to any medical intervention. Clinicians ask a patient, ‘Given the pain and distress associated with our intervention and the predicted likelihood of this best-case outcome, do you want to accept the treatment?’ When a patient is incapable of deciding, clinicians may ask people who know the patient to say what the patient would decide; this is substituted judgement. In contrast, asking the same people to say how the person would make the decision is (...)
  • Large language models in medical ethics: useful but not expert. Andrea Ferrario & Nikola Biller-Andorno - 2024 - Journal of Medical Ethics 50 (9):653-654.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined the performance of GPT-4, a commercially available LLM, assessing its ability to generate responses to diverse medical ethics cases. Their findings reveal that GPT-4 can identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to be (...)
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons. Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    Background: Healthcare providers have to make ethically complex clinical decisions, which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. Methods: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
  • Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Andrea Ferrario - 2024 - Science and Engineering Ethics 30 (6):1-21.
    We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately relying on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of (...)
  • What you believe you want, may not be what the algorithm knows. Seppe Segers - 2023 - Journal of Medical Ethics 49 (3):177-178.
    Tensions between respect for autonomy and paternalism loom large in Ferrario et al’s discussion of artificial intelligence (AI)-based preference predictors. To be sure, their analysis (rightfully) brings out the moral matter of respecting patient preferences. My point here, however, is that their consideration of AI-based preference predictors in treatment of incapacitated patients opens more fundamental moral questions about the desirability of overruling considered patient preferences, not only if these are disclosed by surrogates, but possibly also in treating competent patients. (...)
  • AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):185-186.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’, we aimed to ignite a critical discussion of why and how to design artificial intelligence (AI) systems that assist clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about (...)
  • For the sake of multifacetedness. Why artificial intelligence patient preference prediction systems shouldn’t be for next of kin. Max Tretter & David Samhammer - 2023 - Journal of Medical Ethics 49 (3):175-176.
    In their contribution ‘Ethics of the algorithmic prediction of goal of care preferences’, Ferrario et al elaborate a ‘from theory to practice’ contribution concerning the realisation of artificial intelligence (AI)-based patient preference prediction (PPP) systems. Such systems are intended to help find the treatment that the patient would have chosen in clinical situations, especially in intensive care or emergency units, where the patient is no longer capable of making that decision herself. The authors identify several challenges that complicate their effective development, (...)
  • Fracking our humanity. Edwin Jesudason - 2023 - Journal of Medical Ethics 49 (3):181-182.
    Nietzsche claimed that once we know why to live, we’ll suffer almost any how. Artificial intelligence (AI) is used widely for the how, but Ferrario et al now advocate using AI for the why. Here, I offer my doubts on practical grounds but foremost on ethical ones. Practically, individuals already vacillate over the why, wavering with time and circumstance. That AI could provide prosthetics (or orthotics) for human agency feels unrealistic here, not least because ‘answers’ would be largely unverifiable. Ethically, (...)