  • Artificial intelligence paternalism. Ricardo Diaz Milian & Anirban Bhattacharyya - 2023 - Journal of Medical Ethics 49 (3):183-184.
    In response to Ferrario et al’s1 work entitled ‘Ethics of the algorithmic prediction of goal of care preferences: from theory to practice’, we would like to point out an area of concern: the risk of artificial intelligence (AI) paternalism in their proposed framework. Accordingly, in this commentary, we underscore the importance of implementing safeguards for AI algorithms before they are deployed in clinical practice. The goal of documenting a living will and advance directives is to convey personal (...)
  • Fracking our humanity. Edwin Jesudason - 2023 - Journal of Medical Ethics 49 (3):181-182.
    Nietzsche claimed that once we know why to live, we’ll suffer almost any how.1 Artificial intelligence (AI) is used widely for the how, but Ferrario et al now advocate using AI for the why.2 Here, I offer my doubts on practical grounds but foremost on ethical ones. Practically, individuals already vacillate over the why, wavering with time and circumstance. That AI could provide prosthetics (or orthotics) for human agency feels unrealistic here, not least because ‘answers’ would be largely unverifiable. Ethically, (...)
  • What you believe you want, may not be what the algorithm knows. Seppe Segers - 2023 - Journal of Medical Ethics 49 (3):177-178.
    Tensions between respect for autonomy and paternalism loom large in Ferrario et al’s discussion of artificial intelligence (AI)-based preference predictors.1 To be sure, their analysis (rightfully) brings out the moral matter of respecting patient preferences. My point here, however, is that their consideration of AI-based preference predictors in the treatment of incapacitated patients opens more fundamental moral questions about the desirability of overruling considered patient preferences, not only if these are disclosed by surrogates, but possibly also in treating competent patients. (...)
  • For the sake of multifacetedness. Why artificial intelligence patient preference prediction systems shouldn’t be for next of kin. Max Tretter & David Samhammer - 2023 - Journal of Medical Ethics 49 (3):175-176.
    In their contribution ‘Ethics of the algorithmic prediction of goal of care preferences’,1 Ferrario et al elaborate a ‘from theory to practice’ contribution concerning the realisation of artificial intelligence (AI)-based patient preference prediction (PPP) systems. Such systems are intended to help find the treatment that the patient would have chosen in clinical situations—especially in intensive care or emergency units—where the patient is no longer capable of making that decision herself. The authors identify several challenges that complicate their effective development, (...)
  • Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
  • Artificial Intelligence algorithms cannot recommend a best interests decision but could help by improving prognostication. Derick Wade - 2023 - Journal of Medical Ethics 49 (3):179-180.
    Most jurisdictions require a patient to consent to any medical intervention. Clinicians ask a patient, ‘Given the pain and distress associated with our intervention and the predicted likelihood of this best-case outcome, do you want to accept the treatment?’ When a patient is incapable of deciding, clinicians may ask people who know the patient to say what the patient would decide; this is substituted judgement. In contrast, asking the same people to say how the person would make the decision is (...)