Citations

  • The Patient preference predictor and the objection from higher-order preferences.Jakob Thrane Mainz - 2023 - Journal of Medical Ethics 49 (3):221-222.
    Recently, Jardas _et al_ have convincingly defended the patient preference predictor (PPP) against a range of autonomy-based objections. In this response, I propose a new autonomy-based objection to the PPP that is not explicitly discussed by Jardas _et al_. I call it the ‘objection from higher-order preferences’. Even if this objection is not sufficient reason to reject the PPP, the objection constitutes a pro tanto reason that is at least as powerful as the ones discussed by Jardas _et al_.
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Transparency as design publicity: explaining and justifying inscrutable algorithms.Michele Loi, Andrea Ferrario & Eleonora Viganò - 2020 - Ethics and Information Technology 23 (3):253-263.
    In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of algorithmic transparency, that consists in (...)
  • Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • On the ethics of algorithmic decision-making in healthcare.Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
  • Treatment Decision Making for Incapacitated Patients: Is Development and Use of a Patient Preference Predictor Feasible?Annette Rid & David Wendler - 2014 - Journal of Medicine and Philosophy 39 (2):130-152.
    It has recently been proposed to incorporate the use of a “Patient Preference Predictor” (PPP) into the process of making treatment decisions for incapacitated patients. A PPP would predict which treatment option a given incapacitated patient would most likely prefer, based on the individual’s characteristics and information on what treatment preferences are correlated with these characteristics. Including a PPP in the shared decision-making process between clinicians and surrogates has the potential to better realize important ethical goals for making treatment decisions (...)
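In computational terms, the Patient Preference Predictor described in the entry above would be a supervised classifier trained on survey data linking patient characteristics to stated treatment preferences. The sketch below is purely illustrative and not drawn from Rid and Wendler's proposal: the features (age, presence of an advance directive, a religiosity score), the toy data, and the choice of logistic regression are all hypothetical stand-ins.

```python
# Purely illustrative sketch of a "patient preference predictor" (PPP):
# a classifier that learns correlations between patient characteristics and
# stated treatment preferences. Features and data are hypothetical, not from
# Rid and Wendler (2014).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical survey data: columns = [age, has advance directive (0/1),
# self-reported religiosity (0-10)].
X_train = np.array([
    [82, 1, 7],
    [45, 0, 2],
    [67, 0, 9],
    [73, 1, 4],
    [58, 0, 1],
    [90, 1, 8],
])
# 1 = stated preference for life-sustaining treatment, 0 = comfort care only.
y_train = np.array([0, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# For a newly incapacitated patient, the PPP would report a probability that
# the patient prefers the treatment, for surrogates and clinicians to weigh
# alongside other evidence.
new_patient = np.array([[76, 1, 6]])
print(model.predict_proba(new_patient)[0, 1])
```

Any model that outputs calibrated probabilities could play this role; logistic regression is used here only to keep the sketch short.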
  • Use of a Patient Preference Predictor to Help Make Medical Decisions for Incapacitated Patients.Annette Rid & David Wendler - 2014 - Journal of Medicine and Philosophy 39 (2):104-129.
    The standard approach to treatment decision making for incapacitated patients often fails to provide treatment consistent with the patient’s preferences and values and places significant stress on surrogate decision makers. These shortcomings provide compelling reason to search for methods to improve current practice. Shared decision making between surrogates and clinicians has important advantages, but it does not provide a way to determine patients’ treatment preferences. Hence, shared decision making leaves families with the stressful challenge of identifying the patient’s preferred treatment (...)
  • Can We Improve Treatment Decision-Making for Incapacitated Patients?Annette Rid & David Wendler - 2010 - Hastings Center Report 40 (5):36-45.
    When patients cannot make their own treatment decisions, surrogates typically step in to do it for them. Surrogate decision‐making is far from ideal, of course, as the surrogate may not know what the patient prefers or what best promotes her interests. One way to improve it would be to arm surrogates with information about what patients in similar circumstances tend to prefer, allowing them to make empirically grounded predictions about what their patient would want.
  • Messy autonomy: Commentary on Patient preference predictors and the problem of naked statistical evidence.Stephen David John - 2018 - Journal of Medical Ethics 44 (12):864-864.
    Like many, I find the idea of relying on patient preference predictors in life-or-death cases ethically troubling. As part of his stimulating discussion, Sharadin diagnoses such unease as a worry that using PPPs disrespects patients’ autonomy, by treating their most intimate and significant desires as if they were caused by their demographic traits. I agree entirely with Sharadin’s ‘debunking’ response to this concern: we can use statistical correlations to predict others’ preferences without thereby assuming any causal claim. However, I suspect (...)
  • A new method for making treatment decisions for incapacitated patients: what do patients think about the use of a patient preference predictor?David Wendler, Bob Wesley, Mark Pavlick & Annette Rid - 2016 - Journal of Medical Ethics 42 (4):235-241.
  • Autonomy-based criticisms of the patient preference predictor.E. J. Jardas, David Wasserman & David Wendler - 2022 - Journal of Medical Ethics 48 (5):304-310.
    The patient preference predictor is a proposed computer-based algorithm that would predict the treatment preferences of decisionally incapacitated patients. Incorporation of a PPP into the decision-making process has the potential to improve implementation of the substituted judgement standard by providing more accurate predictions of patients’ treatment preferences than reliance on surrogates alone. Yet, critics argue that methods for making treatment decisions for incapacitated patients should be judged on a number of factors beyond simply providing them with the treatments they would (...)
  • AI support for ethical decision-making around resuscitation: proceed with care.Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
  • Patient preference predictors and the problem of naked statistical evidence.Nathaniel Paul Sharadin - 2018 - Journal of Medical Ethics 44 (12):857-862.
    Patient preference predictors (PPPs) promise to provide medical professionals with a new solution to the problem of making treatment decisions on behalf of incapacitated patients. I show that the use of PPPs faces a version of a normative problem familiar from legal scholarship: the problem of naked statistical evidence. I sketch two sorts of possible reply, vindicating and debunking, and suggest that our reply to the problem in the one domain ought to mirror our reply in the other. The conclusion (...)
  • How do clinicians prepare family members for the role of surrogate decision-maker?Thomas V. Cunningham, Leslie P. Scheunemann, Robert M. Arnold & Douglas White - 2018 - Journal of Medical Ethics 44 (1):21-26.
    Purpose Although surrogate decision-making is prevalent in intensive care units and concerns with decision quality are well documented, little is known about how clinicians help family members understand the surrogate role. We investigated whether and how clinicians provide normative guidance to families regarding how to function as a surrogate. Subjects and methods We audiorecorded and transcribed 73 ICU family conferences in which clinicians anticipated discussing goals of care for incapacitated patients at high risk of death. We developed and applied a (...)