References
  • Will a Patient Preference Predictor Improve Treatment Decision Making for Incapacitated Patients? Annette Rid - 2014 - Journal of Medicine and Philosophy 39 (2):99-103.
  • How to Use AI Ethically for Ethical Decision-Making. Joanna Demaree-Cotton, Brian D. Earp & Julian Savulescu - 2022 - American Journal of Bioethics 22 (7):1-3.
  • AIgorithmic Ethics: A Technically Sweet Solution to a Non-Problem. Aurelia Sauerbrei, Nina Hallowell & Angeliki Kerasidou - 2022 - American Journal of Bioethics 22 (7):28-30.
    In their proof-of-concept study, Meier et al. built an algorithm to aid ethical decision making. In the limitations section of their paper, the authors state a frequently cited ax...
  • Surrogate Perspectives on Patient Preference Predictors: Good Idea, but I Should Decide How They Are Used. Dana Howard, Allan Rivlin, Philip Candilis, Neal W. Dickert, Claire Drolen, Benjamin Krohmal, Mark Pavlick & David Wendler - 2022 - AJOB Empirical Bioethics 13 (2):125-135.
    Background: Current practice frequently fails to provide care consistent with the preferences of decisionally-incapacitated patients. It also imposes significant emotional burden on their surrogates. Algorithmic-based patient preference predictors (PPPs) have been proposed as a possible way to address these two concerns. While previous research found that patients strongly support the use of PPPs, the views of surrogates are unknown. The present study thus assessed the views of experienced surrogates regarding the possible use of PPPs as a means to help make (...)
  • Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
    (A brief, generic sketch of the fuzzy cognitive map update mentioned here follows the reference list below.)
  • Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
  • Principles of Biomedical Ethics: Marking Its Fortieth Anniversary. James Childress & Tom Beauchamp - 2019 - American Journal of Bioethics 19 (11):9-12.
  • Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue (...)
  • Use of a Patient Preference Predictor to Help Make Medical Decisions for Incapacitated Patients. A. Rid & D. Wendler - 2014 - Journal of Medicine and Philosophy 39 (2):104-129.
    The standard approach to treatment decision making for incapacitated patients often fails to provide treatment consistent with the patient’s preferences and values and places significant stress on surrogate decision makers. These shortcomings provide compelling reason to search for methods to improve current practice. Shared decision making between surrogates and clinicians has important advantages, but it does not provide a way to determine patients’ treatment preferences. Hence, shared decision making leaves families with the stressful challenge of identifying the patient’s preferred treatment (...)
  • The global landscape of AI ethics guidelines. A. Jobin, M. Ienca & E. Vayena - 2019 - Nature Machine Intelligence 1.
  • AI support for ethical decision-making around resuscitation: proceed with care. Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
  • Computer knows best? The need for value-flexibility in medical AI. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (3):156-160.
    Artificial intelligence (AI) is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system (...)
  • European physicians' experience with ethical difficulties in clinical practice. S. A. Hurst, A. Perrier, R. Pegoraro, S. Reiter-Theil, R. Forde, A.-M. Slowther, E. Garrett-Mayer & M. Danis - 2006 - Journal of Medical Ethics 33 (1):51-57.
  • Improving Medical Decisions for Incapacitated Persons: Does Focusing on “Accurate Predictions” Lead to an Inaccurate Picture? Scott Y. H. Kim - 2014 - Journal of Medicine and Philosophy 39 (2):187-195.
    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients’ preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model has significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by 60%–80% reliability of people’s preferences for future medical decisions—a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of (...)
  • Clinical ethics support services during the COVID-19 pandemic in the UK: a cross-sectional survey. Mariana Dittborn, Emma Cave & David Archard - 2022 - Journal of Medical Ethics 48 (10):695-701.
  • Disproof of Concept: Resolving Ethical Dilemmas Using Algorithms. Bryan Pilkington & Charles Binkley - 2022 - American Journal of Bioethics 22 (7):81-83.
    Allowing algorithms to guide or determine decision-making in ethically complex situations, and eventually satisfying the need for good clinical ethics consultation work, is a philosophically intere...
  • Croatian physicians' and nurses' experience with ethical issues in clinical practice. I. Sorta-Bilajac, K. Bazdaric, B. Brozovic & G. J. Agich - 2008 - Journal of Medical Ethics 34 (6):450-455.
    Aim: To assess ethical issues in everyday clinical practice among physicians and nurses of the University Hospital Rijeka, Rijeka, Croatia. Subjects and methods: We surveyed the entire population of internal medicine, oncology and intensive care specialists and associated nurses employed at the University Hospital Rijeka, Rijeka, Croatia. An anonymous questionnaire was used to explore the type and frequency of ethical dilemmas, rank of their difficulty, access to and use of ethics support services, training in ethics and confidence about knowledge in (...)
  • Patient Preference Predictors, Apt Categorization, and Respect for Autonomy. S. John - 2014 - Journal of Medicine and Philosophy 39 (2):169-177.
    In this paper, I set out two ethical complications for Rid and Wendler’s proposal that a “Patient Preference Predictor” (PPP) should be used to aid decision making about incapacitated patients’ care. Both of these worries concern how a PPP might categorize patients. In the first section of the paper, I set out some general considerations about the “ethics of apt categorization” within stratified medicine and show how these challenge certain PPPs. In the second section, I argue for a more specific—but (...)
  • Wrongful Birth: AI-Tools for Moral Decisions in Clinical Care in the Absence of Disability Ethics. Maya Sabatello - 2022 - American Journal of Bioethics 22 (7):43-46.
    Meier et al. describe a pilot study that developed METHAD, an AI-based Medical Ethics Advisor tool that draws on the principlism approach and was tested using text-book cases and clinical et...
  • Meta-surrogate decision making and artificial intelligence. Brian D. Earp - 2022 - Journal of Medical Ethics 48 (5):287-289.
    How shall we decide for others who cannot decide for themselves? And who—or what, in the case of artificial intelligence — should make the decision? The present issue of the journal tackles several interrelated topics, many of them having to do with surrogate decision making. For example, the feature article by Jardas et al 1 explores the potential use of artificial intelligence to predict incapacitated patients’ likely treatment preferences based on their sociodemographic characteristics, raising questions about the means by which (...)
  • Autonomy-based criticisms of the patient preference predictor. E. J. Jardas, David Wasserman & David Wendler - 2022 - Journal of Medical Ethics 48 (5):304-310.
    The patient preference predictor is a proposed computer-based algorithm that would predict the treatment preferences of decisionally incapacitated patients. Incorporation of a PPP into the decision-making process has the potential to improve implementation of the substituted judgement standard by providing more accurate predictions of patients’ treatment preferences than reliance on surrogates alone. Yet, critics argue that methods for making treatment decisions for incapacitated patients should be judged on a number of factors beyond simply providing them with the treatments they would (...)
  • Treatment Decision Making for Incapacitated Patients: Is Development and Use of a Patient Preference Predictor Feasible? Annette Rid & David Wendler - 2014 - Journal of Medicine and Philosophy 39 (2):130-152.
    It has recently been proposed to incorporate the use of a “Patient Preference Predictor” (PPP) into the process of making treatment decisions for incapacitated patients. A PPP would predict which treatment option a given incapacitated patient would most likely prefer, based on the individual’s characteristics and information on what treatment preferences are correlated with these characteristics. Including a PPP in the shared decision-making process between clinicians and surrogates has the potential to better realize important ethical goals for making treatment decisions (...)
  • Reflections on the Patient Preference Predictor Proposal. D. W. Brock - 2014 - Journal of Medicine and Philosophy 39 (2):153-160.
    There are substantial data establishing that surrogates are often mistaken in predicting what treatments incompetent patients would have wanted and that supplements such as advance directives have not resulted in significant improvements. Rid and Wendler’s Patient Preference Predictor (PPP) proposal will attempt to gather data about what similar patients would prefer in a variety of treatment choices. It accepts the usual goal of patient autonomy and the Substituted Judgment principle for surrogate decisions. I provide reasons for questioning sole reliance on (...)
  • Automating Justice: An Ethical Responsibility of Computational Bioethics. Vasiliki Rahimzadeh, Jonathan Lawson, Jinyoung Baek & Edward S. Dove - 2022 - American Journal of Bioethics 22 (7):30-33.
    In their proof-of-concept, Meier and colleagues describe the purpose and programming decisions underpinning Medical Ethics Advisor, an automated decision support system used t...
  • How to tackle the conundrum of quality appraisal in systematic reviews of normative literature/information? Analysing the problems of three possible strategies. Marcel Mertz - 2019 - BMC Medical Ethics 20 (1):1-12.
    Background In the last years, there has been an increase in publication of systematic reviews of normative literature or of normative information in bioethics. The aim of a systematic review is to search, select, analyse and synthesise literature in a transparent and systematic way in order to provide a comprehensive and unbiased overview of the information sought, predominantly as a basis for informed decision-making in health care. Traditionally, one part of the procedure when conducting a systematic review is an appraisal (...)
  • Do we understand the intervention? What complex intervention research can teach us for the evaluation of clinical ethics support services. Jan Schildmann, Stephan Nadolny, Joschka Haltaufderheide, Marjolein Gysels, Jochen Vollmann & Claudia Bausewein - 2019 - BMC Medical Ethics 20 (1):48.
    Evaluating clinical ethics support services (CESS) has been hailed as an important research task. At the same time, there is considerable debate about how to evaluate CESS appropriately. The criticism, which has been aired, refers to normative as well as empirical aspects of evaluating CESS. In this paper, we argue that a first necessary step for progress is to better understand the intervention in CESS. Tools of complex intervention research methodology may provide relevant means in this respect. In a first step, we (...)
  • Ethical difficulties in clinical practice: experiences of European doctors. S. A. Hurst, A. Perrier, R. Pegoraro, S. Reiter-Theil, R. Forde, A.-M. Slowther, E. Garrett-Mayer & M. Danis - 2007 - Journal of Medical Ethics 33 (1):51-57.
    Background: Ethics support services are growing in Europe to help doctors in dealing with ethical difficulties. Currently, insufficient attention has been focused on the experiences of doctors who have faced ethical difficulties in these countries to provide an evidence base for the development of these services. Methods: A survey instrument was adapted to explore the types of ethical dilemma faced by European doctors, how they ranked the difficulty of these dilemmas, their satisfaction with the resolution of a recent ethically difficult case (...)
  • Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm. Camillo Lamanna & Lauren Byrne - 2018 - AMA Journal of Ethics 20 (9):E902-910.
    A significant proportion of elderly and psychiatric patients do not have the capacity to make health care decisions. We suggest that machine learning technologies could be harnessed to integrate data mined from electronic health records (EHRs) and social media in order to estimate the confidence of the prediction that a patient would consent to a given treatment. We call this process, which takes data about patients as input and derives a confidence estimate for a particular patient’s predicted health care-related decision (...)
  • Predicting End-of-Life Treatment Preferences: Perils and Practicalities. P. H. Ditto & C. J. Clark - 2014 - Journal of Medicine and Philosophy 39 (2):196-204.
    Rid and Wendler propose the development of a Patient Preference Predictor (PPP), an actuarial model for predicting incapacitated patients' life-sustaining treatment preferences across a wide range of end-of-life scenarios. An actuarial approach to end-of-life decision making has enormous potential, but transferring the logic of actuarial prediction to end-of-life decision making raises several conceptual complexities and logistical problems that need further consideration. Actuarial models have proven effective in targeted prediction tasks, but no evidence supports their effectiveness in the kind of broad (...)
  • The AI Needed for Ethical Decision Making Does Not Exist. Amelia Barwise & Brian Pickering - 2022 - American Journal of Bioethics 22 (7):46-49.
    When considering the introduction of AI to support medical decision-making, one must take an end-to-end, holistic approach to development, evaluation, integration and governance. (Cabitza and Zeito...
  • Implementation of Clinical Ethics Consultation in German Hospitals. Maximilian Schochow, Dajana Schnell & Florian Steger - 2019 - Science and Engineering Ethics 25 (4):985-991.
    In order to build on the information that was obtained in the course of the first study, a follow-up survey was conducted first by phone and subsequently in a written form between August and October 2014. We contacted 1,858 hospitals in all of Germany for the follow-up survey by phone. In cases where a hospital had not participated in the first study, the willingness to participate in the follow-up survey was established in advance. The survey’s dispatch was ensured in the (...)
  • Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
  • How competitors become collaborators—Bridging the gap(s) between machine learning algorithms and clinicians. Thomas Grote & Philipp Berens - 2021 - Bioethics 36 (2):134-142.
  • Surrogates and Artificial Intelligence: Why AI Trumps Family. Ryan Hubbard & Jake Greenblum - 2020 - Science and Engineering Ethics 26 (6):3217-3227.
    The increasing accuracy of algorithms to predict values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm. Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as (...)
  • Ethical Algorithmic Advice: Some Reasons to Pause and Think Twice. Torbjørn Gundersen & Kristine Bærøe - 2022 - American Journal of Bioethics 22 (7):26-28.
    Machine learning and other forms of artificial intelligence can improve parts of clinical decision making regarding the gathering and analysis of data, the detection of disease, and the provis...
  • Rise of the Bioethics AI: Curse or Blessing? Craig M. Klugman & Sara Gerke - 2022 - American Journal of Bioethics 22 (7):35-37.
    In October 2021, the Allen Institute for Artificial Intelligence publicly released Delphi, an artificial intelligence system trained to make general moral decisions (Allen Institute for Artifi...
  • Important Design Questions for Algorithmic Ethics Consultation. Danton Char - 2022 - American Journal of Bioethics 22 (7):38-40.
    Answering the design questions inherent to building and deploying machine learning tools —based on algorithms that can learn from and make predictions on large data sets without being explicitl...
  • Against autonomy: How proposed solutions to the problems of living wills forgot its underlying principle. Laurel Mast - 2019 - Bioethics 34 (3):264-271.
    Significant criticisms have been raised regarding the ethical and psychological basis of living wills. Various solutions to address these criticisms have been advanced, such as the use of surrogate decision makers alone or data science‐driven algorithms. These proposals share a fundamental weakness: they focus on resolving the problems of living wills, and, in the process, lose sight of the underlying ethical principle of advance care planning, autonomy. By suggesting that the same sweeping solutions, without opportunities for choice, be applied to (...)
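For readers unfamiliar with the technique named in the Meier et al. proof-of-concept entry above, the sketch below illustrates the basic iterative update that fuzzy cognitive maps rely on. It is a minimal, generic illustration under stated assumptions, not the authors' METHAD system: the concept names, weight values, and the run_fcm helper are hypothetical choices made only for this example.

```python
import numpy as np

def sigmoid(x, steepness=1.0):
    """Squashing function that keeps concept activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-steepness * x))

def run_fcm(weights, initial_state, steps=25, tol=1e-4):
    """Iterate a fuzzy cognitive map until its concept activations settle.

    weights[i, j] is the causal influence of concept i on concept j
    (typically in [-1, 1]); initial_state holds each concept's starting
    activation in [0, 1].
    """
    state = np.asarray(initial_state, dtype=float)
    for _ in range(steps):
        new_state = sigmoid(state + state @ weights)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state
        state = new_state
    return state

# Hypothetical toy map with three concepts:
# 0 = respect for autonomy, 1 = expected clinical benefit, 2 = recommend intervention.
weights = np.array([
    [0.0, 0.0, 0.7],   # autonomy pushes the recommendation up
    [0.0, 0.0, 0.6],   # expected benefit pushes the recommendation up
    [0.0, 0.0, 0.0],   # the recommendation feeds back into nothing here
])
print(run_fcm(weights, initial_state=[0.9, 0.4, 0.0]))
```

In a training step of the kind such papers allude to, the weight matrix would be fitted (for example by gradient-based or evolutionary optimisation) so that the map's converged activations reproduce expert judgements on example cases; the details above are illustrative assumptions, not the published method.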