  • Understanding and sharing intentions: The origins of cultural cognition. Michael Tomasello, Malinda Carpenter, Josep Call, Tanya Behne & Henrike Moll - 2005 - Behavioral and Brain Sciences 28 (5):675-691.
    We propose that the crucial difference between human cognition and that of other species is the ability to participate with others in collaborative activities with shared goals and intentions: shared intentionality. Participation in such activities requires not only especially powerful forms of intention reading and cultural learning, but also a unique motivation to share psychological states with others and unique forms of cognitive representation for doing so. The result of participating in these activities is species-unique forms of cultural cognition and (...)
    560 citations
  • Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
    67 citations
  • AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
    29 citations
  • Experimental Philosophical Bioethics and Normative Inference. Brian D. Earp, Jonathan Lewis, Vilius Dranseika & Ivar R. Hannikainen - 2021 - Theoretical Medicine and Bioethics 42 (3-4):91-111.
    This paper explores an emerging sub-field of both empirical bioethics and experimental philosophy, which has been called “experimental philosophical bioethics” (bioxphi). As an empirical discipline, bioxphi adopts the methods of experimental moral psychology and cognitive science; it does so to make sense of the eliciting factors and underlying cognitive processes that shape people’s moral judgments, particularly about real-world matters of bioethical concern. Yet, as a normative discipline situated within the broader field of bioethics, it also aims to contribute to substantive (...)
    29 citations
  • Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
    14 citations
  • The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence. Alberto Giubilini & Julian Savulescu - 2018 - Philosophy and Technology 31 (2):169-188.
    We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor”. The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would take into account the human (...)
    43 citations
  • Consent-GPT: is it ethical to delegate procedural consent to conversational AI? Jemima Winifred Allen, Brian D. Earp, Julian Koplin & Dominic Wilkinson - 2024 - Journal of Medical Ethics 50 (2):77-83.
    Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of (...)
    11 citations
  • Represent me: please! Towards an ethics of digital twins in medicine. Matthias Braun - 2021 - Journal of Medical Ethics 47 (6):394-400.
    Simulations are used in very different contexts and for very different purposes. An emerging development is the possibility of using simulations to obtain a more or less representative reproduction of organs or even entire persons. Such simulations are framed and discussed using the term ‘digital twin’. This paper unpacks and scrutinises the current use of such digital twins in medicine and the ideas embedded in this practice. First, the paper maps the different types of digital twins. A special focus is (...)
    17 citations
  • How to Use AI Ethically for Ethical Decision-Making. Joanna Demaree-Cotton, Brian D. Earp & Julian Savulescu - 2022 - American Journal of Bioethics 22 (7):1-3.
    7 citations
  • Autonomy-based criticisms of the patient preference predictor. E. J. Jardas, David Wasserman & David Wendler - 2022 - Journal of Medical Ethics 48 (5):304-310.
    The patient preference predictor is a proposed computer-based algorithm that would predict the treatment preferences of decisionally incapacitated patients. Incorporation of a PPP into the decision-making process has the potential to improve implementation of the substituted judgement standard by providing more accurate predictions of patients’ treatment preferences than reliance on surrogates alone. Yet, critics argue that methods for making treatment decisions for incapacitated patients should be judged on a number of factors beyond simply providing them with the treatments they would (...)
    12 citations
  • Use of a Patient Preference Predictor to Help Make Medical Decisions for Incapacitated Patients. A. Rid & D. Wendler - 2014 - Journal of Medicine and Philosophy 39 (2):104-129.
    The standard approach to treatment decision making for incapacitated patients often fails to provide treatment consistent with the patient’s preferences and values and places significant stress on surrogate decision makers. These shortcomings provide compelling reason to search for methods to improve current practice. Shared decision making between surrogates and clinicians has important advantages, but it does not provide a way to determine patients’ treatment preferences. Hence, shared decision making leaves families with the stressful challenge of identifying the patient’s preferred treatment (...)
    33 citations
  • AI support for ethical decision-making around resuscitation: proceed with care. Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
    8 citations
  • Advance Medical Decision-Making Differs Across First- and Third-Person Perspectives. James Toomey, Jonathan Lewis, Ivar R. Hannikainen & Brian D. Earp - 2024 - AJOB Empirical Bioethics 15 (4):237-245.
    Background Advance healthcare decision-making presumes that a prior treatment preference expressed with sufficient mental capacity (“T1 preference”) should trump a contrary preference expressed after significant cognitive decline (“T2 preference”). This assumption is much debated in normative bioethics, but little is known about lay judgments in this domain. This study investigated participants’ judgments about which preference should be followed, and whether these judgments differed depending on a first-person (deciding for one’s future self) versus third-person (deciding for a friend or stranger) perspective. (...)
    3 citations
  • Treatment Decision Making for Incapacitated Patients: Is Development and Use of a Patient Preference Predictor Feasible? Annette Rid & David Wendler - 2014 - Journal of Medicine and Philosophy 39 (2):130-152.
    It has recently been proposed to incorporate the use of a “Patient Preference Predictor” (PPP) into the process of making treatment decisions for incapacitated patients. A PPP would predict which treatment option a given incapacitated patient would most likely prefer, based on the individual’s characteristics and information on what treatment preferences are correlated with these characteristics. Including a PPP in the shared decision-making process between clinicians and surrogates has the potential to better realize important ethical goals for making treatment decisions (...)
    23 citations
  • Patient preference predictors and the problem of naked statistical evidence. Nathaniel Paul Sharadin - 2018 - Journal of Medical Ethics 44 (12):857-862.
    Patient preference predictors (PPPs) promise to provide medical professionals with a new solution to the problem of making treatment decisions on behalf of incapacitated patients. I show that the use of PPPs faces a version of a normative problem familiar from legal scholarship: the problem of naked statistical evidence. I sketch two sorts of possible reply, vindicating and debunking, and suggest that our reply to the problem in the one domain ought to mirror our reply in the other. The conclusion (...)
    13 citations
  • AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):185-186.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’, we aimed to ignite a critical discussion on why and how to design artificial intelligence (AI) systems assisting clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about (...)
    3 citations
  • A new method for making treatment decisions for incapacitated patients: what do patients think about the use of a patient preference predictor? David Wendler, Bob Wesley, Mark Pavlick & Annette Rid - 2016 - Journal of Medical Ethics 42 (4):235-241.
    15 citations
  • Meta-surrogate decision making and artificial intelligence. Brian D. Earp - 2022 - Journal of Medical Ethics 48 (5):287-289.
    How shall we decide for others who cannot decide for themselves? And who—or what, in the case of artificial intelligence—should make the decision? The present issue of the journal tackles several interrelated topics, many of them having to do with surrogate decision making. For example, the feature article by Jardas et al explores the potential use of artificial intelligence to predict incapacitated patients’ likely treatment preferences based on their sociodemographic characteristics, raising questions about the means by which (...)
    4 citations
  • Experimental Philosophical Bioethics of Personal Identity. Brian D. Earp, Jonathan Lewis, J. Skorburg, Ivar Hannikainen & Jim A. C. Everett - 2022 - In Kevin Tobia, Experimental Philosophy of Identity and the Self. London: Bloomsbury. pp. 183-202.
    The question of what makes someone the same person through time and change has long been a preoccupation of philosophers. In recent years, the question of what makes ordinary or lay people judge that someone is—or isn’t—the same person has caught the interest of experimental psychologists. These latter, empirically oriented researchers have sought to understand the cognitive processes and eliciting factors that shape ordinary people’s judgments about personal identity and the self. Still more recently, practitioners within an emerging discipline, experimental (...)
    3 citations
  • Messy autonomy: Commentary on Patient preference predictors and the problem of naked statistical evidence. Stephen David John - 2018 - Journal of Medical Ethics 44 (12):864-864.
    Like many, I find the idea of relying on patient preference predictors (PPPs) in life-or-death cases ethically troubling. As part of his stimulating discussion, Sharadin diagnoses such unease as a worry that using PPPs disrespects patients’ autonomy, by treating their most intimate and significant desires as if they were caused by their demographic traits. I agree entirely with Sharadin’s ‘debunking’ response to this concern: we can use statistical correlations to predict others’ preferences without thereby assuming any causal claim. However, I suspect (...)
    8 citations
  • Patient Preference Predictors, Apt Categorization, and Respect for Autonomy. S. John - 2014 - Journal of Medicine and Philosophy 39 (2):169-177.
    In this paper, I set out two ethical complications for Rid and Wendler’s proposal that a “Patient Preference Predictor” (PPP) should be used to aid decision making about incapacitated patients’ care. Both of these worries concern how a PPP might categorize patients. In the first section of the paper, I set out some general considerations about the “ethics of apt categorization” within stratified medicine and show how these challenge certain PPPs. In the second section, I argue for a more specific—but (...)
    12 citations
  • Surrogates and Artificial Intelligence: Why AI Trumps Family. Ryan Hubbard & Jake Greenblum - 2020 - Science and Engineering Ethics 26 (6):3217-3227.
    The increasing accuracy of algorithms to predict values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm. Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as (...)
    5 citations
  • Beyond competence: advance directives in dementia research. Karin Roland Jongsma & Suzanne van de Vathorst - 2015 - Monash Bioethics Review 33 (2-3):167-180.
    Dementia is highly prevalent and incurable. The participation of dementia patients in clinical research is indispensable if we want to find an effective treatment for dementia. However, one of the primary challenges in dementia research is the patients’ gradual loss of the capacity to consent. Patients with dementia are characterized by the fact that, at an earlier stage of their life, they were able to give their consent to participation in research. Therefore, the phase when patients are still competent to (...)
    9 citations
  • Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists. Lasse Benzinger, Jelena Epping, Frank Ursin & Sabine Salloch - 2024 - BMC Medical Ethics 25 (1):1-10.
    Background Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. Methods A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who (...)
    1 citation
  • Improving Medical Decisions for Incapacitated Persons: Does Focusing on “Accurate Predictions” Lead to an Inaccurate Picture? Scott Y. H. Kim - 2014 - Journal of Medicine and Philosophy 39 (2):187-195.
    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients’ preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model has significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by 60%–80% reliability of people’s preferences for future medical decisions—a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of (...)
    9 citations
  • The Patient preference predictor and the objection from higher-order preferences. Jakob Thrane Mainz - 2023 - Journal of Medical Ethics 49 (3):221-222.
    Recently, Jardas et al have convincingly defended the patient preference predictor (PPP) against a range of autonomy-based objections. In this response, I propose a new autonomy-based objection to the PPP that is not explicitly discussed by Jardas et al. I call it the ‘objection from higher-order preferences’. Even if this objection is not sufficient reason to reject the PPP, the objection constitutes a pro tanto reason that is at least as powerful as the ones discussed by Jardas et al.
    5 citations
  • Reflections on the Patient Preference Predictor Proposal. D. W. Brock - 2014 - Journal of Medicine and Philosophy 39 (2):153-160.
    There are substantial data establishing that surrogates are often mistaken in predicting what treatments incompetent patients would have wanted and that supplements such as advance directives have not resulted in significant improvements. Rid and Wendler’s Patient Preference Predictor (PPP) proposal will attempt to gather data about what similar patients would prefer in a variety of treatment choices. It accepts the usual goal of patient autonomy and the Substituted Judgment principle for surrogate decisions. I provide reasons for questioning sole reliance on (...)
    8 citations
  • Law, Ethics, and the Patient Preference Predictor. R. Dresser - 2014 - Journal of Medicine and Philosophy 39 (2):178-186.
    The Patient Preference Predictor (PPP) is intended to improve treatment decision making for incapacitated patients. The PPP would collect information about the treatment preferences of people with different demographic and other characteristics. It could be used to indicate which treatment option an individual patient would be most likely to prefer, based on data about the preferences of people who resemble the patient. The PPP could be incorporated into existing US law governing treatment for incapacitated patients, although it is unclear whether (...)
    8 citations
  • The Surrogate's Authority. Hilde Lindemann & James Lindemann Nelson - 2014 - Journal of Medicine and Philosophy 39 (2):161-168.
    The authority of surrogates—often close family members—to make treatment decisions for previously capacitated patients is said to come from their knowledge of the patient, which they are to draw on as they exercise substituted judgment on the patient’s behalf. However, proxy accuracy studies call this authority into question, hence the Patient Preference Predictor (PPP). We identify two problems with contemporary understandings of the surrogate’s role. The first is with the assumption that knowledge of the patient entails knowledge of what the (...)
    8 citations
  • Commentary on ‘Autonomy-based criticisms of the patient preference predictor’. Collin O'Neil - 2022 - Journal of Medical Ethics 48 (5):315-316.
    When a patient lacks sufficient capacity to make a certain treatment decision, whether because of deficits in their ability to make a judgement that reflects their values or to make a decision that reflects their judgement or both, the decision must be made by a surrogate. Often the best way to respect the patient’s autonomy, in such cases, is for the surrogate to make a ‘substituted’ judgement on behalf of the patient, which is the decision that best reflects the patient’s (...)
    2 citations
  • Predicting End-of-Life Treatment Preferences: Perils and Practicalities. P. H. Ditto & C. J. Clark - 2014 - Journal of Medicine and Philosophy 39 (2):196-204.
    Rid and Wendler propose the development of a Patient Preference Predictor (PPP), an actuarial model for predicting incapacitated patient’s life-sustaining treatment preferences across a wide range of end-of-life scenarios. An actuarial approach to end-of-life decision making has enormous potential, but transferring the logic of actuarial prediction to end-of-life decision making raises several conceptual complexities and logistical problems that need further consideration. Actuarial models have proven effective in targeted prediction tasks, but no evidence supports their effectiveness in the kind of broad (...)
    6 citations
  • Response to commentaries: ‘autonomy-based criticisms of the patient preference predictor’. David Wasserman & David Wendler - 2023 - Journal of Medical Ethics 49 (8):580-582.
    The authors respond to four JME commentaries on their Feature Article, ‘Autonomy-based criticisms of the patient preference predictor’.
    1 citation
  • Ethics of digital twins: four challenges. Matthias Braun - 2022 - Journal of Medical Ethics 48 (9):579-580.
    In the article ‘Represent Me: Please! Towards an Ethics of Digital Twins in Medicine’, I analysed and tried to better understand the main ethical challenges associated with Digital Twins. For those who are just entering the debate with this article: DT is a metaphor for a bundle of artificial intelligence driven simulation technologies that constantly, in real time and ad personam simulate single or multiple parts of the body and make predictions about future health states based on these simulations. My (...)
    1 citation
  • Patients’ Interests in their Family Members’ Well-Being: An Overlooked, Fundamental Consideration within Substituted Judgments. Jeffrey T. Berger - 2005 - Journal of Clinical Ethics 16 (1):3-10.
    5 citations