  • Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement. Alberto Giubilini, Sebastian Porsdam Mann, Cristina Voinea, Brian Earp & Julian Savulescu - 2024 - Science and Engineering Ethics 30 (6):1-15.
    In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users’ past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be (...)
  • Artificial Intelligence, Digital Self, and the “Best Interests” Problem. Jeffrey Todd Berger - 2024 - American Journal of Bioethics 24 (7):27-29.
    In their target article, “A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable,” Earp et al. (2024) discuss ways in whic...
  • The Patient Preference Predictor: A Timely Boost for Personalized Medicine. Nikola Biller-Andorno, Andrea Ferrario & Armin Biller - 2024 - American Journal of Bioethics 24 (7):35-38.
    The future of medicine will be predictive, preventive, personalized, and participatory. Recent technological advancements bolster the realization of this vision, particularly through innovations in...
  • Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable. Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  • Generative AI and medical ethics: the state of play. Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards certain (...)
  • Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists. Lasse Benzinger, Jelena Epping, Frank Ursin & Sabine Salloch - 2024 - BMC Medical Ethics 25 (1):1-10.
    Background Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. Methods A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who (...)
  • The Problematic “Existence” of Digital Twins: Human Intention and Moral Decision. Jeffrey P. Bishop - 2024 - American Journal of Bioethics 24 (7):45-47.
    Since surrogates are not good at predicting patient preferences, and since these decisions can cause surrogates distress, some have claimed we need an alternative way to make decisions for incapaci...
  • Digital Doppelgängers and Lifespan Extension: What Matters? Samuel Iglesias, Brian D. Earp, Cristina Voinea, Sebastian Porsdam Mann, Anda Zahiu, Nancy S. Jecker & Julian Savulescu - forthcoming - American Journal of Bioethics:1-16.
    There is an ongoing debate about the ethics of research on lifespan extension: roughly, using medical technologies to extend biological human lives beyond the current “natural” limit of about 120 years. At the same time, there is an exploding interest in the use of artificial intelligence (AI) to create “digital twins” of persons, for example by fine-tuning large language models on data specific to particular individuals. In this paper, we consider whether digital twins (or digital doppelgängers, as we refer to (...)
  • Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI. Caspar Barnes, Mateo Riobo Aboy, Timo Minssen, Jemima Winifred Allen, Brian D. Earp, Julian Savulescu & Sebastian Porsdam Mann - forthcoming - American Journal of Bioethics:1-16.
    Participation in research is supposed to be voluntary and informed. Yet it is difficult to ensure people are adequately informed about the potential uses of their biological materials when they donate samples for future research. We propose a novel consent framework which we call “demonstrated consent” that leverages blockchain technology and generative AI to address this problem. In a demonstrated consent model, each donated sample is associated with a unique non-fungible token (NFT) on a blockchain, which records in its metadata (...)
  • Reasons in the Loop: The Role of Large Language Models in Medical Co-Reasoning. Sebastian Porsdam Mann, Brian D. Earp, Peng Liu & Julian Savulescu - 2024 - American Journal of Bioethics 24 (9):105-107.
    Salloch and Eriksen (2024) present a compelling case for including patients as co-reasoners in medical decision-making involving artificial intelligence (AI). Drawing on O'Neill’s neo-Kantian frame...
  • Digital Duplicates and the Scarcity Problem: Might AI Make Us Less Scarce and Therefore Less Valuable? John Danaher & Sven Nyholm - 2024 - Philosophy and Technology 37 (3):1-20.
    Recent developments in AI and robotics enable people to create personalised digital duplicates – these are artificial, at least partial, recreations or simulations of real people. The advent of such duplicates enables people to overcome their individual scarcity. But this comes at a cost. There is a common view among ethicists and value theorists suggesting that individual scarcity contributes to or heightens the value of a life or parts of a life. In this paper, we address this topic. We make (...)
  • Large language models in medical ethics: useful but not expert. Andrea Ferrario & Nikola Biller-Andorno - 2024 - Journal of Medical Ethics 50 (9):653-654.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined the performance of GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to be (...)
  • Digital Duplicates, Relational Scarcity, and Value: Commentary on Danaher and Nyholm (2024). Cristina Voinea, Sebastian Porsdam Mann, Christopher Register, Julian Savulescu & Brian D. Earp - 2024 - Philosophy and Technology 37 (4):1-8.
    Danaher and Nyholm (2024a) have recently proposed that digital duplicates—such as fine-tuned, “personalized” large language models that closely mimic a particular individual—might reduce that individual’s scarcity and thus increase the amount of instrumental value they can bring to the world. In this commentary, we introduce the notion of relational scarcity and explore how digital duplicates would affect the value of interpersonal relationships.
  • Ethical Complexities in Utilizing Artificial Intelligence for Surrogate Decision Making. Jennifer Blumenthal-Barby, Faith E. Fletcher, Lauren Taylor, Ryan H. Nelson, Bryanna Moore, Brendan Saloner & Peter A. Ubel - 2024 - American Journal of Bioethics 24 (7):1-2.
    Ms. P. is in the ICU with respiratory failure and sepsis. She has been on a ventilator for almost a week, and now has impending kidney failure. Her children, who have been taking turns at the bedsi...
  • Respect for Autonomy Requires a Mental Model. Nada Gligorov & Pierce Randall - 2024 - American Journal of Bioethics 24 (7):53-55.
    Making decisions for incapacitated patients has been a perennial problem in bioethics. Surrogate decision-makers are sometimes expected to use substituted judgment to make such decisions. Applying...
  • As an AI Model, I Cannot Replace Human Dialogue Processes. However, I Can Assist You in Identifying Potential Alternatives. Lucas Gutiérrez-Lafrentz, V. Constanza Micolich & V. Fernando Manríquez - 2024 - American Journal of Bioethics 24 (7):58-60.
    In “A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare,” Earp et al. (2024) introduce the Personalized Patient Preference Predictor (P4), an AI model designed to ex...
  • Personal but Necessarily Predictive? Developing a Bioethics Research Agenda for AI-Enabled Decision-Making Tools. Vasiliki Rahimzadeh - 2024 - American Journal of Bioethics 24 (7):29-31.
    Any human or AI system ambitious enough to mine my every social media post, purchase, or Internet search history is likely to infer some very accurate details about me. I am in my 30s, an avid trav...
  • Potentially Perilous Preference Parrots: Why Digital Twins Do Not Respect Patient Autonomy. Georg Starke & Ralf J. Jox - 2024 - American Journal of Bioethics 24 (7):43-45.
    The debate about the chances and dangers of a patient preference predictor (PPP) has been lively ever since Annette Rid and David Wendler proposed this fascinating idea ten years ago. Given the tec...
  • Social Coercion, Patient Preferences, and AI-Substituted Judgments. Christopher A. Riddle - 2024 - American Journal of Bioethics 24 (7):60-62.
    In “A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable,” Earp et al. (2024) offer what should be considered a potentia...
  • The Personalized Patient Preference Predictor: A Harmful and Misleading Solution Losing Sight of the Problem It Claims to Solve. Heidi Mertes - 2024 - American Journal of Bioethics 24 (7):41-42.
    In the age where AI is showing increasing potential to solve problems in unprecedented ways, it becomes tempting to see it as the solution for every problem, resulting in a focus on the means (i.e....
  • Parrots at the Bedside: Making Surrogate Decisions with Stochastic Strangers. Jonathan Herington & Benzi Kluger - 2024 - American Journal of Bioethics 24 (7):32-34.
    In their recent paper, Earp and coauthors (2024) argue for the ethical desirability of personalized patient preference predictors (P4s): large-language models (LLMs) finetuned on a patient’s “own p...
  • Can P4 Support Family Involvement and Best Interests in Surrogate Decision-Making? Angela Ballantyne & Rochelle Style - 2024 - American Journal of Bioethics 24 (7):56-58.
    Earp et al. (2024) sketch a thought-provoking potential use of generative AI to enhance supported decision-making for adults who have lost capacity/competence to make their own medical decisions. T...
  • Weighing Patient Preferences: Lessons for a Patient Preferences Predictor. Ben Schwan - 2024 - American Journal of Bioethics 24 (7):38-40.
    A Patient Preference Predictor (PPP)—an algorithm capable of predicting, on the basis of demographic or more personalized data, what an incapacitated patient would prefer were they capacitated—is a...
  • Predicting Patient Preferences with Artificial Intelligence: The Problem of the Data Source. Lukas J. Meier - 2024 - American Journal of Bioethics 24 (7):48-50.
    The concept of a Patient Preference Predictor—an algorithm that supplements or replaces the process of surrogate decision-making for incapacitated patients—was first suggested a decade ago (Rid and...
  • Machine Learning Algorithms in the Personalized Modeling of Incapacitated Patients’ Decision Making—Is It a Viable Concept? Tomasz Rzepiński, Ewa Deskur-Śmielecka & Michał Chojnicki - 2024 - American Journal of Bioethics 24 (7):51-53.
    New informatics technologies are becoming increasingly important in medical practice. Machine learning (ML) and deep learning (DL) systems enable data analysis and the formulation of medical recomm...
  • AUTOGEN and the Ethics of Co-Creation with Personalized LLMs—Reply to the Commentaries. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Vynn Suren & Julian Savulescu - 2024 - American Journal of Bioethics 24 (3):6-14.
    In this reply to our commentators, we respond to ethical concerns raised about the potential use (or misuse) of personalized LLMs for academic idea and prose generation, including questions about c...