  • Digital Doppelgängers and Lifespan Extension: What Matters? Samuel Iglesias, Brian D. Earp, Cristina Voinea, Sebastian Porsdam Mann, Anda Zahiu, Nancy S. Jecker & Julian Savulescu - forthcoming - American Journal of Bioethics:1-16.
    There is an ongoing debate about the ethics of research on lifespan extension: roughly, using medical technologies to extend biological human lives beyond the current “natural” limit of about 120 years. At the same time, there is an exploding interest in the use of artificial intelligence (AI) to create “digital twins” of persons, for example by fine-tuning large language models on data specific to particular individuals. In this paper, we consider whether digital twins (or digital doppelgängers, as we refer to (...)
  • Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI. Caspar Barnes, Mateo Riobo Aboy, Timo Minssen, Jemima Winifred Allen, Brian D. Earp, Julian Savulescu & Sebastian Porsdam Mann - forthcoming - American Journal of Bioethics:1-16.
    Participation in research is supposed to be voluntary and informed. Yet it is difficult to ensure people are adequately informed about the potential uses of their biological materials when they donate samples for future research. We propose a novel consent framework which we call “demonstrated consent” that leverages blockchain technology and generative AI to address this problem. In a demonstrated consent model, each donated sample is associated with a unique non-fungible token (NFT) on a blockchain, which records in its metadata (...)
  • Generative AI and medical ethics: the state of play. Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards certain (...)
  • A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - 2024 - American Journal of Bioethics 24 (7):13-26.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)
  • The Impact of AUTOGEN and Similar Fine-Tuned Large Language Models on the Integrity of Scholarly Writing. David B. Resnik & Mohammad Hosseini - 2023 - American Journal of Bioethics 23 (10):50-52.
    Artificial intelligence (AI) large language models (LLMs), such as OpenAI’s ChatGPT, have a remarkable ability to process and generate human language but have also raised complex and novel ethica...
  • ChatGPT’s Responses to Dilemmas in Medical Ethics: The Devil is in the Details. Lukas J. Meier - 2023 - American Journal of Bioethics 23 (10):63-65.
    In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress’ four prima-facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot’s output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point when (...)
  • Digital Duplicates and the Scarcity Problem: Might AI Make Us Less Scarce and Therefore Less Valuable? John Danaher & Sven Nyholm - 2024 - Philosophy and Technology 37 (3):1-20.
    Recent developments in AI and robotics enable people to create _personalised digital duplicates_ – these are artificial, at least partial, recreations or simulations of real people. The advent of such duplicates enables people to overcome their individual scarcity. But this comes at a cost. There is a common view among ethicists and value theorists suggesting that individual scarcity contributes to or heightens the value of a life or parts of a life. In this paper, we address this topic. We make (...)
  • AI and the need for justification (to the patient). Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Deficient epistemic virtues and prevalence of epistemic vices as precursors to transgressions in research misconduct. Bor Luen Tang - 2024 - Research Ethics 20 (2):272-287.
    Scientific research is supposed to acquire or generate knowledge, but such a purpose would be severely undermined by instances of research misconduct (RM) and questionable research practices (QRP). RM and QRP are often framed in terms of moral transgressions by individuals (bad apples) whose aberrant acts could be made conducive by shortcomings in regulatory measures of organizations or institutions (bad barrels). This notion presupposes, to an extent, that the erring parties know exactly what they are doing is wrong and morally (...)
  • Generative AI and the Foregrounding of Epistemic Injustice in Bioethics. Calvin Wai-Loon Ho - 2023 - American Journal of Bioethics 23 (10):99-102.
    OpenAI’s Chat Generative Pre-trained Transformer (ChatGPT), Google’s Bard and other generative artificial intelligence (GenAI) technologies can greatly enhance the capability of healthcare profess...
  • Generative AI and Ethical Analysis. John McMillan - 2023 - American Journal of Bioethics 23 (10):42-44.
    Cohen (2023), Rahimzadeh and colleagues (2023), and Porsdam Mann and colleagues (2023) have written thorough and well-canvassed pieces about the ethical and conceptual challenges of large language...
  • Meaning by Courtesy: LLM-Generated Texts and the Illusion of Content. Gary Ostertag - 2023 - American Journal of Bioethics 23 (10):91-93.
    Contrary to how it may seem when we observe its output, an [LLM] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to...
  • Why Personalized Large Language Models Fail to Do What Ethics is All About. Sebastian Laacke & Charlotte Gauckler - 2023 - American Journal of Bioethics 23 (10):60-63.
    Porsdam Mann and colleagues provide an overview of opportunities and risks associated with the use of personalized large language models (LLMs) for text production in (bio)ethics (Porsdam Mann et al...
  • Künstliche Intelligenz in der Ethik? [Artificial Intelligence in Ethics?] Sabine Salloch - 2023 - Ethik in der Medizin 35 (3):337-340.
  • Reasons in the Loop: The Role of Large Language Models in Medical Co-Reasoning. Sebastian Porsdam Mann, Brian D. Earp, Peng Liu & Julian Savulescu - 2024 - American Journal of Bioethics 24 (9):105-107.
    Salloch and Eriksen (2024) present a compelling case for including patients as co-reasoners in medical decision-making involving artificial intelligence (AI). Drawing on O'Neill’s neo-Kantian frame...
  • AUTOGEN and the Ethics of Co-Creation with Personalized LLMs—Reply to the Commentaries. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Vynn Suren & Julian Savulescu - 2024 - American Journal of Bioethics 24 (3):6-14.
    In this reply to our commentators, we respond to ethical concerns raised about the potential use (or misuse) of personalized LLMs for academic idea and prose generation, including questions about c...
  • Digital Duplicates, Relational Scarcity, and Value: Commentary on Danaher and Nyholm (2024). Cristina Voinea, Sebastian Porsdam Mann, Christopher Register, Julian Savulescu & Brian D. Earp - 2024 - Philosophy and Technology 37 (4):1-8.
    Danaher and Nyholm (2024a) have recently proposed that digital duplicates—such as fine-tuned, “personalized” large language models that closely mimic a particular individual—might reduce that individual’s _scarcity_ and thus increase the amount of instrumental value they can bring to the world. In this commentary, we introduce the notion of _relational scarcity_ and explore how digital duplicates would affect the value of interpersonal relationships.
  • Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models. Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
  • Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins? Sven Nyholm - 2023 - American Journal of Bioethics 23 (10):44-47.
    Large Language Models (LLMs) “assign probabilities to sequences of text. When given some initial text, they use these probabilities to generate new text. Large language models are language models u...
  • Reimagining Scholarship: A Response to the Ethical Concerns of AUTOGEN. Hazem Zohny - 2023 - American Journal of Bioethics 23 (10):96-99.
    In their recent paper “AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle,” Porsdam Mann et al. (2023) demonstrate a technique for fine-tuning the l...
  • Generative-AI-Generated Challenges for Health Data Research. Kayte Spector-Bagdady - 2023 - American Journal of Bioethics 23 (10):1-5.
    Generative artificial intelligence (GenAI) promises to revolutionize data-driven fields (Milmo 2023). Building on decades of large language modeling (LLM) (Toner 2023), GenAI can collect, harmonize...
  • China’s New Regulations on Generative AI: Implications for Bioethics. Li Du & Kalina Kamenova - 2023 - American Journal of Bioethics 23 (10):52-54.
    Cohen’s article (2023) on the significance of ChatGPT for bioethics suggests that little is known about the development of generative AI (“GAI”) in China and other national markets. It warns about...
  • Large Language Models and Inclusivity in Bioethics Scholarship. Sumeeta Varma - 2023 - American Journal of Bioethics 23 (10):105-107.
    In the target article, Porsdam Mann and colleagues (2023) broadly survey the ethical opportunities and risks of using general and personalized large language models (LLMs) to generate academic pros...
  • AI Can Show You the World. Marieke Bak - 2023 - American Journal of Bioethics 23 (10):107-110.
    As Cohen (2023) describes, the discourse around ChatGPT has been focused on potential risks while AI-based chatbots could also positively empower patients. There are other potential benefits to Cha...