References
  • Augmenting research consent: should large language models (LLMs) be used for informed consent to clinical research? Jemima W. Allen, Owen Schaefer, Sebastian Porsdam Mann, Brian D. Earp & Dominic Wilkinson - forthcoming - Research Ethics.
    The integration of artificial intelligence (AI), particularly large language models (LLMs) like OpenAI’s ChatGPT, into clinical research could significantly enhance the informed consent process. This paper critically examines the ethical implications of employing LLMs to facilitate consent in clinical research. LLMs could offer considerable benefits, such as improving participant understanding and engagement, broadening participants’ access to the relevant information for informed consent and increasing the efficiency of consent procedures. However, these theoretical advantages are accompanied by ethical risks, including the potential (...)
  • Generative AI and medical ethics: the state of play. Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. 1 Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards certain (...)
  • Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. Thilo Hagendorff - 2024 - Minds and Machines 34 (4):1-27.
    The advent of generative artificial intelligence and the widespread adoption of it in society engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them (...)
  • A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - 2024 - American Journal of Bioethics 24 (7):13-26.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)
  • Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI. Caspar Barnes, Mateo Riobo Aboy, Timo Minssen, Jemima Winifred Allen, Brian D. Earp, Julian Savulescu & Sebastian Porsdam Mann - forthcoming - American Journal of Bioethics:1-16.
    Participation in research is supposed to be voluntary and informed. Yet it is difficult to ensure people are adequately informed about the potential uses of their biological materials when they donate samples for future research. We propose a novel consent framework which we call “demonstrated consent” that leverages blockchain technology and generative AI to address this problem. In a demonstrated consent model, each donated sample is associated with a unique non-fungible token (NFT) on a blockchain, which records in its metadata (...)
  • Reasons in the Loop: The Role of Large Language Models in Medical Co-Reasoning. Sebastian Porsdam Mann, Brian D. Earp, Peng Liu & Julian Savulescu - 2024 - American Journal of Bioethics 24 (9):105-107.
    Salloch and Eriksen (2024) present a compelling case for including patients as co-reasoners in medical decision-making involving artificial intelligence (AI). Drawing on O'Neill’s neo-Kantian frame...
  • Beyond algorithmic trust: interpersonal aspects on consent delegation to LLMs. Zeineb Sassi, Michael Hahn, Sascha Eickmann, Anne Herrmann-Johns & Max Tretter - 2024 - Journal of Medical Ethics 50 (2):139-139.
    In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al 1 explore the ethical complexities involved in handing over parts of the process of obtaining medical consent to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients to inform them about upcoming medical procedures and assist in the process of obtaining informed consent.1 They focus specifically on challenges related to accuracy (4–5), trust (5), privacy (5), (...)
  • AUTOGEN and the Ethics of Co-Creation with Personalized LLMs—Reply to the Commentaries. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Vynn Suren & Julian Savulescu - 2024 - American Journal of Bioethics 24 (3):6-14.
    In this reply to our commentators, we respond to ethical concerns raised about the potential use (or misuse) of personalized LLMs for academic idea and prose generation, including questions about c...
  • Consent with complications in mind. Edwin Jesudason - 2024 - Journal of Medical Ethics 50 (11):758-761.
    Parity of esteem describes an aspiration to see mental health valued as much as physical. Proponents point to poorer funding of mental health services, greater stigma and poorer physical health for those with mental illness. Stubborn persistence of such disparities suggests a need to do more than stipulate ethical and legal obligations toward justice or fairness. Here, I propose that we should rely more on our legal obligations toward informed consent. The latter requires clinicians to disclose information about risks in a (...)