  • Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement. Alberto Giubilini, Sebastian Porsdam Mann, Cristina Voinea, Brian Earp & Julian Savulescu - 2024 - Science and Engineering Ethics 30 (6):1-15.
    In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users’ past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be (...)
  • Digital Doppelgängers and Lifespan Extension: What Matters? Samuel Iglesias, Brian D. Earp, Cristina Voinea, Sebastian Porsdam Mann, Anda Zahiu, Nancy S. Jecker & Julian Savulescu - forthcoming - American Journal of Bioethics:1-16.
    There is an ongoing debate about the ethics of research on lifespan extension: roughly, using medical technologies to extend biological human lives beyond the current “natural” limit of about 120 years. At the same time, there is an exploding interest in the use of artificial intelligence (AI) to create “digital twins” of persons, for example by fine-tuning large language models on data specific to particular individuals. In this paper, we consider whether digital twins (or digital doppelgängers, as we refer to (...)
  • Digital Duplicates and Personal Scarcity: Reply to Voinea et al and Lundgren. Sven Nyholm - 2024 - Philosophy and Technology 37 (4):1-6.
  • Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI. Caspar Barnes, Mateo Riobo Aboy, Timo Minssen, Jemima Winifred Allen, Brian D. Earp, Julian Savulescu & Sebastian Porsdam Mann - forthcoming - American Journal of Bioethics:1-16.
    Participation in research is supposed to be voluntary and informed. Yet it is difficult to ensure people are adequately informed about the potential uses of their biological materials when they donate samples for future research. We propose a novel consent framework which we call “demonstrated consent” that leverages blockchain technology and generative AI to address this problem. In a demonstrated consent model, each donated sample is associated with a unique non-fungible token (NFT) on a blockchain, which records in its metadata (...)
  • Imitation and Large Language Models. Éloïse Boisseau - 2024 - Minds and Machines 34 (4):1-24.
    The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular—for what is the focus of this paper—the cognitive capacities of large language models (LLMs). The question whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see this concept of imitation being mobilised here for sometimes contradictory purposes. After illustrating and discussing how this (...)
  • Social AI and The Equation of Wittgenstein’s Language User With Calvino’s Literature Machine. Warmhold Jan Thomas Mollema - 2024 - International Review of Literary Studies 6 (1):39-55.
    Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)? People have intuitively started ascribing emotions or consciousness to social AI (‘affective artificial agents’), with consequences that range from love to suicide. The philosophical question of whether such ascriptions are warranted is thus very relevant. This paper advances the argument that LLMs instantiate language users in Ludwig Wittgenstein’s sense but that ascribing psychological predicates to these systems remains a functionalist temptation. Social AIs (...)
  • Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
  • All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript.
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
  • Digital Duplicates and the Scarcity Problem: Might AI Make Us Less Scarce and Therefore Less Valuable? John Danaher & Sven Nyholm - 2024 - Philosophy and Technology 37 (3):1-20.
    Recent developments in AI and robotics enable people to create personalised digital duplicates – these are artificial, at least partial, recreations or simulations of real people. The advent of such duplicates enables people to overcome their individual scarcity. But this comes at a cost. There is a common view among ethicists and value theorists suggesting that individual scarcity contributes to or heightens the value of a life or parts of a life. In this paper, we address this topic. We make (...)
  • Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins? Sven Nyholm - 2023 - American Journal of Bioethics 23 (10):44-47.
    Large Language Models (LLMs) “assign probabilities to sequences of text. When given some initial text, they use these probabilities to generate new text. Large language models are language models u...
  • AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
  • AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement. Richard Volkman & Katleen Gabriels - 2023 - Science and Engineering Ethics 29 (2):1-14.
    Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process (...)
  • Generative AI and medical ethics: the state of play. Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards certain (...)
  • Respect for Autonomy Requires a Mental Model. Nada Gligorov & Pierce Randall - 2024 - American Journal of Bioethics 24 (7):53-55.
    Making decisions for incapacitated patients has been a perennial problem in bioethics. Surrogate decision-makers are sometimes expected to use substituted judgment to make such decisions. Applying...
  • The Hidden Costs of ChatGPT: A Call for Greater Transparency. Matthew Elmore - 2023 - American Journal of Bioethics 23 (10):47-49.
    For decades, healthcare has relied on data-driven algorithms to guide clinical practice. Recent advances in machine learning have opened up new possibilities in the field, enabling detailed analyse...