  • Why Moral Bioenhancement Cannot Reliably Produce Virtue.Gina Lebkuecher, Marley Hornewer, Maya V. Roytman, Sydney Samoska & Joseph M. Vukov - 2024 - Journal of Medicine and Philosophy 49 (6):560-575.
    Moral bioenhancement presents the possibility of enhancing morally desirable emotions and dispositions. While some scholars have proposed that moral bioenhancement can produce virtue, we argue that within a virtue ethics framework moral bioenhancement cannot reliably produce virtue. Moreover, on a virtue ethics framework, the pursuit of moral bioenhancement carries moral risks. To make this argument, we consider three aspects of virtue—its motivational, rational, and behavioral components. In order to be virtuous, we argue, a person must (i) take pleasure in doing (...)
  • Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement.Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society:1-12.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to act morally, instead of merely (...)
  • AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement.Richard Volkman & Katleen Gabriels - 2023 - Science and Engineering Ethics 29 (2):1-14.
    Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process (...)
  • Artificial moral experts: asking for ethical advice to artificial intelligent assistants.Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this article, we begin (...)
  • Mind embedded or extended: transhumanist and posthumanist reflections in support of the extended mind thesis.Andrea Lavazza & Mirko Farina - 2022 - Synthese 200 (6):1-24.
    The goal of this paper is to encourage participants in the debate about the locus of cognition (e.g., extended mind vs embedded mind) to turn their attention to noteworthy anthropological and sociological considerations typically (but not uniquely) arising from transhumanist and posthumanist research. Such considerations, we claim, promise to potentially give us a way out of the stalemate in which such a debate has fallen. A secondary goal of this paper is to urge trans- and post-humanistically inclined readers to embrace (...)
  • Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • Socratic nudges, virtual moral assistants and the problem of autonomy.Francisco Lara & Blanca Rodríguez-López - forthcoming - AI and Society:1-13.
    Many of our daily activities are now made more convenient and efficient by virtual assistants, and the day when they can be designed to instruct us in certain skills, such as those needed to make moral judgements, is not far off. In this paper we ask to what extent it would be ethically acceptable for these so-called virtual assistants for moral enhancement to use subtle strategies, known as “nudges”, to influence our decisions. To achieve our goal, we will first characterise (...)
  • Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement.Alberto Giubilini, Sebastian Porsdam Mann, Cristina Voinea, Brian Earp & Julian Savulescu - 2024 - Science and Engineering Ethics 30 (6):1-15.
    In this paper, we suggest that personalized LLMs trained on information written by or otherwise pertaining to an individual could serve as artificial moral advisors (AMAs) that account for the dynamic nature of personal morality. These LLM-based AMAs would harness users’ past and present data to infer and make explicit their sometimes-shifting values and preferences, thereby fostering self-knowledge. Further, these systems may also assist in processes of self-creation, by helping users reflect on the kind of person they want to be (...)