  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Doing Good with Virtual Reality: The Ethics of Using Virtual Simulations for Improving Human Morality. Jon Rueda (ed.) - 2023 - New York: Routledge.
    Much of the excitement and concern with virtual reality (VR) has to do with the impact of virtual experiences on our moral conduct in the “real world”. VR technologies offer vivid simulations that may impact prosocial dispositions and abilities or emotions related to morality. Whereas some experiences could facilitate particular moral behaviors, VR could also inculcate bad moral habits or lead to the surreptitious development of nefarious moral traits. In this chapter, I offer an overview of the ethical debate about (...)
  • AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA). Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. (...)
  • Living with AI personal assistant: an ethical appraisal. Lorraine K. C. Yeung, Cecilia S. Y. Tam, Sam S. S. Lau & Mandy M. Ko - forthcoming - AI and Society:1-16.
    Mark Coeckelbergh (Int J Soc Robot 1:217–221, 2009) argues that robot ethics should investigate what interaction with robots can do to humans rather than focusing on the robot’s moral status. We should ask what robots do to our sociality and whether human–robot interaction can contribute to the human good and human flourishing. This paper extends Coeckelbergh’s call and investigates what it means to live with disembodied AI-powered agents. We address the following question: Can the human–AI interaction contribute to our moral (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Ethical Issues with Artificial Ethics Assistants. Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement. Richard Volkman & Katleen Gabriels - 2023 - Science and Engineering Ethics 29 (2):1-14.
    Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process (...)
  • Neuroenhancement, the Criminal Justice System, and the Problem of Alienation. Jukka Varelius - 2019 - Neuroethics 13 (3):325-335.
    It has been suggested that neuroenhancements could be used to improve the abilities of criminal justice authorities. Judges could be made more able to make adequately informed and unbiased decisions, for example. Yet, while such a prospect appears appealing, the views of neuroenhanced criminal justice authorities could also be alien to the unenhanced public. This could compromise the legitimacy and functioning of the criminal justice system. In this article, I assess possible solutions to this problem. I maintain that none of (...)
  • Value alignment, human enhancement, and moral revolutions. Ariela Tubert & Justin Tiehen - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Human beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior, and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment (...)
  • Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents (AVAs) in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a (...)
  • Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? Sven Nyholm - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):76-88.
    This paper discusses two opposing views about the relation between artificial intelligence (AI) and human intelligence: on the one hand, a worry that heavy reliance on AI technologies might make people less intelligent and, on the other, a hope that AI technologies might serve as a form of cognitive enhancement. The worry relates to the notion that if we hand over too many intelligence-requiring tasks to AI technologies, we might end up with fewer opportunities to train our own intelligence. Concerning (...)
  • AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual’s capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • Artificial Intelligence as a Socratic Assistant for Moral Enhancement. Francisco Lara & Jan Deckers - 2019 - Neuroethics 13 (3):275-287.
    The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several (...)
  • Towards a systematic evaluation of moral bioenhancement. Karolina Kudlek - 2022 - Theoretical Medicine and Bioethics 43 (2-3):95-110.
    The ongoing debate about moral bioenhancement (MBE) has been exceptionally stimulating, but it is defined by extreme polarization and a lack of consensus about any relevant aspect of MBE. This article reviews the discussion on MBE, showing that the lack of consensus about enhancements’ desirable features and the constant development of the debate call for a more rigorous ethical analysis. I identify a list of factors that may be of crucial importance for illuminating the matters of moral permissibility in the MBE debate (...)
  • Robotic Nudges for Moral Improvement through Stoic Practice. Michał Klincewicz - 2019 - Techné: Research in Philosophy and Technology 23 (3):425-455.
    This paper offers a theoretical framework that can be used to derive viable engineering strategies for the design and development of robots that can nudge people towards moral improvement. The framework relies on research in developmental psychology and insights from Stoic ethics. Stoicism recommends contemplative practices that over time help one develop dispositions to behave in ways that improve the functioning of mechanisms that are constitutive of moral cognition. Robots can nudge individuals towards these practices and can therefore help develop (...)
  • Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use. Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical (...)
  • Building Moral Robots: Ethical Pitfalls and Challenges. John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. (...)
  • Robots as moral environments. Tomislav Furlanis, Takayuki Kanda & Dražen Brščić - forthcoming - AI and Society:1-19.
    In this philosophical exploration, we investigate the concept of robotic moral environment interaction. The common view understands moral interaction to occur between agents endowed with ethical and interactive capacities. However, recent developments in moral philosophy argue that moral interaction also occurs in relation to the environment. Here conditions and situations of the environment contribute to human moral cognition and the formation of our moral experiences. Based on this philosophical position, we imagine robots interacting as moral environments—a novel conceptualization of human–robot (...)
  • A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - forthcoming - American Journal of Bioethics:1-14.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)
  • How to Use AI Ethically for Ethical Decision-Making. Joanna Demaree-Cotton, Brian D. Earp & Julian Savulescu - 2022 - American Journal of Bioethics 22 (7):1-3.
  • AI-powered recommender systems and the preservation of personal autonomy. Juan Ignacio del Valle & Francisco Lara - forthcoming - AI and Society:1-13.
    Recommender Systems (RecSys) have been around since the early days of the Internet, helping users navigate the vast ocean of information and the ever-growing range of options available to us. The range of tasks for which one could use a RecSys is expanding as the technical capabilities grow, with the disruption of Machine Learning representing a tipping point in this domain, as in many others. However, the increase in the technical capabilities of AI-powered RecSys did not (...)
  • Toward an Ethics of AI Assistants: an Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have shown a broad capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • The ethics of digital well-being: a thematic review. Christopher Burr, Mariarosaria Taddeo & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (4):2313–2343.
    This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several (...)
  • In Search of a Mission: Artificial Intelligence in Clinical Ethics. Nikola Biller-Andorno, Andrea Ferrario & Sophie Gloeckler - 2022 - American Journal of Bioethics 22 (7):23-25.
    Artificial intelligence has found its way into many areas of human life, serving a range of purposes. Sometimes AI tools are designed to help humans eliminate high-volume, tedious, routine tasks...
  • Vertrouwen in de geneeskunde en kunstmatige intelligentie [Trust in medicine and artificial intelligence]. Lily Frank & Michal Klincewicz - 2021 - Podium Voor Bioethiek 3 (28):37-42.
    Artificial intelligence (AI) and systems that work with machine learning (ML) can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions can thus take the place of professional judgments. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.