  • Doing Good with Virtual Reality: The Ethics of Using Virtual Simulations for Improving Human Morality. Jon Rueda - 2023 - In Andrew Kissel & Erick José Ramirez (eds.), Exploring Extended Realities: Metaphysical, Psychological, and Ethical Challenges. Routledge.
    Much of the excitement and concern with virtual reality (VR) has to do with the impact of virtual experiences on our moral conduct in the “real world”. VR technologies offer vivid simulations that may impact prosocial dispositions and abilities or emotions related to morality. Whereas some experiences could facilitate particular moral behaviors, VR could also inculcate bad moral habits or lead to the surreptitious development of nefarious moral traits. In this chapter, I offer an overview of the ethical debate about (...)
  • Automated decision-making and the problem of evil. Andrea Berber - 2023 - AI and Society:1-10.
    The intention of this paper is to point to the dilemma humanity may face in light of AI advancements. The dilemma is whether to create a world with less evil or maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • Value alignment, human enhancement, and moral revolutions. Ariela Tubert & Justin Tiehen - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Human beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior, and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment (...)
  • Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? Sven Nyholm - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):76-88.
    This paper discusses two opposing views about the relation between artificial intelligence (AI) and human intelligence: on the one hand, a worry that heavy reliance on AI technologies might make people less intelligent and, on the other, a hope that AI technologies might serve as a form of cognitive enhancement. The worry relates to the notion that if we hand over too many intelligence-requiring tasks to AI technologies, we might end up with fewer opportunities to train our own intelligence. Concerning (...)
  • AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA). Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. (...)
  • Artificial moral experts: asking for ethical advice to artificial intelligent assistants. Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this article, we begin (...)
  • The Ethics of AI Ethics: A Constructive Critique. Jan-Christoph Heilinger - 2022 - Philosophy and Technology 35 (3):1-20.
    The paper presents an ethical analysis and constructive critique of the current practice of AI ethics. It identifies conceptual, substantive, and procedural challenges and outlines strategies to address them. The strategies include countering the hype by understanding AI as ubiquitous infrastructure, bringing neglected issues of ethics and justice, such as structural background injustices, into the scope of AI ethics, and making the procedures and fora of AI ethics more inclusive and better informed with regard to philosophical ethics. These measures (...)
  • Climate Change, Moral Bioenhancement and the Ultimate Mostropic. Jon Rueda - 2020 - Ramon Llull Journal of Applied Ethics 11:277-303.
    Tackling climate change is one of the most demanding challenges of humanity in the 21st century. Still, the efforts to mitigate the current environmental crisis do not seem enough to deal with the increased existential risks for the human and other species. Persson and Savulescu have proposed that our evolutionarily forged moral psychology is one of the impediments to facing as enormous a problem as global warming. They suggested that if we want to address properly some of the most pressing (...)
  • Therapeutic Chatbots as Cognitive-Affective Artifacts. J. P. Grodniewicz & Mateusz Hohol - 2024 - Topoi 43 (3):795-807.
    Conversational Artificial Intelligence (CAI) systems (also known as AI “chatbots”) are among the most promising examples of the use of technology in mental health care. With already millions of users worldwide, CAI is likely to change the landscape of psychological help. Most researchers agree that existing CAIs are not “digital therapists” and using them is not a substitute for psychotherapy delivered by a human. But if they are not therapists, what are they, and what role can they play in mental (...)
  • Internet of Bodies, datafied embodiment and our quantified religious future. Zheng Liu - 2023 - HTS Theological Studies 79 (1):12.
    This article discusses the datafied embodiment of the Internet of Bodies (IoB) technology by applying the methodology of postphenomenology. Firstly, the author claims that the boundaries of dual distinction between real and virtual, online and offline, and embodiment and disembodiment have become increasingly blurred. Secondly, the author argues that postphenomenology can help us to study today’s emerging technologies’ mediating role in human–world relations. Thirdly, the author analyses the implication of embodiment from phenomenological and postphenomenological perspectives and then demonstrates in what (...)
  • Trust and Psychedelic Moral Enhancement. Emma C. Gordon - 2022 - Neuroethics 15 (2):1-14.
    Moral enhancement proposals struggle to be both plausible and ethically defensible while nevertheless interestingly distinct from both cognitive enhancement as well as (mere) moral education. Brian Earp (_Royal Institute of Philosophy Supplement_ 83:415–439) suggests that a promising middle ground lies in focusing on the (suitably qualified) use of psychedelics as _adjuncts_ to moral development. But what would such an adjunctive use of psychedelics look like in practice? In this paper, I draw on literature from three areas where techniques (...)
  • Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions. Sebastian Krügel, Andreas Ostermaier & Matthias Uhl - 2022 - Philosophy and Technology 35 (1):1-37.
    Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest (...)
  • Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias. Ying-Tung Lin, Tzu-Wei Hung & Linus Ta-Lun Huang - 2020 - Philosophy and Technology 34 (S1):65-90.
    This paper focuses on the potential of “equitech”—AI technology that improves equity. Recently, interventions have been developed to reduce the harm of implicit bias, the automatic form of stereotype or prejudice that contributes to injustice. However, these interventions—some of which are assisted by AI-related technology—have significant limitations, including unintended negative consequences and general inefficacy. To overcome these limitations, we propose a two-dimensional framework to assess current AI-assisted interventions and explore promising new ones. We begin by using the case of human (...)
  • AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement. Richard Volkman & Katleen Gabriels - 2023 - Science and Engineering Ethics 29 (2):1-14.
    Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • Neuroenhancement, the Criminal Justice System, and the Problem of Alienation. Jukka Varelius - 2019 - Neuroethics 13 (3):325-335.
    It has been suggested that neuroenhancements could be used to improve the abilities of criminal justice authorities. Judges could be made more able to make adequately informed and unbiased decisions, for example. Yet, while such a prospect appears appealing, the views of neuroenhanced criminal justice authorities could also be alien to the unenhanced public. This could compromise the legitimacy and functioning of the criminal justice system. In this article, I assess possible solutions to this problem. I maintain that none of (...)
  • Socratic nudges, virtual moral assistants and the problem of autonomy. Francisco Lara & Blanca Rodríguez-López - forthcoming - AI and Society:1-13.
    Many of our daily activities are now made more convenient and efficient by virtual assistants, and the day when they can be designed to instruct us in certain skills, such as those needed to make moral judgements, is not far off. In this paper we ask to what extent it would be ethically acceptable for these so-called virtual assistants for moral enhancement to use subtle strategies, known as “nudges”, to influence our decisions. To achieve our goal, we will first characterise (...)
  • Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use. Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it only would be if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • Towards a systematic evaluation of moral bioenhancement. Karolina Kudlek - 2022 - Theoretical Medicine and Bioethics 43 (2-3):95-110.
    The ongoing debate about moral bioenhancement has been exceptionally stimulating, but it is defined by extreme polarization and lack of consensus about any relevant aspect of MBE. This article reviews the discussion on MBE, showing that a lack of consensus about enhancements’ desirable features and the constant development of the debate calls for a more rigorous ethical analysis. I identify a list of factors that may be of crucial importance for illuminating the matters of moral permissibility in the MBE debate (...)