  • The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach which aims to reconcile (...)
  • Ectogestative Technology and the Beginning of Life. Lily Frank, Julia Hermann, Ilona Kavege & Anna Puzio - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 113–140.
    How could ectogestative technology disrupt gender roles, parenting practices, and concepts such as ‘birth’, ‘body’, or ‘parent’? In this chapter, we situate this emerging technology in the context of the history of reproductive technologies and analyse the potential social and conceptual disruptions to which it could contribute. An ectogestative device, better known as ‘artificial womb’, enables the extra-uterine gestation of a human being, or mammal more generally. It is currently developed with the main goal of improving the survival chances of (...)
  • Social Robots and Society. Sven Nyholm, Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Lohr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well (...)
  • Anthropological Crisis or Crisis in Moral Status: a Philosophy of Technology Approach to the Moral Consideration of Artificial Intelligence. Joan Llorca Albareda - 2024 - Philosophy and Technology 37 (1):1-26.
    The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings begins to show signs of a number of properties that are nuclear to the understanding of moral agency. It makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter within the circle of moral consideration. This raises the (...)
  • Living with AI personal assistant: an ethical appraisal. Lorraine K. C. Yeung, Cecilia S. Y. Tam, Sam S. S. Lau & Mandy M. Ko - forthcoming - AI and Society:1-16.
    Mark Coeckelbergh (Int J Soc Robot 1:217–221, 2009) argues that robot ethics should investigate what interaction with robots can do to humans rather than focusing on the robot’s moral status. We should ask what robots do to our sociality and whether human–robot interaction can contribute to the human good and human flourishing. This paper extends Coeckelbergh’s call and investigates what it means to live with disembodied AI-powered agents. We address the following question: Can the human–AI interaction contribute to our moral (...)
  • Interpreting ordinary uses of psychological and moral terms in the AI domain. Hyungrae Noh - 2023 - Synthese 201 (6):1-33.
    Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts for semantic changes concerning psychological and moral terms used in the AI domain will be discussed: the technical (...)
  • The Philosophy of Online Manipulation. Michael Klenk & Fleur Jongepier (eds.) - 2022 - Routledge.
    Are we being manipulated online? If so, is being manipulated by online technologies and algorithmic systems notably different from human forms of manipulation? And what is under threat exactly when people are manipulated online? This volume provides philosophical and conceptual depth to debates in digital ethics about online manipulation. The contributions explore the ramifications of our increasingly consequential interactions with online technologies such as online recommender systems, social media, user-friendly design, micro-targeting, default-settings, gamification, and real-time profiling. The authors in this (...)
  • Neuroenhancement, the Criminal Justice System, and the Problem of Alienation. Jukka Varelius - 2019 - Neuroethics 13 (3):325-335.
    It has been suggested that neuroenhancements could be used to improve the abilities of criminal justice authorities. Judges could be made more able to make adequately informed and unbiased decisions, for example. Yet, while such a prospect appears appealing, the views of neuroenhanced criminal justice authorities could also be alien to the unenhanced public. This could compromise the legitimacy and functioning of the criminal justice system. In this article, I assess possible solutions to this problem. I maintain that none of (...)
  • The artificial view: toward a non-anthropocentric account of moral patiency. Fabio Tollon - 2020 - Ethics and Information Technology 23 (2):147-155.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  • On and beyond artifacts in moral relations: accounting for power and violence in Coeckelbergh’s social relationism. Fabio Tollon & Kiasha Naidoo - 2023 - AI and Society 38 (6):2609-2618.
    The ubiquity of technology in our lives and its culmination in artificial intelligence raises questions about its role in our moral considerations. In this paper, we address a moral concern in relation to technological systems given their deep integration in our lives. Coeckelbergh develops a social-relational account, suggesting that it can point us toward a dynamic, historicised evaluation of moral concern. While agreeing with Coeckelbergh’s move away from grounding moral concern in the ontological properties of entities, we suggest that it (...)
  • Socially responsive technologies: toward a co-developmental path. Daniel W. Tigard, Niël H. Conradie & Saskia K. Nagel - 2020 - AI and Society 35 (4):885-893.
    Robotic and artificially intelligent (AI) systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction (HCI and HRI), we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • Meaningful Lives in an Age of Artificial Intelligence: A Reply to Danaher. Lucas Scripter - 2022 - Science and Engineering Ethics 28 (1):1-9.
    Does the rise of artificial intelligence pose a threat to human sources of meaning? While much ink has been spilled on how AI could undercut meaningful human work, John Danaher has raised the stakes by claiming that AI could “sever” human beings from non-work-related sources of meaning—specifically, those related to intellectual and moral goods. Against this view, I argue that his suggestion that AI poses a threat to these areas of meaningful activity is overstated. Self-transformative activities pose a hard limit (...)
  • The Spectrum of Responsibility Ascription for End Users of Neurotechnologies. Andreas Schönau - 2021 - Neuroethics 14 (3):423-435.
    Invasive neural devices offer novel prospects for motor rehabilitation on different levels of agentive behavior. From a functional perspective, they interact with, support, or enable human intentional actions in such a way that movement capabilities are regained. However, when there is a technical malfunction resulting in an unintended movement, the complexity of the relationship between the end user and the device sometimes makes it difficult to determine who is responsible for the outcome – a circumstance that has been coined as (...)
  • Are we done with (Wordy) manifestos? Towards an introverted digital humanism. Giacomo Pezzano - 2024 - Journal of Responsible Technology 17 (C):100078.
  • The ethics of crashes with self‐driving cars: A roadmap, I. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12507.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then (...)
  • Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? Sven Nyholm - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):76-88.
    This paper discusses two opposing views about the relation between artificial intelligence (AI) and human intelligence: on the one hand, a worry that heavy reliance on AI technologies might make people less intelligent and, on the other, a hope that AI technologies might serve as a form of cognitive enhancement. The worry relates to the notion that if we hand over too many intelligence-requiring tasks to AI technologies, we might end up with fewer opportunities to train our own intelligence. Concerning (...)
  • Are superintelligent robots entitled to human rights? John-Stewart Gordon - 2022 - Ratio 35 (3):181-193.
  • AI and Phronesis. Dan Feldman & Nir Eisikovits - 2022 - Moral Philosophy and Politics 9 (2):181-199.
    We argue that the growing prevalence of statistical machine learning in everyday decision making – from creditworthiness to police force allocation – effectively replaces many of our humdrum practical judgments and that this will eventually undermine our capacity for making such judgments. We lean on Aristotle’s famous account of how phronesis and moral virtues develop to make our case. If Aristotle is right that the habitual exercise of practical judgment allows us to incrementally hone virtues, and if AI saves us (...)
  • Toward the search for the perfect blade runner: a large-scale, international assessment of a test that screens for “humanness sensitivity”. Robert Epstein, Maria Bordyug, Ya-Han Chen, Yijing Chen, Anna Ginther, Gina Kirkish & Holly Stead - forthcoming - AI and Society:1-21.
    We introduce a construct called “humanness sensitivity,” which we define as the ability to recognize uniquely human characteristics. To evaluate the construct, we used a “concurrent study design” to conduct an internet-based study with a convenience sample of 42,063 people from 88 countries. We sought to determine to what extent people could identify subtle characteristics of human behavior, thinking, emotions, and social relationships which currently distinguish humans from non-human entities such as bots. Many people were surprisingly poor at this task, even (...)
  • The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm. Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition to that, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge to human responsibility, as long as humans can control them and update them. He argues that the agency exercised in (...)
  • Technological Change and Human Obsolescence. John Danaher - 2022 - Techné: Research in Philosophy and Technology 26 (1):31-56.
    Can human life have value in a world in which humans are rendered obsolete by technological advances? This article answers this question by developing an extended analysis of the axiological impact of human obsolescence. In doing so, it makes four main arguments. First, it argues that human obsolescence is a complex phenomenon that can take on at least four distinct forms. Second, it argues that one of these forms of obsolescence is not a coherent concept and hence not a plausible (...)
  • The rise of artificial intelligence and the crisis of moral passivity. Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue firstly that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all our (...)
  • Automated decision-making and the problem of evil. Andrea Berber - forthcoming - AI and Society:1-10.
    The intention of this paper is to point to the dilemma humanity may face in light of AI advancements. The dilemma is whether to create a world with less evil or maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  • Challenges for an Ontology of Artificial Intelligence. Scott H. Hawley - 2019 - Perspectives on Science and Christian Faith 71 (2):83-95.
    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be (...)