Robot Ethics

Edited by Vincent C. Müller (Universität Erlangen-Nürnberg)

Contents
169 found (showing 1 – 50)
  1. Can a robot lie? Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
    4 citations
  2. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising way to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade “paperclip maximizer” that it is in (...)
  3. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - forthcoming - Journal of Ethics and Social Philosophy.
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  4. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  5. Engineered Wisdom for Learning Machines.Brett Karlan & Colin Allen - 2024 - Journal of Experimental and Theoretical Artificial Intelligence 36 (2):257-272.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
    1 citation
  6. A Case for 'Killer Robots': Why in the Long Run Martial AI May Be Good for Peace.Ognjen Arandjelović - 2023 - Journal of Ethics, Entrepreneurship and Technology 3 (1).
    Purpose: The remarkable increase of sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, the potential of so-called 'killer robots' ceasing to be a subject of fiction. -/- Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for the ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the existing ethical (...)
  7. Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
    9 citations
  8. Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication.Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications 2023.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)
  9. Robot Ethics. Mark Coeckelbergh (2022). Cambridge, MIT Press. vii + 191 pp, $16.95 (pb). [REVIEW] Nicholas Barrow - 2023 - Journal of Applied Philosophy (5):970-972.
  10. Should the State Prohibit the Production of Artificial Persons?Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  11. Reasons to Punish Autonomous Robots.Zac Cogley - 2023 - The Gradient 14.
    I here consider the reasonableness of punishing future autonomous military robots. I argue that it is an engineering desideratum that these devices be responsive to moral considerations as well as human criticism and blame. Additionally, I argue that someday it will be possible to build such machines. I use these claims to respond to the no subject of punishment objection to deploying autonomous military robots, the worry being that an “accountability gap” could result if the robot committed a war crime. (...)
  12. Les revendications de droits pour les robots : constructions et conflits autour d’une éthique de la robotique.Charles Corval - 2023 - Implications Philosophiques.
    This work examines contemporary claims of rights for robots. It presents the main argumentative forms that have been developed in favour of ethical consideration or positive rights for these machines. It connects these arguments with action-research work in order to produce a critical assessment of the idea of robot rights. Finally, it shows the complex relationship between narratives of modernity and the claiming of rights for robots. (Translated from the French abstract.)
  13. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of.Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the wider public (...)
  14. What Confucian Ethics Can Teach Us About Designing Caregiving Robots for Geriatric Patients.Alexis Elder - 2023 - Digital Society 2 (1).
    Caregiving robots are often lauded for their potential to assist with geriatric care. While seniors can be wise and mature, possessing valuable life experience, they can also present a variety of ethical challenges, from prevalence of racism and sexism, to troubled relationships, histories of abusive behavior, and aggression, mood swings and impulsive behavior associated with cognitive decline. I draw on Confucian ethics, especially the concept of filial piety, to address these issues. Confucian scholars have developed a rich set of theoretical (...)
  15. Robots, Rebukes, and Relationships: Confucian Ethics and the Study of Human-Robot Interactions.Alexis Elder - 2023 - Res Philosophica 100 (1):43-62.
    The status and functioning of shame is contested in moral psychology. In much of anglophone philosophy and psychology, it is presumed to be largely destructive, while in Confucian philosophy and many East Asian communities, it is positively associated with moral development. Recent work in human-robot interaction offers a unique opportunity to investigate how shame functions while controlling for confounding variables of interpersonal interaction. One research program suggests a Confucian strategy for using robots to rebuke participants, but results from experiments with (...)
    1 citation
  16. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  17. A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  18. Connected and Automated Vehicles: Integrating Engineering and Ethics.Fabio Fossa & Federico Cheli (eds.) - 2023 - Cham: Springer.
    This book reports on theoretical and practical analyses of the ethical challenges connected to driving automation. It also aims at discussing issues that have arisen from the European Commission 2020 report “Ethics of Connected and Automated Vehicles. Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility”. Gathering contributions by philosophers, social scientists, mechanical engineers, and UI designers, the book discusses key ethical concerns relating to responsibility and personal autonomy, privacy, safety, and cybersecurity, as well as explainability and human-machine interaction. On (...)
  19. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
    3 citations
  20. Social Robots and Society.Sven Nyholm, Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Lohr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well (...)
  21. Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
    1 citation
  22. Challenges for ‘Community’ in Science and Values: Cases from Robotics Research.Charles H. Pence & Daniel J. Hicks - 2023 - Humana.Mente Journal of Philosophical Studies 16 (44):1-32.
    Philosophers of science often make reference — whether tacitly or explicitly — to the notion of a scientific community. Sometimes, such references are useful to make our object of analysis tractable in the philosophy of science. For others, tracking or understanding particular features of the development of science proves to be tied to notions of a scientific community either as a target of theoretical or social intervention. We argue that the structure of contemporary scientific research poses two unappreciated, or at (...)
  23. Authenticity and co-design: On responsibly creating relational robots for children.Milo Phillips-Brown, Marion Boulicault, Jacqueline Kory-Westland, Stephanie Nguyen & Cynthia Breazeal - 2023 - In Mizuko Ito, Remy Cross, Karthik Dinakar & Candice Odgers (eds.), Algorithmic Rights and Protections for Children. MIT Press. pp. 85-121.
    Meet Tega. Blue, fluffy, and AI-enabled, Tega is a relational robot: a robot designed to form relationships with humans. Created to aid in early childhood education, Tega talks with children, plays educational games with them, solves puzzles, and helps in creative activities like making up stories and drawing. Children are drawn to Tega, describing him as a friend, and attributing thoughts and feelings to him ("he's kind," "if you just left him here and nobody came to play with him, he (...)
  24. Human-Centered AI: The Aristotelian Approach.Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  25. The Icon and the Idol: A Christian Perspective on Sociable Robots.Jordan Joseph Wales - 2023 - In Jens Zimmermann (ed.), Human Flourishing in a Technological World: A Theological Perspective. Oxford University Press. pp. 94-115.
    Consulting early and medieval Christian thinkers, I theologically analyze the question of how we are to construe and live well with the sociable robot under the ancient theological concept of “glory”—the manifestation of God’s nature and life outside of himself. First, the oft-noted Western wariness toward robots may in part be rooted in protecting a certain idea of the “person” as a relational subject capable of self-gift. Historically, this understanding of the person derived from Christian belief in God the Trinity, (...)
  26. A Kantian Course Correction for Machine Ethics.Ava Thomas Wright - 2023 - In Jonathan Tsou & Gregory Robson (eds.), Technology Ethics: A Philosophical Introduction and Readings. New York: Routledge. pp. 141-151.
    The central challenge of “machine ethics” is to build autonomous machine agents that act morally rightly. But how can we build autonomous machine agents that act morally rightly, given reasonable disputes over what is right and wrong in particular cases? In this chapter, I argue that Immanuel Kant’s political philosophy can provide an important part of the answer.
  27. AWS compliance with the ethical principle of proportionality: three possible solutions.Maciek Zając - 2023 - Ethics and Information Technology 25 (1):1-13.
    The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of “excessiveness”. (...)
    1 citation
  28. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  29. Alienation and Recognition - The Δ Phenomenology of the Human–Social Robot Interaction.Piercosma Bisconti & Antonio Carnevale - 2022 - Techné: Research in Philosophy and Technology 26 (1):147-171.
    A crucial philosophical problem of social robots is how much they perform a kind of sociality in interacting with humans. Scholarship diverges between those who maintain that humans and social robots cannot by default have social interactions and those who argue for the possibility of an asymmetric sociality. Against this dichotomy, we argue in this paper for a holistic approach called “Δ phenomenology” of HSRI. In the first part of the paper, we will analyse the semantics of an HSRI. This (...)
  30. Algoritmi e processo decisionale. Alle origini della riflessione etico-pratica per le IA.Cristiano Calì - 2022 - S&F_ Scienzaefilosofia.it.
    This contribution aims to investigate not so much the ethical implications of utilizing intelligent machines in specific contexts, (human resources, self-driving cars, robotic hospital assistants, et cetera), but the premises of their widespread use. In particular, it aims to analyze the complex concept of decision making - the cornerstone of any ethical argument - from a dual point of view: decision making assigned to machines and decision making enacted by machines. Analyzing the role of algorithms in decision making, we suggest (...)
  31. Tragic Choices and the Virtue of Techno-Responsibility Gaps.John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
    4 citations
  32. Moral difference between humans and robots: paternalism and human-relative reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  33. Bridging East-West Differences in Ethics Guidance for AI and Robots.Nancy S. Jecker & Eisuke Nakazawa - 2022 - AI 3 (3):764-777.
    Societies of the East are often contrasted with those of the West in their stances toward technology. This paper explores these perceived differences in the context of international ethics guidance for artificial intelligence (AI) and robotics. Japan serves as an example of the East, while Europe and North America serve as examples of the West. The paper’s principal aim is to demonstrate that Western values predominate in international ethics guidance and that Japanese values serve as a much-needed corrective. We recommend (...)
    2 citations
  34. Problems of Using Autonomous Military AI Against the Background of Russia's Military Aggression Against Ukraine.Oleksii Kostenko, Tyler Jaynes, Dmytro Zhuravlov, Oleksii Dniprov & Yana Usenko - 2022 - Baltic Journal of Legal and Social Sciences 2022 (4):131-145.
    The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially alongside concern for its controllability. The lack of public, state, and international control over AI technologies creates large-scale risks of using such software and hardware that (un)intentionally harm humanity. The events of recent months and years, specifically regarding the Russian Federation’s war against its democratic neighbour Ukraine and other international conflicts of note, support the thesis that the uncontrolled use of AI, especially (...)
  35. Can I Feel Your Pain? The Biological and Socio-Cognitive Factors Shaping People’s Empathy with Social Robots.Joanna Karolina Malinowska - 2022 - International Journal of Social Robotics 14 (2):341–355.
    This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people’s reactions to robots. I present arguments in favour of the position that people actually do empathise with robots. I also consider what circumstances shape human empathy with these entities. I propose that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one of (...)
  36. The Psychological Implications of Companion Robots: A Theoretical Framework and an Experimental Setup.Nicoletta Massa, Piercosma Bisconti & Daniele Nardi - 2022 - International Journal of Social Robotics (Online):1-14.
    In this paper we present a theoretical framework to understand the underlying psychological mechanism involved in human-Companion Robot interactions. At first, we take the case of Sexual Robotics, where the psychological dynamics are more evident, to thereafter extend the discussion to Companion Robotics in general. First, we discuss the differences between a sex-toy and a Sexual Robot, concluding that the latter may establish a collusive and confirmatory dynamic with the user. We claim that this collusiveness leads to two main consequences, (...)
  37. Basic issues in AI policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  38. Good Robot, Bad Robot: Dark and Creepy Sides of Robotics, Automated Vehicles, and AI.Jo Ann Oravec - 2022 - New York, NY, USA: Palgrave Macmillan.
    This book explores how robotics and artificial intelligence can enhance human lives but also have unsettling “dark sides.” It examines expanding forms of negativity and anxiety about robots, AI, and autonomous vehicles as our human environments are reengineered for intelligent military and security systems and for optimal workplace and domestic operations. It focuses on the impacts of initiatives to make robot interactions more humanlike and less creepy. It analyzes the emerging resistances against these entities in the wake of omnipresent AI (...)
  39. Uncomfortably Close to Human.Shelley M. Park - 2022 - Feminist Philosophy Quarterly 8 (3).
    Social robots are marketed as human tools promising us a better life. This marketing strategy commodifies not only the labor of care but the caregiver as well, conjuring a fantasy of technoliberal futurism that echoes a colonial past. Against techno-utopian fantasies of a good life as one involving engineered domestic help, I draw here on the techno-dystopian television show Humans (stylized HUMⱯNS) to suggest that we should find our desires for such help unsettling. At the core of my argument is (...)
  40. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    5 citations
  41. Beyond Deadlock: Low Hanging Fruit and Strict yet Available Options in AWS Regulation.Maciej Zając - 2022 - Journal of Ethics and Emerging Technologies 2 (32):1-14.
    Efforts to ban Autonomous Weapon Systems were both unsuccessful and controversial. Simultaneously the need to address the detrimental aspects of AWS development and proliferation continues to grow in scope and urgency. The article presents several regulatory solutions capable of addressing the issue while simultaneously respecting the requirements of military necessity and so attracting a broad consensus. Two much stricter solutions – regional AWS bans and adoption of a no first use policy – are also presented as fallback strategies in case (...)
  42. How Robots’ Unintentional Metacommunication Affects Human–Robot Interactions. A Systemic Approach.Piercosma Bisconti - 2021 - Minds and Machines 31 (4):487-504.
    In this paper, we theoretically address the relevance of unintentional and inconsistent interactional elements in human–robot interactions. We argue that elements failing, or poorly succeeding, to reproduce a humanlike interaction create significant consequences in human–robot relational patterns and may affect human–human relations. When considering social interactions as systems, the absence of a precise interactional element produces a general reshaping of the interactional pattern, eventually generating new types of interactional settings. As an instance of this dynamic, we study the absence of (...)
  43. The Mandatory Ontology of Robot Responsibility.Marc Champagne - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):448–454.
    Do we suddenly become justified in treating robots like humans by positing new notions like “artificial moral agency” and “artificial moral responsibility”? I answer no. Or, to be more precise, I argue that such notions may become philosophically acceptable only after crucial metaphysical issues have been addressed. My main claim, in sum, is that “artificial moral responsibility” betokens moral responsibility to the same degree that a “fake orgasm” betokens an orgasm.
  44. Does kindness towards robots lead to virtue? A reply to Sparrow’s asymmetry argument.Mark Coeckelbergh - 2021 - Ethics and Information Technology 1 (Online first):649-656.
    Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral (...)
  45. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):472-478.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. (...)
  46. The Unfounded Bias Against Autonomous Weapons Systems.Áron Dombrovszki - 2021 - Információs Társadalom 21 (2):13–28.
    Autonomous Weapons Systems (AWS) have not gained a good reputation in the past. This attitude is odd compared with the discussion of other, usually highly anticipated, AI technologies, such as autonomous vehicles (AVs): even though these machines evoke very similar ethical issues, philosophers' attitudes towards them are constructive. In this article, I try to show that there is an unjust bias against AWS, because almost every argument against them is effective against AVs too. I start with the definition of "AWS." Then, (...)
  47. Osaammeko rakentaa moraalisia toimijoita?Antti Kauppinen - 2021 - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta.
    To be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some degree, in accordance with them. If we are full-fledged moral agents, we also understand why certain acts are wrong, and are thus able to flexibly adapt our behavior to different situations. I argue that there are no AI systems on the horizon that could genuinely care about doing the right thing or understand the demands of morality, because these capacities require phenomenal consciousness and holistic judgment. We therefore cannot shift responsibility for their actions onto machines. Instead, we should aim to build artificial right-doers: systems that do not (...)
  48. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents.Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
  49. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  50. Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace.Duncan MacIntosh - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. Oxford University Press. pp. 9-23.
    Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false and (...)