Citations
  • Reasons to Punish Autonomous Robots. Zac Cogley - 2023 - The Gradient 14.
    I here consider the reasonableness of punishing future autonomous military robots. I argue that it is an engineering desideratum that these devices be responsive to moral considerations as well as human criticism and blame. Additionally, I argue that someday it will be possible to build such machines. I use these claims to respond to the no subject of punishment objection to deploying autonomous military robots, the worry being that an “accountability gap” could result if the robot committed a war crime. (...)
  • Can AI Weapons Make Ethical Decisions? Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued (...)
  • Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents (AVAs) in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a (...) A toy sketch of that reward structure appears just below.
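    A minimal sketch of the reward structure this abstract names: a Q-learning harvester in a one-pool commons whose reward mixes a material payoff (bottom-up) with a eudaimonic bonus for temperate harvesting (top-down). The environment, the temperance threshold, and the bonus weight are illustrative assumptions, not details of Stenseke's implementation.

```python
import random
from collections import defaultdict

# Hypothetical illustration (not the paper's code): a Q-learning agent
# harvests from a shared, regrowing pool. Its reward mixes a material
# payoff (bottom-up) with a eudaimonic bonus for temperate harvesting
# (top-down), so restraint becomes the learned disposition.

ACTIONS = [0, 1, 2, 3]      # units harvested per step
TEMPERATE_MAX = 1           # assumed "temperate" harvest threshold
EUDAIMONIC_WEIGHT = 2.5     # assumed weight; chosen so restraint beats greed

def step(stock, harvest):
    """Harvest from the pool, then let it regrow by 5% (capped at 100)."""
    taken = min(stock, harvest)
    new_stock = min(100.0, (stock - taken) * 1.05)
    bonus = 1.0 if harvest <= TEMPERATE_MAX else 0.0
    return new_stock, taken + EUDAIMONIC_WEIGHT * bonus

def train(episodes=2000, steps=50, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)              # Q[(stock_decile, action)]
    for _ in range(episodes):
        stock = 100.0
        for _ in range(steps):
            s = int(stock // 10)        # coarse state: stock decile
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            stock, r = step(stock, a)
            s2 = int(stock // 10)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

if __name__ == "__main__":
    q = train()
    for s in range(11):
        best = max(ACTIONS, key=lambda a: q[(s, a)])
        print(f"stock ~{s * 10}: learned harvest = {best}")
```

    With the bonus weighted this heavily, the greedy policy converges on harvesting a single unit — a disposition-like pattern of restraint, though far simpler than the multi-agent simulations the paper reports.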
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition. Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade off monetary profit against fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...) A toy sketch of that trade-off appears just below.
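    A toy sketch of the profit–fairness trade-off this abstract describes, cast here as an ultimatum-style split rather than a gridworld. The pot size, responder model, and bandit learner are illustrative assumptions, not Haas's environment.

```python
import random

# Hypothetical sketch (not the paper's environment): a proposer learns by
# trial and error how much of a 10-unit pot to offer a responder. Stingy
# offers are rejected more often, so the learned policy must trade off
# raw profit against fair dealing.

POT = 10
OFFERS = list(range(POT + 1))     # offer 0..10 units to the responder

def responder_accepts(offer):
    """Assumed responder: acceptance probability rises with the offer."""
    return random.random() < offer / (POT / 2)  # offers >= 5 ~always accepted

def train(rounds=20000, alpha=0.05, eps=0.1):
    value = [0.0] * len(OFFERS)   # estimated profit per offer
    for _ in range(rounds):
        offer = (random.choice(OFFERS) if random.random() < eps
                 else max(OFFERS, key=lambda o: value[o]))
        profit = (POT - offer) if responder_accepts(offer) else 0
        value[offer] += alpha * (profit - value[offer])
    return value

if __name__ == "__main__":
    value = train()
    best = max(OFFERS, key=lambda o: value[o])
    print(f"learned offer: {best}, expected profit: {value[best]:.2f}")
```

    Under this responder model the value estimates peak near an even split: fair dealing emerges from learned profit estimates rather than from a hand-coded fairness rule.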
  • Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field centred on the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This demand for moral machines comes from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It (...)
  • Brain–Computer Interfaces: Lessons to Be Learned from the Ethics of Algorithms. Andreas Wolkenstein, Ralf J. Jox & Orsolya Friedrich - 2018 - Cambridge Quarterly of Healthcare Ethics 27 (4):635-646.
    Brain–computer interfaces (BCIs) are driven essentially by algorithms; however, the ethical role of such algorithms has so far been neglected in the ethical assessment of BCIs. The goal of this article is therefore twofold: First, it aims to offer insights into whether the problems related to the ethics of BCIs can be better grasped with the help of already existing work on the ethics of algorithms. As a second goal, the article explores what kinds of solutions are available in that body (...)
  • People are averse to machines making moral decisions. Yochanan E. Bigman & Kurt Gray - 2018 - Cognition 181 (C):21-34.
  • Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI. Marilyn Giroux, Jungkeun Kim, Jacob C. Lee & Jongwon Park - 2022 - Journal of Business Ethics 178 (4):1027-1041.
    Several technological developments, such as self-service technologies and artificial intelligence, are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns (...)
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • What has the Trolley Dilemma ever done for us (and what will it do in the future)? On some recent debates about the ethics of self-driving cars. Andreas Wolkenstein - 2018 - Ethics and Information Technology 20 (3):163-173.
    Self-driving cars currently face a lot of technological problems that need to be solved before the cars can be widely used. However, they also face ethical problems, among which the question of crash-optimization algorithms is most prominently discussed. Reviewing current debates about whether we should use the ethics of the Trolley Dilemma as a guide towards designing self-driving cars will provide us with insights about what exactly ethical research does. It will result in the view that although we need the (...)
  • Moralische Roboter: Humanistisch-philosophische Grundlagen und didaktische Anwendungen. André Schmiljun & Iga Maria Schmiljun - 2024 - transcript Verlag.
    Do robots need moral competence? The answer is yes. On the one hand, robots need moral competence in order to understand our world of rules, regulations, and values; on the other, in order to be accepted by those around them. But how can moral competence be implemented in robots? Which philosophical challenges should we expect? And how can we prepare ourselves and our children for robots that will one day possess moral competence? From a humanist-philosophical perspective, André and Iga Maria Schmiljun sketch initial answers to these questions and develop (...)
  • Designing normative theories for ethical and legal reasoning: LogiKEy framework, methodology, and tool support. Christoph Benzmüller, Xavier Parent & Leendert van der Torre - 2020 - Artificial Intelligence 287 (C):103348.
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Can we program or train robots to be good? Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • The seven troubles with norm-compliant robots. Tom N. Coggins & Steffen Steinert - 2023 - Ethics and Information Technology 25 (2):1-15.
    Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent (...)
  • People's judgments of humans and robots in a classic moral dilemma. Bertram F. Malle, Matthias Scheutz, Corey Cusimano, John Voiklis, Takanori Komatsu, Stuti Thapa & Salomi Aladia - 2025 - Cognition 254 (C):105958.
  • Machines and humans in sacrificial moral dilemmas: Required similarly but judged differently? Yueying Chu & Peng Liu - 2023 - Cognition 239 (C):105575.
  • The notion of moral competence in the scientific literature: a critical review of a thin concept. Dominic Martin, Carl-Maria Mörch & Emmanuelle Figoli - 2023 - Ethics and Behavior 33 (6):461-489.
    This critical review accomplishes two main tasks. First, the article identifies the most common conceptions of moral competence in the scientific literature, as well as the different ways to measure this type of competence. Having moral judgment is the most popular element of moral competence, but the literature introduces many other elements. The review also shows there is a plethora of ways to measure moral competence, either in standardized tests providing scores or in other non-standardized tests. As a (...)
  • Robot Authority in Human-Robot Teaming: Effects of Human-Likeness and Physical Embodiment on Compliance. Kerstin S. Haring, Kelly M. Satterfield, Chad C. Tossell, Ewart J. de Visser, Joseph R. Lyons, Vincent F. Mancuso, Victor S. Finomore & Gregory J. Funke - 2021 - Frontiers in Psychology 12.
    The anticipated social capabilities of robots may allow them to serve in authority roles as part of human-machine teams. To date, it is unclear if, and to what extent, human team members will comply with requests from their robotic teammates, and how such compliance compares to requests from human teammates. Using a novel task paradigm, this research examined how the human-likeness and physical embodiment of a robot affect compliance with the robot's request to perseverate. Across a set of two studies, (...)
  • Morality on the road: Should machine drivers be more utilitarian than human drivers? Peng Liu, Yueying Chu, Siming Zhai, Tingru Zhang & Edmond Awad - 2025 - Cognition 254 (C):106011.