  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    16 citations
  • When to Fill Responsibility Gaps: A Proposal. Michael Da Silva - forthcoming - Journal of Value Inquiry:1-26.
  • No Agent in the Machine: Being Trustworthy and Responsible about AI. Niël Henk Conradie & Saskia K. Nagel - 2024 - Philosophy and Technology 37 (2):1-24.
    Many recent AI policies have been structured under labels that follow a particular trend: national or international guidelines, policies or regulations, such as the EU’s and USA’s ‘Trustworthy AI’ and China’s and India’s adoption of ‘Responsible AI’, use a label that follows the recipe of [agentially loaded notion + ‘AI’]. A result of this branding, even if implicit, is to encourage the application by laypeople of these agentially loaded notions to the AI technologies themselves. Yet, these notions are appropriate only (...)
  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Machine agency and representation. Beba Cibralic & James Mattingly - 2024 - AI and Society 39 (1):345-352.
    Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that they do not, and so cannot be agents. Properly understood there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
    16 citations
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
    10 citations
  • Sola dosis facit venenum: The Ethics of Soldier Optimisation, Enhancement, and Augmentation. Gareth Rice & Jason Selman - 2022 - Journal of Military Ethics 21 (2):97-115.
    This article examines soldier performance optimisation, enhancement, and augmentation across the three dimensions of physical performance, cognitive performance, and socio-cultural understanding. Optimisation refers to combatants attaining their maximum biological potential. Enhancement refers to combatants achieving a level of performance beyond their biological potential through drugs, surgical procedures, or even gene editing. Augmentation refers to a blending of organic and biomechatronic body parts such as electronic or mechanical implants, prosthetics, and brain–machine interfaces. This article identifies that soldier optimisation is a necessity (...)
  • Stretching the notion of moral responsibility in nanoelectronics by applying AI. Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  • Responsible AI Through Conceptual Engineering. Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
    11 citations
  • The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: the (1) (...)
    1 citation
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
    3 citations
  • Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement. Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, and (...)
    4 citations
  • (1 other version) Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
    4 citations
  • Why machines cannot be moral. Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
    13 citations
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
    45 citations
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
    19 citations
  • Debunking (the) Retribution (Gap). Steven R. Kraaijeveld - 2020 - Science and Engineering Ethics 26 (3):1315-1328.
    Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive (...)
    14 citations
  • Responsibility for Killer Robots. Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take (...)
    36 citations
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
    36 citations
  • A Value-Sensitive Design Approach to Intelligent Agents. Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology called Value-Sensitive Design and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high risk stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a (...)
    13 citations
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
    75 citations
  • Negotiating autonomy and responsibility in military robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
    10 citations
  • Reactive Attitudes and AI-Agents – Making Sense of Responsibility and Control Gaps. Andrew P. Rebera - 2024 - Philosophy and Technology 37 (4):1-20.
    Responsibility gaps occur when autonomous machines cause harms for which nobody can be justifiably held morally responsible. The debate around responsibility gaps has focused primarily on the question of responsibility, but other approaches focus on the victims of the associated harms. In this paper I consider how the victims of ‘AI-harm’—by which I mean harms implicated in responsibility gap cases and caused by AI-agents—can make sense of what has happened to them. The reactive attitudes have an important role here. I (...)
  • Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual (...)
    1 citation
  • A Case for 'Killer Robots': Why in the Long Run Martial AI May Be Good for Peace. Ognjen Arandjelović - 2023 - Journal of Ethics, Entrepreneurship and Technology 3 (1).
    Purpose: The remarkable increase of sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, the potential of so-called 'killer robots' ceasing to be a subject of fiction. Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for the ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the existing ethical (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
    22 citations
  • Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
    12 citations
  • Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege. Lantz Fleming Miller - 2015 - Human Rights Review 16 (4):369-391.
    As engineers propose constructing humanlike automata, the question arises as to whether such machines merit human rights. The issue warrants serious and rigorous examination, although it has not yet cohered into a conversation. To put it into a sure direction, this paper proposes phrasing it in terms of whether humans are morally obligated to extend to maximally humanlike automata full human rights, or those set forth in common international rights documents. This paper’s approach is to consider the ontology of humans (...)
    15 citations
  • Smart soldiers: towards a more ethical warfare. Femi Richard Omotoyinbo - 2023 - AI and Society 38 (4):1485-1491.
    It is a truism that, due to human weaknesses, human soldiers have yet to have sufficiently ethical warfare. It is arguable that the likelihood of human soldiers to breach the Principle of Non-Combatant Immunity, for example, is higher in contrast to smart soldiers who are emotionally inept. Hence, this paper examines the possibility that the integration of ethics into smart soldiers will help address moral challenges in modern warfare. The approach is to develop and employ smart soldiers that are enhanced with ethical (...)
    1 citation
  • (1 other version) Introduction to the Topical Collection on AI and Responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
    2 citations
  • Decision Making in Killer Robots Is Not Bias Free. Teresa Limata - 2023 - Journal of Military Ethics 22 (2):118-128.
    Autonomous weapons are systems that, once activated, can identify, select and engage targets by themselves. Scharre (2018, Army of None: Autonomous Weapons and the Future of War, New York: Norton) has given a definition of autonomy based on three dimensions: the automatized tasks, the relationship with the human user and the sophistication of the machine’s decision-making process. Based on this definition of autonomy, this article provides an overview of systematic biases that may occur in each of these three dimensions. Before (...)
  • Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
    1 citation
  • Can Autonomous Agents Without Phenomenal Consciousness Be Morally Responsible? László Bernáth - 2021 - Philosophy and Technology 34 (4):1363-1382.
    It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to (...)
    4 citations
  • Safety by simulation: theorizing the future of robot regulation. Mika Viljanen - 2024 - AI and Society 39 (1):139-154.
    Mobility robots may soon be among us, triggering a need for safety regulation. Robot safety regulation, however, remains underexplored, with only a few articles analyzing what regulatory approaches could be feasible. This article offers an account of the available regulatory strategies and attempts to theorize the effects of simulation-based safety regulation. The article first discusses the distinctive features of mobility robots as regulatory targets and argues that emergent behavior constitutes the key regulatory concern in designing robot safety regulation regimes. In (...)
    1 citation
  • Technology with No Human Responsibility? Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
    41 citations
  • People are averse to machines making moral decisions. Yochanan E. Bigman & Kurt Gray - 2018 - Cognition 181 (C):21-34.
    33 citations
  • (1 other version) Introduction to the topical collection on AI and responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Ethics and Information Technology 24 (3).
  • (1 other version) Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
    2 citations