References
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
    (176 citations)
  • The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm. Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition to that, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge on human responsibility, as long as humans can control them and update them. He argues that the agency exercised in (...)
    (17 citations)
  • Tragic Choices and the Virtue of Techno-Responsibility Gaps. John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
    (5 citations)
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
    (60 citations)
  • Individual and collective moral responsibility for systemic military atrocity. Neta C. Crawford - 2007 - Journal of Political Philosophy 15 (2):187-212.
    (23 citations)
  • Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
    (28 citations)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
    (13 citations)
  • Moral responsibility and ignorance. Michael J. Zimmerman - 1997 - Ethics 107 (3):410-426.
    (143 citations)
  • Editors’ Overview: Moral Responsibility in Technology and Engineering. Ibo van de Poel, Jessica Fahlquist, Neelke Doorn, Sjoerd Zwart & Lambèr Royakkers - 2012 - Science and Engineering Ethics 18 (1):1-11.
    In some situations in which undesirable collective effects occur, it is very hard, if not impossible, to hold any individual reasonably responsible. Such a situation may be referred to as the problem of many hands. In this paper we investigate how the problem of many hands can best be understood and why, and when, it exactly constitutes a problem. After analyzing climate change as an example, we propose to define the problem of many hands as the occurrence of a gap (...)
    (37 citations)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
    (33 citations)
  • Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military-Industrial Complex. Isaac Taylor - 2021 - Journal of Applied Philosophy 38 (2):320-334.
    There has recently been increasing interest in the possibility and ethics of lethal autonomous weapons systems (LAWS), which would combine sophisticated AI with machinery capable of deadly force. One objection to LAWS is that their use will create a troubling responsibility gap, where no human agent can properly be held accountable for the outcomes that they create. While some authors have attempted to show that individual agents can, in fact, be responsible for the behaviour of LAWS in various circumstances, this (...)
    (10 citations)
  • Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Philosophy and Technology 35 (3):1-24.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way and which is accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful (...)
    (12 citations)
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
    (218 citations)
  • Just research into killer robots. Patrick Taylor Smith - 2019 - Ethics and Information Technology 21 (4):281-293.
    This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems. Research and development into a new weapons system is permissible if and only if the new weapons system can plausibly generate a superior risk profile for all morally relevant classes and it is not intrinsically wrong. The paper then suggests that these conditions (...)
    (5 citations)
  • Just war and robots’ killings. Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-322.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
    (27 citations)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
    (25 citations)
  • Learning robots interacting with humans: from epistemic risk to responsibility. [REVIEW] Matteo Santoro, Dante Marino & Guglielmo Tamburrini - 2008 - AI and Society 22 (3):301-314.
    The import of computational learning theories and techniques on the ethics of human-robot interaction is explored in the context of recent developments of personal robotics. An epistemological reflection enables one to isolate a variety of background hypotheses that are needed to achieve successful learning from experience in autonomous personal robots. The conjectural character of these background hypotheses brings out theoretical and practical limitations in our ability to predict and control the behaviour of learning robots in their interactions with humans. Responsibility (...)
    (11 citations)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
    (49 citations)
  • No Such Thing as Killer Robots. Michael Robillard - 2017 - Journal of Applied Philosophy 35 (4):705-717.
    There have been two recent strands of argument arguing for the pro tanto impermissibility of fully autonomous weapon systems. On Sparrow's view, AWS are impermissible because they generate a morally problematic ‘responsibility gap’. According to Purves et al., AWS are impermissible because moral reasoning is not codifiable and because AWS are incapable of acting for the ‘right’ reasons. I contend that these arguments are flawed and that AWS are not morally problematic in principle. Specifically, I contend that these arguments presuppose (...)
    (18 citations)
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci. Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
    (66 citations)
  • Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
    (24 citations)
  • AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective. Francesca Lagioia & Giovanni Sartor - 2020 - Philosophy and Technology 33 (3):433-465.
    Criminal liability for acts committed by AI systems has recently become a hot legal topic. This paper includes three different contributions. The first contribution is an analysis of the extent to which an AI system can satisfy the requirements for criminal liability: accomplishing an actus reus, having the corresponding mens rea, possessing the cognitive capacities needed for responsibility. The second contribution is a discussion of criminal activity accomplished by an AI entity, with reference to a recent case involving an online (...)
    (5 citations)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
    (13 citations)
  • Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
    (10 citations)
  • Responsibility and the ‘Pie Fallacy’. Alex Kaiserman - 2021 - Philosophical Studies 178 (11):3597-3616.
    Much of our ordinary thought and talk about responsibility exhibits what I call the ‘pie fallacy’—the fallacy of thinking that there is a fixed amount of responsibility for every outcome, to be distributed among all those, if any, who are responsible for it. The pie fallacy is a fallacy, I argue, because how responsible an agent is for some outcome is fully grounded in facts about the agent, the outcome and the relationships between them; it does not depend, in particular, (...)
    (10 citations)
  • The morality of autonomous robots. Aaron M. Johnson & Sidney Axinn - 2013 - Journal of Military Ethics 12 (2):129-141.
    While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be ‘no’ and offer several reasons for banning (...)
    (22 citations)
  • Responsibility for Killer Robots. Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take (...)
    (24 citations)
  • On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the concept autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
    (34 citations)
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
    (39 citations)
  • The truth about tracing. John Martin Fischer & Neal A. Tognazzini - 2009 - Noûs 43 (3):531-556.
    Control-based models of moral responsibility typically employ a notion of "tracing," according to which moral responsibility requires an exercise of control either immediately prior to the behavior in question or at some suitable point prior to the behavior. Responsibility, on this view, requires tracing back to control. But various philosophers, including Manuel Vargas and Angela Smith, have presented cases in which the plausibility of tracing is challenged. In this paper we discuss the examples and we argue that they do not (...)
    (86 citations)
  • Responsibility and Control: A Theory of Moral Responsibility. John Martin Fischer - 1998 - Philosophy and Phenomenological Research 61 (2):459-466.
    (194 citations)
  • Replies. John Martin Fischer & Mark Ravizza - 2000 - Philosophy and Phenomenological Research 61 (2):467-480.
    (17 citations)
  • Precis of Responsibility and Control: A Theory of Moral Responsibility. John Martin Fischer & Mark Ravizza - 2000 - Philosophy and Phenomenological Research 61 (2):441.
    The leading idea of our theory of moral responsibility is that responsibility is associated with control. But we contend that there are two distinct kinds of control. Regulative control involves alternative possibilities: it is a kind of dual power of free action. In contrast, guidance control does not, by its nature, involve alternative possibilities. Whereas typically it might be thought that regulative and guidance control go together, the Frankfurt-type cases show that they are separate and distinct sorts of control. And, (...)
    (14 citations)
  • Autonomous Weapons and International Humanitarian Law. Yoram Dinstein - 2018 - In Wolff Heintschel von Heinegg, Robert Frau & Tassilo Singer (eds.), Dehumanization of Warfare: Legal Implications of New Weapon Technologies. Springer Verlag. pp. 15-20.
    (2 citations)
  • Responsibility and Control: A Theory of Moral Responsibility. John Martin Fischer & Mark Ravizza - 1998 - New York: Cambridge University Press. Edited by Mark Ravizza.
    This book provides a comprehensive, systematic theory of moral responsibility. The authors explore the conditions under which individuals are morally responsible for actions, omissions, consequences, and emotions. The leading idea in the book is that moral responsibility is based on 'guidance control'. This control has two components: the mechanism that issues in the relevant behavior must be the agent's own mechanism, and it must be appropriately responsive to reasons. The book develops an account of both components. The authors go on (...)
    (802 citations)
  • Sharing Responsibility. Michael J. Zimmerman - 1985 - American Philosophical Quarterly 22 (2):115-122.
    (35 citations)
  • Learning robots and human responsibility. Dante Marino & Guglielmo Tamburrini - 2006 - International Review of Information Ethics 6:46-51.
    Epistemic limitations concerning prediction and explanation of the behaviour of robots that learn from experience are selectively examined by reference to machine learning methods and computational theories of supervised inductive learning. Moral responsibility and liability ascription problems concerning damages caused by learning robot actions are discussed in the light of these epistemic limitations. In shaping responsibility ascription policies one has to take into account the fact that robots and softbots - by combining learning with autonomy, pro-activity, reasoning, and planning - (...)
    (17 citations)