  • Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them of autonomy in a responsibility-undermining way. (...)
  • Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. S. M. Solaiman - 2017 - Artificial Intelligence and Law 25 (2):155-179.
    Robots are now associated with various aspects of our lives. These sophisticated machines have been increasingly used in different manufacturing industries and services sectors for decades. During this time, they have been a factor in causing significant harm to humans, prompting questions of liability. Industrial robots are presently regarded as products for liability purposes. In contrast, some commentators have proposed that robots be granted legal personality, with an overarching aim of exonerating the respective creators and users of these artefacts from (...)
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci. Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
  • The problem of ascribing legal responsibility in the case of robotics. Susanne Beck - 2016 - AI and Society 31 (4):473-481.
  • Negotiating autonomy and responsibility in military robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Outline of a decision procedure for ethics. John Rawls - 1951 - Philosophical Review 60 (2):177-197.
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle no longer capable of predicting the future machine behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a (...)
  • The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem? Sven Nyholm & Jilles Smids - 2016 - Ethical Theory and Moral Practice 19 (5):1275-1289.
    Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways (...)
  • The Moral Machine experiment. Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon & Iyad Rahwan - 2018 - Nature 563 (7729):59-64.
  • Responsibility and the Moral Phenomenology of Using Self-Driving Cars. Mark Coeckelbergh - 2016 - Applied Artificial Intelligence 30 (8):748-757.
    This paper explores how the phenomenology of using self-driving cars influences conditions for exercising and ascribing responsibility. First, a working account of responsibility is presented, which identifies two classic Aristotelian conditions for responsibility and adds a relational one, and which makes a distinction between responsibility for (what one does) and responsibility to (others). Then, this account is applied to a phenomenological analysis of what happens when we use a self-driving car and participate in traffic. It is argued that self-driving cars (...)