  • Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy & Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy & Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
  • AI Ethics. Mark Coeckelbergh - 2020 - Cambridge, Massachusetts, USA: The MIT Press.
    Artificial intelligence powers Google’s search engine, enables Facebook to target advertising, and allows Alexa and Siri to do their jobs. AI is also behind self-driving cars, predictive policing, and autonomous weapons that can kill without human intervention. These and other AI applications raise complex ethical issues that are the subject of ongoing debate. This volume in the MIT Press Essential Knowledge series offers an accessible synthesis of these issues. Written by a philosopher of technology, AI Ethics goes beyond the (...)
  • Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Virginia Dignum - 2019 - Springer Verlag.
    In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. Throughout (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them of autonomy in a responsibility-undermining way. (...)
  • Fittingness. Christopher Howard - 2018 - Philosophy Compass 13 (11):e12542.
    The normative notion of fittingness figures saliently in the work of a number of ethical theorists writing in the late nineteenth and mid-twentieth centuries and has in recent years regained prominence, occupying an important place in the theoretical tool kits of a range of contemporary writers. Yet the notion remains strikingly undertheorized. This article offers a (partial) remedy. I proceed by canvassing a number of attempts to analyze the fittingness relation in other terms, arguing that none is fully adequate. In (...)
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • When Is a Robot a Moral Agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents, such that it is not necessary for a robot to have personhood in order to be a moral agent. He details three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms, and agent architectures, create a new situation in which the manufacturer/operator of the machine is in principle no longer capable of predicting the future machine behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a (...)
  • Action, Knowledge, and Will. John Hyman - 2015 - Oxford, GB: Oxford University Press.
    John Hyman explores central problems in philosophy of action and the theory of knowledge, and connects these areas of enquiry in a new way. His approach to the dimensions of human action culminates in an original analysis of the relation between knowledge and rational behaviour, which provides the foundation for a new theory of knowledge itself.
  • Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Alexander Hevelke & Julian Nida-Rümelin - 2015 - Science and Engineering Ethics 21 (3):619-630.
    A number of companies including Google and BMW are currently working on the development of autonomous cars. But if fully autonomous cars are going to drive on our roads, it must be decided who is to be held responsible in case of accidents. This involves not only legal questions, but also moral ones. The first question discussed is whether we should try to design the tort liability for car manufacturers in a way that will help along the development and improvement (...)
  • Enterprise Liability: Justifying Vicarious Liability. Douglas Brodie - 2007 - Oxford Journal of Legal Studies 27 (3):493-508.
    In Lister v Hesley Hall [2002] 1 AC 215 the House of Lords reformed the law on vicarious liability, in the context of a claim arising over the intentional infliction of harm, by introducing the ‘close connection’ test. The immediate catalyst was the desire to facilitate recovery of damages on the part of victims of child abuse. The precise form the revision assumed was derived from two Canadian Supreme Court cases: Bazley v Curry [1999] 174 DLR (4th) 45 and Jacobi (...)