  • Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them.Filippo Santoni de Sio & Giulio Mecacci - forthcoming - Philosophy and Technology:1-28.
    The notion of a “responsibility gap” with artificial intelligence was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (...)
  • Artificial Intelligence and Responsibility.Lode Lauwaert - forthcoming - AI and Society:1-9.
    In the debate on whether to ban LAWS, moral arguments are mainly used. One of these arguments, proposed by Sparrow, is that the use of LAWS goes hand in hand with the responsibility gap. Together with the premise that the ability to hold someone responsible is a necessary condition for the admissibility of an act, Sparrow believes that this leads to the conclusion that LAWS should be prohibited. In this article, it will be shown that Sparrow’s argumentation for both premises (...)
  • Agency, Qualia and Life: Connecting Mind and Body Biologically.David Longinotti - 2017 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017. Cham: Springer. pp. 43-56.
    Many believe that a suitably programmed computer could act for its own goals and experience feelings. I challenge this view and argue that agency, mental causation and qualia are all founded in the unique, homeostatic nature of living matter. The theory was formulated for coherence with the concept of an agent, neuroscientific data and laws of physics. By this method, I infer that a successful action is homeostatic for its agent and can be caused by a feeling - which does (...)
  • Just Research Into Killer Robots.Patrick Taylor Smith - 2019 - Ethics and Information Technology 21 (4):281-293.
    This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems. Research and development into a new weapons system is permissible if and only if the new weapons system can plausibly generate a superior risk profile for all morally relevant classes and it is not intrinsically wrong. The paper then suggests that these conditions (...)
  • Robots, Law and the Retribution Gap.John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Customizable Ethics Settings for Building Resilience and Narrowing the Responsibility Gap: Case Studies in the Socio-Ethical Engineering of Autonomous Systems.Sadjad Soltanzadeh, Jai Galliott & Natalia Jevglevskaja - 2020 - Science and Engineering Ethics 26 (5):2693-2708.
    Ethics settings allow for morally significant decisions made by humans to be programmed into autonomous machines, such as autonomous vehicles or autonomous weapons. Customizable ethics settings are a type of ethics setting in which the users of autonomous machines make such decisions. Here two arguments are provided in defence of customizable ethics settings. Firstly, by approaching ethics settings in the context of failure management, it is argued that customizable ethics settings are instrumentally and inherently valuable for building resilience into the (...)
  • Punishing Robots – Way Out of Sparrow’s Responsibility Attribution Problem.Maciek Zając - 2020 - Journal of Military Ethics 19 (4):285-291.
    The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon...
  • Should We Campaign Against Sex Robots?John Danaher, Brian D. Earp & Anders Sandberg - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. Cambridge, MA: MIT Press.
    In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue (...)
  • Killer Robots: Regulate, Don’t Ban.Vincent C. Müller & Thomas W. Simpson - 2014 - Blavatnik School of Government Policy Memo, University of Oxford. pp. 1-4.
    Lethal Autonomous Weapon Systems are here. Technological development will see them become widespread in the near future, in a matter of years rather than decades. When the UN Convention on Certain Conventional Weapons meets on 10-14th November 2014, well-considered guidance for a decision on the general policy direction for LAWS is clearly needed. While there is widespread opposition to LAWS—or ‘killer robots’, as they are popularly called—and a growing campaign advocates banning them outright, we argue the opposite. LAWS (...)
  • Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  • Legal vs. Ethical Obligations – A Comment on the EPSRC’s Principles for Robotics.Vincent C. Müller - 2017 - Connection Science 29 (2):137-141.
    While the 2010 EPSRC principles for robotics state a set of five rules of what ‘should’ be done, I argue they should differentiate between legal obligations and ethical demands. Only if we make this difference can we state clearly what the legal obligations already are and what additional ethical demands we want to make. I provide suggestions on how to revise the rules in this light and how to make them more structured.