  • No Such Thing as Killer Robots. Michael Robillard - 2017 - Journal of Applied Philosophy 35 (4):705-717.
    There have been two recent strands of argument arguing for the pro tanto impermissibility of fully autonomous weapon systems. On Sparrow's view, AWS are impermissible because they generate a morally problematic ‘responsibility gap’. According to Purves et al., AWS are impermissible because moral reasoning is not codifiable and because AWS are incapable of acting for the ‘right’ reasons. I contend that these arguments are flawed and that AWS are not morally problematic in principle. Specifically, I contend that these arguments presuppose (...)
    19 citations
  • What's So Bad About Killer Robots? Alex Leveringhaus - 2018 - Journal of Applied Philosophy 35 (2):341-358.
    Robotic warfare has now become a real prospect. One issue that has generated heated debate concerns the development of ‘Killer Robots’. These are weapons that, once programmed, are capable of finding and engaging a target without supervision by a human operator. From a conceptual perspective, the debate on Killer Robots has been rather confused, not least because it is unclear how central elements of these weapons can be defined. Offering a precise take on the relevant conceptual issues, the article contends (...)
    12 citations
  • Just War and Robots' Killings. Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-322.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
    27 citations
  • Killer Robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
    223 citations
  • The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
    183 citations
  • Virtual Moral Agency, Virtual Moral Responsibility: On the Moral Significance of the Appearance, Perception, and Performance of Artificial Agents. Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
    41 citations
  • Technology with No Human Responsibility? Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
    38 citations