The Problem with Killer Robots

Journal of Military Ethics 19 (3):220-240 (2020)

Abstract

Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolution. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor: robots are capable of better distinguishing combatants from civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur greater risks to protect innocents or gather more information before using deadly force; robots can assess situations more quickly and do so without emotion, reducing the likelihood of fatal mistakes due to human error; and sending robots to war protects our own soldiers from harm. However, these arguments only point in favor of autonomous weapons systems, failing to demonstrate why such systems need be made *lethal*. In this paper I argue that if one grants all of the proponents' points in favor of LAWS, then, contrary to what might be expected, this leads to the conclusion that it would be both immoral and illegal to deploy *lethal* autonomous weapons, because the many features that speak in favor of them also undermine the need for them to be programmed to take lives. In particular, I argue that such systems, if lethal, would violate the moral and legal principle of necessity, which forbids the use of weapons that impose superfluous injury or unnecessary harm. I conclude by highlighting that the argument is not against autonomous weapons per se, but only against *lethal* autonomous weapons.

Author's Profile

Nathan Gabriel Wood
University of Ghent
