Autonomous weapons systems and the moral equality of combatants

Ethics and Information Technology 22 (3):197-209 (2020)

Abstract

To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.

Author Profiles

Michael Skerker
United States Naval Academy
Duncan Purves
University of Florida
Ryan Jenkins
California Polytechnic State University, San Luis Obispo

Analytics

Added to PP
2020-06-17
