Abstract
To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference whether a human combatant is killed by an AWS or by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede the right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life, and a human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and thereby disrespects the targeted combatants. We conclude by considering whether this novel deontological objection forms the foundation of several other popular yet imperfect deontological objections to AWS.