Abstract
Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that such weapons would necessarily violate the requirements of jus in bello and should be prohibited by international law. I argue that this view is mistaken and that our existing moral understanding of war is sufficient to assign blame for any wrongful killing committed by autonomous weapons. Analyzing moral responsibility for autonomous weapons begins with recognizing that although they are capable of causing morally significant consequences, they are neither praiseworthy nor blameworthy in the moral sense. As such, their military role is that of a tool, albeit a rather sophisticated one, and responsibility for their use is roughly analogous to that for existing “smart” weapons. There will likely be some difficulty in managing these systems as they become more intelligent and more prone to unpredictable behavior, but the moral notion of shared responsibility and the legal notion of command responsibility are sufficient to locate responsibility for their use.