Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons

Abstract

Recently, criticisms of autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some have argued that this video is a distraction from the real risk of AI: the risk of AI systems capable of unlimited self-improvement. In this article, we analyze arguments from both sides and turn them into conditions. We identify the following conditions under which autonomous weapons would become a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow AI and manufacturing. 2) Drones can be manufactured very cheaply, at prices below 1 USD each. 3) Anti-drone defense capabilities lag behind offensive development. 4) A particular global military posture encourages the development of drone swarms as a strategic offensive weapon capable of killing civilians. We conclude that while drone swarms alone are unlikely to become an existential risk, lethal autonomous weapons could contribute to civilizational collapse in the event of a new world war.

Added to PP: 2018-03-19
