Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons

Abstract
Recently, criticism of autonomous weapons was presented in a video in which an AI-powered drone kills a person. However, some have argued that this video is a distraction from the real risk of AI: the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow AI and manufacturing. 2) Drones become very cheap to manufacture, at prices below 1 USD each. 3) Anti-drone defensive capabilities lag behind offensive development. 4) A particular global military posture encourages the development of drone swarms as strategic offensive weapons capable of killing civilians. We conclude that while drone swarms alone are unlikely to become an existential risk, lethal autonomous weapons could contribute to civilizational collapse in the case of a new world war.
PhilPapers/Archive ID
TURCSW
Upload history
First archival date: 2018-03-19
Latest version: 2 (2018-04-17)
Added to PP index
2018-03-19