Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons

Abstract
Criticisms of autonomous weapons were recently presented in a video in which an AI-powered drone kills a person. However, some have argued that this video is a distraction from the real risk of AI, namely the risk of unboundedly self-improving AI systems. In this article, we analyze the arguments from both sides and turn them into a set of conditions. The following conditions are identified as jointly leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow AI and manufacturing. 2) Drones can be manufactured very cheaply, at prices below 1 USD each. 3) Anti-drone defense capabilities lag behind offensive development. 4) A global military posture encourages the development of drone swarms as strategic offensive weapons capable of killing civilians. We conclude that while drone swarms alone are unlikely to constitute an existential risk, lethal autonomous weapons could contribute to civilizational collapse in the event of a new world war.
PhilPapers/Archive ID
TURCSW
Revision history
First archival date: 2018-03-19
Latest version: 2 (2018-04-17)