Classification of Global Catastrophic Risks Connected with Artificial Intelligence

AI and Society 35 (1):147-163 (2020)
Abstract
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI's intelligence power, different types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development: before it starts self-improvement, during its takeoff, when it uses various instruments to escape its initial confinement, or after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or could feature flawed friendliness. AI could also halt at later stages of its development, either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no single simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
PhilPapers/Archive ID
TURCOG-2
Upload history
First archival date: 2018-03-19
Latest version: 4 (2018-05-21)
Added to PP index
2018-03-19