Responses to Catastrophic AGI Risk: A Survey

Physica Scripta 90 (2015)
Abstract
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.
PhilPapers/Archive ID
SOTRTC-2
Archival date: 2019-11-10
