Responses to Catastrophic AGI Risk: A Survey

Physica Scripta 90 (2015)

Abstract

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.

Author Profiles

Kaj Sotala
Foundational Research Institute
Roman Yampolskiy
University of Louisville
