Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

Abstract
Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are comparable to extinction risks in both severity and probability. Preventing them is in the common interest of many different value systems. Furthermore, we argue that just as superintelligent AI can both contribute to existential risk and help prevent it, superintelligent AI can both pose a suffering risk and help avoid one. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may additionally be a class of AI safeguards that helps specifically against s-risks.
PhilPapers/Archive ID
SOTSAA
Archival date: 2018-01-11