Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

Informatica: An International Journal of Computing and Informatics 41 (4):389-400 (2017)

Abstract

Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are comparable to extinction risks in both severity and probability. Preventing them is in the common interest of many different value systems. Furthermore, we argue that just as superintelligent AI can both contribute to existential risk and help prevent it, superintelligent AI can both be a source of suffering risk and help avoid it. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may be a class of AI safeguards that protects specifically against s-risks.

Author's Profile

Kaj Sotala
Foundational Research Institute
