Why AI Doomsayers are Like Sceptical Theists and Why it Matters

Minds and Machines 25 (3):231-246 (2015)
An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount either to a reductio of the doomsayers' position, or to an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
Upload history
First archival date: 2015-04-15
Latest version: 2 (2015-04-26)