The rise of artificial intelligence and the crisis of moral passivity

AI and Society: 1-3 (forthcoming)
Abstract
Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us with a universal basic income, would be able to do all our jobs. Moreover, we would come to over-rely on AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue, firstly, that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all our jobs. If instead they are intelligent enough to replace us, the risk that they become unfriendly increases, given that they would not need us and humans would merely be competitors for valuable resources. Their hostility would not promote our moral passivity. Secondly, the use of AI assistants in our personal lives becomes a problem only if we rely on them for almost all of our decision-making and motivation. But such a (maximally) pervasive level of dependence raises the question of whether humans would accept it, and consequently whether the crisis of passivity would arise at all.
PhilPapers/Archive ID
CHATRO-56
Revision history
Archival date: 2020-03-05