AI and Society 35 (4):991-993 (2020)
Abstract

Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. We would also over-rely on AI assistants in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue, first, that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all our jobs. If they are instead intelligent enough to replace us, the risk that they become unfriendly increases, given that they would not need us and humans would merely compete with them for valuable resources. Their hostility would not promote our moral passivity. Second, the use of AI assistants in our personal lives will become a problem only if we rely on them for almost all our decision-making and motivation. But such a (maximally) pervasive level of dependence raises the question of whether humans would accept it, and consequently whether the crisis of passivity would arise at all.
Archival history

First archival date: 2020-03-05
Latest version: 2 (2022-01-20)