Superintelligence as superethical

In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0. New York, USA: Oxford University Press. pp. 322-337 (2017)
Abstract
Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not be as bad as Bostrom suggests. If the superintelligence must *learn* complex final goals, then this means such a superintelligence must in effect *reason* about its own goals. And because it will be especially clear to a superintelligence that there are no sharp lines between one agent's goals and another's, that reasoning could therefore automatically be ethical in nature.
PhilPapers/Archive ID
PETSAS-12
Revision history
Archival date: 2018-02-12

