Superintelligence as superethical

Steve Petersen (Niagara University). In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), *Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society*. New York: Oxford University Press, 2017, pp. 322-337.

Abstract

Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us, not through any malice, but simply because it will want our resources for its own purposes. In response, I argue that things might not be as bad as Bostrom suggests. If the superintelligence must *learn* complex final goals, then it must in effect *reason* about its own goals. And because it will be especially clear to a superintelligence that there are no sharp lines between one agent's goals and another's, that reasoning could automatically be ethical in nature.
