Friendly Superintelligent AI: All You Need is Love

In Vincent Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301 (2017)
Abstract
There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in part because most of the final goals we could give an AI admit of so-called "perverse instantiations". I propose a novel solution to this puzzle: instruct the AI to love humanity. The proposal is compared with Yudkowsky's Coherent Extrapolated Volition proposal and Bostrom's Moral Modeling proposals.
Reprint years
2018
PhilPapers/Archive ID
PRIFSA-2
Revision history
Archival date: 2018-10-12

Added to PP index
2018-09-10