Editorial: Risks of general artificial intelligence

Abstract
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a serious risk for humanity, so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are lines of research that are best avoided or, conversely, deserve to be strengthened.
PhilPapers/Archive ID
MLLERO
Revision history
Archival date: 2015-11-05