Editorial: Risks of general artificial intelligence

Abstract
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O'Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a serious risk for humanity, so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that are best avoided or strengthened.
PhilPapers/Archive ID
MLLERO
Archival date: 2015-11-05
Added to PP index: 2015-11-05
