Robustness to fundamental uncertainty in AGI alignment

Abstract
The AGI alignment problem has a bimodal distribution of outcomes, with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). We therefore propose a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions so as to reduce the risk of false positives. Herein we explore in detail some of the relevant points of uncertainty on which AGI alignment research hinges and consider how to reduce false positives in response to them.
PhilPapers/Archive ID: GGORTF
Archival date: 2018-08-06