Levels of Self-Improvement in AI and their Implications for AI Safety

Abstract: This article presents a model of self-improving AI in which improvement can occur at several levels: hardware, learning, code, and the goal system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement (such as the intelligence-measurement problem, the testing problem, the parent-child problem, and halting risks), even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations at different levels with the power of learning. On this basis, we analyze how self-improvement could occur at different stages of AI development, including stages at which the AI is boxed or hiding on the internet.
Upload history
First archival date: 2018-04-08
Latest version: 3 (2018-04-29)