Catching the Treacherous Turn: A Model of Multilevel AI Boxing

Abstract

Given the fast pace of AI development, the problem of preventing its global catastrophic risks is becoming pressing, yet no satisfactory solution has been found. Among the proposed approaches, the confinement of an AI in a box is generally regarded as a weak solution for AI safety. However, some treacherous AIs could be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement, aggregating all known ideas in the field of AI boxing. We model the confinement on the principles used in the safety engineering of nuclear power plants and show that AI confinement should be implemented in several levels of defense: 1) designing the AI in a fail-safe manner; 2) limiting its capabilities, preventing self-improvement, and installing circuit breakers triggered by a treacherous turn; 3) isolating it from the outside world; and, as a last resort, 4) outside measures aimed at stopping the AI in the wild. We demonstrate that a substantial number of mutually independent measures (more than 50 ideas are listed in the article) could provide a relatively high probability of containing a human-level AI, but may not be sufficient to prevent the runaway of a superintelligent AI. Thus, these measures will work only if they are used to prevent the creation of superintelligent AI, not to contain an existing superintelligence. We suggest that there could be a safe operation threshold at which an AI is useful but unable to hack the containment system from the inside, in the same way that a safe level of chain reaction is maintained inside a nuclear power plant. However, eventual failure of the confinement is inevitable, so a full AGI should be run only a limited number of times ("AI-ticks").
