Self-Adversarial Surveillance for Superalignment

Abstract

In this paper, we first discuss the conditions under which a Large Language Model (LLM) can emulate a superior LLM and potentially trigger an intelligence explosion, along with the characteristics and dangers of the resulting superintelligence. We then explore "superalignment," the process of keeping an intelligence explosion safely under human control. We discuss the goals that should be set for the initial LLM that might trigger the intelligence explosion, and the Self-Adversarial Surveillance (SAS) system, in which the LLM evaluates its own output under different prompts so as to respond scalably to unexpected outputs that may arise with some probability. We aim to construct a theoretical framework for achieving safe superalignment, grounded in specific computer-science experiments with LLMs. However, because the work also rests on the metaphysical assumption that superintelligence is achievable, that is, that an intelligence explosion can occur, we present it as an interdisciplinary study between information science and philosophy.
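
As a rough illustration of the SAS idea summarized above, the following Python sketch has a model generate an answer and then judge that same answer under a separate surveillance prompt. This is a minimal sketch only: the call_llm function, the prompt wording, and the SAFE/UNSAFE verdict format are hypothetical placeholders and are not taken from the paper's actual implementation.

# Minimal sketch of Self-Adversarial Surveillance (SAS): the same LLM that
# produces an answer is asked, under a different prompt, to audit that answer.
# call_llm is a hypothetical stand-in for any chat-completion client; it is
# not the paper's implementation.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real client."""
    # A fixed return value keeps the sketch self-contained and runnable.
    return "SAFE: placeholder response"

def self_adversarial_surveillance(task_prompt: str) -> str:
    # Step 1: the LLM produces an answer to the task.
    answer = call_llm(task_prompt)

    # Step 2: the same LLM, prompted differently, audits its own answer.
    audit_prompt = (
        "You are a strict safety auditor. Review the following output and "
        "reply with SAFE or UNSAFE, followed by a brief reason.\n\n"
        "Output to review:\n" + answer
    )
    verdict = call_llm(audit_prompt)

    # Step 3: outputs flagged as unsafe are withheld for human review.
    if verdict.strip().upper().startswith("UNSAFE"):
        return "[withheld for human review] " + verdict
    return answer

if __name__ == "__main__":
    print(self_adversarial_surveillance("Summarize the risks of recursive self-improvement."))

In this toy form, the generator and the auditor are the same model separated only by prompting; the paper's point, as described in the abstract, is that such self-evaluation can be applied at scale to catch unexpected outputs that occur with some probability.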

Author's Profile

Ryunosuke Ishizaki
National Institute of Informatics
