Language Agents Reduce the Risk of Existential Catastrophe

AI and Society: 1-11 (forthcoming)

Abstract

Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make and update plans to pursue their desires given their beliefs. We argue that the rise of language agents significantly reduces the probability of an existential catastrophe due to loss of control over an AGI. This is because the probability of such an existential catastrophe is proportional to the difficulty of aligning AGI systems, and language agents significantly reduce that difficulty. In particular, language agents help to resolve three important issues related to aligning AIs: reward misspecification, goal misgeneralization, and uninterpretability.

Author Profiles

Simon Goldstein
University of Hong Kong
Cameron Domenico Kirk-Giannini
Rutgers University - Newark
