LLMs Can Never Be Ideally Rational

Abstract

LLMs have improved dramatically in recent years, raising the question of whether they could become genuine agents with beliefs and desires. This paper demonstrates an in-principle limit to LLM agency, based on their architecture. LLMs are next-word predictors: given a string of text, they calculate the probability that various words come next, and they produce outputs that reflect these probabilities. I show that next-word predictors are exploitable. If LLMs are prompted to make probabilistic predictions about the world, these predictions are guaranteed to be incoherent, and so Dutch bookable. If LLMs are prompted to make choices over actions, their preferences are guaranteed to be intransitive, and so money pumpable. In short, the problem is that selecting an action based on its potential value is structurally different from selecting the description of an action that is most likely given a prompt: probability cannot be forced into the shape of expected value. The in-principle exploitability of LLMs raises doubts about how agential they can become. This exploitability also offers an opportunity for humanity to safely control such AI systems.
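The Dutch book argument gestured at in the abstract can be illustrated with a toy sketch (mine, not the paper's). Suppose an agent assigns incoherent credences to a proposition and its negation, say 0.6 to each, so that its credences sum to more than 1. A bookie who sells the agent a $1 bet on each proposition at the agent's own fair price guarantees the agent a loss in every possible world:

```python
# Toy Dutch book against incoherent credences (illustrative sketch,
# not taken from the paper). The agent's credences in "rain" and
# "no rain" sum to 1.2, violating probabilistic coherence.
cred_rain = 0.6
cred_no_rain = 0.6

# The agent pays its credence for each $1 ticket; a ticket pays $1
# if its proposition turns out true.
total_paid = cred_rain + cred_no_rain  # 1.2

# Exactly one of the two propositions is true, so the agent collects
# exactly $1 whichever way the world turns out.
for rains in (True, False):
    payout = 1.0
    net = payout - total_paid
    assert net < 0  # guaranteed loss of $0.20 in every world
```

A coherent agent, whose credences in a proposition and its negation sum to exactly 1, would pay exactly $1 for the pair of tickets and could not be booked this way; the sure loss comes entirely from the incoherence.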

Author's Profile

Simon Goldstein
University of Hong Kong

Analytics

Added to PP
2024-08-18
