Babbling stochastic parrots? On reference and reference change in large language models

Abstract

Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction and authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a more optimistic view. Drawing on classic externalist accounts of reference, it argues that LLMs are very likely capable of reference. Not only that: the combination of a popular externalist account of reference and recent experimental data in machine psychology even suggests that LLMs might play a role in shifting what our words refer to.

Author's Profile

Steffen Koch
Bielefeld University

Analytics

Added to PP
2023-12-20
