Abstract
Early artificial intelligence research was dominated by intellectualist assumptions, producing explicit representations of facts and rules in “good old-fashioned AI”. After this approach foundered, emphasis shifted to deep learning in neural networks, leading to the creation of Large Language Models, which have shown a remarkable capacity to generate intelligible texts automatically. This new phase of AI is already producing profound social consequences which invite philosophical reflection. This paper argues that Charles Peirce’s philosophy throws valuable light on genAI’s capabilities, first with regard to meaning, then to knowledge and truth.
Firstly, I explore how Peirce’s icon/index/symbol distinction illuminates the functioning of genAI. I argue that genAI’s engineers have skilfully captured a form of symbolicity, but no other kind of sign. In lacking indexical signs, LLMs lack connection with, and accountability to, particular worldly objects. In lacking iconic signs, LLMs are insufficiently disciplined by structural relationships, most notably logical ones.
Secondly, I argue that genAI’s astounding stream of articulate, truth-semblant, yet worthless texts delivers a timely reckoning to modern philosophy’s representational realism. By contrast, Peirce’s pragmatism scaffolds a rich relational realism (Gili and Maddalena 2022), which shows how meaningful concepts, and a grasp of truth, can arise only across multiple cognitive systems that are simultaneously richly related to one another and to a shared environment in which they continually act and receive feedback, within a logical space of reasons. As Peirce himself noted, “Mere knowledge, though it be systematized, may be a dead memory; while by science we all habitually mean a living and growing body of truth”.