Do Large Language Models Hallucinate Electric Fata Morganas?

Journal of Consciousness Studies (forthcoming)

Abstract

This paper explores the intersection of AI hallucinations and the question of AI consciousness, examining whether the erroneous outputs generated by large language models (LLMs) could be mistaken for signs of emergent intelligence. AI hallucinations, which are false or unverifiable statements produced by LLMs, raise significant philosophical and ethical concerns. While these hallucinations may appear to be mere data anomalies, they complicate our ability to discern whether LLMs are merely sophisticated simulators of intelligence or whether they could develop genuine cognitive processes. By analyzing the causes of AI hallucinations, their impact on the perception of AI cognition, and the potential implications for AI consciousness, this paper contributes to the ongoing discourse on the nature of artificial intelligence and its future evolution.

Author's Profile

Kristina Šekrst
University of Zagreb
