Chinese Chat Room: AI hallucinations, epistemology and cognition

Studies in Logic, Grammar and Rhetoric (forthcoming)

Abstract

The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach, combining insights from epistemology and cognitive science to address the nature of AI-generated knowledge, alongside a terminological worry that the concepts we commonly use may carry unnecessary presuppositions. Beyond these terminological issues, the paper demonstrates that AI systems, like human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can similarly be applied to enhance the trustworthiness of AI outputs. This exploration seeks to deepen our understanding of the possibility of AI cognition and its implications for broader philosophical questions of knowledge and intelligence.

Author's Profile

Kristina Šekrst
University of Zagreb
