Why Does AI Lie So Much? The Problem Is More Deep-Rooted Than You Think

Arkinfo Notes (2024)

Abstract

The rapid advancement of artificial intelligence, particularly in natural language processing, has brought to light a critical challenge: the semantic grounding problem. This article explores the root causes of this issue, focusing on the limitations of the connectionist models that dominate current AI research. By examining Noam Chomsky's theory of Universal Grammar and his critiques of connectionism, I highlight the fundamental differences between human language understanding and AI language generation. Introducing the concept of semantic grounding, I emphasise the need for AI to connect language with real-world experience and context in order to achieve genuine understanding. While multi-modal models represent a step forward, they fall short of addressing the core issue. The article concludes with a call for a course correction in AI development, advocating the integration of embodied cognition, dynamic learning, hybrid models, and ethical practices. This shift is essential for creating AI systems that can interact meaningfully with the world, reducing the incidence of hallucinations and enhancing their reliability and utility.

Added to PhilPapers: 2024-08-05
