Is Artificial General Intelligence Impossible?

Cosmos+Taxis 12 (5+6):5-22 (2024)

Abstract

In their Why Machines Will Never Rule the World, Landgrebe and Smith (2023) argue that artificial general intelligence (AGI) cannot succeed, on the grounds that it is impossible to perfectly model or emulate the “complex” “human neurocognitive system”. However, they do not show that this is logically impossible; they only show that it is practically impossible using current mathematical techniques. Nor do they prove that there could be no kinds of theories other than those currently in use. And even if perfect theories were impossible or unlikely, perfection may not be needed and may even be unhelpful.

Author's Profile

William J. Rapaport
State University of New York, Buffalo
