How Much Like Us Do We Want AIs to Be?

Techné Research in Philosophy and Technology 28 (2):137-168 (2024)

Abstract

Replicating or exceeding human intelligence, not just in particular domains but in general, has always been a major goal of Artificial Intelligence (AI). We argue here that "human intelligence" is not only ill-defined but often conflated with broader aspects of human psychology, and that the standard arguments for replicating it are morally unacceptable. We then suggest a reframing: the proper goal of AI is not to replicate humans but to complement them by creating diverse intelligences capable of collaborating with humans. This goal makes theory of mind, empathy, caring, and community engagement central to AI. It also challenges AI to better understand the circumstances in which human intelligence, including human moral intelligence, fails.

Author Profiles

Eric Dietrich
State University of New York at Binghamton
John P. Sullins
Sonoma State University
Robin L. Zebrowski
Beloit College

Analytics

Added to PP
2024-08-08
