Why you are (probably) anthropomorphizing AI

Abstract

In this paper I argue that, given the way that AI models work and the way that ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. I start with the core idea, recently defended by Thomas Kelly (2022) among others, that bias involves a systematic departure from a genuine standard or norm. I briefly discuss how bias can take different explicit, implicit, and “truly implicit” (Johnson 2021) forms, such as bias by proxy. I then discuss biased anthropomorphism of technology, focusing on the case of Large Language Models (LLMs) like ChatGPT. As with other kinds of bias, there are importantly different kinds of anthropomorphism, some of which can persist without others, and some of which can encourage others. Anthropomorphism can take rather subtle, implicit forms that can be difficult to detect, resist, and dislodge. Attention to these kinds of anthropomorphism can help us avoid confusing importantly different kinds of evaluation, better assess the risks, and inform strategies for bias prevention and mitigation.

Author's Profile

Ali Hasan
University of Iowa
