A Talking Cure for Autonomy Traps: How to Share Our Social World with Chatbots

Abstract

Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It is hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable to reason – that is, to be autonomous, in Kant's sense of the term. I develop ideas from psychologist Jean Piaget to show how chatbots are autonomy traps: their deference to our commands tempts us into venting authoritarian whims, ultimately weakening our own self-control. I argue that the Kantian tradition, including Piaget and sociologist Émile Durkheim, offers powerful conceptual resources for resisting this slide. But it will require us to do something that may seem bizarre: we will need to treat mindless chatbots as if they are autonomous persons too.

Author's Profile

Regina Rini
York University
