To Improve Literacy, Improve Equality in Education, Not Large Language Models

Cognitive Science 49 (4):e70058 (2025)

Abstract

Huettig and Christiansen, in an earlier issue, argue that large language models (LLMs) can help address declining cognitive skills, such as literacy, by combating imbalances in educational equity. We warn, however, that this technosolutionism may be the wrong frame. LLMs are labor intensive, economically infeasible, and environmentally polluting, and these costs may outweigh any proposed benefits. For example, poor air quality directly harms human cognition and thus has compounding effects on educators' and pupils' ability to teach and learn. We urge extreme caution before facilitating classroom use of LLMs, which, like much of modern academia, run on private technology sector infrastructure, lest we further normalize: pupils losing their right to privacy and security, reduced human contact between learner and educator, deskilled teachers, and a polluted environment. Cognitive scientists can instead learn from past mistakes with the petrochemical and tobacco industries and consider the harms LLMs pose to cognition.

Author's Profile

Olivia Guest
Radboud University
