Should We Discourage AI Extension? Epistemic Responsibility and AI

Philosophy and Technology 37 (3):1-17 (2024)

Abstract

We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put into perspective – many unreliable technologies are unlikely to be used transparently precisely because they are unreliable. Second, an agent who transparently employs a resource may also reflect (opaquely) on its reliability. Finally, agents can rely on a process transparently and be yanked out of their transparent use when it turns unreliable. When an agent is responsive to the reliability of their process in this way, they have epistemically integrated it, and the beliefs they form with it are formed responsibly. This prevents the agent from automatically incorporating problematic beliefs. Responsible (and transparent) use of AI resources – and consequently responsible AI extension – is hence possible. We end the paper with several design and policy recommendations that encourage epistemic integration of AI-involving belief-forming processes.

Author Profiles

Hadeel Naeem
University of Edinburgh (PhD)
Julian Hauser
Universitat de Barcelona
