SATS 22 (2):149-167 (2021)
Abstract
Deepfakes are audio, video, or still-image digital artifacts created using artificial intelligence technology, as opposed to traditional means of recording. Because deepfakes can look and sound much like genuine digital recordings, they have entered the popular imagination as sources of serious epistemic problems for us as we attempt to navigate the increasingly treacherous digital information environment of the internet. In this paper, I attempt to clarify what epistemic problems deepfakes pose, and why, by drawing parallels between recordings and our own senses as sources of evidence. I show that deepfakes threaten to undermine the status of digital recordings as evidence. The existence of deepfakes thus encourages a kind of skepticism about digital recordings that bears important similarities to classical philosophical skepticism concerning the senses. However, the skepticism concerning digital recordings that deepfakes motivate also differs from classical skepticism concerning the senses in important ways, and I argue that these differences illuminate some possible strategies for solving the epistemic problems posed by deepfakes.