Deepfakes, shallow epistemic graves: On the epistemic robustness of photography and videos in the era of deepfakes

Synthese 200 (6):1–22 (2022)

Abstract

The recent proliferation of deepfakes and other digitally produced deceptive representations has revived the debate on the epistemic robustness of photography and other mechanically produced images. Authors such as Rini (2020) and Fallis (2021) claim that the proliferation of deepfakes poses a serious threat to the reliability and the epistemic value of photographs and videos. In particular, Fallis adopts a Skyrmsian account of how signals carry information (Skyrms, 2010) to argue that the existence of deepfakes significantly reduces the information that images carry about the world, which undermines their reliability as a source of evidence. In this paper, we focus on Fallis' version of the challenge, but our results can be generalized to address similar pessimistic views such as Rini's. More generally, we offer an account of the epistemic robustness of photography and videos that allows us to understand these systems of representation as continuous with other means of information transmission found in nature. This account then gives us the tools needed to put Fallis' claims into perspective: using a richer approach to animal signaling based on the signaling model of communication (Maynard Smith and Harper, 2003), we argue that, while deepfake technology may well increase the probability of obtaining false positives, the magnitude of the epistemic threat involved may still be negligible.

Author Profiles

Paloma Atencia Linares
Universidad Nacional de Educación a Distancia
Marc Artiga
Universitat de València
