Deepfakes and the Epistemic Backstop

Philosophers' Imprint 20 (24):1-16 (2020)
Deepfake technology uses machine learning to fabricate video and audio recordings that represent people doing and saying things they never did or said. In the coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the ever-present possibility of recordings of the events they testify about. As deepfakes erode the epistemic value of recordings, we may face an even more consequential challenge to the reliability of our testimonial practices themselves.
Upload history
First archival date: 2019-06-24
Latest version: 2 (2020-09-02)
