Deepfakes and the Epistemic Backstop

Philosophers' Imprint 20 (24):1-16 (2020)

Abstract

Deepfake technology uses machine learning to fabricate video and audio recordings that depict people doing and saying things they never did. In the coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the ever-present possibility of recordings of the events they testify about. As deepfakes erode the epistemic value of recordings, we may face an even more consequential challenge to the reliability of our testimonial practices themselves.

Regina Rini
York University
