Deepfakes, Deep Harms

Journal of Ethics and Social Philosophy 22 (2) (2022)

Abstract

Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance onto that of another, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have been appreciated. First, we consider a form of objectification that occurs in deepfaked ‘frankenporn’ that digitally fuses the parts of different women to create pliable characters incapable of giving consent to their depiction. Next, we develop the idea of ‘illocutionary wronging’, in which an individual is forced to engage in speech acts they would prefer to avoid in order to deny or correct the misleading evidence of a publicized deepfake. Finally, we consider the risk that deepfakes may facilitate campaigns of ‘panoptic gaslighting’, where many systematically altered recordings of a single person's life undermine their memory, eroding their sense of self and ability to engage with others. Taken together, these harms illustrate the roles that social epistemology and technological vulnerabilities play in human ethical life.

