Results for 'Deepfakes'

23 found
  1. Deepfake detection by human crowds, machines, and machine-informed crowds. Matthew Groh, Ziv Epstein, Chaz Firestone & Rosalind Picard - 2022 - Proceedings of the National Academy of Sciences 119 (1):e2110013119.
    The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s (...)
    2 citations
  2. Deepfakes and the Epistemic Backstop. Regina Rini - 2020 - Philosophers' Imprint 20 (24):1-16.
    Deepfake technology uses machine learning to fabricate video and audio recordings that represent people doing and saying things they've never done. In coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the (...)
    29 citations
  3. Deepfakes, Fake Barns, and Knowledge from Videos. Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found (...)
    2 citations
  4. Deepfakes and the epistemic apocalypse. Joshua Habgood-Coote - 2023 - Synthese 201 (3):1-23.
    [Author note: There is a video explainer of this paper on YouTube at the New Work in Philosophy channel (search for surname+deepfakes).] It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented _epistemic apocalypse_. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue (...)
    4 citations
  5. Deepfakes, Public Announcements, and Political Mobilization. Megan Hyska - forthcoming - In Alex Worsnip (ed.), Oxford Studies in Epistemology, vol. 8. Oxford University Press.
    This paper takes up the question of how videographic public announcements (VPAs), i.e. videos that a wide swath of the public sees and knows that everyone else can see too, have functioned to mobilize people politically, and how the presence of deepfakes in our information environment stands to change the dynamics of this mobilization. Existing work by Regina Rini, Don Fallis and others has focused on the ways that deepfakes might interrupt our acquisition of first-order knowledge through videos. But (...)
  6. Deepfakes, Intellectual Cynics, and the Cultivation of Digital Sensibility. Taylor Matthews - 2022 - Royal Institute of Philosophy Supplement 92:67-85.
    In recent years, a number of philosophers have turned their attention to developments in Artificial Intelligence, and in particular to deepfakes. The word ‘deepfake’ is a portmanteau of ‘deep learning’ and ‘fake’, and for the most part deepfakes are videos which depict people doing and saying things they never did. As a result, much of the emerging literature on deepfakes has turned on questions of trust, harms, and information-sharing. In this paper, I add to the emerging concerns around (...) by drawing on resources from vice epistemology. As deepfakes become more sophisticated, I claim, they will develop into a source of online epistemic corruption. More specifically, they will encourage consumers of digital online media to cultivate and manifest various epistemic vices. My immediate focus in this paper is on their propensity to encourage the development of what I call ‘intellectual cynicism’. After sketching a rough account of this epistemic vice, I go on to suggest that we can partially offset such cynicism – and fears around deceptive online media more generally – by encouraging the development of what I term a trained ‘digital sensibility’. This, I contend, involves a calibrated sensitivity to the epistemic merits of online content.
    2 citations
  7. Deepfakes, Deep Harms. Regina Rini & Leah Cohen - 2022 - Journal of Ethics and Social Philosophy 22 (2).
    Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance on to that of another, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have been appreciated. First, we consider a form of objectification that occurs in deepfaked ‘frankenporn’ that digitally fuses the parts (...)
    3 citations
  8. Deepfake Technology and Individual Rights. Francesco Stellin Sturino - 2023 - Social Theory and Practice 49 (1):161-187.
    Deepfake technology can be used to produce videos of real individuals, saying and doing things that they never in fact said or did, that appear highly authentic. Having accepted the premise that Deepfake content can constitute a legitimate form of expression, it is not immediately clear where the rights of content producers and distributors end, and where the rights of individuals whose likenesses are used in this content begin. This paper explores the question of whether it can be plausibly argued (...)
  9. Deepfakes, engaño y desconfianza [Deepfakes, deception and distrust]. David Villena - 2023 - Filosofía En la Red.
  10. Freedom of expression meets deepfakes. Alex Barber - 2023 - Synthese 202 (40):1-17.
    Would suppressing deepfakes violate freedom of expression norms? The question is pressing because the deepfake phenomenon in its more poisonous manifestations appears to call for a response, and automated targeting of some kind looks to be the most practically viable. Two simple answers are rejected: that deepfakes do not deserve protection under freedom of expression legislation because they are fake by definition; and that deepfakes can be targeted if but only if they are misleadingly presented as authentic. (...)
  11. Artificial intelligence, deepfakes and a future of ectypes. Luciano Floridi - 2018 - Philosophy and Technology 31 (3):317-321.
    AI, especially in the case of deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use (...)
    10 citations
  12. How to do things with deepfakes. Tom Roberts - 2023 - Synthese 201 (2):1-18.
    In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with one (...)
  13. Conceptual and moral ambiguities of deepfakes: a decidedly old turn. Matthew Crippen - 2023 - Synthese 202 (1):1-18.
    Everyday (mis)uses of deepfakes define prevailing conceptualizations of what they are and the moral stakes in their deployment. But one complication in understanding deepfakes is that they are not photographic yet nonetheless manipulate lens-based recordings with the intent of mimicking photographs. The harmfulness of deepfakes, moreover, significantly depends on their potential to be mistaken for photographs and on the belief that photographs capture actual events, a tenet known as the transparency thesis, which scholars have somewhat ironically attacked (...)
    1 citation
  14. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, (...)
  15. The Ethics and Epistemology of Deepfakes. Taylor Matthews & Ian James Kidd - 2024 - In Carl Fox & Joe Saunders (eds.), Routledge Handbook of Philosophy and Media Ethics. Routledge.
  16. Skepticism and the Digital Information Environment. Matthew Carlson - 2021 - SATS 22 (2):149-167.
    Deepfakes are audio, video, or still-image digital artifacts created by the use of artificial intelligence technology, as opposed to traditional means of recording. Because deepfakes can look and sound much like genuine digital recordings, they have entered the popular imagination as sources of serious epistemic problems for us, as we attempt to navigate the increasingly treacherous digital information environment of the internet. In this paper, I attempt to clarify what epistemic problems deepfakes pose and why they pose (...)
    2 citations
  17. The semiotic functioning of synthetic media. Auli Viidalepp - 2022 - Információs Társadalom 4:109-118.
    The interpretation of many texts in the everyday world is concerned with their truth value in relation to the reality around us. Recent experiments with publishing computer-generated texts have shown that the distinction between true and false, or reality and fiction, is not always clear from the text itself. Essentially, in today’s media space, one may encounter texts, videos or images that deceive the reader by displaying nonsensical content or nonexistent events, while nevertheless appearing as genuine human-produced messages. This (...)
    2 citations
  18. Deep learning and synthetic media. Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic (...)
    2 citations
  19. A MACRO-SHIFTED FUTURE: PREFERRED OR ACCIDENTALLY POSSIBLE IN THE CONTEXT OF ADVANCES IN ARTIFICIAL INTELLIGENCE SCIENCE AND TECHNOLOGY. Albert Efimov - 2023 - In Наука и феномен человека в эпоху цивилизационного Макросдвига [Science and the Human Phenomenon in the Era of the Civilizational Macroshift]. Moscow: pp. 748.
    This article is devoted to topical aspects of the transformation of society, science, and man in the context of E. László’s work «Macroshift». The author offers his own attempt to consider the attributes of the macroshift and then uses these attributes to operationalize further analysis, highlighting three essential elements: the world has come to a situation of technological indistinguishability between the natural and the artificial, and to machines that know everything about humans. Antiquity aspired to beauty and saw beauty in realistic (...)
  20. Imam Mahdi Miracles. Reza Rezaie Khanghah - 2024 - Qeios.
    In the story of Moses and Pharaoh, the magicians who were there became the first believers in Moses because they believed in the miraculous power of Moses, which was from Allah. In fact, those sticks (the sticks of the magicians) did not turn into snakes, but were seen by others as snakes. When Moses dropped his stick and it turned into a snake, the sorcerers realized that the stick had become a real snake, and that is why they believed Moses. Today, this magic (...)
  21. The politics of past and future: synthetic media, showing, and telling. Megan Hyska - forthcoming - Philosophical Studies:1-22.
    Generative artificial intelligence has given us synthetic media that are increasingly easy to create and increasingly hard to distinguish from photographs and videos. Whereas an existing literature has been concerned with how these new media might make a difference for would-be knowers—the viewers of photographs and videos—I advance a thesis about how they will make a difference for would-be communicators—those who embed photos and videos in their speech acts. I claim that the presence of these media in our information environment (...)
  22. Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge and that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  23. The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls are ‘Necessarily’ Equal When Considered as Energy). Tyler Jaynes - 2021 - J (2571-8800) 3 (4):452-475.
    What separates the unique nature of human consciousness from that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they (...)