Results for 'Deepfake'

32 found
  1. Deepfakes and the Epistemic Backstop. Regina Rini - 2020 - Philosophers' Imprint 20 (24):1-16.
    Deepfake technology uses machine learning to fabricate video and audio recordings that represent people doing and saying things they've never done. In coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of (...)
    31 citations
  2. Deepfakes: a survey and introduction to the topical collection. Dan Cavedon-Taylor - 2024 - Synthese 204 (1):1-19.
    Deepfakes are extremely realistic audio/video media. They are produced via a complex machine-learning process, one that centrally involves training an algorithm on hundreds or thousands of audio/video recordings of an object or person, S, with the aim of either creating entirely new audio/video media of S or else altering existing audio/video media of S. Deepfakes are widely predicted to have deleterious consequences (principally, moral and epistemic ones) for both individuals and various of our social practices and institutions. In this introduction (...)
  3. Deepfake detection by human crowds, machines, and machine-informed crowds. Matthew Groh, Ziv Epstein, Chaz Firestone & Rosalind Picard - 2022 - Proceedings of the National Academy of Sciences 119 (1):e2110013119.
    The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s (...)
    3 citations
  4. Deepfakes, Deep Harms. Regina Rini & Leah Cohen - 2022 - Journal of Ethics and Social Philosophy 22 (2).
    Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance on to that of another, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have been appreciated. First, we consider a form of objectification that occurs in deepfaked ‘frankenporn’ that digitally fuses the parts of (...)
    5 citations
  5. Deepfakes, Public Announcements, and Political Mobilization. Megan Hyska - forthcoming - In Tamar Szabó Gendler, John Hawthorne, Julianne Chung & Alex Worsnip (eds.), Oxford Studies in Epistemology, Vol. 8. Oxford University Press.
    This paper takes up the question of how videographic public announcements (VPAs)---i.e. videos that a wide swath of the public sees and knows that everyone else can see too--- have functioned to mobilize people politically, and how the presence of deepfakes in our information environment stands to change the dynamics of this mobilization. Existing work by Regina Rini, Don Fallis and others has focused on the ways that deepfakes might interrupt our acquisition of first-order knowledge through videos. But I point (...)
  6. Deepfakes and the epistemic apocalypse. Joshua Habgood-Coote - 2023 - Synthese 201 (3):1-23.
    [Author note: There is a video explainer of this paper on YouTube at the New Work in Philosophy channel (search for surname+deepfakes).] It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented epistemic apocalypse. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue (...)
    5 citations
  7. Deepfakes, Fake Barns, and Knowledge from Videos. Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found in traditional cases. Given (...)
    2 citations
  8. Deepfakes, Intellectual Cynics, and the Cultivation of Digital Sensibility. Taylor Matthews - 2022 - Royal Institute of Philosophy Supplement 92:67-85.
    In recent years, a number of philosophers have turned their attention to developments in Artificial Intelligence, and in particular to deepfakes. The term ‘deepfake’ is a portmanteau of ‘deep learning’ and ‘fake’, and for the most part deepfakes are videos which depict people doing and saying things they never did. As a result, much of the emerging literature on deepfakes has turned on questions of trust, harms, and information-sharing. In this paper, I add to the emerging concerns around deepfakes by (...)
    3 citations
  9. (1 other version) Deepfake Technology and Individual Rights. Francesco Stellin Sturino - 2023 - Social Theory and Practice 49 (1):161-187.
    Deepfake technology can be used to produce highly authentic-seeming videos of real individuals saying and doing things that they never in fact said or did. Having accepted the premise that deepfake content can constitute a legitimate form of expression, it is not immediately clear where the rights of content producers and distributors end, and where the rights of individuals whose likenesses are used in this content begin. This paper explores the question of whether it can be (...)
  10. Freedom of expression meets deepfakes. Alex Barber - 2023 - Synthese 202 (40):1-17.
    Would suppressing deepfakes violate freedom of expression norms? The question is pressing because the deepfake phenomenon in its more poisonous manifestations appears to call for a response, and automated targeting of some kind looks to be the most practically viable. Two simple answers are rejected: that deepfakes do not deserve protection under freedom of expression legislation because they are fake by definition; and that deepfakes can be targeted if but only if they are misleadingly presented as authentic. To make (...)
  11. Deepfakes, Simone Weil, and the concept of reading. Steven R. Kraaijeveld - forthcoming - AI and Society:1-3.
  12. Deepfakes, engaño y desconfianza [Deepfakes, deception, and distrust]. David Villena - 2023 - Filosofía En la Red.
  13. Artificial intelligence, deepfakes and a future of ectypes. Luciano Floridi - 2018 - Philosophy and Technology 31 (3):317-321.
    AI, especially in the case of deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use in (...)
    13 citations
  14. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  15. Epistemic Doom In The Deepfake Era. Nadisha-Marie Aliman - manuscript
    This epistemic project examines an understudied existential risk emerging in the deepfake era: the fortunately still reversible (but not indefinitely so) peril of humanity's epistemic self-sabotage through an overestimation of algorithms linked to quantitative aspects and a paired underestimation of its own epistemic potential, whose manifestations are in principle expressible via scientifically analyzable but currently often neglected qualitative facets. This scenario is metaphorically referred to as the "π-Doom scenario". Instead of carefully crafting opaque hypotheses and formulating probabilistic (...)
  16. How to do things with deepfakes. Tom Roberts - 2023 - Synthese 201 (2):1-18.
    In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with (...)
  17. Conceptual and moral ambiguities of deepfakes: a decidedly old turn. Matthew Crippen - 2023 - Synthese 202 (1):1-18.
    Everyday (mis)uses of deepfakes define prevailing conceptualizations of what they are and the moral stakes in their deployment. But one complication in understanding deepfakes is that they are not photographic yet nonetheless manipulate lens-based recordings with the intent of mimicking photographs. The harmfulness of deepfakes, moreover, significantly depends on their potential to be mistaken for photographs and on the belief that photographs capture actual events, a tenet known as the transparency thesis, which scholars have somewhat ironically attacked by citing digital (...)
    1 citation
  18. Legal Definitions of Intimate Images in the Age of Sexual Deepfakes and Generative AI. Suzie Dunn - 2024 - McGill Law Journal 69:1-15.
    In January 2024, non-consensual deepfakes came to public attention with the spread of AI-generated sexually abusive images of Taylor Swift. Although this brought newfound energy to the debate on what some call non-consensual synthetic intimate images (i.e. images that use technology such as AI or Photoshop to make sexual images of a person without their consent), female celebrities like Swift have had deepfakes like these made of them for years. In 2017, a Reddit user named “deepfakes” posted several (...)
  19. The Ethics and Epistemology of Deepfakes. Taylor Matthews & Ian James Kidd - 2024 - In Carl Fox & Joe Saunders (eds.), Routledge Handbook of Philosophy and Media Ethics. Routledge.
  20. Skepticism and the Digital Information Environment. Matthew Carlson - 2021 - SATS 22 (2):149-167.
    Deepfakes are audio, video, or still-image digital artifacts created by the use of artificial intelligence technology, as opposed to traditional means of recording. Because deepfakes can look and sound much like genuine digital recordings, they have entered the popular imagination as sources of serious epistemic problems for us, as we attempt to navigate the increasingly treacherous digital information environment of the internet. In this paper, I attempt to clarify what epistemic problems deepfakes pose and why they pose these problems, by (...)
    2 citations
  21. The semiotic functioning of synthetic media. Auli Viidalepp - 2022 - Információs Társadalom 4:109-118.
    The interpretation of many texts in the everyday world is concerned with their truth value in relation to the reality around us. Recent experiments with publishing computer-generated texts have shown that the distinction between true and false, or reality and fiction, is not always clear from the text itself. Essentially, in today’s media space, one may encounter texts, videos or images that deceive the reader by displaying nonsensical content or nonexistent events, while nevertheless appearing as genuine human-produced messages. This (...)
    2 citations
  22. Condensation of Algorithmic Supremacy Claims. Nadisha-Marie Aliman - manuscript
    In the presently unfolding deepfake era, previously unrelated algorithmic superintelligence possibility claims cannot be scientifically analyzed in isolation anymore due to the connected inevitable epistemic interactions that have already commenced. For instance, deep-learning (DL) related algorithmic supremacy claims may intrinsically compete with both neuro-symbolic (NS) algorithmic and further quantum (Q) algorithmic superintelligence achievement claims. Concurrently, a variety of experimental combinations of DL, NS and Q directions are conceivable. While research on these three illustrative variants did not yet offer any (...)
  23. The Supercomplexity Puzzle. Nadisha-Marie Aliman - manuscript
    In the deepfake era, materialism and idealism seem to clash at multiple epistemic levels with new additional facets unfolding – an epistemic friction which could act as a creativity-stimulating impetus for science and philosophy. Could the information-related concept of supercomplexity be instrumental in better clarifying understudied aspects of the apparent dichotomy? Instead of directly answering this question, this short autodidactic paper compactly analyzes a small but potentially relevant puzzle piece for complexity research, taking the form of an explanatory bridge from (...)
  24. Deep learning and synthetic media. Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic audiovisual (...)
    2 citations
  25. AI-Related Misdirection Awareness in AIVR. Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress has led to a boost in beneficial applications across multiple research areas, including VR. Simultaneously, in this newly unfolding deepfake era, ethical and security-relevant disagreements have arisen in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own (...)
  26. Acentric Intelligence. Nadisha-Marie Aliman - manuscript
    The generation of novel refined scientific conceptions of intelligence, creativity and consciousness is of paramount importance at a time when many scientists deem the technological singularity and the achievement of self-improving superintelligent algorithms to be imminent, while numerous other scientists characterize present-day algorithms as the mere implementation of superficial mimicry incapable of yielding outcomes such as superintelligence. The precarious epistemic state of affairs reflected in this discrepancy became increasingly palpable in the unfolding deepfake era, even though informed safety- and (...)
  27. Epistemic Zeno Effect. Nadisha-Marie Aliman - manuscript
    This short autodidactic paper compactly introduces the epistemic algorithmic computation (EaC) paradox, a novel analogy to the Turing paradox. Firstly, it is elucidated why in the deepfake era, crafting a provisional solution to the EaC paradox may be beneficent as it may shed more light on one ingrained consequence of the prevailing algorithmic supremacy paradigm: the retrospective obsolescence of the entire biosphere in the game of life precipitated by algorithms instantiated on inert matter. Secondly, the paper analyzes and deconstructs (...)
  28. A Macro-Shifted Future: Preferred or Accidentally Possible in the Context of Advances in Artificial Intelligence Science and Technology. Albert Efimov - 2023 - In Наука и феномен человека в эпоху цивилизационного Макросдвига [Science and the Human Phenomenon in the Era of the Civilizational Macroshift]. Moscow: pp. 748.
    This article is devoted to the topical aspects of the transformation of society, science, and man in the context of E. László’s work «Macroshift». The author offers his own attempt to consider the attributes of macroshift and then use these attributes to operationalize further analysis, highlighting three essential elements: the world has come to a situation of technological indistinguishability between the natural and the artificial, to machines that know everything about humans. Antiquity aspired to beauty and saw beauty in realistic (...)
  29. Imam Mahdi Miracles. Reza Rezaie Khanghah - 2024 - Qeios.
    In the story of Moses and Pharaoh, the magicians who were there became the first believers in Moses because they believed in the miraculous power of Moses, which was from Allah. In fact, those sticks (the magicians' sticks) did not turn into snakes, but were seen by others as snakes. When Moses dropped his stick and it turned into a snake, the sorcerers realized that the stick had become a real snake, and that is why they believed Moses. Today, this magic (...)
  30. The politics of past and future: synthetic media, showing, and telling. Megan Hyska - forthcoming - Philosophical Studies:1-22.
    Generative artificial intelligence has given us synthetic media that are increasingly easy to create and increasingly hard to distinguish from photographs and videos. Whereas an existing literature has been concerned with how these new media might make a difference for would-be knowers—the viewers of photographs and videos—I advance a thesis about how they will make a difference for would-be communicators—those who embed photos and videos in their speech acts. I claim that the presence of these media in our information environment (...)
  31. Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge and that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  32. (1 other version) The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls are ‘Necessarily’ Equal When Considered as Energy). Tyler Jaynes - 2021 - J (2571-8800) 3 (4):452-475.
    What separates the unique nature of human consciousness from that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they (...)