Results for 'deepfakes'

27 results found
  1. Deepfake Technology and Individual Rights. Francesco Stellin Sturino - 2023 - Social Theory and Practice 49 (1):161-187.
    Deepfake technology can be used to produce highly authentic-seeming videos of real individuals saying and doing things that they never in fact said or did. Having accepted the premise that Deepfake content can constitute a legitimate form of expression, it is not immediately clear where the rights of content producers and distributors end, and where the rights of individuals whose likenesses are used in this content begin. This paper explores the question of whether it can be plausibly argued (...)
  2. Deepfake detection by human crowds, machines, and machine-informed crowds. Matthew Groh, Ziv Epstein, Chaz Firestone & Rosalind Picard - 2022 - Proceedings of the National Academy of Sciences 119 (1):e2110013119.
    The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s (...)
    3 citations
  3. Deepfakes: a survey and introduction to the topical collection. Dan Cavedon-Taylor - 2024 - Synthese 204 (1):1-19.
    Deepfakes are extremely realistic audio/video media. They are produced via a complex machine-learning process, one that centrally involves training an algorithm on hundreds or thousands of audio/video recordings of an object or person, S, with the aim of either creating entirely new audio/video media of S or else altering existing audio/video media of S. Deepfakes are widely predicted to have deleterious consequences (principally, moral and epistemic ones) for both individuals and various of our social practices and institutions. In (...)
  4. Deepfakes and the Epistemic Backstop. Regina Rini - 2020 - Philosophers' Imprint 20 (24):1-16.
    Deepfake technology uses machine learning to fabricate video and audio recordings that represent people doing and saying things they've never done. In coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the (...)
    30 citations
  5. Deepfakes and the epistemic apocalypse. Joshua Habgood-Coote - 2023 - Synthese 201 (3):1-23.
    [Author note: There is a video explainer of this paper on YouTube at the New Work in Philosophy channel (search for surname + deepfakes).] It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented _epistemic apocalypse_. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue (...)
    5 citations
  6. Deepfakes, Public Announcements, and Political Mobilization. Megan Hyska - forthcoming - In Tamar Szabó Gendler, John Hawthorne, Julianne Chung & Alex Worsnip (eds.), Oxford Studies in Epistemology, Vol. 8. Oxford University Press.
    This paper takes up the question of how videographic public announcements (VPAs)---i.e. videos that a wide swath of the public sees and knows that everyone else can see too--- have functioned to mobilize people politically, and how the presence of deepfakes in our information environment stands to change the dynamics of this mobilization. Existing work by Regina Rini, Don Fallis and others has focused on the ways that deepfakes might interrupt our acquisition of first-order knowledge through videos. But (...)
  7. Deepfakes, Intellectual Cynics, and the Cultivation of Digital Sensibility. Taylor Matthews - 2022 - Royal Institute of Philosophy Supplement 92:67-85.
    In recent years, a number of philosophers have turned their attention to developments in Artificial Intelligence, and in particular to deepfakes. ‘Deepfake' is a portmanteau of ‘deep learning' and ‘fake', and for the most part deepfakes are videos which depict people doing and saying things they never did. As a result, much of the emerging literature on deepfakes has turned on questions of trust, harms, and information-sharing. In this paper, I add to the emerging concerns around (...) by drawing on resources from vice epistemology. As deepfakes become more sophisticated, I claim, they will develop to be a source of online epistemic corruption. More specifically, they will encourage consumers of digital online media to cultivate and manifest various epistemic vices. My immediate focus in this paper is on their propensity to encourage the development of what I call ‘intellectual cynicism'. After sketching a rough account of this epistemic vice, I go on to suggest that we can partially offset such cynicism – and fears around deceptive online media more generally – by encouraging the development of what I term a trained ‘digital sensibility'. This, I contend, involves a calibrated sensitivity to the epistemic merits of online content.
    3 citations
  8. Deepfakes, Deep Harms. Regina Rini & Leah Cohen - 2022 - Journal of Ethics and Social Philosophy 22 (2).
    Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance on to that of another, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have been appreciated. First, we consider a form of objectification that occurs in deepfaked ‘frankenporn’ that digitally fuses the parts (...)
    4 citations
  9. Deepfakes, Fake Barns, and Knowledge from Videos. Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found (...)
    2 citations
  10. Freedom of expression meets deepfakes. Alex Barber - 2023 - Synthese 202 (40):1-17.
    Would suppressing deepfakes violate freedom of expression norms? The question is pressing because the deepfake phenomenon in its more poisonous manifestations appears to call for a response, and automated targeting of some kind looks to be the most practically viable. Two simple answers are rejected: that deepfakes do not deserve protection under freedom of expression legislation because they are fake by definition; and that deepfakes can be targeted if but only if they are misleadingly presented as authentic. (...)
  11. Deepfakes, engaño y desconfianza [Deepfakes, Deception and Distrust]. David Villena - 2023 - Filosofía En la Red.
  12. How to do things with deepfakes. Tom Roberts - 2023 - Synthese 201 (2):1-18.
    In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with one (...)
  13. Artificial intelligence, deepfakes and a future of ectypes. Luciano Floridi - 2018 - Philosophy and Technology 31 (3):317-321.
    AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use (...)
    12 citations
  14. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, (...)
  15. Legal Definitions of Intimate Images in the Age of Sexual Deepfakes and Generative AI. Suzie Dunn - 2024 - McGill Law Journal 69:1-15.
    In January 2024, non-consensual deepfakes came to public attention with the spread of AI-generated sexually abusive images of Taylor Swift. Although this brought newfound energy to the debate on what some call non-consensual synthetic intimate images (i.e. images that use technology such as AI or Photoshop to make sexual images of a person without their consent), female celebrities like Swift have had deepfakes like these made of them for years. In 2017, a Reddit user named “ (...)” posted several videos in which he had used open-source machine learning tools to swap the faces of female celebrities onto the bodies of female porn actors, displaying what appeared to be live video footage of the celebrity engaging in sex acts she never engaged in. Since that time, deepfake technology has advanced astronomically. What once were choppy sexualized videos are now nearly flawless videos that can be difficult to distinguish from real footage. According to recent research on deepfakes by Sensity AI, this technology has been used primarily on women to create sexual videos. These women’s sexual autonomy has been co-opted for the purpose of gratifying men’s sexual pleasure, and their images have also been used in campaigns to delegitimize and humiliate female journalists and politicians. In Canada, civil and criminal legislation has addressed the non-consensual distribution of intimate images, but in only a few provinces – British Columbia, New Brunswick, Prince Edward Island and Saskatchewan – does this offence include altered images that could include deepfakes. This paper explores the evolution of synthetic technology such as AI image generators and Canada’s legal responses to the non-consensual sharing of intimate images.
  16. Conceptual and moral ambiguities of deepfakes: a decidedly old turn. Matthew Crippen - 2023 - Synthese 202 (1):1-18.
    Everyday (mis)uses of deepfakes define prevailing conceptualizations of what they are and the moral stakes in their deployment. But one complication in understanding deepfakes is that they are not photographic yet nonetheless manipulate lens-based recordings with the intent of mimicking photographs. The harmfulness of deepfakes, moreover, significantly depends on their potential to be mistaken for photographs and on the belief that photographs capture actual events, a tenet known as the transparency thesis, which scholars have somewhat ironically attacked (...)
    1 citation
  17. The Ethics and Epistemology of Deepfakes. Taylor Matthews & Ian James Kidd - 2024 - In Carl Fox & Joe Saunders (eds.), Routledge Handbook of Philosophy and Media Ethics. Routledge.
  18. Skepticism and the Digital Information Environment. Matthew Carlson - 2021 - SATS 22 (2):149-167.
    Deepfakes are audio, video, or still-image digital artifacts created by the use of artificial intelligence technology, as opposed to traditional means of recording. Because deepfakes can look and sound much like genuine digital recordings, they have entered the popular imagination as sources of serious epistemic problems for us, as we attempt to navigate the increasingly treacherous digital information environment of the internet. In this paper, I attempt to clarify what epistemic problems deepfakes pose and why they pose (...)
    2 citations
  19. The semiotic functioning of synthetic media. Auli Viidalepp - 2022 - Információs Társadalom 4:109-118.
    The interpretation of many texts in the everyday world is concerned with their truth value in relation to the reality around us. Recent publication experiments with computer-generated texts have shown that the distinction between true and false, or reality and fiction, is not always clear from the text itself. Essentially, in today’s media space, one may encounter texts, videos or images that deceive the reader by displaying nonsensical content or nonexistent events, while nevertheless appearing as genuine human-produced messages. This (...)
    2 citations
  20. AI-Related Misdirection Awareness in AIVR. Nadisha-Marie Aliman & Leon Kester - manuscript.
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  21. Deep learning and synthetic media. Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic (...)
    2 citations
  22. Imam Mahdi Miracles. Reza Rezaie Khanghah - 2024 - Qeios.
    In the story of Moses and Pharaoh, the magicians who were there became the first believers in Moses because they believed in the miraculous power of Moses, which was from Allah. In fact, those sticks (sticks of magicians) did not turn into snakes, but were seen by others as snakes. When Moses dropped his stick and turned into a snake, the sorcerers realized that the stick had become a real snake, and that is why they believed Moses. Today, this magic (...)
  23. A Macro-Shifted Future: Preferred or Accidentally Possible in the Context of Advances in Artificial Intelligence Science and Technology. Albert Efimov - 2023 - In Наука и феномен человека в эпоху цивилизационного Макросдвига [Science and the Human Phenomenon in the Era of the Civilizational Macroshift]. Moscow: pp. 748.
    This article is devoted to the topical aspects of the transformation of society, science, and man in the context of E. László’s work «Macroshift». The author offers his own attempt to consider the attributes of macroshift and then use these attributes to operationalize further analysis, highlighting three essential elements: the world has come to a situation of technological indistinguishability between the natural and the artificial, to machines that know everything about humans. Antiquity aspired to beauty and saw beauty in realistic (...)
  24. The politics of past and future: synthetic media, showing, and telling. Megan Hyska - forthcoming - Philosophical Studies:1-22.
    Generative artificial intelligence has given us synthetic media that are increasingly easy to create and increasingly hard to distinguish from photographs and videos. Whereas an existing literature has been concerned with how these new media might make a difference for would-be knowers—the viewers of photographs and videos—I advance a thesis about how they will make a difference for would-be communicators—those who embed photos and videos in their speech acts. I claim that the presence of these media in our information environment (...)
  25. Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, while arguing that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  26. The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls are ‘Necessarily’ Equal When Considered as Energy). Tyler Jaynes - 2021 - MDPI: J 3 (4):452-475.
    What separates the unique nature of human consciousness and that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they (...)