Results for 'deepfakes'

38 found
  1. Deepfakes: a survey and introduction to the topical collection. Dan Cavedon-Taylor - 2024 - Synthese 204 (1):1-19.
    Deepfakes are extremely realistic audio/video media. They are produced via a complex machine-learning process, one that centrally involves training an algorithm on hundreds or thousands of audio/video recordings of an object or person, S, with the aim of either creating entirely new audio/video media of S or else altering existing audio/video media of S. Deepfakes are widely predicted to have deleterious consequences (principally, moral and epistemic ones) for both individuals and various of our social practices and institutions. In (...)
    1 citation
  2. Deepfakes and the Epistemic Backstop. Regina Rini - 2020 - Philosophers' Imprint 20 (24):1-16.
    Deepfake technology uses machine learning to fabricate video and audio recordings that represent people doing and saying things they've never done. In coming years, malicious actors will likely use this technology in attempts to manipulate public discourse. This paper prepares for that danger by explicating the unappreciated way in which recordings have so far provided an epistemic backstop to our testimonial practices. Our reasonable trust in the testimony of others depends, to a surprising extent, on the regulative effects of the (...)
    35 citations
  3. Deepfakes, Pornography and Consent. Claire Benn - forthcoming - Philosophers' Imprint.
    Political deepfakes have prompted outcry about the diminishing trustworthiness of visual depictions, and the epistemic and political threat this poses. Yet this new technique is being used overwhelmingly to create pornography, raising the question of what, if anything, is wrong with the creation of deepfake pornography. Traditional objections focusing on the sexual abuse of those depicted fail to apply to deepfakes. Other objections—that the use and consumption of pornography harms the viewer or other (non-depicted) individuals—fail to explain the (...)
  4. Deepfake detection by human crowds, machines, and machine-informed crowds. Matthew Groh, Ziv Epstein, Chaz Firestone & Rosalind Picard - 2022 - Proceedings of the National Academy of Sciences 119 (1):e2110013119.
    The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s (...)
    3 citations
  5. Deepfakes, Deep Harms. Regina Rini & Leah Cohen - 2022 - Journal of Ethics and Social Philosophy 22 (2).
    Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance on to that of another, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have been appreciated. First, we consider a form of objectification that occurs in deepfaked ‘frankenporn’ that digitally fuses the parts (...)
    6 citations
  6. Deepfakes and Dishonesty. Tobias Flattery & Christian B. Miller - 2024 - Philosophy and Technology 37 (120):1-24.
    Deepfakes raise various concerns: risks of political destabilization, depictions of persons without consent and causing them harms, erosion of trust in video and audio as reliable sources of evidence, and more. These concerns have been the focus of recent work in the philosophical literature on deepfakes. However, there has been almost no sustained philosophical analysis of deepfakes from the perspective of concerns about honesty and dishonesty. That deepfakes are potentially deceptive is unsurprising and has been noted. (...)
  7. Deepfake Technology and Individual Rights. Francesco Stellin Sturino - 2023 - Social Theory and Practice 49 (1):161-187.
    Deepfake technology can be used to produce highly authentic-looking videos of real individuals saying and doing things that they never in fact said or did. If we accept the premise that deepfake content can constitute a legitimate form of expression, it is not immediately clear where the rights of content producers and distributors end, and where the rights of individuals whose likenesses are used in this content begin. This paper explores the question of whether it can be plausibly argued (...)
  8. Deepfakes, Public Announcements, and Political Mobilization. Megan Hyska - forthcoming - In Tamar Szabó Gendler, John Hawthorne, Julianne Chung & Alex Worsnip (eds.), Oxford Studies in Epistemology, Vol. 8. Oxford University Press.
    This paper takes up the question of how videographic public announcements (VPAs)---i.e. videos that a wide swath of the public sees and knows that everyone else can see too--- have functioned to mobilize people politically, and how the presence of deepfakes in our information environment stands to change the dynamics of this mobilization. Existing work by Regina Rini, Don Fallis and others has focused on the ways that deepfakes might interrupt our acquisition of first-order knowledge through videos. But (...)
  9. Deepfakes and the epistemic apocalypse. Joshua Habgood-Coote - 2023 - Synthese 201 (3):1-23.
    [Author note: There is a video explainer of this paper on youtube at the new work in philosophy channel (search for surname+deepfakes).] -/- It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented _epistemic apocalypse_. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue (...)
    6 citations
  10. Deepfakes, Fake Barns, and Knowledge from Videos. Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found (...)
    4 citations
  11. Deepfakes, Intellectual Cynics, and the Cultivation of Digital Sensibility. Taylor Matthews - 2022 - Royal Institute of Philosophy Supplement 92:67-85.
    In recent years, a number of philosophers have turned their attention to developments in Artificial Intelligence, and in particular to deepfakes. ‘Deepfake’ is a portmanteau of ‘deep learning’ and ‘fake’, and for the most part deepfakes are videos which depict people doing and saying things they never did. As a result, much of the emerging literature on deepfakes has turned on questions of trust, harms, and information-sharing. In this paper, I add to the emerging concerns around deepfakes by drawing on resources from vice epistemology. As deepfakes become more sophisticated, I claim, they will develop to be a source of online epistemic corruption. More specifically, they will encourage consumers of digital online media to cultivate and manifest various epistemic vices. My immediate focus in this paper is on their propensity to encourage the development of what I call ‘intellectual cynicism’. After sketching a rough account of this epistemic vice, I go on to suggest that we can partially offset such cynicism – and fears around deceptive online media more generally – by encouraging the development of what I term a trained ‘digital sensibility’. This, I contend, involves a calibrated sensitivity to the epistemic merits of online content.
    4 citations
  12. Deepfakes, engaño y desconfianza [Deepfakes, deception, and distrust]. David Villena - 2023 - Filosofía En la Red.
  13. Deepfakes, Simone Weil, and the concept of reading. Steven R. Kraaijeveld - forthcoming - AI and Society:1-3.
  14. The Deepfake Universe Apocalypse? Nadisha-Marie Aliman & Leon Kester - manuscript
    Could 2024 be the year heralding what one could term the deepfake universe apocalypse scenario, or could it be the year that a future history of science may, for example, interpret as the year of the first literally universe-sized algorithmic hype bubble? This commentary introduces the metaphor of "GPT-Universe" and the assumptions hidden beneath it.
  15. Bending Deepfake Geometry? Nadisha-Marie Aliman - manuscript
    This autodidactic paper wraps up an earlier epistemic art project and compactly collates the main unfolded scientific and philosophical strategies for epistemic resiliency against epistemic doom in the deepfake era. Retrospectively speaking, the existence of a dense condensate within which explanatory blockchain (EB) based science, EB-based philosophy and EB-based art overlap acts as a pointer to untapped non-algorithmic epistemic resources that could (if ever activated) exhibit the natural tendency to compel the reach of algorithmic computations – noticeably at the "cost" (...)
  16. Freedom of expression meets deepfakes. Alex Barber - 2023 - Synthese 202 (40):1-17.
    Would suppressing deepfakes violate freedom of expression norms? The question is pressing because the deepfake phenomenon in its more poisonous manifestations appears to call for a response, and automated targeting of some kind looks to be the most practically viable. Two simple answers are rejected: that deepfakes do not deserve protection under freedom of expression legislation because they are fake by definition; and that deepfakes can be targeted if but only if they are misleadingly presented as authentic. (...)
  17. Artificial intelligence, deepfakes and a future of ectypes. Luciano Floridi - 2018 - Philosophy and Technology 31 (3):317-321.
    AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use (...)
    15 citations
  18. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, (...)
    1 citation
  19. How to do things with deepfakes. Tom Roberts - 2023 - Synthese 201 (2):1-18.
    In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with one (...)
    1 citation
  20. Epistemic Doom In The Deepfake Era. Nadisha-Marie Aliman - manuscript
    This epistemic project examines an understudied existential risk emerging in the deepfake era: the (fortunately still, though not indefinitely, reversible) peril of humanity’s epistemic self-sabotage through an overestimation of algorithms linked to quantitative aspects and a paired underestimation of its own epistemic potential, whose manifestations are in principle expressible via scientifically analyzable but currently often neglected qualitative facets. This scenario is metaphorically referred to as the "π-Doom scenario". Instead of carefully crafting opaque hypotheses and formulating probabilistic predictions (...)
  21. Conceptual and moral ambiguities of deepfakes: a decidedly old turn. Matthew Crippen - 2023 - Synthese 202 (1):1-18.
    Everyday (mis)uses of deepfakes define prevailing conceptualizations of what they are and the moral stakes in their deployment. But one complication in understanding deepfakes is that they are not photographic yet nonetheless manipulate lens-based recordings with the intent of mimicking photographs. The harmfulness of deepfakes, moreover, significantly depends on their potential to be mistaken for photographs and on the belief that photographs capture actual events, a tenet known as the transparency thesis, which scholars have somewhat ironically attacked (...)
    4 citations
  22. Legal Definitions of Intimate Images in the Age of Sexual Deepfakes and Generative AI. Suzie Dunn - 2024 - McGill Law Journal 69:1-15.
    In January 2024, non-consensual deepfakes came to public attention with the spread of AI-generated sexually abusive images of Taylor Swift. Although this brought newfound energy to the debate on what some call non-consensual synthetic intimate images (i.e. images that use technology such as AI or Photoshop to make sexual images of a person without their consent), female celebrities like Swift have had deepfakes like these made of them for years. In 2017, a Reddit user named “ (...)” posted several videos in which he had used open-source machine learning tools to swap the faces of female celebrities onto the faces of female porn actors, displaying what appeared to be live video footage of the celebrity engaging in sex acts she never engaged in. Since that time, deepfake technology has advanced astronomically. What once were choppy sexualized videos are now nearly flawless videos that can be difficult to distinguish from a real video. According to recent research on deepfakes by Sensity AI, this technology has been used primarily on women to create sexual videos. These women’s sexual autonomy has been co-opted for the purpose of gratifying men’s sexual pleasure, and such deepfakes have also been used in campaigns to delegitimize and humiliate female journalists and politicians. In Canada, civil and criminal legislation has addressed the non-consensual distribution of intimate images, but in only a few provinces – British Columbia, New Brunswick, Prince Edward Island and Saskatchewan – does this offence include altered images that could include deepfakes. This paper explores the evolution of synthetic technology such as AI image generators and Canada’s legal responses to the non-consensual sharing of intimate images.
  23. The Ethics and Epistemology of Deepfakes. Taylor Matthews & Ian James Kidd - 2024 - In Carl Fox & Joe Saunders (eds.), Routledge Handbook of Philosophy and Media Ethics. Routledge.
  24. DIREITO À IMAGEM: O PAPEL DO LEGISLATIVO BRASILEIRO FRENTE À DEEPFAKE [The Right to One's Image: The Role of the Brazilian Legislature in the Face of Deepfakes]. Ana Gessica Sousa Ferreira - 2024 - Dissertation, Universidade Federal Do Ceará
  25. Skepticism and the Digital Information Environment. Matthew Carlson - 2021 - SATS 22 (2):149-167.
    Deepfakes are audio, video, or still-image digital artifacts created by the use of artificial intelligence technology, as opposed to traditional means of recording. Because deepfakes can look and sound much like genuine digital recordings, they have entered the popular imagination as sources of serious epistemic problems for us, as we attempt to navigate the increasingly treacherous digital information environment of the internet. In this paper, I attempt to clarify what epistemic problems deepfakes pose and why they pose (...)
    2 citations
  26. The semiotic functioning of synthetic media. Auli Viidalepp - 2022 - Információs Társadalom 4:109-118.
    The interpretation of many texts in the everyday world is concerned with their truth value in relation to the reality around us. The recent publication experiments with computer-generated texts have shown that the distinction between true and false, or reality and fiction, is not always clear from the text itself. Essentially, in today’s media space, one may encounter texts, videos or images that deceive the reader by displaying nonsensical content or nonexistent events, while nevertheless appearing as genuine human-produced messages. This (...)
    2 citations
  27. Synthetic Media Detection, the Wheel, and the Burden of Proof. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (4):1-20.
    Deepfakes and other forms of synthetic media are widely regarded as serious threats to our knowledge of the world. Various technological responses to these threats have been proposed. The reactive approach proposes to use artificial intelligence to identify synthetic media. The proactive approach proposes to use blockchain and related technologies to create immutable records of verified media content. I argue that both approaches, but especially the reactive approach, are vulnerable to a problem analogous to the ancient problem of the (...)
    1 citation
  28. Condensation of Algorithmic Supremacy Claims. Nadisha-Marie Aliman - manuscript
    In the presently unfolding deepfake era, previously unrelated algorithmic superintelligence possibility claims cannot be scientifically analyzed in isolation anymore due to the connected inevitable epistemic interactions that have already commenced. For instance, deep-learning (DL) related algorithmic supremacy claims may intrinsically compete with both neuro-symbolic (NS) algorithmic and further quantum (Q) algorithmic superintelligence achievement claims. Concurrently, a variety of experimental combinations of DL, NS and Q directions are conceivable. While research on these three illustrative variants did not yet offer any clear (...)
  29. The Supercomplexity Puzzle. Nadisha-Marie Aliman - manuscript
    In the deepfake era, materialism and idealism seem to clash at multiple epistemic levels with new additional facets unfolding – an epistemic friction which could act as creativity-stimulating impetus for science and philosophy. Could the information-related concept of supercomplexity be instrumental in better clarifying understudied aspects of the apparent dichotomy? Instead of directly answering this question, this short autodidactic paper compactly analyzes a small but potentially relevant puzzle piece to complexity research taking the form of an explanatory bridge from complexity (...)
  30. Acentric Intelligence. Nadisha-Marie Aliman - manuscript
    The generation of novel refined scientific conceptions of intelligence, creativity and consciousness is of paramount importance at a time when many scientists deem the technological singularity and the achievement of self-improving superintelligent algorithms to be imminent, while numerous other scientists characterize present-day algorithms as the mere implementation of superficial mimicry incapable of yielding outcomes such as superintelligence. The precarious epistemic state of affairs reflected in this discrepancy became increasingly palpable in the unfolding deepfake era even though informed safety- and security-relevant (...)
  31. Deep learning and synthetic media. Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic (...)
    3 citations
  32. Epistemic Zeno Effect. Nadisha-Marie Aliman - manuscript
    This short autodidactic paper compactly introduces the epistemic algorithmic computation (EaC) paradox, a novel analogy to the Turing paradox. Firstly, it is elucidated why in the deepfake era, crafting a provisional solution to the EaC paradox may be beneficent as it may shed more light on one ingrained consequence of the prevailing algorithmic supremacy paradigm: the retrospective obsolescence of the entire biosphere in the game of life precipitated by algorithms instantiated on inert matter. Secondly, the paper analyzes and deconstructs the (...)
  33. AI-Related Misdirection Awareness in AIVR. Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  34. Imam Mahdi Miracles. Reza Rezaie Khanghah - 2024 - Qeios.
    In the story of Moses and Pharaoh, the magicians who were there became the first believers in Moses because they believed in the miraculous power of Moses, which was from Allah. In fact, those sticks (the magicians' sticks) did not turn into snakes, but were seen by others as snakes. When Moses dropped his stick and it turned into a snake, the sorcerers realized that the stick had become a real snake, and that is why they believed Moses. Today, this magic (...)
  35. A MACRO-SHIFTED FUTURE: PREFERRED OR ACCIDENTALLY POSSIBLE IN THE CONTEXT OF ADVANCES IN ARTIFICIAL INTELLIGENCE SCIENCE AND TECHNOLOGY. Albert Efimov - 2023 - In Наука и феномен человека в эпоху цивилизационного Макросдвига [Science and the Human Phenomenon in the Era of the Civilizational Macroshift]. Moscow: pp. 748.
    This article is devoted to the topical aspects of the transformation of society, science, and man in the context of E. László’s work «Macroshift». The author offers his own attempt to consider the attributes of macroshift and then use these attributes to operationalize further analysis, highlighting three essential elements: the world has come to a situation of technological indistinguishability between the natural and the artificial, to machines that know everything about humans. Antiquity aspired to beauty and saw beauty in realistic (...)
  36. The politics of past and future: synthetic media, showing, and telling. Megan Hyska - forthcoming - Philosophical Studies:1-22.
    Generative artificial intelligence has given us synthetic media that are increasingly easy to create and increasingly hard to distinguish from photographs and videos. Whereas an existing literature has been concerned with how these new media might make a difference for would-be knowers—the viewers of photographs and videos—I advance a thesis about how they will make a difference for would-be communicators—those who embed photos and videos in their speech acts. I claim that the presence of these media in our information environment (...)
  37. Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, while arguing that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  38. (1 other version) The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls are ‘Necessarily' Equal When Considered as Energy). Tyler Jaynes - 2021 - J (2571-8800) 3 (4):452-475.
    What separates the unique nature of human consciousness and that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they (...)