  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge.Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, and that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  • Real is the new sexy: the influence of perceived realness on self-reported arousal to sexual visual stimuli.Marco Marini, Alessandro Ansani, Alessandro Demichelis, Giovanna Mancini, Fabio Paglieri & Marco Viola - forthcoming - Cognition and Emotion.
    As state-of-art technology can create artificial images that are indistinguishable from real ones, it is urgent to understand whether believing that a picture is real or not has some import over affective phenomena such as sexual arousal. Thus, in two pre-registered online studies, we tested whether 60 images depicting models in underwear elicited higher self-reported sexual arousal when believed to be (N = 57) or presented as (N = 108) real photographs as opposed to artificially generated. In both cases, Realness (...)
  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors.Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  • Deepfake Pornography and the Ethics of Non-Veridical Representations.Daniel Story & Ryan Jenkins - 2023 - Philosophy and Technology 36 (3):1-22.
    We investigate the question of whether (and if so why) creating or distributing deepfake pornography of someone without their consent is inherently objectionable. We argue that nonconsensually distributing deepfake pornography of a living person on the internet is inherently pro tanto wrong in virtue of the fact that nonconsensually distributing intentionally non-veridical representations about someone violates their right that their social identity not be tampered with, a right which is grounded in their interest in being able to exercise autonomy over (...)
  • Conceptual and moral ambiguities of deepfakes: a decidedly old turn.Matthew Crippen - 2023 - Synthese 202 (1):1-18.
    Everyday (mis)uses of deepfakes define prevailing conceptualizations of what they are and the moral stakes in their deployment. But one complication in understanding deepfakes is that they are not photographic yet nonetheless manipulate lens-based recordings with the intent of mimicking photographs. The harmfulness of deepfakes, moreover, significantly depends on their potential to be mistaken for photographs and on the belief that photographs capture actual events, a tenet known as the transparency thesis, which scholars have somewhat ironically attacked by citing digital (...)
  • Mechanisms of Techno-Moral Change: A Taxonomy and Overview.John Danaher & Henrik Skaug Sætra - 2023 - Ethical Theory and Moral Practice 26 (5):763-784.
    The idea that technologies can change moral beliefs and practices is an old one. But how, exactly, does this happen? This paper builds on an emerging field of inquiry by developing a synoptic taxonomy of the mechanisms of techno-moral change. It argues that technology affects moral beliefs and practices in three main domains: decisional (how we make morally loaded decisions), relational (how we relate to others) and perceptual (how we perceive situations). It argues that across these three domains there are (...)
  • Liars and Trolls and Bots Online: The Problem of Fake Persons.Keith Raymond Harris - 2023 - Philosophy and Technology 36 (2):1-19.
    This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of (...)
  • Deepfakes and the epistemic apocalypse.Joshua Habgood-Coote - 2023 - Synthese 201 (3):1-23.
    [Author note: There is a video explainer of this paper on YouTube at the New Work in Philosophy channel (search for surname+deepfakes).] -/- It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented _epistemic apocalypse_. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue for three (...)
  • Deepfakes, Fake Barns, and Knowledge from Videos.Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found in traditional cases. Given (...)
  • Deepfakes, shallow epistemic graves: On the epistemic robustness of photography and videos in the era of deepfakes.Paloma Atencia-Linares & Marc Artiga - 2022 - Synthese 200 (6):1-22.
    The recent proliferation of deepfakes and other digitally produced deceptive representations has revived the debate on the epistemic robustness of photography and other mechanically produced images. Authors such as Rini (2020) and Fallis (2021) claim that the proliferation of deepfakes poses a serious threat to the reliability and the epistemic value of photographs and videos. In particular, Fallis adopts a Skyrmsian account of how signals carry information (Skyrms, 2010) to argue that the existence of deepfakes significantly reduces the information that (...)
  • Deceiving versus manipulating: An evidence‐based definition of deception.Don Fallis - 2024 - Analytic Philosophy 65 (2):223-240.
    What distinguishes deception from manipulation? Cohen (Australasian Journal of Philosophy, 96, 483, 2018) proposes a new answer and explores its ethical implications. Appealing to new cases of “non‐deceptive manipulation” that involve intentionally causing a false belief, he offers a new definition of deception in terms of communication that rules out these counterexamples to the traditional definition. And, he leverages this definition in support of the claim that deception “carries heavier moral weight” than manipulation. In this paper, I argue that (...)
  • Real Fakes: The Epistemology of Online Misinformation.Keith Raymond Harris - 2022 - Philosophy and Technology 35 (3):1-24.
    Many of our beliefs are acquired online. Online epistemic environments are replete with fake news, fake science, fake photographs and videos, and fake people in the form of trolls and social bots. The purpose of this paper is to investigate the threat that such online fakes pose to the acquisition of knowledge. I argue that fakes can interfere with one or more of the truth, belief, and warrant conditions on knowledge. I devote most of my attention to the effects of (...)
  • Technology and moral change: the transformation of truth and trust.Henrik Skaug Sætra & John Danaher - 2022 - Ethics and Information Technology 24 (3):1-16.
    Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by (...)
  • The identification game: deepfakes and the epistemic limits of identity.Carl Öhman - 2022 - Synthese 200 (4):1-19.
    The fast development of synthetic media, commonly known as deepfakes, has cast new light on an old problem, namely—to what extent do people have a moral claim to their likeness, including personally distinguishing features such as their voice or face? That people have at least some such claim seems uncontroversial. In fact, several jurisdictions already combat deepfakes by appealing to a “right to identity.” Yet, an individual’s disapproval of appearing in a piece of synthetic media is sensible only insofar as (...)
  • Deepfakes, Intellectual Cynics, and the Cultivation of Digital Sensibility.Taylor Matthews - 2022 - Royal Institute of Philosophy Supplement 92:67-85.
    In recent years, a number of philosophers have turned their attention to developments in Artificial Intelligence, and in particular to deepfakes. A deepfake is a portmanteau of ‘deep learning’ and ‘fake’, and for the most part they are videos which depict people doing and saying things they never did. As a result, much of the emerging literature on deepfakes has turned on questions of trust, harms, and information-sharing. In this paper, I add to the emerging concerns around deepfakes by drawing (...)
  • Augmented Reality, Augmented Epistemology, and the Real-World Web.Cody Turner - 2022 - Philosophy and Technology 35 (1):1-28.
    Augmented reality (AR) technologies function to ‘augment’ normal perception by superimposing virtual objects onto an agent’s visual field. The philosophy of augmented reality is a small but growing subfield within the philosophy of technology. Existing work in this subfield includes research on the phenomenology of augmented experiences, the metaphysics of virtual objects, and different ethical issues associated with AR systems, including (but not limited to) issues of privacy, property rights, ownership, trust, and informed consent. This paper addresses some epistemological issues (...)
  • Reality+: Virtual Worlds and the Problems of Philosophy.David John Chalmers - 2022 - New York: W. W. Norton.
    A leading philosopher takes a mind-bending journey through virtual worlds, illuminating the nature of reality and our place within it. Virtual reality is genuine reality; that's the central thesis of Reality+. In a highly original work of "technophilosophy," David J. Chalmers gives a compelling analysis of our technological future. He argues that virtual worlds are not second-class worlds, and that we can live a meaningful life in virtual reality. We may even be in a virtual world already. Along the way, (...)
  • Skepticism and the Digital Information Environment.Matthew Carlson - 2021 - SATS 22 (2):149-167.
    Deepfakes are audio, video, or still-image digital artifacts created by the use of artificial intelligence technology, as opposed to traditional means of recording. Because deepfakes can look and sound much like genuine digital recordings, they have entered the popular imagination as sources of serious epistemic problems for us, as we attempt to navigate the increasingly treacherous digital information environment of the internet. In this paper, I attempt to clarify what epistemic problems deepfakes pose and why they pose these problems, by (...)
  • The Judging Spectator and Forensic Video Analysis: Technological Implications for How We Think and Administer Justice.Justin T. Piccorelli - 2021 - Philosophy and Technology 34 (4):1517-1529.
    The philosophic spectator watches from a distance as a “disinterested” and impartial member of an audience (Lectures on Kant’s Political Philosophy, University of Chicago Press, 1992; Kant, On History, Prentice Hall Inc., 1957). Judicial systems use many of the elements of the spectator in the concept of an eyewitness but, with increased video technology use, the courts have taken the witness a step further by hiring forensic video analysts. The analyst’s stance is rooted in objectivity, and the process of breaking (...)
  • Deep learning and synthetic media.Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic audiovisual (...)
  • Video on demand: what deepfakes do and how they harm.Keith Raymond Harris - 2021 - Synthese 199 (5-6):13373-13391.
    This paper defends two main theses related to emerging deepfake technology. First, fears that deepfakes will bring about epistemic catastrophe are overblown. Such concerns underappreciate that the evidential power of video derives not solely from its content, but also from its source. An audience may find even the most realistic video evidence unconvincing when it is delivered by a dubious source. At the same time, an audience may find even weak video evidence compelling so long as it is delivered by (...)
  • The Distinct Wrong of Deepfakes.Adrienne de Ruiter - 2021 - Philosophy and Technology 34 (4):1311-1332.
    Deepfake technology presents significant ethical challenges. The ability to produce realistic looking and sounding video or audio files of people doing or saying things they did not do or say brings with it unprecedented opportunities for deception. The literature that addresses the ethical implications of deepfakes raises concerns about their potential use for blackmail, intimidation, and sabotage, ideological influencing, and incitement to violence as well as broader implications for trust and accountability. While this literature importantly identifies and signals the potentially (...)
  • Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions.Nadisha-Marie Aliman, Leon Kester & Roman Yampolskiy - 2021 - Philosophies 6 (1):6.
    In the last years, artificial intelligence (AI) safety gained international recognition in the light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently _transdisciplinary_ AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on-advice utilizing _concrete practical examples_. Distinguishing between unintentionally and intentionally triggered AI risks (...)
  • Infrastructural justice for responsible software engineering.Sarah Robinson, Jim Buckley, Luigina Ciolfi, Conor Linehan, Clare McInerney, Bashar Nuseibeh, John Twomey, Irum Rauf & John McCarthy - 2024 - Journal of Responsible Technology 19 (C):100087.
  • Facing Immersive “Post-Truth” in AIVR?Nadisha-Marie Aliman & Leon Kester - 2020 - Philosophies 5 (4):45.
    In recent years, prevalent global societal issues related to fake news, fakery, misinformation, and disinformation were brought to the fore, leading to the construction of descriptive labels such as “post-truth” to refer to the supposedly new emerging era. Thereby, the (mis-)use of technologies such as AI and VR has been argued to potentially fuel this new loss of “ground-truth”, for instance, via the ethically relevant deepfakes phenomena and the creation of realistic fake worlds, presumably undermining experiential veracity. Indeed, _unethical_ and (...)