  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge and that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  • Advancing the debate on the consequences of misinformation: clarifying why it’s not (just) about false beliefs. Maarten van Doorn - 2023 - Inquiry: An Interdisciplinary Journal of Philosophy 1.
    The debate on whether and why misinformation is bad focuses primarily on the spread of false beliefs as its main harm. Starting from the assumption that misinformation primarily causes harm through the spread of false beliefs, it has been contended that the problem of misinformation has been exaggerated: its tendency to generate false beliefs appears to be limited. However, the near-exclusive focus on whether or not misinformation dupes people with false beliefs neglects other epistemic harms associated with (...)
  • Liars and Trolls and Bots Online: The Problem of Fake Persons. Keith Raymond Harris - 2023 - Philosophy and Technology 36 (2):1-19.
    This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of (...)
  • Should we Trust Our Feeds? Social Media, Misinformation, and the Epistemology of Testimony. Charles Côté-Bouchard - 2024 - Topoi 43 (5):1469-1486.
    When should you believe testimony that you receive from your social media feeds? One natural answer is suggested by non-reductionism in the epistemology of testimony. Avoid accepting social media testimony if you have an undefeated defeater for it. Otherwise, you may accept it. I argue that this is too permissive to be an adequate epistemic policy because social media have some characteristics that tend to facilitate the efficacy of misinformation on those platforms. I formulate and defend an alternative epistemic policy (...)
  • Social Evidence Tampering and the Epistemology of Content Moderation. Keith Raymond Harris - 2024 - Topoi 43 (5):1421-1431.
    Social media misinformation is widely thought to pose a host of threats to the acquisition of knowledge. One response to these threats is to remove misleading information from social media and to de-platform those who spread it. While content moderation of this sort has been criticized on various grounds—including potential incompatibility with free expression—the epistemic case for the removal of misinformation from social media has received little scrutiny. Here, I provide an overview of some costs and benefits of the removal (...)
  • Synthetic Media Detection, the Wheel, and the Burden of Proof. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (4):1-20.
    Deepfakes and other forms of synthetic media are widely regarded as serious threats to our knowledge of the world. Various technological responses to these threats have been proposed. The reactive approach proposes to use artificial intelligence to identify synthetic media. The proactive approach proposes to use blockchain and related technologies to create immutable records of verified media content. I argue that both approaches, but especially the reactive approach, are vulnerable to a problem analogous to the ancient problem of the criterion—a (...)