  • Synthetic Media Detection, the Wheel, and the Burden of Proof. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (4):1-20.
    Deepfakes and other forms of synthetic media are widely regarded as serious threats to our knowledge of the world. Various technological responses to these threats have been proposed. The reactive approach proposes to use artificial intelligence to identify synthetic media. The proactive approach proposes to use blockchain and related technologies to create immutable records of verified media content. I argue that both approaches, but especially the reactive approach, are vulnerable to a problem analogous to the ancient problem of the criterion—a (...)
  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, while arguing that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  • The expected AI as a sociocultural construct and its impact on the discourse on technology. Auli Viidalepp - 2023 - Dissertation, University of Tartu.
    The thesis introduces and criticizes the discourse on technology, with a specific reference to the concept of AI. The discourse on AI is particularly saturated with reified metaphors which drive connotations and delimit understandings of technology in society. To better analyse the discourse on AI, the thesis proposes the concept of “Expected AI”, a composite signifier filled with historical and sociocultural connotations, and numerous referent objects. Relying on cultural semiotics, science and technology studies, and a diverse selection of heuristic concepts, (...)
  • Advancing the debate on the consequences of misinformation: clarifying why it’s not (just) about false beliefs. Maarten van Doorn - 2023 - Inquiry: An Interdisciplinary Journal of Philosophy 1.
    The debate on whether and why misinformation is bad primarily focuses on the spread of false beliefs as its main harm. Taking this assumption as a starting point, some have contended that the problem of misinformation has been exaggerated, since its tendency to generate false beliefs appears to be limited. However, the near-exclusive focus on whether or not misinformation dupes people with false beliefs neglects other epistemic harms associated with (...)