  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  • Moderating Synthetic Content: The Challenge of Generative AI. Sarah A. Fisher, Jeffrey W. Howard & Beatriz Kira - 2024 - Philosophy and Technology 37 (4):1-20.
    Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes. Malicious uses of the new technologies can only be expected to proliferate over time. In the face of this threat, social media platforms must surely act. But how? While it is tempting to think they need new sui generis policies targeting synthetic (...)
  • Should We Trust Our Feeds? Social Media, Misinformation, and the Epistemology of Testimony. Charles Côté-Bouchard - 2024 - Topoi 43 (5):1469-1486.
    When should you believe testimony that you receive from your social media feeds? One natural answer is suggested by non-reductionism in the epistemology of testimony. Avoid accepting social media testimony if you have an undefeated defeater for it. Otherwise, you may accept it. I argue that this is too permissive to be an adequate epistemic policy because social media have some characteristics that tend to facilitate the efficacy of misinformation on those platforms. I formulate and defend an alternative epistemic policy (...)
  • The AI-Mediated Intimacy Economy: A Paradigm Shift in Digital Interactions. Ayşe Aslı Bozdağ - forthcoming - AI and Society:1-22.
    This article critically examines the paradigm shift from the attention economy to the intimacy economy—a market system where personal and emotional data are exchanged for customized experiences that cater to individual emotional and psychological needs. It explores how AI transforms these personal and emotional inputs into services, thereby raising essential questions about the authenticity of digital interactions and the potential commodification of intimate experiences. The study delineates the roles of human–computer interaction and AI in deepening personal connections, significantly impacting emotional (...)
  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, and that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)