  • Deception and manipulation in generative AI. Christian Tarsney - forthcoming - Philosophical Studies.
    Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance spreading political misinformation on social media. In future, agentic AI systems might also deceive and manipulate humans for their own purposes. In this paper, first, I argue that AI-generated content should be subject to stricter standards against deception and manipulation than we ordinarily apply to humans. Second, I offer new characterizations of (...)
  • Exploring the Essence of the Freedom of Thought – A Normative Framework for Identifying Undue Mind Interventions. Timo Istace - 2025 - Neuroethics 18 (1):1-20.
    The freedom of thought (FoT) has recently gained attention in human rights scholarship, emerging as a key component in the human rights protection of the human mind. However, this newfound interest has exposed significant gaps in the protection offered by the FoT. While the underdevelopment of the FoT is mainly examined in relation to the mind’s vulnerability to emerging neurotechnologies, there are numerous other ways to interfere with the privacy, freedom, and integrity of the mind. Conversations, education, online marketing, and (...)
  • LLMs beyond the lab: the ethics and epistemics of real-world AI research. Joost Mollen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To (...)
  • Liberty, Manipulation, and Algorithmic Transparency: Reply to Franke. Michael Klenk - 2024 - Philosophy and Technology 37 (2):1-8.
    Franke, in Philosophy & Technology, 37(1), 1–6 (2024), connects the recent debate about manipulative algorithmic transparency with concerns about problematic pursuits of positive liberty. I argue that the indifference view of manipulative transparency is not aligned with positive liberty, contrary to Franke’s claim, and that even if it is, it is not aligned with the risk that many have attributed to pursuits of positive liberty. Moreover, I suggest that Franke’s worry may generalise beyond the manipulative transparency debate to AI ethics (...)