  • The linguistic dead zone of value-aligned agency, natural and artificial. Travis LaCroix - 2024 - Philosophical Studies:1-23.
    The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or ethical artificial agents.
  • Chatting with Bots: AI, Speech-Acts, and the Edge of Assertion. Iwan Williams & Tim Bayne - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds of things that can assert, and at least some of the output produced by current-generation chatbots qualifies as assertion. We provide some motivation for TCA, arguing that it ought to be taken seriously and not simply dismissed. We also review recent objections to TCA, arguing that these objections are weighty. We (...)
  • Large language models and their big bullshit potential. Sarah A. Fisher - 2024 - Ethics and Information Technology 26 (4):1-8.
    Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by (...)
  • Therapeutic Chatbots as Cognitive-Affective Artifacts. J. P. Grodniewicz & Mateusz Hohol - 2024 - Topoi 43 (3):795-807.
    Conversational Artificial Intelligence (CAI) systems (also known as AI “chatbots”) are among the most promising examples of the use of technology in mental health care. With already millions of users worldwide, CAI is likely to change the landscape of psychological help. Most researchers agree that existing CAIs are not “digital therapists” and using them is not a substitute for psychotherapy delivered by a human. But if they are not therapists, what are they, and what role can they play in mental (...)
  • Real Feeling and Fictional Time in Human-AI Interactions. Joel Krueger & Tom Roberts - 2024 - Topoi 43 (3).
    As technology improves, artificial systems are increasingly able to behave in human-like ways: holding a conversation; providing information, advice, and support; or taking on the role of therapist, teacher, or counsellor. This enhanced behavioural complexity, we argue, encourages deeper forms of affective engagement on the part of the human user, with the artificial agent helping to stabilise, subdue, prolong, or intensify a person’s emotional condition. Here, we defend a fictionalist account of human/AI interaction, according to which these encounters involve an (...)
  • Babbling stochastic parrots? A Kripkean argument for reference in large language models. Steffen Koch - forthcoming - Philosophy of AI.
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a more optimistic view. (...)
  • Moderating Synthetic Content: the Challenge of Generative AI. Sarah A. Fisher, Jeffrey W. Howard & Beatriz Kira - 2024 - Philosophy and Technology 37 (4):1-20.
    Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes. Malicious uses of the new technologies can only be expected to proliferate over time. In the face of this threat, social media platforms must surely act. But how? While it is tempting to think they need new sui generis policies targeting synthetic (...)