Citations of:

Herman Cappelen & Joshua Dever, Making AI Intelligible: Philosophical Foundations. New York, USA: Oxford University Press (2021).

  • Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs. Harvey Lederman & Kyle Mahowald - 2024 - Transactions of the Association for Computational Linguistics 12:1087-1103.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
  • Therapeutic Chatbots as Cognitive-Affective Artifacts. J. P. Grodniewicz & Mateusz Hohol - 2024 - Topoi 43 (3):795-807.
    Conversational Artificial Intelligence (CAI) systems (also known as AI “chatbots”) are among the most promising examples of the use of technology in mental health care. With already millions of users worldwide, CAI is likely to change the landscape of psychological help. Most researchers agree that existing CAIs are not “digital therapists” and using them is not a substitute for psychotherapy delivered by a human. But if they are not therapists, what are they, and what role can they play in mental (...)
  • AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • AI with Alien Content and Alien Metasemantics. Herman Cappelen & Joshua Dever - 2024 - In Ernest Lepore & Luvell Anderson (eds.), The Oxford Handbook of Applied Philosophy of Language. New York, NY: Oxford University Press.
  • Meaning by Courtesy: LLM-Generated Texts and the Illusion of Content. Gary Ostertag - 2023 - American Journal of Bioethics 23 (10):91-93.
    Contrary to how it may seem when we observe its output, an [LLM] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to...
  • Lessons from the Void: What Boltzmann Brains Teach. Bradford Saad - forthcoming - Analytic Philosophy.
    Some physical theories predict that almost all brains in the universe are Boltzmann brains, i.e. short-lived disembodied brains that are accidentally assembled as a result of thermodynamic or quantum fluctuations. Physicists and philosophers of physics widely regard this proliferation as unacceptable, and so take its prediction as a basis for rejecting these theories. But the putatively unacceptable consequences of this prediction follow only given certain philosophical assumptions. This paper develops a strategy for shielding physical theorizing from the threat of Boltzmann (...)
  • Babbling Stochastic Parrots? A Kripkean Argument for Reference in Large Language Models. Steffen Koch - forthcoming - Philosophy of AI.
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a more optimistic view. (...)
  • Chatting with Bots: AI, Speech-Acts, and the Edge of Assertion. Iwan Williams & Tim Bayne - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds of things that can assert, and at least some of the output produced by current-generation chatbots qualifies as assertion. We provide some motivation for TCA, arguing that it ought to be taken seriously and not simply dismissed. We also review recent objections to TCA, arguing that these objections are weighty. We (...)