  • Does ChatGPT Have a Mind? Simon Goldstein & Benjamin Anders Levinstein - manuscript
    This paper examines the question of whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning to support these (...)
  • Propositional interpretability in artificial intelligence. David J. Chalmers - manuscript
    Mechanistic interpretability is the program of explaining what AI systems are doing in terms of their internal mechanisms. I analyze some aspects of the program, along with setting out some concrete challenges and assessing progress to date. I argue for the importance of propositional interpretability, which involves interpreting a system’s mechanisms and behavior in terms of propositional attitudes: attitudes (such as belief, desire, or subjective probability) to propositions (e.g. the proposition that it is hot outside). Propositional attitudes are (...)
  • Deception and manipulation in generative AI. Christian Tarsney - forthcoming - Philosophical Studies.
    Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance spreading political misinformation on social media. In future, agentic AI systems might also deceive and manipulate humans for their own purposes. In this paper, first, I argue that AI-generated content should be subject to stricter standards against deception and manipulation than we ordinarily apply to humans. Second, I offer new characterizations of (...)
  • Reference without intentions in large language models. Jessica Pepp - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    1. During the 1960s and 1970s, Keith Donnellan (1966, 1970, 1974) and Saul Kripke ([1972] 1980) developed influential critiques of then-prevailing ‘description theories’ of reference. In place of s...
  • Carnap’s Robot Redux: LLMs, Intensional Semantics, and the Implementation Problem in Conceptual Engineering (extended abstract). Bradley Allen - manuscript
    In his 1955 essay "Meaning and synonymy in natural languages", Rudolf Carnap presents a thought experiment wherein an investigator provides a hypothetical robot with a definition of a concept together with a description of an individual, and then asks the robot if the individual is in the extension of the concept. In this work, we show how to realize Carnap's Robot through knowledge probing of a large language model (LLM), and argue that this provides a useful cognitive tool for conceptual (...)
  • A Benchmark for the Detection of Metalinguistic Disagreements between LLMs and Knowledge Graphs. Bradley Allen & Paul Groth - forthcoming - In Reham Alharbi, Jacopo de Berardinis, Paul Groth, Albert Meroño-Peñuela, Elena Simperl & Valentina Tamma, ISWC 2024 Special Session on Harmonising Generative AI and Semantic Web Technologies. CEUR-WS.
    Evaluating large language models (LLMs) for tasks like fact extraction in support of knowledge graph construction frequently involves computing accuracy metrics using a ground truth benchmark based on a knowledge graph (KG). These evaluations assume that errors represent factual disagreements. However, human discourse frequently features metalinguistic disagreement, where agents differ not on facts but on the meaning of the language used to express them. Given the complexity of natural language processing and generation using LLMs, we ask: do metalinguistic disagreements occur (...)