  • Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article begins with the question of whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  • Proxy Assertions and Agency: The Case of Machine-Assertions. Chirag Arora - 2024 - Philosophy and Technology 37 (1):1-19.
    The world is witnessing a rise in speech-enabled devices serving as epistemic informants to their users. Some philosophers hold that because the utterances produced by such machines can be phenomenologically similar to equivalent human speech, and may serve the same function of delivering content to their audience, such machine utterances should be conceptualized as “assertions”. This paper argues against this view and highlights the theoretical and pragmatic challenges faced by such a conceptualization, which seems (...)
  • The Simulative Role of Neural Language Models in Brain Language Processing. Nicola Angius, Pietro Perconti, Alessio Plebe & Alessandro Acciai - 2024 - Philosophies 9 (5):137.
    This paper provides an epistemological and methodological analysis of the recent practice of using neural language models to simulate brain language processing. It is argued that, on the one hand, this practice can be understood as an instance of the traditional simulative method in artificial intelligence, following a mechanistic understanding of the mind; on the other hand, that it modifies the simulative method significantly. First, neural language models are introduced; a case study showing how neural language models are being applied (...)
  • Fiction and Epistemic Value: State of the Art. Mitchell Green - 2022 - British Journal of Aesthetics 62 (2):273-289.
    We critically survey prominent recent scholarship on the question of whether fiction can be a source of epistemic value for those who engage with it fully and appropriately. Such epistemic value might take the form of knowledge (for ‘cognitivists’) or understanding (for ‘neo-cognitivists’). Both camps may be sorted according to a further distinction between views explaining fiction’s epistemic value either in terms of the author’s engaging in a form of telling, or instead via their showing some state of affairs to (...)
  • On the Genealogy and Potential Abuse of Assertoric Norms. Mitchell Green - 2023 - Topoi 42 (2):357-368.
    After briefly laying out a cultural-evolutionary approach to speech acts (Sects. 1–2), I argue that the notion of commitment at play in assertion and related speech acts comprises multiple dimensions (Sect. 3). Distinguishing such dimensions enables us to hypothesize evolutionary precursors to the modern practice of assertion, and facilitates a new way of posing the question whether, and if so to what extent, speech acts are conventional (Sect. 4). Our perspective also equips us to consider how a modern speaker might (...)