  1. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on massive datasets of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  2. AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models. Luciano Floridi - 2023 - Philosophy and Technology 36 (1):1-7.
  3. The Incalculability of the Generated Text. Alžbeta Kuchtová - 2024 - Philosophy and Technology 37 (1):1-20.
    In this paper, I explore Derrida’s concept of exteriorization in relation to texts generated by machine learning. I first discuss Heidegger’s view of machine creation and then present Derrida’s criticism of Heidegger. I explain the concept of iterability, which is the central notion on which Derrida’s criticism is based. The thesis defended in the paper is that Derrida’s account of iterability provides a helpful framework for understanding the phenomenon of machine learning–generated literature. His account of textuality highlights the incalculability and (...)