Citations of:

Language and Intelligence

Minds and Machines 31 (4):471-486 (2021)

  • How Do You Solve a Problem like DALL-E 2? Kathryn Wojtkiewicz - forthcoming - Journal of Aesthetics and Art Criticism.
    The arrival of image-making generative artificial intelligence (AI) programs has been met with a broad rebuke: to many, it feels inherently wrong to regard images made using generative AI programs as artworks. I am skeptical of this sentiment, and in what follows I aim to demonstrate why. I suspect AI generated images can be considered artworks; more specifically, that generative AI programs are, in many cases, just another tool artists can use to realize their creative intent. I begin with an (...)
    (2 citations)
  • ChatGPT: deconstructing the debate and moving it forward. Mark Coeckelbergh & David J. Gunkel - 2024 - AI and Society 39 (5):2221-2231.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two phased approach of deconstruction and (...)
    (8 citations)
  • Language, Common Sense, and the Winograd Schema Challenge. Jacob Browning & Yann LeCun - 2023 - Artificial Intelligence 325 (C).
    Since the 1950s, philosophers and AI researchers have held that disambiguating natural language sentences depended on common sense. In 2012, the Winograd Schema Challenge was established to evaluate the common-sense reasoning abilities of a machine by testing its ability to disambiguate sentences. The designers argued only a system capable of “thinking in the full-bodied sense” would be able to pass the test. However, by 2023, the original authors conceded the test had been soundly defeated by large language models which still (...)
  • Natural and Artificial Intelligence: A Comparative Analysis of Cognitive Aspects. Francesco Abbate - 2023 - Minds and Machines 33 (4):791-815.
    Starting from a behavioral definition of intelligence, which describes it as the ability to adapt to the surrounding environment and deal effectively with new situations (Anastasi, 1986), this paper explains to what extent the performance obtained by ChatGPT in the linguistic domain can be considered intelligent behavior and to what extent it cannot. It also explains in what sense the hypothesis of decoupling between cognitive and problem-solving abilities, proposed by Floridi (2017) and Floridi and Chiriatti (2020), should be interpreted. (...)
    (1 citation)
  • Personhood and AI: Why large language models don’t understand us. Jacob Browning - 2023 - AI and Society 39 (5):2499-2506.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this (...)
    (3 citations)
  • Playing Games with Ais: The Limits of GPT-3 and Similar Large Language Models. Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of their answers’, (...)
    (5 citations)
  • Imitation and Large Language Models. Éloïse Boisseau - 2024 - Minds and Machines 34 (4):1-24.
    The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular, as is the focus of this paper, the cognitive capacities of large language models (LLMs). The question whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see the concept of imitation mobilised here for sometimes contradictory purposes. After illustrating and discussing how this (...)