  • Getting it right: the limits of fine-tuning large language models. Jacob Browning - 2024 - Ethics and Information Technology 26 (2):1-9.
    The surge in interest in natural language processing in artificial intelligence has led to an explosion of new language models capable of engaging in plausible language use. But ensuring these language models produce honest, helpful, and inoffensive outputs has proved difficult. In this paper, I argue problems of inappropriate content in current, autoregressive language models—such as ChatGPT and Gemini—are inescapable; merely predicting the next word is incompatible with reliably providing appropriate outputs. The various fine-tuning methods, while helpful, cannot transform the (...)
  • Large Language Models: A Historical and Sociocultural Perspective. Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that time. The subsequent development of (...)
  • The Role of Feedback in the Statistical Learning of Language‐Like Regularities. Felicity F. Frinsel, Fabio Trecca & Morten H. Christiansen - 2024 - Cognitive Science 48 (3):e13419.
    In language learning, learners engage with their environment, incorporating cues from different sources. However, in lab‐based experiments, using artificial languages, many of the cues and features that are part of real‐world language learning are stripped away. In three experiments, we investigated the role of positive, negative, and mixed feedback on the gradual learning of language‐like statistical regularities within an active guessing game paradigm. In Experiment 1, participants received deterministic feedback (100%), whereas probabilistic feedback (i.e., 75% or 50%) was introduced in (...)
  • Introduction to Progress and Puzzles of Cognitive Science. Rick Dale, Ruth M. J. Byrne, Emma Cohen, Ophelia Deroy, Samuel J. Gershman, Janet H. Hsiao, Ping Li, Padraic Monaghan, David C. Noelle, Iris van Rooij, Priti Shah, Michael J. Spivey & Sashank Varma - 2024 - Cognitive Science 48 (7):e13480.
  • Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely. Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko & Alessandro Lenci - 2023 - Cognitive Science 47 (11):e13386.
    Word co‐occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs’ semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent−patient interactions than to minimally (...)