  • Squeezing through the Now-or-Never bottleneck: Reconnecting language processing, acquisition, change, and structure. Nick Chater & Morten H. Christiansen - 2016 - Behavioral and Brain Sciences 39:e91.
    If human language must be squeezed through a narrow cognitive bottleneck, what are the implications for language processing, acquisition, change, and structure? In our target article, we suggested that the implications are far-reaching and form the basis of an integrated account of many apparently unconnected aspects of language and language processing, as well as suggesting revision of many existing theoretical accounts. With some exceptions, commentators were generally supportive of both the existence of the bottleneck and its potential implications. Many commentators (...)
  • Do Backward Associations Have Anything to Say About Language? Thomas F. Chartier & Isabelle Dautriche - 2023 - Cognitive Science 47 (4):e13282.
    In this letter, we argue against a recurring idea that early word learning in infants is related to the low-level capacity for backward associations—a notion that suggests a cognitive gap with other animal species. Because backward associations entail the formation of bidirectional associations between sequentially perceived stimulus pairs, they seemingly mirror the label-referent bidirectional mental relations underlying the lexicon of natural language. This appealing but spurious resemblance has led to various speculations on language acquisition, in particular regarding early word learning, (...)
  • The effect of statistical learning on internal stimulus representations: Predictable items are enhanced even when not predicted. Brandon K. Barakat, Aaron R. Seitz & Ladan Shams - 2013 - Cognition 129 (2):205-211.
  • Can Infants Retain Statistically Segmented Words and Mappings Across a Delay? Ferhat Karaman, Jill Lany & Jessica F. Hay - 2024 - Cognitive Science 48 (3):e13433.
    Infants are sensitive to statistics in spoken language that aid word‐form segmentation and immediate mapping to referents. However, it is not clear whether this sensitivity influences the formation and retention of word‐referent mappings across a delay, two real‐world challenges that learners must overcome. We tested how the timing of referent training, relative to familiarization with transitional probabilities (TPs) in speech, impacts English‐learning 23‐month‐olds’ ability to form and retain word‐referent mappings. In Experiment 1, we tested infants’ ability to retain TP information (...)
  • Computational Modeling of the Segmentation of Sentence Stimuli From an Infant Word‐Finding Study. Daniel Swingley & Robin Algayres - 2024 - Cognitive Science 48 (3):e13427.
    Computational models of infant word‐finding typically operate over transcriptions of infant‐directed speech corpora. It is now possible to test models of word segmentation on speech materials, rather than transcriptions of speech. We propose that such modeling efforts be conducted over the speech of the experimental stimuli used in studies measuring infants' capacity for learning from spoken sentences. Correspondence with infant outcomes in such experiments is an appropriate benchmark for models of infants. We demonstrate such an analysis by applying the DP‐Parser (...)
  • Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning. Willem Zuidema, Robert M. French, Raquel G. Alhama, Kevin Ellis, Timothy J. O'Donnell, Tim Sainburg & Timothy Q. Gentner - 2020 - Topics in Cognitive Science 12 (3):925-941.
    Zuidema et al. illustrate how empirical AGL studies can benefit from computational models and techniques. Computational models can help clarify theories, and thus delineate research questions, but can also facilitate experimental design, stimulus generation, and data analysis. The authors show, with a series of examples, how computational modeling can be integrated with empirical AGL approaches, and how model selection techniques can indicate the most likely model to explain experimental outcomes.
  • Effects of statistical learning on the acquisition of grammatical categories through Qur’anic memorization: A natural experiment. Fathima Manaar Zuhurudeen & Yi Ting Huang - 2016 - Cognition 148 (C):79-84.
  • No need to forget, just keep the balance: Hebbian neural networks for statistical learning. Ángel Eugenio Tovar & Gert Westermann - 2023 - Cognition 230 (C):105176.
  • Lexical stress constrains English-learning infants’ segmentation in a non-native language. Megha Sundara & Victoria E. Mateu - 2018 - Cognition 181 (C):105-116.
  • Do Infants Learn Words From Statistics? Evidence From English‐Learning Infants Hearing Italian. Amber Shoaib, Tianlin Wang, Jessica F. Hay & Jill Lany - 2018 - Cognitive Science 42 (8):3083-3099.
    Infants are sensitive to statistical regularities (i.e., transitional probabilities, or TPs) relevant to segmenting words in fluent speech. However, there is debate about whether tracking TPs results in representations of possible words. Infants show preferential learning of sequences with high TPs (HTPs) as object labels relative to those with low TPs (LTPs). Such findings could mean that only the HTP sequences have a word‐like status, and they are more readily mapped to a referent for that reason. But these findings could (...)
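The transitional-probability (TP) statistic that this entry (and several others above) relies on can be made concrete with a short sketch. The Python below is a minimal illustration under assumed inputs: the syllable stream and the two nonce "words" are invented, and the high-TP versus low-TP contrast simply falls out of how often syllable pairs co-occur within versus across those words.

```python
# Minimal sketch of the forward transitional probability statistic:
# TP(B | A) = count(A immediately followed by B) / count(A).
# The syllable stream and nonce words below are invented for illustration.
from collections import Counter

def transitional_probabilities(syllables):
    """Map each adjacent syllable pair (A, B) to TP(B | A)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy familiarization stream built from two nonce words, pa-bi-ku and ti-bu-do.
stream = ("pa bi ku pa bi ku ti bu do ti bu do "
          "pa bi ku pa bi ku ti bu do ti bu do").split()
tps = transitional_probabilities(stream)

print(tps[("pa", "bi")])  # within-word pair  -> 1.0 (high TP)
print(tps[("ku", "ti")])  # across-word pair  -> 0.5 (low TP)
```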
  • A computational model of word segmentation from continuous speech using transitional probabilities of atomic acoustic events. Okko Räsänen - 2011 - Cognition 120 (2):149-176.
  • Modeling the Influence of Language Input Statistics on Children's Speech Production. Ingeborg Roete, Stefan L. Frank, Paula Fikkert & Marisa Casillas - 2020 - Cognitive Science 44 (12):e12924.
    We trained a computational model (the Chunk-Based Learner; CBL) on a longitudinal corpus of child–caregiver interactions in English to test whether one proposed statistical learning mechanism—backward transitional probability—is able to predict children's speech productions with stable accuracy throughout the first few years of development. We predicted that the model would reconstruct children's speech productions less accurately as they grow older, because children gradually begin to generate speech using abstracted forms rather than specific “chunks” from their speech environment. To test this idea, (...)
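Backward transitional probability, the statistic behind chunk-based learners such as the CBL mentioned in this entry, can likewise be sketched in a few lines. The following is a simplified toy under stated assumptions (the utterances and the running-average threshold are illustrative choices), not a reimplementation of the published model.

```python
# Simplified sketch of chunking with backward transitional probabilities (BTPs):
# BTP(w1 -> w2) = count(w1 w2) / count(w2). Consecutive words stay in the same
# chunk when their BTP is at or above the running-average BTP seen so far.
# Illustrative toy only; not the authors' implementation.
from collections import Counter

def backward_tp_chunks(words):
    """Greedily group a word sequence into multi-word chunks by BTP."""
    pair_counts = Counter(zip(words, words[1:]))
    word_counts = Counter(words)

    chunks, current = [], [words[0]]
    running_sum, n_pairs = 0.0, 0
    for w1, w2 in zip(words, words[1:]):
        btp = pair_counts[(w1, w2)] / word_counts[w2]
        n_pairs += 1
        running_sum += btp
        if btp >= running_sum / n_pairs:   # above running average: same chunk
            current.append(w2)
        else:                              # below average: chunk boundary
            chunks.append(current)
            current = [w2]
    chunks.append(current)
    return chunks

# Toy caregiver utterances (invented for illustration).
utterance = "do you want the ball do you want the cup".split()
print(backward_tp_chunks(utterance))
# -> [['do', 'you', 'want', 'the', 'ball'], ['do', 'you', 'want', 'the', 'cup']]
```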
  • Forward models and their implications for production, comprehension, and dialogue. Martin J. Pickering & Simon Garrod - 2013 - Behavioral and Brain Sciences 36 (4):377-392.
    Our target article proposed that language production and comprehension are interwoven, with speakers making predictions of their own utterances and comprehenders making predictions of other people's utterances at different linguistic levels. Here, we respond to comments about such issues as cognitive architecture and its neural basis, learning and development, monitoring, the nature of forward models, communicative intentions, and dialogue.
  • An integrated theory of language production and comprehension. Martin J. Pickering & Simon Garrod - 2013 - Behavioral and Brain Sciences 36 (4):329-347.
    Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume (...)
  • The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation. Lawrence Phillips & Lisa Pearl - 2015 - Cognitive Science 39 (8):1824-1854.
    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate (...)
  • What Mechanisms Underlie Implicit Statistical Learning? Transitional Probabilities Versus Chunks in Language Learning. Pierre Perruchet - 2019 - Topics in Cognitive Science 11 (3):520-535.
    Perruchet and Pacton (2006) asked whether implicit learning and statistical learning represent two approaches to the same phenomenon. This article is an important follow‐up to their seminal review. As in the previous paper, the focus is on the formation of elementary cognitive units. The two approaches favor different explanations of what these units consist of and how they are formed. Perruchet weighs up the evidence for the different explanations and concludes with a helpful agenda for future research.
  • Language experience changes subsequent learning. Luca Onnis & Erik Thiessen - 2013 - Cognition 126 (2):268-284.
  • Toward a unified account of comprehension and production in language development. Stewart M. McCauley & Morten H. Christiansen - 2013 - Behavioral and Brain Sciences 36 (4):366-367.
  • Spanish input accelerates bilingual infants' segmentation of English words. Victoria Mateu & Megha Sundara - 2022 - Cognition 218 (C):104936.
  • Using Predictability for Lexical Segmentation. Çağrı Çöltekin - 2017 - Cognitive Science 41 (7):1988-2021.
    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of (...)
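The general predictability-based strategy described in this abstract (posit a boundary where the predictability of the next unit dips) can be illustrated with a small sketch. The code below is a toy under assumed inputs: it uses plain forward TP as the predictability measure over an invented syllable stream, whereas the model described in the abstract is an incremental model of segmentation.

```python
# Minimal sketch of boundary detection at local dips in predictability:
# estimate forward TP between consecutive units and posit a word boundary
# wherever the TP into a unit is a local minimum. The stream is invented;
# this illustrates the general strategy, not the paper's model.
from collections import Counter

def segment_at_tp_minima(units):
    """Split a unit sequence into words at local minima of forward TP."""
    pairs = list(zip(units, units[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(units[:-1])
    tps = [pair_counts[p] / first_counts[p[0]] for p in pairs]

    words, current = [], [units[0]]
    for i in range(1, len(units)):
        left = tps[i - 1]                        # TP into the current unit
        prev_higher = i < 2 or tps[i - 2] > left
        next_higher = i >= len(tps) or tps[i] > left
        if prev_higher and next_higher:          # local dip: word boundary
            words.append("".join(current))
            current = [units[i]]
        else:
            current.append(units[i])
    words.append("".join(current))
    return words

# Toy syllable stream built from two nonce words, go-la-bu and ti-ra-ni.
stream = "go la bu ti ra ni go la bu go la bu ti ra ni ti ra ni go la bu".split()
print(segment_at_tp_minima(stream))  # -> ['golabu', 'tirani', 'golabu', ...]
```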
  • How Many Mechanisms Are Needed to Analyze Speech? A Connectionist Simulation of Structural Rule Learning in Artificial Language Acquisition. Aarre Laakso & Paco Calvo - 2011 - Cognitive Science 35 (7):1243-1281.
    Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule-following mechanism in order to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating (...)
  • How children use examples to make conditional predictions. Charles W. Kalish - 2010 - Cognition 116 (1):1-14.
  • Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation. Robert Grimm, Giovanni Cassani, Steven Gillis & Walter Daelemans - 2017 - Frontiers in Psychology 8.
  • The statistical signature of morphosyntax: A study of Hungarian and Italian infant-directed speech. Judit Gervain & Ramón Guevara Erra - 2012 - Cognition 125 (2):263-287.
  • When forgetting fosters learning: A neural network model for statistical learning. Ansgar D. Endress & Scott P. Johnson - 2021 - Cognition 213 (C):104621.
  • Statistical learning and memory. Ansgar D. Endress, Lauren K. Slone & Scott P. Johnson - 2020 - Cognition 204 (C):104346.
  • In defense of epicycles: Embracing complexity in psychological explanations. Ansgar D. Endress - 2023 - Mind and Language 38 (5):1208-1237.
    Is formal simplicity a guide to learning in humans, as simplicity is said to be a guide to the acceptability of theories in science? Does simplicity determine the difficulty of various learning tasks? I argue that, similarly to how scientists sometimes preferred complex theories when this facilitated calculations, results from perception, learning and reasoning suggest that formal complexity is generally unrelated to what is easy to learn and process by humans, and depends on assumptions about available representational and processing primitives. (...)