  • Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning. Willem Zuidema, Robert M. French, Raquel G. Alhama, Kevin Ellis, Timothy J. O'Donnell, Tim Sainburg & Timothy Q. Gentner - 2020 - Topics in Cognitive Science 12 (3):925-941.
    Zuidema et al. illustrate how empirical AGL studies can benefit from computational models and techniques. Computational models can help clarify theories and thereby delineate research questions, and they can also facilitate experimental design, stimulus generation, and data analysis. The authors show, with a series of examples, how computational modeling can be integrated with empirical AGL approaches, and how model selection techniques can identify the model most likely to explain experimental outcomes.
  • An integrative account of constraints on cross-situational learning. Daniel Yurovsky & Michael C. Frank - 2015 - Cognition 145 (C):53-62.
  • Statistical Learning of Unfamiliar Sounds as Trajectories Through a Perceptual Similarity Space. Felix Hao Wang, Elizabeth A. Hutton & Jason D. Zevin - 2019 - Cognitive Science 43 (8):e12740.
    In typical statistical learning studies, researchers define sequences in terms of the probability of the next item in the sequence given the current item (or items), and they show that high probability sequences are treated as more familiar than low probability sequences. Existing accounts of these phenomena all assume that participants represent statistical regularities more or less as they are defined by the experimenters—as sequential probabilities of symbols in a string. Here we offer an alternative, or possibly supplementary, hypothesis. Specifically, (...)
  • Perceptual intake explains variability in statistical word segmentation. Felix Hao Wang, Meili Luo & Suiping Wang - 2023 - Cognition 241 (C):105612.
  • No need to forget, just keep the balance: Hebbian neural networks for statistical learning. Ángel Eugenio Tovar & Gert Westermann - 2023 - Cognition 230 (C):105176.
  • iMinerva: A Mathematical Model of Distributional Statistical Learning. Erik D. Thiessen & Philip I. Pavlik - 2013 - Cognitive Science 37 (2):310-343.
    Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in many different aspects of language learning, including phonetic categories, using phonemic distinctions in word learning, and discovering non-adjacent relations. On the surface, these different aspects share few commonalities. Despite this, (...)
  • Discovering Words in Fluent Speech: The Contribution of Two Kinds of Statistical Information. Erik D. Thiessen & Lucy C. Erickson - 2012 - Frontiers in Psychology 3.
  • When learning goes beyond statistics: Infants represent visual sequences in terms of chunks. Lauren K. Slone & Scott P. Johnson - 2018 - Cognition 178 (C):92-102.
  • What exactly is learned in visual statistical learning? Insights from Bayesian modeling. Noam Siegelman, Louisa Bogaerts, Blair C. Armstrong & Ram Frost - 2019 - Cognition 192 (C):104002.
  • Redefining “Learning” in Statistical Learning: What Does an Online Measure Reveal About the Assimilation of Visual Regularities? Noam Siegelman, Louisa Bogaerts, Ofer Kronenfeld & Ram Frost - 2018 - Cognitive Science 42 (S3):692-727.
    From a theoretical perspective, most discussions of statistical learning have focused on the possible “statistical” properties that are the object of learning. Much less attention has been given to defining what “learning” is in the context of “statistical learning.” One major difficulty is that SL research has been monitoring participants’ performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization (...)
  • Abstract processing of syllabic structures in early infancy. Chiara Santolin, Konstantina Zacharaki, Juan Manuel Toro & Nuria Sebastian-Galles - 2024 - Cognition 244 (C):105663.
  • Pre-linguistic segmentation of speech into syllable-like units. Okko Räsänen, Gabriel Doyle & Michael C. Frank - 2018 - Cognition 171 (C):130-150.
  • Regularity Extraction Across Species: Associative Learning Mechanisms Shared by Human and Non‐Human Primates. Arnaud Rey, Laure Minier, Raphaëlle Malassis, Louisa Bogaerts & Joël Fagot - 2019 - Topics in Cognitive Science 11 (3):573-586.
    One of the themes that has been widely addressed in both the implicit learning and statistical learning literatures is that of rule learning. While it is widely agreed that the extraction of regularities from the environment is a fundamental facet of cognition, there is still debate about the nature of rule learning. Rey and colleagues show that the comparison between human and non‐human primates can contribute important insights to this debate.
  • Learning Higher‐Order Transitional Probabilities in Nonhuman Primates. Arnaud Rey, Joël Fagot, Fabien Mathy, Laura Lazartigues, Laure Tosatto, Guillem Bonafos, Jean-Marc Freyermuth & Frédéric Lavigne - 2022 - Cognitive Science 46 (4):e13121.
  • What Mechanisms Underlie Implicit Statistical Learning? Transitional Probabilities Versus Chunks in Language Learning. Pierre Perruchet - 2019 - Topics in Cognitive Science 11 (3):520-535.
    Perruchet and Pacton (2006) asked whether implicit learning and statistical learning represent two approaches to the same phenomenon. This article is an important follow‐up to that seminal review. As in the earlier paper, the focus is on the formation of elementary cognitive units; the two approaches favor different explanations of what these units consist of and how they are formed. Perruchet weighs the evidence for these explanations and concludes with a helpful agenda for future research.
  • The Role of Stimulus‐Specific Perceptual Fluency in Statistical Learning. Andrew Perfors & Evan Kidd - 2022 - Cognitive Science 46 (2):e13100.
    Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here, we present experimental work suggesting that at least some individual differences arise from stimulus-specific variation in perceptual fluency: the ability to rapidly or (...)
  • A tutorial introduction to Bayesian models of cognitive development. Amy Perfors, Joshua B. Tenenbaum, Thomas L. Griffiths & Fei Xu - 2011 - Cognition 120 (3):302-321.
  • Evaluating the Relative Importance of Wordhood Cues Using Statistical Learning. Elizabeth Pankratz, Simon Kirby & Jennifer Culbertson - 2024 - Cognitive Science 48 (3):e13429.
    Identifying wordlike units in language is typically done by applying a battery of criteria, though how to weight these criteria with respect to one another is currently unknown. We address this question by investigating whether certain criteria are also used as cues for learning an artificial language—if they are, then perhaps they can be relied on more as trustworthy top‐down diagnostics. The two criteria for grammatical wordhood that we consider are a unit's free mobility and its internal immutability. These criteria (...)
  • Pragmatically Framed Cross-Situational Noun Learning Using Computational Reinforcement Models. Shamima Najnin & Bonny Banerjee - 2018 - Frontiers in Psychology 9.
  • The Temporal Dynamics of Regularity Extraction in Non‐Human Primates. Laure Minier, Joël Fagot & Arnaud Rey - 2016 - Cognitive Science 40 (4):1019-1030.
    Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm and the serial response time task was used with a group of Guinea baboons in a new automatic experimental device. After a series of random trials, monkeys were exposed to language-like patterns. We found that the extraction of embedded patterns positioned at the end of larger patterns (...)
  • Language Processing as Cue Integration: Grounding the Psychology of Language in Perception and Neurophysiology. Andrea E. Martin - 2016 - Frontiers in Psychology 7.
  • Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation. Laina G. Lusk & Aaron D. Mitchel - 2016 - Frontiers in Psychology 7.
  • Zipfian frequency distributions facilitate word segmentation in context. Chigusa Kurumada, Stephan C. Meylan & Michael C. Frank - 2013 - Cognition 127 (3):439-453.
  • Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural‐language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this (...)
  • Statistical Learning of Language: A Meta‐Analysis Into 25 Years of Research. Erin S. Isbilen & Morten H. Christiansen - 2022 - Cognitive Science 46 (9):e13198.
  • Does hearing two dialects at different times help infants learn dialect-specific rules? Kalim Gonzales, LouAnn Gerken & Rebecca L. Gómez - 2015 - Cognition 140 (C):60-71.
  • The statistical signature of morphosyntax: A study of Hungarian and Italian infant-directed speech. Judit Gervain & Ramón Guevara Erra - 2012 - Cognition 125 (2):263-287.
  • Throwing out the Bayesian baby with the optimal bathwater: Response to Endress. Michael C. Frank - 2013 - Cognition 128 (3):417-423.
  • Three ideal observer models for rule learning in simple languages. Michael C. Frank & Joshua B. Tenenbaum - 2011 - Cognition 120 (3):360-371.
  • Processing Relative Clauses in Supportive Contexts. Evelina Fedorenko, Steve Piantadosi & Edward Gibson - 2012 - Cognitive Science 36 (3):471-497.
    Results from two self-paced reading experiments in English are reported in which subject- and object-extracted relative clauses (SRCs and ORCs, respectively) were presented in contexts that support both types of relative clauses (RCs). Object-extracted versions were read more slowly than subject-extracted versions across both experiments. These results are not consistent with a decay-based working memory account of dependency formation where the amount of decay is a function of the number of new discourse referents that intervene between the dependents (Gibson, 1998; (...)
  • When forgetting fosters learning: A neural network model for statistical learning. Ansgar D. Endress & Scott P. Johnson - 2021 - Cognition 213 (C):104621.
  • Statistical learning and memory. Ansgar D. Endress, Lauren K. Slone & Scott P. Johnson - 2020 - Cognition 204 (C):104346.
  • Linguistics, cognitive psychology, and the Now-or-Never bottleneck. Ansgar D. Endress & Roni Katzir - 2016 - Behavioral and Brain Sciences 39.
  • Bayesian learning and the psychology of rule induction. Ansgar D. Endress - 2013 - Cognition 127 (2):159-176.
  • Cognitive science in the era of artificial intelligence: A roadmap for reverse-engineering the infant language-learner. Emmanuel Dupoux - 2018 - Cognition 173 (C):43-59.
  • A Recurrent Connectionist Model of Melody Perception: An Exploration Using TRACX2. Daniel Defays, Robert M. French & Barbara Tillmann - 2023 - Cognitive Science 47 (4):e13283.
    Are similar, or even identical, mechanisms used in the computational modeling of speech segmentation, serial image processing, and music processing? We address this question by exploring how TRACX2, a recognition‐based, recursive connectionist autoencoder model of chunking and sequence segmentation, which has successfully simulated speech and serial‐image processing, might be applied to elementary melody perception. The model, a three‐layer autoencoder that recognizes “chunks” of short sequences of intervals that have been frequently encountered on input, is trained on the tone intervals of (...)
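
Several of the entries above (e.g., Wang, Hutton & Zevin 2019; Rey et al. 2022; Perruchet 2019) characterize statistical learning in terms of forward transitional probabilities: the probability of the next item in a sequence given the current item. As a minimal sketch of that statistic only, and not the procedure of any cited study, the Python snippet below estimates transitional probabilities from a toy syllable stream; the stream, the "word" inventory, and the function name are illustrative assumptions.

    from collections import Counter

    def transitional_probabilities(stream):
        # Estimate forward transitional probabilities P(next | current)
        # from a list of syllables.
        pair_counts = Counter(zip(stream, stream[1:]))  # bigram frequencies
        context_counts = Counter(stream[:-1])           # how often each syllable occurs as a context
        return {(a, b): n / context_counts[a] for (a, b), n in pair_counts.items()}

    # Toy familiarization stream built from two "words", ta-ro and mi-bu.
    stream = "ta ro mi bu ta ro ta ro mi bu mi bu ta ro mi bu".split()
    tps = transitional_probabilities(stream)
    print(tps[("ta", "ro")])  # within-word transition: 1.0
    print(tps[("ro", "mi")])  # across-word transition: 0.75

On this toy stream, within-word transitions such as ta→ro receive higher probabilities than across-word transitions such as ro→mi, which is the contrast that the word segmentation studies cited above exploit.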