  • Does sentential prosody help infants organize and remember speech information? Denise R. Mandel, Peter W. Jusczyk & Deborah G. Kemler Nelson - 1994 - Cognition 53 (2):155-180.
  • Phrasal prosody constrains syntactic analysis in toddlers. Alex de Carvalho, Isabelle Dautriche, Isabelle Lin & Anne Christophe - 2017 - Cognition 163 (C):67-79.
  • Artificial grammar learning by 1-year-olds leads to specific and abstract knowledge. Rebecca L. Gomez & LouAnn Gerken - 1999 - Cognition 70 (2):109-135.
  • Evidence of ordinal position encoding of sequences extracted from continuous speech. Ana Fló - 2021 - Cognition 213 (C):104646.
  • Lexical Categories at the Edge of the Word. Luca Onnis & Morten H. Christiansen - 2008 - Cognitive Science 32 (1):184-221.
    Language acquisition may be one of the most difficult tasks that children face during development. They have to segment words from fluent speech, figure out the meanings of these words, and discover the syntactic constraints for joining them together into meaningful sentences. Over the past couple of decades, computational modeling has emerged as a new paradigm for gaining insights into the mechanisms by which children may accomplish these feats. Unfortunately, many of these models assume a computational complexity and linguistic knowledge (...)
  • Structural complexity and the time course of grammatical development. Robert Frank - 1998 - Cognition 66 (3):249-301.
  • Sensitivity to discontinuous dependencies in language learners: evidence for limitations in processing space. L. Santelmann - 1998 - Cognition 69 (2):105-134.
  • Bootstrapping the lexicon: a computational model of infant speech segmentation. Eleanor Olds Batchelder - 2002 - Cognition 83 (2):167-206.
    Prelinguistic infants must find a way to isolate meaningful chunks from the continuous streams of speech that they hear. BootLex, a new model which uses distributional cues to build a lexicon, demonstrates how much can be accomplished using this single source of information. This conceptually simple probabilistic algorithm achieves significant segmentation results on various kinds of language corpora - English, Japanese, and Spanish; child- and adult-directed speech, and written texts; and several variations in coding structure - and reveals which statistical (...)
  • Bootstrapping language acquisition. Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater & Mark Steedman - 2017 - Cognition 164 (C):116-143.
  • Words in a sea of sounds: the output of infant statistical learning. Jenny R. Saffran - 2001 - Cognition 81 (2):149-169.
  • The use of multiple frames in verb learning via syntactic bootstrapping. Letitia R. Naigles - 1996 - Cognition 58 (2):221-251.
  • Nested sets theory, full stop: Explaining performance on Bayesian inference tasks without dual-systems assumptions. David R. Mandel - 2007 - Behavioral and Brain Sciences 30 (3):275-276.
    Consistent with Barbey & Sloman (B&S), it is proposed that performance on Bayesian inference tasks is well explained by nested sets theory (NST). However, contrary to those authors' view, it is proposed that NST does better by dispensing with dual-systems assumptions. This article examines why, and sketches out a series of NST's core principles, which were not previously defined.