References
  • Two ways of learning associations. Luke Boucher & Zoltán Dienes - 2003 - Cognitive Science 27 (6):807-842.
    How people learn chunks or associations between adjacent items in sequences was modelled. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan‐Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which produces local and compositionally‐structured chunk representations acquired incrementally; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. The models' susceptibility to two (...)
  • Perceptual constraints and the learnability of simple grammars. Ansgar D. Endress, Ghislaine Dehaene-Lambertz & Jacques Mehler - 2007 - Cognition 105 (3):577-614.
  • The Now-or-Never bottleneck: A fundamental constraint on language. Morten H. Christiansen & Nick Chater - 2016 - Behavioral and Brain Sciences 39:e62.
    Memory is fleeting. New material rapidly obliterates previous material. How, then, can the brain deal successfully with the continual deluge of linguistic input? We argue that, to deal with this “Now-or-Never” bottleneck, the brain must compress and recode linguistic input as rapidly as possible. This observation has strong implications for the nature of language processing: (1) the language system must “eagerly” recode and compress linguistic input; (2) as the bottleneck recurs at each new representational level, the language system must build (...)
  • How Many Mechanisms Are Needed to Analyze Speech? A Connectionist Simulation of Structural Rule Learning in Artificial Language Acquisition. Aarre Laakso & Paco Calvo - 2011 - Cognitive Science 35 (7):1243-1281.
    Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule-following mechanism in order to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating (...)
  • Two apparent 'counterexamples' to Marcus: A closer look. [REVIEW] Marius Vilcu & Robert F. Hadley - 2005 - Minds and Machines 15 (3-4):359-382.
    Marcus et al.'s experiment (1999) concerning infant ability to distinguish between differing syntactic structures has prompted connectionists to strive to show that certain types of neural networks can mimic the infants' results. In this paper we take a closer look at two such attempts: Shultz and Bale [Shultz, T.R. and Bale, A.C. (2001), Infancy 2, pp. 501–536] and Altmann and Dienes [Altmann, G.T.M. and Dienes, Z. (1999) Science 248, p. 875a]. We were not only interested in how well these two models (...)
  • Connectionist semantic systematicity. Stefan L. Frank, Willem F. G. Haselager & Iris van Rooij - 2009 - Cognition 110 (3):358-379.
  • Rapid learning of syllable classes from a perceptually continuous speech stream. Ansgar D. Endress & Luca L. Bonatti - 2007 - Cognition 105 (2):247-299.
  • Unconscious structural knowledge of form–meaning connections. Weiwen Chen, Xiuyan Guo, Jinghua Tang, Lei Zhu, Zhiliang Yang & Zoltan Dienes - 2011 - Consciousness and Cognition 20 (4):1751-1760.
    We investigated the implicit learning of a linguistically relevant variable in a natural language context. Trial-by-trial subjective measures indicated that exposure to a form–animacy regularity led to unconscious knowledge of that regularity. Under the same conditions, people did not learn about another form–meaning regularity when a linguistically arbitrary variable was used instead of animacy. Implicit learning is constrained to acquire unconscious knowledge about features with high prior probabilities of being relevant in that domain.
  • Three ideal observer models for rule learning in simple languages. Michael C. Frank & Joshua B. Tenenbaum - 2011 - Cognition 120 (3):360-371.
  • Incrementality and Prediction in Human Sentence Processing. Gerry T. M. Altmann & Jelena Mirković - 2009 - Cognitive Science 33 (4):583-609.
    We identify a number of principles with respect to prediction that, we argue, underpin adult language comprehension: (a) comprehension consists in realizing a mapping between the unfolding sentence and the event representation corresponding to the real‐world event being described; (b) the realization of this mapping manifests as the ability to predict both how the language will unfold, and how the real‐world event would unfold if it were being experienced directly; (c) concurrent linguistic and nonlinguistic inputs, and the prior internal states (...)
  • Analogy as relational priming: The challenge of self-reflection. Andrea Cheshire, Linden J. Ball & Charlie N. Lewis - 2008 - Behavioral and Brain Sciences 31 (4):381-382.
    Despite its strengths, Leech et al.'s model fails to address the important benefits that derive from self-explanation and task feedback in analogical reasoning development. These components encourage explicit, self-reflective processes that do not necessarily link to knowledge accretion. We wonder, therefore, what mechanisms can be included within a connectionist framework to model self-reflective involvement and its beneficial consequences.
  • Learning non-local dependencies. Gustav Kuhn & Zoltán Dienes - 2008 - Cognition 106 (1):184-206.
  • Analogy as relational priming: A developmental and computational perspective on the origins of a complex cognitive skill. Robert Leech, Denis Mareschal & Richard P. Cooper - 2008 - Behavioral and Brain Sciences 31 (4):357-378.
    The development of analogical reasoning has traditionally been understood in terms of theories of adult competence. This approach emphasizes structured representations and structure mapping. In contrast, we argue that by taking a developmental perspective, analogical reasoning can be viewed as the product of a substantially different cognitive ability – relational priming. To illustrate this, we present a computational (here connectionist) account where analogy arises gradually as a by-product of pattern completion in a recurrent network. Initial exposure to a situation primes (...)
  • Do current connectionist learning models account for reading development in different languages? Florian Hutzler, Johannes C. Ziegler, Conrad Perry, Heinz Wimmer & Marco Zorzi - 2004 - Cognition 91 (3):273-296.
  • Similarity of referents influences the learning of phonological word forms: Evidence from concurrent word learning. Libo Zhao, Stephanie Packard, Bob McMurray & Prahlad Gupta - 2019 - Cognition 190 (C):42-60.
  • Gestalt-like representations hijack Chunk-and-Pass processing. Magda L. Dumitru - 2016 - Behavioral and Brain Sciences 39.
  • Developing structured representations. Leonidas A. A. Doumas & Lindsey E. Richland - 2008 - Behavioral and Brain Sciences 31 (4):384-385.
    Leech et al.'s model proposes representing relations as primed transformations rather than as structured representations (explicit representations of relations and their roles dynamically bound to fillers). However, this renders the model unable to explain several developmental trends (including relational integration and all changes not attributable to growth in relational knowledge). We suggest looking to an alternative computational model that learns structured representations from examples.