  • Connectionist Natural Language Processing: The State of the Art. Morten H. Christiansen & Nick Chater - 1999 - Cognitive Science 23 (4):417-437. (19 citations)
  • Two ways of learning associations. Luke Boucher & Zoltán Dienes - 2003 - Cognitive Science 27 (6):807-842. (14 citations)
    How people learn chunks or associations between adjacent items in sequences was modelled. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan‐Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which produces local and compositionally‐structured chunk representations acquired incrementally; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. The models' susceptibility to two (...)
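    (For orientation, a minimal runnable SRN sketch follows this list.)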
  • Doing Without Schema Hierarchies: A Recurrent Connectionist Approach to Normal and Impaired Routine Sequential Action. Matthew Botvinick & David C. Plaut - 2004 - Psychological Review 111 (2):395-429. (53 citations)
  • Learning Orthographic Structure With Sequential Generative Neural Networks. Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi - 2016 - Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual (...)
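    (A simplified sequential-RBM sketch, using a conditional RBM as a stand-in, follows this list.)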
  • Stress in Context: Morpho-Syntactic Properties Affect Lexical Stress Assignment in Reading Aloud. Giacomo Spinelli, Simone Sulpizio, Silvia Primativo & Cristina Burani - 2016 - Frontiers in Psychology 7.
  • The Emergence of Words: Attentional Learning in Form and Meaning. Terry Regier - 2005 - Cognitive Science 29 (6):819-865. (30 citations)
    Children improve at word learning during the 2nd year of life—sometimes dramatically. This fact has suggested a change in mechanism, from associative learning to a more referential form of learning. This article presents an associative exemplar-based model that accounts for the improvement without a change in mechanism. It provides a unified account of children's growing abilities to (a) learn a new word given only 1 or a few training trials (“fast mapping”); (b) acquire words that differ only slightly in phonological (...)
  • Double Trouble: Visual and Phonological Impairments in English Dyslexic Readers. Serena Provazza, Anne-Marie Adams, David Giofrè & Daniel John Roberts - 2019 - Frontiers in Psychology 10. (1 citation)
  • A Computational and Empirical Investigation of Graphemes in Reading. Conrad Perry, Johannes C. Ziegler & Marco Zorzi - 2013 - Cognitive Science 37 (5):800-828. (1 citation)
    It is often assumed that graphemes are a crucial level of orthographic representation above letters. Current connectionist models of reading, however, do not address how the mapping from letters to graphemes is learned. One major challenge for computational modeling is therefore developing a model that learns this mapping and can assign the graphemes to linguistically meaningful categories such as the onset, vowel, and coda of a syllable. Here, we present a model that learns to do this in English for strings (...)
  • An Extension of a Parallel-Distributed Processing Framework of Reading Aloud in Japanese: Human Nonword Reading Accuracy Does Not Require a Sequential Mechanism. Kenji Ikeda, Taiji Ueno, Yuichi Ito, Shinji Kitagami & Jun Kawaguchi - 2017 - Cognitive Science 41 (S6):1288-1317.
    Humans can pronounce a nonword. Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing models in English have simulated human nonword reading accuracy without a sequential mechanism. Interestingly, the Japanese psycholinguistic literature went partly in the same direction, but it has since concluded that a sequential parsing mechanism is required to reproduce human nonword reading accuracy. In this study, by manipulating the (...)
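As a concrete reference point for the simple recurrent network contrasted by Boucher & Dienes, here is a minimal sketch of Elman's SRN: the hidden layer receives the current input together with a copy of its own previous state (the context layer) and is trained by error correction to predict the next item. The layer sizes, toy sequence, learning rate, and one-step gradient truncation below are illustrative assumptions, not details taken from any of the papers above.

```python
# Minimal Elman-style simple recurrent network (SRN) for next-item
# prediction, trained by error correction (backprop, truncated to one
# step through the copied context). All sizes and hyperparameters here
# are illustrative, not taken from the cited papers.
import numpy as np

rng = np.random.default_rng(0)

# Toy "grammar": a repeating symbol sequence over a 3-letter alphabet.
alphabet = ["A", "B", "C"]
sequence = "ABC" * 50
onehot = {s: np.eye(len(alphabet))[i] for i, s in enumerate(alphabet)}

n_in = n_out = len(alphabet)
n_hid = 8
W_ih = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden
W_ch = rng.normal(0, 0.5, (n_hid, n_hid))  # context (prev. hidden) -> hidden
W_ho = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(30):
    context = np.zeros(n_hid)              # reset context each pass
    for t in range(len(sequence) - 1):
        x = onehot[sequence[t]]
        target = onehot[sequence[t + 1]]
        hidden = sigmoid(W_ih @ x + W_ch @ context)
        output = sigmoid(W_ho @ hidden)
        # Error correction: delta rule at the output, backpropagated
        # one step into the hidden layer (no unrolling through time).
        err_out = (target - output) * output * (1 - output)
        err_hid = (W_ho.T @ err_out) * hidden * (1 - hidden)
        W_ho += lr * np.outer(err_out, hidden)
        W_ih += lr * np.outer(err_hid, x)
        W_ch += lr * np.outer(err_hid, context)
        context = hidden                   # copy hidden state for next step

# After training, the network should predict the deterministic successor.
context = np.zeros(n_hid)
for symbol in "AB":
    context = sigmoid(W_ih @ onehot[symbol] + W_ch @ context)
print("after 'AB', predicted next:", alphabet[int(np.argmax(W_ho @ context))])
```

The `context = hidden` copy is what gives the network its memory of the sequence so far; on this deterministic toy sequence, error correction should drive the next-item prediction toward the correct successor.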
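Testolin et al. study a sequential restricted Boltzmann machine that learns a generative model of upcoming events. The sketch below uses a conditional RBM as a simplified stand-in: directed connections from the previous item modulate the biases of the current visible and hidden units, and the undirected weights are trained with one-step contrastive divergence (CD-1). All sizes, data, and hyperparameters are invented for illustration; this is not the authors' architecture.

```python
# Minimal conditional RBM for sequences: the previous item conditions the
# biases of the current visible and hidden units, so the model learns a
# generative mapping from v_{t-1} to v_t with one-step contrastive
# divergence. A simplified stand-in for a sequential RBM; everything
# below is an illustrative assumption, not the published model.
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one-hot letters from strings obeying a simple bigram rule.
alphabet = "abcd"
strings = ["abab", "abad", "cdcd", "cdab"] * 25

def one_hot(ch):
    v = np.zeros(len(alphabet))
    v[alphabet.index(ch)] = 1.0
    return v

n_v, n_h = len(alphabet), 12
W = rng.normal(0, 0.1, (n_v, n_h))  # visible-hidden weights
A = np.zeros((n_v, n_v))            # previous-visible -> visible biases
B = np.zeros((n_h, n_v))            # previous-visible -> hidden biases
b_v, b_h = np.zeros(n_v), np.zeros(n_h)
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(50):
    for s in strings:
        for t in range(1, len(s)):
            v_prev, v0 = one_hot(s[t - 1]), one_hot(s[t])
            bv = b_v + A @ v_prev   # dynamic visible bias
            bh = b_h + B @ v_prev   # dynamic hidden bias
            # CD-1: up, sample, down, up.
            h0 = sigmoid(W.T @ v0 + bh)
            h_samp = (rng.random(n_h) < h0).astype(float)
            v1 = sigmoid(W @ h_samp + bv)
            h1 = sigmoid(W.T @ v1 + bh)
            W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
            b_v += lr * (v0 - v1)
            b_h += lr * (h0 - h1)
            A += lr * np.outer(v0 - v1, v_prev)
            B += lr * np.outer(h0 - h1, v_prev)

# Prediction: unnormalized mean-field reconstruction of v_t given v_{t-1}.
v_prev = one_hot("a")
h = sigmoid(B @ v_prev + b_h)
v_pred = sigmoid(W @ h + A @ v_prev + b_v)
print("activation for next letter given 'a':", dict(zip(alphabet, v_pred.round(2))))
```

In the toy data, 'a' is usually followed by 'b', so after training the reconstruction should assign 'b' the highest activation; the point of the sketch is only to show how conditioning biases on the previous item turns a static RBM into a generative sequence predictor.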