Citations
  1. Jeffrey S. Bowers & Colin J. Davis (2009). Learning Representations of Wordforms With Recurrent Networks: Comment on Sibley, Kello, Plaut, & Elman (2008). Cognitive Science 33(7): 1183-1186. [4 citations]
     Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we (...)
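The slot-coding problem described in the abstract above can be made concrete with a short sketch. The functions below are illustrative only (they are not taken from any of the cited models): coding each letter by its absolute position yields zero overlap between CAT and TOMCAT, whereas a position-independent code such as letter bigrams recovers the shared structure.

```python
# Toy illustration of slot-coding (letters coded by absolute position)
# and why it loses the overlap between CAT and TOMCAT.
# A minimal sketch, not an implementation of any model cited above.

def slot_code(word):
    """Code each letter by its absolute position: {('C', 0), ('A', 1), ...}."""
    return {(letter, pos) for pos, letter in enumerate(word)}

def slot_overlap(a, b):
    """Number of shared (letter, position) features under slot-coding."""
    return len(slot_code(a) & slot_code(b))

def bigrams(word):
    """A simple position-independent code: the set of letter bigrams."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

# CAT occupies positions 0-2; in TOMCAT the same letters sit at
# positions 3-5, so slot-coding sees no shared features at all.
print(slot_overlap("CAT", "TOMCAT"))        # → 0

# The bigram code, by contrast, recovers the shared substructure.
print(bigrams("CAT") & bigrams("TOMCAT"))   # → {'CA', 'AT'}
```

The choice of bigrams here is just one convenient position-independent scheme; the papers in this list explore more sophisticated alternatives, including learned recurrent-network representations.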
  2. Bernard Ans, Serge Carbonnel & Sylviane Valdois (1998). A connectionist multiple-trace memory model for polysyllabic word reading. Psychological Review 105(4): 678-723. [18 citations]
  3. David C. Plaut, James L. McClelland, Mark S. Seidenberg & Karalyn Patterson (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review 103(1): 56-115. [191 citations]
  4. Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman (2008). Large‐Scale Modeling of Wordform Learning and Representation. Cognitive Science 32(4): 741-754. [9 citations]
     The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with traditional slot‐based (...)