  • Towards a universal model of reading. Ram Frost, Christina Behme, Madeleine El Beveridge, Thomas H. Bak, Jeffrey S. Bowers, Max Coltheart, Stephen Crain, Colin J. Davis, S. Hélène Deacon & Laurie Beth Feldman - 2012 - Behavioral and Brain Sciences 35 (5):263.
    In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the (...)
  • Position-invariant letter identification is a key component of any universal model of reading. Jeffrey S. Bowers - 2012 - Behavioral and Brain Sciences 35 (5):281-282.
    A universal property of visual word identification is position-invariant letter identification, such that the letter is coded in the same way in CAT and ACT. This should provide a fundamental constraint on theories of word identification, and, indeed, it inspired some of the theories that Frost has criticized. I show how the spatial coding scheme of Colin Davis can, in principle, account for contrasting transposed letter priming effects and, at the same time, position-invariant letter identification.
  • Sequence Encoders Enable Large‐Scale Lexical Modeling: Reply to Bowers and Davis (2009). Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2009 - Cognitive Science 33 (7):1187-1191.
    Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed‐width distributed representations of variable‐length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large‐scale word‐reading models. In this reply, it is noted that the sequence (...)