References
  • EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many-to-many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side-stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real-world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them to (...)
  • Individual aptitude in Mandarin lexical tone perception predicts effectiveness of high-variability training. Makiko Sadakata & James M. McQueen - 2014 - Frontiers in Psychology 5.
  • Shortlist B: A Bayesian model of continuous speech recognition. Dennis Norris & James M. McQueen - 2008 - Psychological Review 115 (2):357-395.
  • Phonological abstraction without phonemes in speech perception. Holger Mitterer, Odette Scharenborg & James M. McQueen - 2013 - Cognition 129 (2):356-361.
  • Phonological Abstraction in the Mental Lexicon. James M. McQueen, Anne Cutler & Dennis Norris - 2006 - Cognitive Science 30 (6):1113-1126.
    A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f-s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife-nice), none of which had been heard in training. (...)