• Shortlist B: A Bayesian model of continuous speech recognition. Dennis Norris & James M. McQueen - 2008 - Psychological Review 115 (2): 357-395.
• The process of spoken word recognition: An introduction. Uli H. Frauenfelder & Lorraine Komisarjevsky Tyler - 1987 - Cognition 25 (1-2): 1-20.
• The Role of Embodied Intention in Early Lexical Acquisition. Chen Yu, Dana H. Ballard & Richard N. Aslin - 2005 - Cognitive Science 29 (6): 961-1005.
    We examine the influence of inferring interlocutors' referential intentions from their body movements at the early stage of lexical acquisition. By testing human participants and comparing their performances in different learning conditions, we find that those embodied intentions facilitate both word discovery and word‐meaning association. In light of empirical findings, the main part of this article presents a computational model that can identify the sound patterns of individual words from continuous speech, using nonlinguistic contextual information, and employ body movements as (...)
• How Should a Speech Recognizer Work? Odette Scharenborg, Dennis Norris, Louis ten Bosch & James M. McQueen - 2005 - Cognitive Science 29 (6): 867-918.
    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the (...)
• Acoustic-phonetic representations in word recognition. David B. Pisoni & Paul A. Luce - 1987 - Cognition 25 (1-2): 21-52.
• Phonotactic cues for segmentation of fluent speech by infants. Sven L. Mattys & Peter W. Jusczyk - 2001 - Cognition 78 (2): 91-121.
• The mental representation of lexical form: A phonological approach to the recognition lexicon. Aditi Lahiri & William Marslen-Wilson - 1991 - Cognition 38 (3): 245-294.
• Distributional regularity and phonotactic constraints are useful for segmentation. Michael R. Brent & Timothy A. Cartwright - 1996 - Cognition 61 (1-2): 93-125.