  • Do Musicians and Non-musicians Differ in Speech-on-Speech Processing? Elif Canseza Kaplan, Anita E. Wagner, Paolo Toffanin & Deniz Başkent - 2021 - Frontiers in Psychology 12.
    Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception, which reveals a post-task response, and online measures of real time spoken language (...)
  • Waiting for lexical access: Cochlear implants or severely degraded input lead listeners to process speech less incrementally. Bob McMurray, Ashley Farris-Trimble & Hannah Rigler - 2017 - Cognition 169 (C):147-164.
  • (1 other version) What Are You Waiting For? Real‐Time Integration of Cues for Fricatives Suggests Encapsulated Auditory Memory. Marcus E. Galle, Jamie Klein-Packard, Kayleen Schreiber & Bob McMurray - 2019 - Cognitive Science 43 (1):e12700.
    Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: (a) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and (b) an immediate integration scheme in which lexical representations can be partially activated on the basis of early (...)
  • Bottom-up processes dominate early word recognition in toddlers. Janette Chow, Armando Q. Angulo-Chavira, Marlene Spangenberg, Leonie Hentrup & Kim Plunkett - 2022 - Cognition 228 (C):105214.
  • One Size Does Not Fit All: Examining the Effects of Working Memory Capacity on Spoken Word Recognition in Older Adults Using Eye Tracking. Gal Nitsan, Karen Banai & Boaz M. Ben-David - 2022 - Frontiers in Psychology 13.
    Difficulties understanding speech form one of the most prevalent complaints among older adults. Successful speech perception depends on top-down linguistic and cognitive processes that interact with the bottom-up sensory processing of the incoming acoustic information. The relative roles of these processes in age-related difficulties in speech perception, especially when listening conditions are not ideal, are still unclear. In the current study, we asked whether older adults with a larger working memory capacity process speech more efficiently than peers with lower capacity (...)
  • Efficiency of spoken word recognition slows across the adult lifespan. Sarah E. Colby & Bob McMurray - 2023 - Cognition 240 (C):105588.
  • Cognitive processes underlying spoken word recognition during soft speech. Kristi Hendrickson, Jessica Spinelli & Elizabeth Walker - 2020 - Cognition 198 (C):104196.
  • The Two Sides of Linguistic Context: Eye-Tracking as a Measure of Semantic Competition in Spoken Word Recognition Among Younger and Older Adults. Nicolai D. Ayasse & Arthur Wingfield - 2020 - Frontiers in Human Neuroscience 14.