  1. Iconic Gestures Prime Words. De-Fu Yap, Wing-Chee So, Ju-Min Melvin Yap, Ying-Quan Tan & Ruo-Li Serene Teoh - 2011 - Cognitive Science 35 (1):171-183.
    Using a cross-modal semantic priming paradigm, both experiments of the present study investigated the link between the mental representations of iconic gestures and words. Two groups of participants performed a primed lexical decision task in which they had to discriminate between visually presented words and nonwords (e.g., flirp). Word targets (e.g., bird) were preceded by video clips depicting either semantically related (e.g., a pair of hands flapping) or semantically unrelated (e.g., drawing a square with both hands) gestures. The duration of gestures (...)
  2. Investigating joint attention mechanisms through spoken human–robot interaction. Maria Staudte & Matthew W. Crocker - 2011 - Cognition 120 (2):268-291.