  • From Senses to Texts: An All-in-One Graph-Based Approach for Measuring Semantic Similarity. Mohammad Taher Pilehvar & Roberto Navigli - 2015 - Artificial Intelligence 228 (C):95-128.
  • Distributional Models of Category Concepts Based on Names of Category Members. Matthijs Westera, Abhijeet Gupta, Gemma Boleda & Sebastian Padó - 2021 - Cognitive Science 45 (9):e13029.
    Cognitive scientists have long used distributional semantic representations of categories. The predominant approach uses distributional representations of category‐denoting nouns, such as “city” for the category city. We propose a novel scheme that represents categories as prototypes over representations of names of its members, such as “Barcelona,” “Mumbai,” and “Wuhan” for the category city. This name‐based representation empirically outperforms the noun‐based representation on two experiments (modeling human judgments of category relatedness and predicting category membership) with particular improvements for ambiguous nouns. We (...)
  • Modeling the Structure and Dynamics of Semantic Processing. Armand S. Rotaru, Gabriella Vigliocco & Stefan L. Frank - 2018 - Cognitive Science 42 (8):2890-2917.
    The contents and structure of semantic memory have been the focus of much recent research, with major advances in the development of distributional models, which use word co‐occurrence information as a window into the semantics of language. In parallel, connectionist modeling has extended our knowledge of the processes engaged in semantic activation. However, these two lines of investigation have rarely been brought together. Here, we describe a processing model based on distributional semantics in which activation spreads throughout a semantic network, (...)
  • On the Importance of a Rich Embodiment in the Grounding of Concepts: Perspectives From Embodied Cognitive Science and Computational Linguistics. Serge Thill, Sebastian Padó & Tom Ziemke - 2014 - Topics in Cognitive Science 6 (3):545-558.
    The recent trend in cognitive robotics experiments on language learning, symbol grounding, and related issues necessarily entails a reduction of sensorimotor aspects from those provided by a human body to those that can be realized in machines, limiting robotic models of symbol grounding in this respect. Here, we argue that there is a need for modeling work in this domain to explicitly take into account the richer human embodiment even for concrete concepts that prima facie relate merely to simple actions, (...)
  • Automatic Extraction of Property Norm‐Like Data From Large Text Corpora. Colin Kelly, Barry Devereux & Anna Korhonen - 2014 - Cognitive Science 38 (4):638-682.
    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car—petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, (...)
  • Perceptual Inference Through Global Lexical Similarity. Brendan T. Johns & Michael N. Jones - 2012 - Topics in Cognitive Science 4 (1):103-120.
    The literature contains a disconnect between accounts of how humans learn lexical semantic representations for words. Theories generally propose that lexical semantics are learned either through perceptual experience or through exposure to regularities in language. We propose here a model to integrate these two information sources. Specifically, the model uses the global structure of memory to exploit the redundancy between language and perception in order to generate inferred perceptual representations for words with which the model has no perceptual experience. We (...)