  • When Stronger Knowledge Slows You Down: Semantic Relatedness Predicts Children's Co‐Activation of Related Items in a Visual Search Paradigm. Catarina Vales & Anna V. Fisher - 2019 - Cognitive Science 43 (6):e12746.
    A large literature suggests that the organization of words in semantic memory, reflecting meaningful relations among words and the concepts to which they refer, supports many cognitive processes, including memory encoding and retrieval, word learning, and inferential reasoning. The co‐activation of related items has been proposed as a mechanism by which semantic knowledge influences cognition, and contemporary accounts of semantic knowledge propose that this co‐activation is graded—that it depends on how strongly related the items are in semantic memory. Prior research (...)
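
The graded co-activation described above can be pictured as relatedness-weighted activation passing from a cued item to its neighbors. A minimal Python sketch, purely illustrative (the relatedness scores and the linear weighting are hypothetical, not the paper's model):

```python
# Illustrative only: graded co-activation, where a cued word activates
# related words in proportion to how strongly related they are.
# The scores below are hypothetical, not data from the paper.
relatedness = {
    ("dog", "cat"): 0.8,
    ("dog", "bone"): 0.6,
    ("dog", "cloud"): 0.1,
}

def co_activation(cue: str, target: str, cue_strength: float = 1.0) -> float:
    """Activation passed from a cued item to another item, graded by relatedness."""
    score = relatedness.get((cue, target), relatedness.get((target, cue), 0.0))
    return cue_strength * score

for word in ("cat", "bone", "cloud"):
    print(f"dog -> {word}: {co_activation('dog', word):.2f}")
```
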
  • Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this (...)
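
The model's algorithm is not reproduced in the excerpt; as a loose illustration of the general idea of incrementally extracting recurring units from a sequential stream, the toy learner below repeatedly fuses the most frequent adjacent pair of units into a chunk. This is a byte-pair-encoding-style heuristic, not the authors' model:

```python
from collections import Counter

def learn_chunks(tokens: list[str], n_merges: int = 3) -> list[str]:
    """Toy chunk learner: repeatedly fuse the most frequent adjacent pair
    of units into a single unit. Illustrative only."""
    units = list(tokens)
    for _ in range(n_merges):
        pairs = Counter(zip(units, units[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:  # no pair recurs, so there is no statistical pattern to learn
            break
        merged, i = [], 0
        while i < len(units):
            if i + 1 < len(units) and units[i] == a and units[i + 1] == b:
                merged.append(a + " " + b)
                i += 2
            else:
                merged.append(units[i])
                i += 1
        units = merged
    return units

stream = "the dog ran and the dog slept and the cat ran".split()
print(learn_chunks(stream))  # "the dog" is fused into a single unit
```
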
  • Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain. Thomas Hannagan & Jonathan Grainger - 2012 - Cognitive Science 36 (4):575-606.
    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jäkel, Schölkopf, & Wichmann, 2009). We point out that "string kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal for how the brain encodes orthographic information during reading. We suggest some reasons for this connection and we derive new ideas for visual word recognition that are successfully put to the (...)
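
To make the connection concrete: an open-bigram code represents a word by its ordered letter pairs, and a string kernel scores two words by the overlap of those pairs. The sketch below is a minimal unweighted variant; published proposals differ in gap weighting and normalization:

```python
from itertools import combinations

def open_bigrams(word: str) -> set[tuple[str, str]]:
    """All ordered letter pairs, e.g. 'cat' -> {(c,a), (c,t), (a,t)}."""
    return set(combinations(word, 2))

def string_kernel(w1: str, w2: str) -> float:
    """Unweighted open-bigram overlap, normalized like a cosine."""
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    if not b1 or not b2:
        return 0.0
    return len(b1 & b2) / (len(b1) * len(b2)) ** 0.5

print(string_kernel("grain", "grin"))   # high: letters share relative order
print(string_kernel("grain", "brain"))  # high: one substitution
print(string_kernel("grain", "thump"))  # low: no shared ordered pairs
```
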
  • Comparing Methods for Single Paragraph Similarity Analysis. Benjamin Stone, Simon Dennis & Peter J. Kwantes - 2011 - Topics in Cognitive Science 3 (1):92-122.
    The focus of this paper is two-fold. First, similarities generated from six semantic models were compared to human ratings of paragraph similarity on two datasets—23 World Entertainment News Network paragraphs and 50 ABC newswire paragraphs. Contrary to findings on smaller textual units such as word associations (Griffiths, Tenenbaum, & Steyvers, 2007), our results suggest that when single paragraphs are compared, simple nonreductive models (word overlap and vector space) can provide better similarity estimates than more complex models (LSA, Topic Model, SpNMF, (...)
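
The two "simple nonreductive models" named in the abstract have compact definitions: word overlap and a cosine over raw term-frequency vectors. A sketch assuming whitespace tokenization and no stop-word filtering (the authors' exact preprocessing is not given in the excerpt):

```python
import math
from collections import Counter

def word_overlap(p1: str, p2: str) -> float:
    """Proportion of shared word types (Jaccard overlap)."""
    w1, w2 = set(p1.lower().split()), set(p2.lower().split())
    return len(w1 & w2) / len(w1 | w2)

def vector_cosine(p1: str, p2: str) -> float:
    """Cosine between raw term-frequency vectors of the two paragraphs."""
    v1, v2 = Counter(p1.lower().split()), Counter(p2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

a = "the senate passed the budget bill on friday"
b = "lawmakers approved the budget measure on friday"
print(word_overlap(a, b), vector_cosine(a, b))
```
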
  • Investigating the Extent to which Distributional Semantic Models Capture a Broad Range of Semantic Relations. Kevin S. Brown, Eiling Yee, Gitte Joergensen, Melissa Troyer, Elliot Saltzman, Jay Rueckl, James S. Magnuson & Ken McRae - 2023 - Cognitive Science 47 (5):e13291.
    Distributional semantic models (DSMs) are a primary method for distilling semantic information from corpora. However, a key question remains: What types of semantic relations among words do DSMs detect? Prior work typically has addressed this question using limited human data that are restricted to semantic similarity and/or general semantic relatedness. We tested eight DSMs that are popular in current cognitive and psycholinguistic research (positive pointwise mutual information; global vectors; and three variations each of Skip-gram and continuous bag of words (CBOW) (...)
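
Of the models listed, positive pointwise mutual information has the simplest closed form, PPMI(w, c) = max(0, log2[p(w, c) / (p(w) p(c))]). A minimal sketch over toy co-occurrence counts (the counts are invented for illustration):

```python
import math
from collections import Counter

def ppmi(cooc: Counter) -> dict:
    """PPMI(w, c) = max(0, log2(p(w, c) / (p(w) * p(c)))) from raw counts."""
    total = sum(cooc.values())
    w_counts, c_counts = Counter(), Counter()
    for (w, c), n in cooc.items():
        w_counts[w] += n
        c_counts[c] += n
    scores = {}
    for (w, c), n in cooc.items():
        pmi = math.log2((n / total) / ((w_counts[w] / total) * (c_counts[c] / total)))
        scores[(w, c)] = max(0.0, pmi)
    return scores

counts = Counter({("dog", "bark"): 10, ("dog", "the"): 50,
                  ("car", "the"): 60, ("car", "bark"): 1})
for pair, score in ppmi(counts).items():
    print(pair, round(score, 2))  # above-chance pairs keep positive scores; the rest are zeroed
```
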
  • The Construction of Meaning. Walter Kintsch & Praful Mangalath - 2011 - Topics in Cognitive Science 3 (2):346-370.
    We argue that word meanings are not stored in a mental lexicon but are generated in the context of working memory from long-term memory traces that record our experience with words. Current statistical models of semantics, such as latent semantic analysis and the Topic model, describe what is stored in long-term memory. The CI-2 model describes how this information is used to construct sentence meanings. This model is a dual-memory model, in that it distinguishes between a gist level and an (...)
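
Latent semantic analysis, one of the long-term-memory models named here, reduces a term-by-document count matrix with a truncated SVD. A toy numpy sketch (real applications typically weight the counts, e.g. with a log-entropy transform, before the SVD):

```python
import numpy as np

# Toy term-by-document counts (rows: words, columns: documents).
X = np.array([
    [2, 0, 1, 0],  # "doctor"
    [1, 0, 2, 0],  # "nurse"
    [0, 2, 0, 1],  # "guitar"
    [0, 1, 0, 2],  # "drums"
], dtype=float)

# LSA's core step: keep only the k largest singular components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]  # word positions in the latent space

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(word_vectors[0], word_vectors[1]))  # doctor ~ nurse: high
print(cosine(word_vectors[0], word_vectors[2]))  # doctor ~ guitar: low
```
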
  • Constructing Semantic Representations From a Gradually Changing Representation of Temporal Context. Marc W. Howard, Karthik H. Shankar & Udaya K. K. Jagadisan - 2011 - Topics in Cognitive Science 3 (1):48-73.
    Computational models of semantic memory exploit information about co-occurrences of words in naturally occurring text to extract information about the meaning of the words that are present in the language. Such models implicitly specify a representation of temporal context. Depending on the model, words are said to have occurred in the same context if they are presented within a moving window, within the same sentence, or within the same document. The temporal context model (TCM), which specifies a particular definition of (...)
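
The observation that models differ in their implicit definition of "same context" is easy to make concrete. Below is the moving-window definition; the sentence and document definitions simply swap in a different grouping of tokens:

```python
from collections import Counter

def window_cooccurrences(tokens: list[str], window: int = 2) -> Counter:
    """Count ordered word pairs occurring within `window` positions of
    each other -- one common definition of 'same temporal context'."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[(w, tokens[j])] += 1
    return counts

text = "the cat sat on the mat".split()
print(window_cooccurrences(text, window=2))
```
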
  • Redundancy in Perceptual and Linguistic Experience: Comparing Feature-Based and Distributional Models of Semantic Representation. Brian Riordan & Michael N. Jones - 2011 - Topics in Cognitive Science 3 (2):303-345.
    Since their inception, distributional models of semantics have been criticized as inadequate cognitive theories of human semantic learning and representation. A principal challenge is that the representations derived by distributional models are purely symbolic and are not grounded in perception and action; this challenge has led many to favor feature-based models of semantic representation. We argue that the amount of perceptual and other semantic information that can be learned from purely distributional statistics has been underappreciated. We compare the representations (...)
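
One generic way to compare feature-based and distributional representations of the same words is second-order: compute pairwise similarities within each space and correlate the two similarity profiles. This is a sketch of that idea with random placeholder vectors, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder representations of the same 5 words; in a real comparison
# these would be feature norms and corpus-derived vectors, respectively.
feature_vecs = rng.random((5, 20))
distributional_vecs = rng.random((5, 50))

def pairwise_cosines(vecs: np.ndarray) -> np.ndarray:
    """Upper-triangle cosine similarities among the rows of `vecs`."""
    normed = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = normed @ normed.T
    return sims[np.triu_indices(len(vecs), k=1)]

# Second-order agreement: do the two spaces order word pairs similarly?
r = np.corrcoef(pairwise_cosines(feature_vecs),
                pairwise_cosines(distributional_vecs))[0, 1]
print(f"representation agreement: r = {r:.2f}")
```
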
  • Computational Methods to Extract Meaning From Text and Advance Theories of Human Cognition. Danielle S. McNamara - 2011 - Topics in Cognitive Science 3 (1):3-17.
    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research (...)