  • Dynamical Systems Implementation of Intrinsic Sentence Meaning. Hermann Moisl - 2022 - Minds and Machines 32 (4):627-653.
    This paper proposes a model for implementation of intrinsic natural language sentence meaning in a physical language understanding system, where 'intrinsic' is understood as 'independent of meaning ascription by system-external observers'. The proposal is that intrinsic meaning can be implemented as a point attractor in the state space of a nonlinear dynamical system with feedback which is generated by temporally sequenced inputs. It is motivated by John Searle's well known (Behavioral and Brain Sciences, 3: 417–57, 1980) critique of the then-standard (...)
  • Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural‐language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this (...)
  • Input and Age‐Dependent Variation in Second Language Learning: A Connectionist Account. Marius Janciauskas & Franklin Chang - 2018 - Cognitive Science 42 (S2):519-554.
    Language learning requires linguistic input, but several studies have found that knowledge of second language rules does not seem to improve with more language exposure. One reason for this is that previous studies did not factor out variation due to the different rules tested. To examine this issue, we reanalyzed grammaticality judgment scores in Flege, Yeni-Komshian, and Liu's study of L2 learners using rule-related predictors and found that, in addition to the overall drop in performance due to a sensitive period, (...)
  • Lossy‐Context Surprisal: An Information‐Theoretic Model of Memory Effects in Sentence Processing. Richard Futrell, Edward Gibson & Roger P. Levy - 2020 - Cognitive Science 44 (3):e12814.
    A key component of research on human sentence processing is to characterize the processing difficulty associated with the comprehension of words in context. Models that explain and predict this difficulty can be broadly divided into two kinds, expectation‐based and memory‐based. In this work, we present a new model of incremental sentence processing difficulty that unifies and extends key features of both kinds of models. Our model, lossy‐context surprisal, holds that the processing difficulty at a word in context is proportional to (...)
  • Reservoir Computing and the Sooner-is-Better Bottleneck. Stefan L. Frank & Hartmut Fitz - 2016 - Behavioral and Brain Sciences 39.
  • Public Language, Private Language, and Subsymbolic Theories of Mind. Gabe Dupre - 2023 - Mind and Language 38 (2):394-412.
    Language has long been a problem‐case for subsymbolic theories of mind. The reason for this is obvious: Language seems essentially symbolic. However, recent work has developed a potential solution to this problem, arguing that linguistic symbols are public objects which augment a fundamentally subsymbolic mind, rather than components of cognitive symbol‐processing. I shall argue that this strategy cannot work, on the grounds that human language acquisition consists in projecting linguistic structure onto environmental entities, rather than extracting this structure from them.
  • Representing Types as Neural Events. Robin Cooper - 2019 - Journal of Logic, Language and Information 28 (2):131-155.
    One of the claims of Type Theory with Records is that it can be used to model types learned by agents in order to classify objects and events in the world, including speech events. That is, the types can be represented by patterns of neural activation in the brain. This claim would be empty if it turns out that the types are in principle impossible to represent on a finite network of neurons. We will discuss how to represent types in (...)