References
  • Connectionist Sentence Processing in Perspective.Mark Steedman - 1999 - Cognitive Science 23 (4):615-634.
    The emphasis in the connectionist sentence‐processing literature on distributed representation and emergence of grammar from such systems can easily obscure the often close relations between connectionist and symbolist systems. This paper argues that the Simple Recurrent Network (SRN) models proposed by Jordan (1989) and Elman (1990) are more directly related to stochastic Part‐of‐Speech (POS) Taggers than to parsers or grammars as such, while auto‐associative memory models of the kind pioneered by Longuet–Higgins, Willshaw, Pollack and others may be useful for grammar (...)
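    The Simple Recurrent Network (SRN) discussed in this entry is compact enough to state directly. Below is a minimal sketch of an Elman-style SRN in Python/NumPy; the layer sizes, random weights, and toy token sequence are invented for illustration and are not taken from the paper:

        import numpy as np

        # Elman-style SRN: the hidden state is fed back as a "context"
        # input at the next time step, so predictions depend on history.
        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 5, 8, 5              # one-hot "words" in and out

        W_xh = rng.normal(0, 0.1, (n_hid, n_in))  # input -> hidden
        W_hh = rng.normal(0, 0.1, (n_hid, n_hid)) # context -> hidden
        W_hy = rng.normal(0, 0.1, (n_out, n_hid)) # hidden -> output

        def step(x, context):
            """One SRN time step: mix current input with prior hidden state."""
            h = np.tanh(W_xh @ x + W_hh @ context)
            y = np.exp(W_hy @ h)
            return h, y / y.sum()                 # softmax next-token guess

        context = np.zeros(n_hid)
        for token in [0, 2, 1, 4]:                # a toy symbol sequence
            context, probs = step(np.eye(n_in)[token], context)
            print(token, probs.round(2))

    Trained by error correction on next-word prediction, this is the architecture the paper likens to a stochastic part-of-speech tagger rather than to a parser or grammar.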
  • Connectionism.James Garson & Cameron Buckner - 2019 - Stanford Encyclopedia of Philosophy.
  • On the potential of non-classical constituency.W. F. G. Haselager - 1999 - Acta Analytica 14 (4):23-42.
  • Strong semantic systematicity from Hebbian connectionist learning.Robert Hadley & Michael Hayward - 1997 - Minds and Machines 7 (1):1-55.
    Fodor's and Pylyshyn's stand on systematicity in thought and language has been debated and criticized. Van Gelder and Niklasson, among others, have argued that Fodor and Pylyshyn offer no precise definition of systematicity. However, our concern here is with a learning-based formulation of that concept. In particular, Hadley has proposed that a network exhibits strong semantic systematicity when, as a result of training, it can assign appropriate meaning representations to novel sentences (both simple and embedded) which contain words in (...)
  • Systematic minds, unsystematic models: Learning transfer in humans and networks.Steven Phillips - 1999 - Minds and Machines 9 (3):383-398.
    Minds are said to be systematic: the capacity to entertain certain thoughts confers the capacity to entertain other, related thoughts. Although an important property of human cognition, its implication for cognitive architecture has been less than clear. In part, the uncertainty is due to a lack of precise accounts of the degree to which cognition is systematic. However, a recent study on learning transfer provides one clear example. This study is used here to compare transfer in humans and feedforward networks. Simulations and analysis show (...)
  • The Logical Problem of Language Acquisition: A Probabilistic Perspective.Anne S. Hsu & Nick Chater - 2010 - Cognitive Science 34 (6):972-1016.
    Natural language is full of patterns that appear to fit with general linguistic rules but are ungrammatical. There has been much debate over how children acquire these “linguistic restrictions,” and whether innate language knowledge is needed. Recently, it has been shown that restrictions in language can be learned asymptotically via probabilistic inference using the minimum description length (MDL) principle. Here, we extend the MDL approach to give a simple and practical methodology for estimating how much linguistic data are required to (...)
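    The MDL argument in this abstract turns on a simple trade-off: an over-general grammar is cheaper to state but codes each sentence less efficiently, so the restricted grammar wins once enough data has been seen. A toy Python illustration of that trade-off, with all bit costs invented for the example:

        # Hypothetical description-length costs, in bits. The over-general
        # grammar is short to state but wasteful per sentence; the restricted
        # grammar costs more up front but codes each sentence more tightly.
        grammar_bits = {"over_general": 20.0, "restricted": 50.0}
        bits_per_sentence = {"over_general": 6.0, "restricted": 4.5}

        def total_dl(grammar, n_sentences):
            """MDL score: cost of the grammar plus cost of the data under it."""
            return grammar_bits[grammar] + n_sentences * bits_per_sentence[grammar]

        # Roughly how much data is needed before the restriction is preferred?
        for n in (5, 10, 20, 40, 80):
            og, r = total_dl("over_general", n), total_dl("restricted", n)
            winner = "restricted" if r < og else "over_general"
            print(f"{n:3d} sentences: {og:6.1f} vs {r:6.1f} bits -> {winner}")

    On these made-up numbers the crossover comes just past 20 sentences; the paper's contribution is a principled method for estimating such quantities for real linguistic restrictions.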
  • Language Learning From Positive Evidence, Reconsidered: A Simplicity-Based Approach.Anne S. Hsu, Nick Chater & Paul Vitányi - 2013 - Topics in Cognitive Science 5 (1):35-55.
    Children learn their native language by exposure to their linguistic and communicative environment, but apparently without requiring that their mistakes be corrected. Such learning from “positive evidence” has been viewed as raising “logical” problems for language acquisition. In particular, without correction, how is the child to recover from conjecturing an over-general grammar, which will be consistent with any sentence that the child hears? There have been many proposals concerning how this “logical problem” can be dissolved. In this study, we review (...)
  • Connectionism, systematicity, and the frame problem.W. F. G. Haselager & J. F. H. Van Rappard - 1998 - Minds and Machines 8 (2):161-179.
    This paper investigates connectionism's potential to solve the frame problem. The frame problem arises in the context of modelling the human ability to see the relevant consequences of events in a situation. It has been claimed to be unsolvable for classical cognitive science, but easily manageable for connectionism. We will focus on a representational approach to the frame problem which advocates the use of intrinsic representations. We argue that although connectionism's distributed representations may look promising from this perspective, doubts can (...)
  • Systematicity Revisited: Reply to Christiansen and Chater and Niklasson and van Gelder.Robert F. Hadley - 1994 - Mind and Language 9 (4):431-444.
  • The language faculty that wasn't: a usage-based account of natural language recursion.Morten H. Christiansen & Nick Chater - 2015 - Frontiers in Psychology 6:1182.
    In the generative tradition, the language faculty has been shrinking—perhaps to include only the mechanism of recursion. This paper argues that even this view of the language faculty is too expansive. We first argue that a language faculty is difficult to reconcile with evolutionary considerations. We then focus on recursion as a detailed case study, arguing that our ability to process recursive structure does not rely on recursion as a property of the grammar, but instead emerges gradually by piggybacking on (...)
  • Toward a Connectionist Model of Recursion in Human Linguistic Performance.Morten H. Christiansen & Nick Chater - 1999 - Cognitive Science 23 (2):157-205.
    Naturally occurring speech contains only a limited amount of complex recursive structure, and this is reflected in the empirically documented difficulties that people experience when processing such structures. We present a connectionist model of human performance in processing recursive language structures. The model is trained on simple artificial languages. We find that the qualitative performance profile of the model matches human behavior, both on the relative difficulty of center‐embedding and cross‐dependency, and between the processing of these complex recursive structures and (...)
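    The two complex recursive structures the model is tested on differ only in how noun-verb dependencies are ordered. A small Python sketch generating both patterns from an invented vocabulary (the artificial training languages in the paper are defined more carefully than this):

        nouns = ["boy", "dog", "cat"]
        verbs = ["sees", "chases", "likes"]

        def center_embedded(depth):
            """N1 .. Nd Vd .. V1 -- dependencies nest like matched brackets."""
            return " ".join(nouns[:depth] + verbs[:depth][::-1])

        def cross_serial(depth):
            """N1 .. Nd V1 .. Vd -- dependencies cross, as in Dutch."""
            return " ".join(nouns[:depth] + verbs[:depth])

        for d in (1, 2, 3):
            print(d, "center:", center_embedded(d), "| cross:", cross_serial(d))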
  • Connectionist Natural Language Processing: The State of the Art.Morten H. Christiansen & Nick Chater - 1999 - Cognitive Science 23 (4):417-437.
    This Special Issue on Connectionist Models of Human Language Processing provides an opportunity for an appraisal both of specific connectionist models and of the status and utility of connectionist models of language in general. This introduction provides the background for the papers in the Special Issue. The development of connectionist models of language is traced, from their intellectual origins, to the state of current research. Key themes that arise throughout different areas of connectionist psycholinguistics are highlighted, and recent developments in (...)
  • Compositionality and the modelling of complex concepts.Nick Braisby - 1998 - Minds and Machines 8 (4):479-508.
    The nature of complex concepts has important implications for the computational modelling of the mind, as well as for the cognitive science of concepts. This paper outlines the way in which RVC – a Relational View of Concepts – accommodates a range of complex concepts, cases which have been argued to be non-compositional. RVC attempts to integrate a number of psychological, linguistic and psycholinguistic considerations with the situation-theoretic view that information-carrying relations hold only relative to background situations. The central tenet (...)
  • Two ways of learning associations.Luke Boucher & Zoltán Dienes - 2003 - Cognitive Science 27 (6):807-842.
    How people learn chunks or associations between adjacent items in sequences was modelled. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan‐Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which produces local and compositionally‐structured chunk representations acquired incrementally; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. The models' susceptibility to two (...)
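    Both models contrasted in this entry learn associations between adjacent items, though by very different mechanisms. As a schematic baseline only, far simpler than either the CCN or the SRN, adjacent-item associations can be captured with bigram frequency counts; the training strings below are invented:

        from collections import Counter

        sequences = ["MTVRX", "VXMTR", "MTRVX"]   # toy "grammatical" strings

        # Count how often each adjacent pair (bigram) occurs in training.
        bigrams = Counter()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                bigrams[a + b] += 1

        def chunk_strength(s):
            """Mean familiarity of a test string's adjacent pairs."""
            pairs = [a + b for a, b in zip(s, s[1:])]
            return sum(bigrams[p] for p in pairs) / len(pairs)

        print(chunk_strength("MTVR"))   # built from familiar chunks
        print(chunk_strength("RXTM"))   # built from mostly novel chunks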