
Citations of:

Finding Structure in Time

Cognitive Science 14 (2):179-211 (1990)

  • Representation and knowledge are not the same thing.Leslie Smith - 1999 - Behavioral and Brain Sciences 22 (5):784-785.
    Two standard epistemological accounts are conflated in Dienes & Perner's account of knowledge, and this conflation requires the rejection of their four conditions of knowledge. Because their four metarepresentations applied to the explicit-implicit distinction are paired with these conditions, it follows by modus tollens that if the latter are inadequate, then so are the former. Quite simply, their account misses the link between true reasoning and knowledge.
  • What's new here?Bruce Mangan - 1999 - Behavioral and Brain Sciences 22 (1):160-161.
    O'Brien & Opie's (O&O's) theory demands a view of unconscious processing that is incompatible with virtually all current PDP models of neural activity. Relative to the alternatives, the theory is closer to an AI than a parallel distributed processing (PDP) perspective, and its treatment of phenomenology is ad hoc. It raises at least one important question: Could features of network relaxation be the “switch” that turns an unconscious into a conscious network?
  • Models of cognition: Neurological possibility does not indicate neurological plausibility.Peter R. Krebs - 2005 - In Proceedings of CogSci 2005. Mahwah, New Jersey: Lawrence Erlbaum Associates. pp. 1184-1189.
    Many activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence and the study of artificial neural nets in particular, are seen as major contributors in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. Cognitive functions, like learning to speak, or discovering syntactical structures in language, have been modeled and these models (...)
    4 citations
  • Smoke without fire: What do virtual experiments in cognitive science really tell us?Mr Peter R. Krebs - unknown
    Many activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence and the study of artificial neural nets in particular, are seen as major contributors in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. Simulations of cognitive functions, like learning to speak, or discovering syntactical structures in language, are the basis for (...)
  • Representational redescription and cognitive architectures.Antonella Carassa & Maurizio Tirassa - 1994 - Behavioral and Brain Sciences 17 (4):711-712.
    We focus on Karmiloff-Smith's Representational redescription model, arguing that it poses some problems concerning the architecture of a redescribing system. To discuss the topic, we consider the implicit/explicit dichotomy and the relations between natural language and the language of thought. We argue that the model regards how knowledge is employed rather than how it is represented in the system.
  • A bound on synchronically interpretable structure.Jon M. Slack - 2004 - Mind and Language 19 (3):305–333.
    Multiple explanatory frameworks may be required to provide an adequate account of human cognition. This paper embeds the classical account within a neural network framework, exploring the encoding of syntactically structured objects over the synchronic/diachronic characteristics of networks. Synchronic structure is defined in terms of temporal binding and the superposition of states. To accommodate asymmetric relations, synchronic structure is subject to the type uniqueness constraint. The nature of synchronic structure is shown to underlie X-bar theory that characterizes the phrasal structure of (...)
  • Characteristics of dissociable human learning systems.David R. Shanks & Mark F. St John - 1994 - Behavioral and Brain Sciences 17 (3):367-447.
    A number of ways of taxonomizing human learning have been proposed. We examine the evidence for one such proposal, namely, that there exist independent explicit and implicit learning systems. This combines two further distinctions, (1) between learning that takes place with versus without concurrent awareness, and (2) between learning that involves the encoding of instances (or fragments) versus the induction of abstract rules or hypotheses. Implicit learning is assumed to involve unconscious rule learning. We examine the evidence for implicit learning (...)
    193 citations
  • Implicit learning: News from the front.Axel Cleeremans, Arnaud Destrebecqz & Maud Boyer - 1998 - Trends in Cognitive Sciences 2 (10):406-416.
    84 citations
  • Accounting for the computational basis of consciousness: A connectionist approach.Ron Sun - 1999 - Consciousness and Cognition 8 (4):529-565.
    This paper argues for an explanation of the mechanistic (computational) basis of consciousness that is based on the distinction between localist (symbolic) representation and distributed representation, the ideas of which have been put forth in the connectionist literature. A model is developed to substantiate and test this approach. The paper also explores the issue of the functional roles of consciousness, in relation to the proposed mechanistic explanation of consciousness. The model, embodying the representational difference, is able to account for the (...)
    15 citations
  • Consciousness: A connectionist manifesto. [REVIEW]Dan Lloyd - 1995 - Minds and Machines 5 (2):161-85.
    Connectionism and phenomenology can mutually inform and mutually constrain each other. In this manifesto I outline an approach to consciousness based on distinctions developed by connectionists. Two core identities are central to a connectionist theory of consciousness: conscious states of mind are identical to occurrent activation patterns of processing units; and the variable dispositional strengths on connections between units store latent and unconscious information. Within this broad framework, a connectionist model of consciousness succeeds according to the degree of correspondence between (...)
    13 citations
  • Consciousness, connectionism, and cognitive neuroscience: A meeting of the minds.Dan Lloyd - 1996 - Philosophical Psychology 9 (1):61-78.
    Accounting for phenomenal structure—the forms, aspects, and features of conscious experience—poses a deep challenge for the scientific study of consciousness, but rather than abandon hope I propose a way forward. Connectionism, I argue, offers a bi-directional analogy, with its oft-noted “neural inspiration” on the one hand, and its largely unnoticed capacity to illuminate our phenomenology on the other. Specifically, distributed representations in a recurrent network enable networks to superpose categorical, contextual, and temporal information on a specific input representation, much as (...)
    8 citations
  • Implicit Learning and Consciousness: A Graded, Dynamic Perspective.Axel Cleeremans & Luis Jimenez - 2002 - In Robert M. French & Axel Cleeremans (eds.), Implicit Learning and Consciousness: An Empirical, Philosophical and Computational Consensus in the Making. Psychology Press.
    While the study of implicit learning is nothing new, the field as a whole has come to embody — over the last decade or so — ongoing questioning about three of the most fundamental debates in the cognitive sciences: The nature of consciousness, the nature of mental representation (in particular the difficult issue of abstraction), and the role of experience in shaping the cognitive system. Our main goal in this chapter is to offer a framework that attempts to integrate current (...)
    30 citations
  • Comparing direct and indirect measures of sequence learning.Luis Jimenez, Castor Mendez & Axel Cleeremans - 1996 - Journal of Experimental Psychology: Learning, Memory, and Cognition 22 (4):948-969.
    Comparing the relative sensitivity of direct and indirect measures of learning is proposed as the best way to provide evidence for unconscious learning when both conceptual and operative definitions of awareness are lacking. This approach was first proposed by Reingold & Merikle (1988) in the context of subliminal perception. In this paper, we apply it to a choice reaction time task in which the material is generated based on a probabilistic finite-state grammar (Cleeremans, 1993). We show (1) that participants progressively (...)
    9 citations
  • Inside Doubt: On the Non-Identity of the Theory of Mind and Propositional Attitude Psychology. [REVIEW]David Landy - 2005 - Minds and Machines 15 (3-4):399-414.
    Eliminative materialism is a popular view of the mind which holds that propositional attitudes, the typical units of our traditional understanding, are unsupported by modern connectionist psychology and neuroscience, and consequently that propositional attitudes are a poor scientific postulate, and do not exist. Since our traditional folk psychology employs propositional attitudes, the usual argument runs, it too represents a poor theory, and may in the future be replaced by a more successful neurologically grounded theory, resulting in a drastic improvement in (...)
    1 citation
  • High-level perception, representation, and analogy: A critique of artificial intelligence methodology.David J. Chalmers, Robert M. French & Douglas R. Hofstadter - 1992 - Journal of Experimental and Theoretical Artificial Intelligence 4 (3):185-211.
    High-level perception, the process of making sense of complex data at an abstract, conceptual level, is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models, notably (...)
    18 citations
  • Do connectionist representations earn their explanatory keep?William Ramsey - 1997 - Mind and Language 12 (1):34-66.
    In this paper I assess the explanatory role of internal representations in connectionist models of cognition. Focusing on both the internal 'hidden' units and the connection weights between units, I argue that the standard reasons for viewing these components as representations are inadequate to bestow an explanatorily useful notion of representation. Hence, nothing would be lost from connectionist accounts of cognitive processes if we were to stop viewing the weights and hidden units as internal representations.
    25 citations
  • Content and Its vehicles in connectionist systems.Nicholas Shea - 2007 - Mind and Language 22 (3):246–269.
    This paper advocates explicitness about the type of entity to be considered as content-bearing in connectionist systems; it makes a positive proposal about how vehicles of content should be individuated; and it deploys that proposal to argue in favour of representation in connectionist systems. The proposal is that the vehicles of content in some connectionist systems are clusters in the state space of a hidden layer. Attributing content to such vehicles is required to vindicate the standard explanation for some (...)
    26 citations
  • Directions in Connectionist Research: Tractable Computations Without Syntactically Structured Representations.Jonathan Waskan & William Bechtel - 1997 - Metaphilosophy 28 (1‐2):31-62.
    Figure 1: A prototypical example of a three-layer feedforward network, used by Plunkett and Marchman (1991) to simulate learning the past tense of English verbs. The input units encode representations of the three phonemes of the present tense of the artificial words used in this simulation. The network is trained to produce a representation of the phonemes employed in the past tense form and the suffix (/d/, /ed/, or /t/) (...)
    2 citations
  • Generalization and connectionist language learning.Morten H. Christiansen & Nick Chater - 1994 - Mind and Language 9 (3):273-87.
    15 citations
  • Systematicity in connectionist language learning.Robert F. Hadley - 1994 - Mind and Language 9 (3):247-72.
    40 citations
  • Systematicity revisited.Robert F. Hadley - 1994 - Mind and Language 9 (4):431-44.
    13 citations
  • Recursive distributed representations.Jordan B. Pollack - 1990 - Artificial Intelligence 46 (1-2):77-105.
    131 citations
  • Compositionality in cognitive models: The real issue. [REVIEW]Keith Butler - 1995 - Philosophical Studies 78 (2):153-62.
    2 citations
  • Philosophy meets the neurosciences.William Bechtel, Pete Mandik & Jennifer Mundale - 2001 - In William P. Bechtel, Pete Mandik, Jennifer Mundale & Robert S. Stufflebeam (eds.), Philosophy and the Neurosciences: A Reader. Malden, Mass.: Blackwell.
    15 citations
  • A predictive coding model of the N400.Samer Nour Eddine, Trevor Brothers, Lin Wang, Michael Spratling & Gina R. Kuperberg - 2024 - Cognition 246 (C):105755.
  • Statistical learning of syllable sequences as trajectories through a perceptual similarity space.Wendy Qi & Jason D. Zevin - 2024 - Cognition 244 (C):105689.
  • Cognitive Mechanisms Underlying Recursive Pattern Processing in Human Adults.Abhishek M. Dedhe, Steven T. Piantadosi & Jessica F. Cantlon - 2023 - Cognitive Science 47 (4):e13273.
    The capacity to generate recursive sequences is a marker of rich, algorithmic cognition, and perhaps unique to humans. Yet, the precise processes driving recursive sequence generation remain mysterious. We investigated three potential cognitive mechanisms underlying recursive pattern processing: hierarchical reasoning, ordinal reasoning, and associative chaining. We developed a Bayesian mixture model to quantify the extent to which these three cognitive mechanisms contribute to adult humans’ performance in a sequence generation task. We further tested whether recursive rule discovery depends upon relational (...)
    1 citation
  • Origins of Hierarchical Logical Reasoning.Abhishek M. Dedhe, Hayley Clatterbuck, Steven T. Piantadosi & Jessica F. Cantlon - 2023 - Cognitive Science 47 (2):e13250.
    Hierarchical cognitive mechanisms underlie sophisticated behaviors, including language, music, mathematics, tool-use, and theory of mind. The origins of hierarchical logical reasoning have long been, and continue to be, an important puzzle for cognitive science. Prior approaches to hierarchical logical reasoning have often failed to distinguish between observable hierarchical behavior and unobservable hierarchical cognitive mechanisms. Furthermore, past research has been largely methodologically restricted to passive recognition tasks as compared to active generation tasks that are stronger tests of hierarchical rules. We argue (...)
  • No need to forget, just keep the balance: Hebbian neural networks for statistical learning.Ángel Eugenio Tovar & Gert Westermann - 2023 - Cognition 230 (C):105176.
    1 citation
  • A Unifying Perspective on Perception and Cognition Through Linguistic Representations of Emotion.Prakash Mondal - 2022 - Frontiers in Psychology 13.
    This article will provide a unifying perspective on perception and cognition via the route of linguistic representations of emotion. Linguistic representations of emotions provide a fertile ground for explorations into the nature and form of integration of perception and cognition because emotion has facets of both perceptual and cognitive processes. In particular, this article shows that certain types of linguistic representations of emotion allow for the integration of perception and cognition through a series of steps and operations in cognitive systems, (...)
  • Knowledge-augmented face perception: Prospects for the Bayesian brain-framework to align AI and human vision.Martin Maier, Florian Blume, Pia Bideau, Olaf Hellwich & Rasha Abdel Rahman - 2022 - Consciousness and Cognition 101:103301.
    2 citations
  • A parallel architecture perspective on pre-activation and prediction in language processing.Falk Huettig, Jenny Audring & Ray Jackendoff - 2022 - Cognition 224:105050.
    2 citations
  • Interaction history as a source of compositionality in emergent communication.Tomasz Korbak, Julian Zubek, Łukasz Kuciński, Piotr Miłoś & Joanna Rączaszek-Leonardi - 2021 - Interaction Studies 22 (2):212-243.
    In this paper, we explore interaction history as a particular source of pressure for achieving emergent compositional communication in multi-agent systems. We propose a training regime implementing template transfer, the idea of carrying over learned biases across contexts. In the presented method, a sender-receiver dyad is first trained with a disentangled pair of objectives, and then the receiver is transferred to train a new sender with a standard objective. Unlike other methods, the template transfer approach does not require imposing inductive (...)
  • (What) Can Deep Learning Contribute to Theoretical Linguistics?Gabe Dupre - 2021 - Minds and Machines 31 (4):617-635.
    Deep learning techniques have revolutionised artificial systems’ performance on myriad tasks, from playing Go to medical diagnosis. Recent developments have extended such successes to natural language processing, an area once deemed beyond such systems’ reach. Despite their different goals, these successes have suggested that such systems may be pertinent to theoretical linguistics. The competence/performance distinction presents a fundamental barrier to such inferences. While DL systems are trained on linguistic performance, linguistic theories are aimed at competence. Such a barrier has traditionally (...)
    5 citations
  • Finding event structure in time: What recurrent neural networks can tell us about event structure in mind.Forrest Davis & Gerry T. M. Altmann - 2021 - Cognition 213 (C):104651.
  • When forgetting fosters learning: A neural network model for statistical learning.Ansgar D. Endress & Scott P. Johnson - 2021 - Cognition 213 (C):104621.
    5 citations
  • Evaluating models of robust word recognition with serial reproduction.Stephan C. Meylan, Sathvik Nair & Thomas L. Griffiths - 2021 - Cognition 210 (C):104553.
    Spoken communication occurs in a “noisy channel” characterized by high levels of environmental noise, variability within and between speakers, and lexical and syntactic ambiguity. Given these properties of the received linguistic input, robust spoken word recognition—and language processing more generally—relies heavily on listeners' prior knowledge to evaluate whether candidate interpretations of that input are more or less likely. Here we compare several broad-coverage probabilistic generative language models in their ability to capture human linguistic expectations. Serial reproduction, an experimental paradigm where (...)
  • From senses to texts: An all-in-one graph-based approach for measuring semantic similarity.Mohammad Taher Pilehvar & Roberto Navigli - 2015 - Artificial Intelligence 228 (C):95-128.
    2 citations
  • Response to my critics.Hubert L. Dreyfus - 1996 - Artificial Intelligence 80 (1):171-191.
    8 citations
  • Event‐Predictive Cognition: A Root for Conceptual Human Thought.Martin V. Butz, Asya Achimova, David Bilkey & Alistair Knott - 2021 - Topics in Cognitive Science 13 (1):10-24.
    Butz, Achimova, Bilkey, and Knott provide a topic overview and discuss whether the special issue contributions may imply that event‐predictive abilities constitute a root for conceptual human thought, because they enable complex, mutually beneficial, but also intricately competitive, social interactions and language communication.
    9 citations
  • Adjacent and Non‐Adjacent Word Contexts Both Predict Age of Acquisition of English Words: A Distributional Corpus Analysis of Child‐Directed Speech.Lucas M. Chang & Gedeon O. Deák - 2020 - Cognitive Science 44 (11):e12899.
    Children show a remarkable degree of consistency in learning some words earlier than others. What patterns of word usage predict variations among words in age of acquisition? We use distributional analysis of a naturalistic corpus of child‐directed speech to create quantitative features representing natural variability in word contexts. We evaluate two sets of features: One set is generated from the distribution of words into frames defined by the two adjacent words. These features primarily encode syntactic aspects of word usage. The (...)
    1 citation
  • Tea With Milk? A Hierarchical Generative Framework of Sequential Event Comprehension.Gina R. Kuperberg - 2021 - Topics in Cognitive Science 13 (1):256-298.
    Inspired by, and in close relation with, the contributions of this special issue, Kuperberg elegantly links event comprehension, production, and learning. She proposes an overarching hierarchical generative framework of processing events enabling us to make sense of the world around us and to interact with it in a competent manner.
    9 citations
  • Dynamical systems theory in cognitive science and neuroscience.Luis H. Favela - 2020 - Philosophy Compass 15 (8):e12695.
    Dynamical systems theory (DST) is a branch of mathematics that assesses abstract or physical systems that change over time. It has a quantitative part (mathematical equations) and a related qualitative part (plotting equations in a state space). Nonlinear dynamical systems theory applies the same tools in research involving phenomena such as chaos and hysteresis. These approaches have provided different ways of investigating and understanding cognitive systems in cognitive science and neuroscience. The ‘dynamical hypothesis’ claims that cognition is and can be (...)
    4 citations
  • Finding Structure in Time: Visualizing and Analyzing Behavioral Time Series.Tian Linger Xu, Kaya de Barbaro, Drew H. Abney & Ralf F. A. Cox - 2020 - Frontiers in Psychology 11:521451.
    The temporal structure of behavior contains a rich source of information about its dynamic organization, origins, and development. Today, advances in sensing and data storage allow researchers to collect multiple dimensions of behavioral data at a fine temporal scale both in and out of the laboratory, leading to the curation of massive multimodal corpora of behavior. However, along with these new opportunities come new challenges. Theories are often underspecified as to the exact nature of these unfolding interactions, and psychologists have (...)
    2 citations
  • Deep learning and cognitive science.Pietro Perconti & Alessio Plebe - 2020 - Cognition 203:104365.
    In recent years, the family of algorithms collected under the term “deep learning” has revolutionized artificial intelligence, enabling machines to reach human-like performances in many complex cognitive tasks. Although deep learning models are grounded in the connectionist paradigm, their recent advances were basically developed with engineering goals in mind. Despite their applied focus, deep learning models eventually seem fruitful for cognitive purposes. This can be thought of as a kind of biological exaptation, where a physiological structure becomes applicable for a (...)
    6 citations
  • Learning the generative principles of a symbol system from limited examples.Lei Yuan, Violet Xiang, David Crandall & Linda Smith - 2020 - Cognition 200 (C):104243.
    2 citations
  • Exploring What Is Encoded in Distributional Word Vectors: A Neurobiologically Motivated Analysis.Akira Utsumi - 2020 - Cognitive Science 44 (6):e12844.
    The pervasive use of distributional semantic models or word embeddings for both cognitive modeling and practical application is because of their remarkable ability to represent the meanings of words. However, relatively little effort has been made to explore what types of information are encoded in distributional word vectors. Knowing the internal knowledge embedded in word vectors is important for cognitive modeling using distributional semantic models. Therefore, in this paper, we attempt to identify the knowledge encoded in word vectors by conducting (...)
    4 citations
  • Predictive Modeling of Individual Human Cognition: Upper Bounds and a New Perspective on Performance.Nicolas Riesterer, Daniel Brand & Marco Ragni - 2020 - Topics in Cognitive Science 12 (3):960-974.
    Syllogisms (e.g. “All A are B; All B are C; What is true about A and C?”) are a long‐studied area of human reasoning. Riesterer, Brand, and Ragni compare a variety of models to human performance and show that not only do current models have a lot of room for improvement, but more importantly a large part of this improvement must come from examining individual differences in performance.
    5 citations
  • Word Order Typology Interacts With Linguistic Complexity: A Cross‐Linguistic Corpus Study.Himanshu Yadav, Ashwini Vaidya, Vishakha Shukla & Samar Husain - 2020 - Cognitive Science 44 (4):e12822.
    Much previous work has suggested that word order preferences across languages can be explained by the dependency distance minimization constraint (Ferrer‐i Cancho, 2008, 2015; Hawkins, 1994). Consistent with this claim, corpus studies have shown that the average distance between a head (e.g., verb) and its dependent (e.g., noun) tends to be short cross‐linguistically (Ferrer‐i Cancho, 2014; Futrell, Mahowald, & Gibson, 2015; Liu, Xu, & Liang, 2017). This implies that on average languages avoid inefficient or complex structures for simpler structures. But (...)
    1 citation
  • Lossy‐Context Surprisal: An Information‐Theoretic Model of Memory Effects in Sentence Processing.Richard Futrell, Edward Gibson & Roger P. Levy - 2020 - Cognitive Science 44 (3):e12814.
    A key component of research on human sentence processing is to characterize the processing difficulty associated with the comprehension of words in context. Models that explain and predict this difficulty can be broadly divided into two kinds, expectation‐based and memory‐based. In this work, we present a new model of incremental sentence processing difficulty that unifies and extends key features of both kinds of models. Our model, lossy‐context surprisal, holds that the processing difficulty at a word in context is proportional to (...)
    13 citations