  • An improved probabilistic account of counterfactual reasoning. Christopher G. Lucas & Charles Kemp - 2015 - Psychological Review 122 (4):700-734.
    When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and (...)
  • Raising the Roof: Situating Verbs in Symbolic and Embodied Language Processing. John Hollander & Andrew Olney - 2024 - Cognitive Science 48 (4):e13442.
    Recent investigations on how people derive meaning from language have focused on task‐dependent shifts between two cognitive systems. The symbolic (amodal) system represents meaning as the statistical relationships between words. The embodied (modal) system represents meaning through neurocognitive simulation of perceptual or sensorimotor systems associated with a word's referent. A primary finding of literature in this field is that the embodied system is only dominant when a task necessitates it, but in certain paradigms, this has only been demonstrated using nouns (...)
  • Mapping semantic space: Exploring the higher-order structure of word meaning. Veronica Diveica, Emiko J. Muraki, Richard J. Binney & Penny M. Pexman - 2024 - Cognition 248 (C):105794.
  • A dynamic approach to recognition memory. Gregory E. Cox & Richard M. Shiffrin - 2017 - Psychological Review 124 (6):795-860.
  • Overrated gaps: Inter-speaker gaps provide limited information about the timing of turns in conversation. Ruth E. Corps, Birgit Knudsen & Antje S. Meyer - 2022 - Cognition 223 (C):105037.
  • Analyzing the history of Cognition using Topic Models. Uriel Cohen Priva & Joseph L. Austerweil - 2015 - Cognition 135:4-9.
  • Generative Inferences Based on Learned Relations. Dawn Chen, Hongjing Lu & Keith J. Holyoak - 2017 - Cognitive Science 41 (S5):1062-1092.
    A key property of relational representations is their generativity: From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from non-relational inputs. In the present paper, we show that a bottom-up model of relation learning, initially developed to discriminate between positive and negative examples of comparative relations, can be extended to make generative inferences. (...)
  • The importance of iteration in creative conceptual combination. Joel Chan & Christian D. Schunn - 2015 - Cognition 145:104-115.
  • A data-driven computational semiotics: The semantic vector space of Magritte’s artworks. Jean-François Chartier, Davide Pulizzotto, Louis Chartrand & Jean-Guy Meunier - 2019 - Semiotica 2019 (230):19-69.
    The rise of big digital data is changing the framework within which linguists, sociologists, anthropologists, and other researchers are working. Semiotics is not spared by this paradigm shift. A data-driven computational semiotics is the study, through intensive use of computational methods, of patterns in human-created content related to semiotic phenomena. One of the most promising frameworks in this research program is that of Semantic Vector Space (SVS) models and their methods. The objective of this article is to contribute to the (...)
  • Adjacent and Non‐Adjacent Word Contexts Both Predict Age of Acquisition of English Words: A Distributional Corpus Analysis of Child‐Directed Speech. Lucas M. Chang & Gedeon O. Deák - 2020 - Cognitive Science 44 (11):e12899.
    Children show a remarkable degree of consistency in learning some words earlier than others. What patterns of word usage predict variations among words in age of acquisition? We use distributional analysis of a naturalistic corpus of child‐directed speech to create quantitative features representing natural variability in word contexts. We evaluate two sets of features: One set is generated from the distribution of words into frames defined by the two adjacent words. These features primarily encode syntactic aspects of word usage. The (...)
  • Words with Consistent Diachronic Usage Patterns are Learned Earlier: A Computational Analysis Using Temporally Aligned Word Embeddings. Giovanni Cassani, Federico Bianchi & Marco Marelli - 2021 - Cognitive Science 45 (4):e12963.
    In this study, we use temporally aligned word embeddings and a large diachronic corpus of English to quantify language change in a data-driven, scalable way, which is grounded in language use. We show a unique and reliable relation between measures of language change and age of acquisition (AoA) while controlling for frequency, contextual diversity, concreteness, length, dominant part of speech, orthographic neighborhood density, and diachronic frequency variation. We analyze measures of language change tackling both the change in lexical representations and (...)
  • Evaluating the inverse reasoning account of object discovery. Christopher D. Carroll & Charles Kemp - 2015 - Cognition 139:130-153.
  • Modeling Brain Representations of Words' Concreteness in Context Using GPT‐2 and Human Ratings. Andrea Bruera, Yuan Tao, Andrew Anderson, Derya Çokal, Janosch Haber & Massimo Poesio - 2023 - Cognitive Science 47 (12):e13388.
    The meaning of most words in language depends on their context. Understanding how the human brain extracts contextualized meaning, and identifying where in the brain this takes place, remain important scientific challenges. But technological and computational advances in neuroscience and artificial intelligence now provide unprecedented opportunities to study the human brain in action as language is read and understood. Recent contextualized language models seem to be able to capture homonymic meaning variation (“bat”, in a baseball vs. a vampire context), as (...)
  • Investigating the Extent to which Distributional Semantic Models Capture a Broad Range of Semantic Relations. Kevin S. Brown, Eiling Yee, Gitte Joergensen, Melissa Troyer, Elliot Saltzman, Jay Rueckl, James S. Magnuson & Ken McRae - 2023 - Cognitive Science 47 (5):e13291.
    Distributional semantic models (DSMs) are a primary method for distilling semantic information from corpora. However, a key question remains: What types of semantic relations among words do DSMs detect? Prior work typically has addressed this question using limited human data that are restricted to semantic similarity and/or general semantic relatedness. We tested eight DSMs that are popular in current cognitive and psycholinguistic research (positive pointwise mutual information; global vectors; and three variations each of Skip-gram and continuous bag of words (CBOW) (...)
  • What corpus-based Cognitive Linguistics can and cannot expect from neurolinguistics. Alice Blumenthal-Dramé - 2016 - Cognitive Linguistics 27 (4):493-505.
    This paper argues that neurolinguistics has the potential to yield insights that can feed back into corpus-based Cognitive Linguistics. It starts by discussing how far the cognitive realism of probabilistic statements derived from corpus data currently goes. Against this background, it argues that the cognitive realism of usage-based models could be further enhanced through deeper engagement with neurolinguistics, but also highlights a number of common misconceptions about what neurolinguistics can and cannot do for linguistic theorizing.
  • The semantic representation of prejudice and stereotypes. Sudeep Bhatia - 2017 - Cognition 164 (C):46-60.
  • Naturalistic multiattribute choice. Sudeep Bhatia & Neil Stewart - 2018 - Cognition 179 (C):71-88.
  • Semantic micro-dynamics as a reflex of occurrence frequency: a semantic networks approach. Andreas Baumann, Klaus Hofmann, Anna Marakasova, Julia Neidhardt & Tanja Wissik - 2023 - Cognitive Linguistics 34 (3-4):533-568.
    This article correlates fine-grained semantic variability and change with measures of occurrence frequency to investigate whether a word’s degree of semantic change is sensitive to how often it is used. We show that this sensitivity can be detected within a short time span (i.e., 20 years), basing our analysis on a large corpus of German allowing for a high temporal resolution (i.e., per month). We measure semantic variability and change with the help of local semantic networks, combining elements of deep (...)
  • Strudel: A Corpus‐Based Semantic Model Based on Properties and Types. Marco Baroni, Brian Murphy, Eduard Barbu & Massimo Poesio - 2010 - Cognitive Science 34 (2):222-254.
    Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meaning in terms of undifferentiated bags of words or topical dimensions. This has led some to question their psychological plausibility (Murphy, 2002; Schunn, 1999). We present here a fully automatic method for extracting a structured and comprehensive set of concept descriptions directly from an English part‐of‐speech‐tagged corpus. Concepts are characterized by weighted properties, enriched with concept–property types that approximate classical (...)
  • The Hidden Markov Topic Model: A Probabilistic Model of Semantic Representation. Mark Andrews & Gabriella Vigliocco - 2010 - Topics in Cognitive Science 2 (1):101-113.
    In this paper, we describe a model that learns semantic representations from the distributional statistics of language. This model, however, goes beyond the common bag‐of‐words paradigm, and infers semantic representations by taking into account the inherent sequential nature of linguistic data. The model we describe, which we refer to as a Hidden Markov Topics model, is a natural extension of the current state of the art in Bayesian bag‐of‐words models, that is, the Topics model of Griffiths, Steyvers, and Tenenbaum (2007), (...)
  • Reconciling Embodied and Distributional Accounts of Meaning in Language. Mark Andrews, Stefan Frank & Gabriella Vigliocco - 2014 - Topics in Cognitive Science 6 (3):359-370.
    Over the past 15 years, there have been two increasingly popular approaches to the study of meaning in cognitive science. One, based on theories of embodied cognition, treats meaning as a simulation of perceptual and motor states. An alternative approach treats meaning as a consequence of the statistical distribution of words across spoken and written language. On the surface, these appear to be opposing scientific paradigms. In this review, we aim to show how recent cross-disciplinary developments have done much to (...)
  • Integrating experiential and distributional data to learn semantic representations. Mark Andrews, Gabriella Vigliocco & David Vinson - 2009 - Psychological Review 116 (3):463-498.
  • Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Matt Jones & Bradley C. Love - 2011 - Behavioral and Brain Sciences 34 (4):169-188.
    The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology – namely, Behaviorism and evolutionary psychology – that set aside mechanistic explanations or make use of optimality assumptions. Through (...)
  • Précis of Doing without Concepts. Edouard Machery - 2010 - Behavioral and Brain Sciences 33 (2-3):195-206.
    Although cognitive scientists have learned a lot about concepts, their findings have yet to be organized in a coherent theoretical framework. In addition, after twenty years of controversy, there is little sign that philosophers and psychologists are converging toward an agreement about the very nature of concepts. Doing without Concepts (Machery 2009) attempts to remedy this state of affairs. In this article, I review the main points and arguments developed at greater length in Doing without Concepts.
  • Determining the Relativity of Word Meanings Through the Construction of Individualized Models of Semantic Memory. Brendan T. Johns - 2024 - Cognitive Science 48 (2):e13413.
    Distributional models of lexical semantics are capable of acquiring sophisticated representations of word meanings. The main theoretical insight provided by these models is that they demonstrate the systematic connection between the knowledge that people acquire and the experience that they have with the natural language environment. However, linguistic experience is inherently variable and differs radically across people due to demographic and cultural variables. Recently, distributional models have been used to examine how word meanings vary across languages and it was found (...)
  • Salience and Attention in Surprisal-Based Accounts of Language Processing. Alessandra Zarcone, Marten van Schijndel, Jorrig Vogels & Vera Demberg - 2016 - Frontiers in Psychology 7.
  • Word learning as Bayesian inference. Fei Xu & Joshua B. Tenenbaum - 2007 - Psychological Review 114 (2):245-272.
  • Distributional Models of Category Concepts Based on Names of Category Members. Matthijs Westera, Abhijeet Gupta, Gemma Boleda & Sebastian Padó - 2021 - Cognitive Science 45 (9):e13029.
    Cognitive scientists have long used distributional semantic representations of categories. The predominant approach uses distributional representations of category‐denoting nouns, such as “city” for the category city. We propose a novel scheme that represents categories as prototypes over representations of names of its members, such as “Barcelona,” “Mumbai,” and “Wuhan” for the category city. This name‐based representation empirically outperforms the noun‐based representation on two experiments (modeling human judgments of category relatedness and predicting category membership) with particular improvements for ambiguous nouns. We (...)
  • Do Additional Features Help or Hurt Category Learning? The Curse of Dimensionality in Human Learners. Wai Keen Vong, Andrew T. Hendrickson, Danielle J. Navarro & Amy Perfors - 2019 - Cognitive Science 43 (3):e12724.
  • Learning and Processing Abstract Words and Concepts: Insights From Typical and Atypical Development. Gabriella Vigliocco, Marta Ponari & Courtenay Norbury - 2018 - Topics in Cognitive Science 10 (3):533-549.
    The Affective grounding hypothesis suggests that affective experiences play a crucial role in abstract concepts’ processing (Kousta et al. 2011). Vigliocco and colleagues test the role of affective experiences, as well as the role of language, in learning words denoting abstract concepts, comparing children with typical and atypical development. They conclude that, besides affective experiences, language also plays a critical role in the processing of words referring to abstract concepts.
  • Spicy Adjectives and Nominal Donkeys: Capturing Semantic Deviance Using Compositionality in Distributional Spaces. Eva M. Vecchi, Marco Marelli, Roberto Zamparelli & Marco Baroni - 2017 - Cognitive Science 41 (1):102-136.
  • When Stronger Knowledge Slows You Down: Semantic Relatedness Predicts Children's Co‐Activation of Related Items in a Visual Search Paradigm. Catarina Vales & Anna V. Fisher - 2019 - Cognitive Science 43 (6):e12746.
    A large literature suggests that the organization of words in semantic memory, reflecting meaningful relations among words and the concepts to which they refer, supports many cognitive processes, including memory encoding and retrieval, word learning, and inferential reasoning. The co‐activation of related items has been proposed as a mechanism by which semantic knowledge influences cognition, and contemporary accounts of semantic knowledge propose that this co‐activation is graded—that it depends on how strongly related the items are in semantic memory. Prior research (...)
  • Exploring What Is Encoded in Distributional Word Vectors: A Neurobiologically Motivated Analysis. Akira Utsumi - 2020 - Cognitive Science 44 (6):e12844.
    The pervasive use of distributional semantic models or word embeddings for both cognitive modeling and practical application is because of their remarkable ability to represent the meanings of words. However, relatively little effort has been made to explore what types of information are encoded in distributional word vectors. Knowing the internal knowledge embedded in word vectors is important for cognitive modeling using distributional semantic models. Therefore, in this paper, we attempt to identify the knowledge encoded in word vectors by conducting (...)
  • The new Tweety puzzle: arguments against monistic Bayesian approaches in epistemology and cognitive science. Matthias Unterhuber & Gerhard Schurz - 2013 - Synthese 190 (8):1407-1435.
    In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can (...)
  • Starting with tacit knowledge, ending with Durkheim? [REVIEW] Stephen P. Turner - 2011 - Studies in History and Philosophy of Science Part A 42 (3):472-476.
  • Comparing Methods for Single Paragraph Similarity Analysis. Benjamin Stone, Simon Dennis & Peter J. Kwantes - 2011 - Topics in Cognitive Science 3 (1):92-122.
    The focus of this paper is two-fold. First, similarities generated from six semantic models were compared to human ratings of paragraph similarity on two datasets—23 World Entertainment News Network paragraphs and 50 ABC newswire paragraphs. Contrary to findings on smaller textual units such as word associations (Griffiths, Tenenbaum, & Steyvers, 2007), our results suggest that when single paragraphs are compared, simple nonreductive models (word overlap and vector space) can provide better similarity estimates than more complex models (LSA, Topic Model, SpNMF, (...)
  • The Episodic Nature of Experience: A Dynamical Systems Analysis. Vishnu Sreekumar, Simon Dennis & Isidoros Doxas - 2017 - Cognitive Science 41 (5):1377-1393.
    Context is an important construct in many domains of cognition, including learning, memory, and emotion. We used dynamical systems methods to demonstrate the episodic nature of experience by showing a natural separation between the scales over which within-context and between-context relationships operate. To do this, we represented an individual's emails extending over about 5 years in a high-dimensional semantic space and computed the dimensionalities of the subspaces occupied by these emails. Personal discourse has a two-scaled geometry with smaller within-context dimensionalities (...)
  • Goal-directed decision making as probabilistic inference: A computational framework and potential neural correlates. Alec Solway & Matthew M. Botvinick - 2012 - Psychological Review 119 (1):120-154.
  • Modeling the Structure and Dynamics of Semantic Processing. Armand S. Rotaru, Gabriella Vigliocco & Stefan L. Frank - 2018 - Cognitive Science 42 (8):2890-2917.
    The contents and structure of semantic memory have been the focus of much recent research, with major advances in the development of distributional models, which use word co‐occurrence information as a window into the semantics of language. In parallel, connectionist modeling has extended our knowledge of the processes engaged in semantic activation. However, these two lines of investigation have rarely been brought together. Here, we describe a processing model based on distributional semantics in which activation spreads throughout a semantic network, (...)
  • Constructing Semantic Models From Words, Images, and Emojis. Armand S. Rotaru & Gabriella Vigliocco - 2020 - Cognitive Science 44 (4):e12830.
    A number of recent models of semantics combine linguistic information, derived from text corpora, and visual information, derived from image collections, demonstrating that the resulting multimodal models are better than either of their unimodal counterparts, in accounting for behavioral data. Empirical work on semantic processing has shown that emotion also plays an important role especially in abstract concepts; however, models integrating emotion along with linguistic and visual information are lacking. Here, we first improve on visual and affective representations, derived from (...)
  • Redundancy in Perceptual and Linguistic Experience: Comparing Feature-Based and Distributional Models of Semantic Representation. Brian Riordan & Michael N. Jones - 2011 - Topics in Cognitive Science 3 (2):303-345.
    Since their inception, distributional models of semantics have been criticized as inadequate cognitive theories of human semantic learning and representation. A principal challenge is that the representations derived by distributional models are purely symbolic and are not grounded in perception and action; this challenge has led many to favor feature-based models of semantic representation. We argue that the amount of perceptual and other semantic information that can be learned from purely distributional statistics has been underappreciated. We compare the representations (...)
  • Similarity Judgment Within and Across Categories: A Comprehensive Model Comparison. Russell Richie & Sudeep Bhatia - 2021 - Cognitive Science 45 (8):e13030.
    Similarity is one of the most important relations humans perceive, arguably subserving category learning and categorization, generalization and discrimination, judgment and decision making, and other cognitive functions. Researchers have proposed a wide range of representations and metrics that could be at play in similarity judgment, yet have not comprehensively compared the power of these representations and metrics for predicting similarity within and across different semantic categories. We performed such a comparison by pairing nine prominent vector semantic representations with seven established (...)
  • Perspectives on Modeling in Cognitive Science. Richard M. Shiffrin - 2010 - Topics in Cognitive Science 2 (4):736-750.
    This commentary gives a personal perspective on modeling and modeling developments in cognitive science, starting in the 1950s, but focusing on the author’s personal views of modeling since training in the late 1960s, and particularly focusing on advances since the official founding of the Cognitive Science Society. The range and variety of modeling approaches in use today are remarkable, and for many, bewildering. Yet to come to anything approaching adequate insights into the infinitely complex fields of mind, brain, and intelligent (...)
  • One or two dimensions in spontaneous classification: A simplicity approach. Emmanuel M. Pothos & James Close - 2008 - Cognition 107 (2):581-602.
  • Popper's severity of test as an intuitive probabilistic model of hypothesis testing. Fenna H. Poletiek - 2009 - Behavioral and Brain Sciences 32 (1):99-100.
    Severity of Test (SoT) is an alternative to Popper's logical falsification that solves a number of problems of the logical view. It was presented by Popper himself in 1963. SoT is a less sophisticated probabilistic model of hypothesis testing than Oaksford & Chater's (O&C's) information gain model, but it has a number of striking similarities. Moreover, it captures the intuition of everyday hypothesis testing.
  • Parallelograms revisited: Exploring the limitations of vector space models for simple analogies. Joshua C. Peterson, Dawn Chen & Thomas L. Griffiths - 2020 - Cognition 205 (C):104440.
  • Using Wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments. Francisco Pereira, Matthew Botvinick & Greg Detre - 2013 - Artificial Intelligence 194 (C):240-252.
  • Bayesian Models of Cognition: What's Built in After All? Amy Perfors - 2012 - Philosophy Compass 7 (2):127-138.
    This article explores some of the philosophical implications of the Bayesian modeling paradigm. In particular, it focuses on the ramifications of the fact that Bayesian models pre‐specify an inbuilt hypothesis space. To what extent does this pre‐specification correspond to simply “building the solution in”? I argue that any learner must have a built‐in hypothesis space in precisely the same sense that Bayesian models have one. This has implications for the nature of learning, Fodor's puzzle of concept acquisition, and the role (...)