References
  • Metaphysics of the Bayesian mind. Justin Tiehen - 2022 - Mind and Language 38 (2):336-354.
    Recent years have seen a Bayesian revolution in cognitive science. This should be of interest to metaphysicians of science, whose naturalist project involves working out the metaphysical implications of our leading scientific accounts, and in advancing our understanding of those accounts by drawing on the metaphysical frameworks developed by philosophers. Toward these ends, in this paper I develop a metaphysics of the Bayesian mind. My central claim is that the Bayesian approach supports a novel empirical argument for normativism, the thesis (...)
  • Can resources save rationality? ‘Anti-Bayesian’ updating in cognition and perception. Eric Mandelbaum, Isabel Won, Steven Gross & Chaz Firestone - 2020 - Behavioral and Brain Sciences 43:e16.
    Resource rationality may explain suboptimal patterns of reasoning; but what of “anti-Bayesian” effects where the mind updates in a direction opposite the one it should? We present two phenomena — belief polarization and the size-weight illusion — that are not obviously explained by performance- or resource-based constraints, nor by the authors’ brief discussion of reference repulsion. Can resource rationality accommodate them?
  • Computational Cognitive Neuroscience. Carlos Zednik - 2018 - In Mark Sprevak & Matteo Colombo (eds.), The Routledge Handbook of the Computational Mind. Routledge.
    This chapter provides an overview of the basic research strategies and analytic techniques deployed in computational cognitive neuroscience. On the one hand, “top-down” strategies are used to infer, from formal characterizations of behavior and cognition, the computational properties of underlying neural mechanisms. On the other hand, “bottom-up” research strategies are used to identify neural mechanisms and to reconstruct their computational capacities. Both of these strategies rely on experimental techniques familiar from other branches of neuroscience, including functional magnetic resonance imaging, single-cell (...)
  • Learning the Structure of Social Influence. Samuel J. Gershman, Hillard Thomas Pouncy & Hyowon Gweon - 2017 - Cognitive Science 41 (S3):545-575.
    We routinely observe others’ choices and use them to guide our own. Whose choices influence us more, and why? Prior work has focused on the effect of perceived similarity between two individuals, such as the degree of overlap in past choices or explicitly recognizable group affiliations. In the real world, however, any dyadic relationship is part of a more complex social structure involving multiple social groups that are not directly observable. Here we suggest that human learners go beyond dyadic similarities (...)
  • Bayesian reverse-engineering considered as a research strategy for cognitive science. Carlos Zednik & Frank Jäkel - 2016 - Synthese 193 (12):3951-3985.
    Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly pragmatic in character and (...)
  • SUSTAIN: A Network Model of Category Learning. Bradley C. Love, Douglas L. Medin & Todd M. Gureckis - 2004 - Psychological Review 111 (2):309-332.
  • Decision making under uncertain categorization. Stephanie Y. Chen, Brian H. Ross & Gregory L. Murphy - 2014 - Frontiers in Psychology 5.
  • The Oxford Handbook of Causal Reasoning. Michael Waldmann (ed.) - 2017 - Oxford, England: Oxford University Press.
    Causal reasoning is one of our most central cognitive competencies, enabling us to adapt to our world. Causal knowledge allows us to predict future events, or diagnose the causes of observed facts. We plan actions and solve problems using knowledge about cause-effect relations. Without our ability to discover and empirically test causal theories, we would not have made progress in various empirical sciences. In the past decades, the important role of causal knowledge has been discovered in many areas of cognitive (...)
  • A Rational Analysis of Rule‐Based Concept Learning. Noah D. Goodman, Joshua B. Tenenbaum, Jacob Feldman & Thomas L. Griffiths - 2008 - Cognitive Science 32 (1):108-154.
    This article proposes a new model of human concept learning that provides a rational analysis of learning feature‐based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space—a concept language of logical rules. This article compares the model predictions to human generalization judgments in several well‐known category learning experiments, and finds good agreement for both average and individual participant generalizations. This article further investigates judgments for a broad set of 7‐feature concepts—a more natural setting in several (...)
  • Rational analysis, intractability, and the prospects of ‘as if’-explanations. Iris van Rooij, Johan Kwisthout, Todd Wareham & Cory Wright - 2018 - Synthese 195 (2):491-510.
    Despite their success in describing and predicting cognitive behavior, the plausibility of so-called ‘rational explanations’ is often contested on the grounds of computational intractability. Several cognitive scientists have argued that such intractability is an orthogonal pseudoproblem, however, since rational explanations account for the ‘why’ of cognition but are agnostic about the ‘how’. Their central premise is that humans do not actually perform the rational calculations posited by their models, but only act as if they do. Whether or not the problem (...)
  • The Algorithmic Level Is the Bridge Between Computation and Brain. Bradley C. Love - 2015 - Topics in Cognitive Science 7 (2):230-242.
    Every scientist chooses a preferred level of analysis and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's three levels of analysis and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top–down in that it attempts to build a bridge from the computational to algorithmic level. Limitations of this approach include insufficient theoretical constraint (...)
  • Novelty and Inductive Generalization in Human Reinforcement Learning. Samuel J. Gershman & Yael Niv - 2015 - Topics in Cognitive Science 7 (3):391-415.
    In reinforcement learning, a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the Bayesian model and (...)
  • Bayesian Cognitive Science, Unification, and Explanation. Stephan Hartmann & Matteo Colombo - 2017 - British Journal for the Philosophy of Science 68 (2).
    It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be (...)
  • How tall is Tall? Compositionality, statistics, and gradable adjectives. Lauren A. Schmidt, Noah D. Goodman, David Barner & Joshua B. Tenenbaum - 2009 - In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society.
  • One and Done? Optimal Decisions From Very Few Samples. Edward Vul, Noah Goodman, Thomas L. Griffiths & Joshua B. Tenenbaum - 2014 - Cognitive Science 38 (4):599-637.
    In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: People often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling-based approximations are a common way to implement Bayesian (...)
  • A tutorial introduction to Bayesian models of cognitive development. Amy Perfors, Joshua B. Tenenbaum, Thomas L. Griffiths & Fei Xu - 2011 - Cognition 120 (3):302-321.
  • Inductive reasoning about causally transmitted properties. Patrick Shafto, Charles Kemp, Elizabeth Baraff Bonawitz, John D. Coley & Joshua B. Tenenbaum - 2008 - Cognition 109 (2):175-192.
  • Why are different features central for natural kinds and artifacts? The role of causal status in determining feature centrality. Woo-Kyoung Ahn - 1998 - Cognition 69 (2):135-178.
  • Can quantum probability provide a new direction for cognitive modeling? Emmanuel M. Pothos & Jerome R. Busemeyer - 2013 - Behavioral and Brain Sciences 36 (3):255-274.
    Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the (...)
  • Exemplars, Prototypes, Similarities, and Rules in Category Representation: An Example of Hierarchical Bayesian Analysis. Michael D. Lee & Wolf Vanpaemel - 2008 - Cognitive Science 32 (8):1403-1424.
    This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in category learning tasks. The VAM allows for a wide variety of category representations to be inferred, but this article shows how a hierarchical Bayesian analysis can provide a unifying explanation (...)
  • Reasoning with uncertain categories. Gregory L. Murphy, Stephanie Y. Chen & Brian H. Ross - 2012 - Thinking and Reasoning 18 (1):81-117.
    Five experiments investigated how people use categories to make inductions about objects whose categorisation is uncertain. Normatively, they should consider all the categories the object might be in and use a weighted combination of information from all the categories: bet-hedging. The experiments presented people with simple, artificial categories and asked them to make an induction about a new object that was most likely in one category but possibly in another. The results showed that the majority of people focused on the (...)
  • Bayes and Blickets: Effects of Knowledge on Causal Induction in Children and Adults. Thomas L. Griffiths, David M. Sobel, Joshua B. Tenenbaum & Alison Gopnik - 2011 - Cognitive Science 35 (8):1407-1455.
    People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using tasks in which (...)
  • The cognitive economy: The probabilistic turn in psychology and human cognition. Petko Kusev & Paul van Schaik - 2013 - Behavioral and Brain Sciences 36 (3):294-295.
    According to the foundations of economic theory, agents have stable and coherent preferences that guide their choices among alternatives. However, people are constrained by information-processing and memory limitations and hence have a propensity to avoid cognitive load. We propose that this in turn will encourage them to respond to preferences and goals influenced by context and memory representations.
  • The rational analysis of mind and behavior. Nick Chater & Mike Oaksford - 2000 - Synthese 122 (1-2):93-131.
    Rational analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that rational analysis has two important implications for philosophical debate concerning rationality. First, rational analysis provides a model for the relationship between formal principles of rationality (such as probability or decision theory) and everyday rationality, in the sense of successful thought and action in daily life. Second, applying the program of rational analysis to research on human reasoning (...)
  • Learning from conditional probabilities. Corina Strößner & Ulrike Hahn - 2025 - Cognition 254 (C):105962.
  • Resource Rationality. Thomas F. Icard - manuscript
    Theories of rational decision making often abstract away from computational and other resource limitations faced by real agents. An alternative approach known as resource rationality puts such matters front and center, grounding choice and decision in the rational use of finite resources. Anticipated by earlier work in economics and in computer science, this approach has recently seen rapid development and application in the cognitive sciences. Here, the theory of rationality plays a dual role, both as a framework for normative assessment (...)
  • A Functional Contextual Account of Background Knowledge in Categorization: Implications for Artificial General Intelligence and Cognitive Accounts of General Knowledge. Darren J. Edwards, Ciara McEnteggart & Yvonne Barnes-Holmes - 2022 - Frontiers in Psychology 13:745306.
    Psychology has benefited from an enormous wealth of knowledge about processes of cognition in relation to how the brain organizes information. Within the categorization literature, this behavior is often explained through theories of memory construction called exemplar theory and prototype theory which are typically based on similarity or rule functions as explanations of how categories emerge. Although these theories work well at modeling highly controlled stimuli in laboratory settings, they often perform less well outside of these settings, such as explaining (...)
  • Tea With Milk? A Hierarchical Generative Framework of Sequential Event Comprehension. Gina R. Kuperberg - 2021 - Topics in Cognitive Science 13 (1):256-298.
    Inspired by, and in close relation with, the contributions of this special issue, Kuperberg elegantly links event comprehension, production, and learning. She proposes an overarching hierarchical generative framework of processing events enabling us to make sense of the world around us and to interact with it in a competent manner.
  • Structuring Memory Through Inference‐Based Event Segmentation. Yeon Soon Shin & Sarah DuBrow - 2021 - Topics in Cognitive Science 13 (1):106-127.
    Shin and DuBrow propose that a key principle driving event segmentation relates to causal analyses: specifically, that experiences that are attributed as having the same underlying cause are grouped together into an event. This offers an alternative to accounts of segmentation based on prediction error.
  • Learning How to Generalize. Joseph L. Austerweil, Sophia Sanborn & Thomas L. Griffiths - 2019 - Cognitive Science 43 (8):e12777.
    Generalization is a fundamental problem solved by every cognitive system in essentially every domain. Although it is known that how people generalize varies in complex ways depending on the context or domain, it is an open question how people learn the appropriate way to generalize for a new context. To understand this capability, we cast the problem of learning how to generalize as a problem of learning the appropriate hypothesis space for generalization. We propose a normative mathematical framework for learning (...)
  • Rational approximations to rational models: Alternative algorithms for category learning. Adam N. Sanborn, Thomas L. Griffiths & Daniel J. Navarro - 2010 - Psychological Review 117 (4):1144-1167.
  • A neurobiological theory of automaticity in perceptual categorization. F. Gregory Ashby, John M. Ennis & Brian J. Spiering - 2007 - Psychological Review 114 (3):632-656.
  • An exemplar-based random walk model of speeded classification. Robert M. Nosofsky & Thomas J. Palmeri - 1997 - Psychological Review 104 (2):266-300.
  • The nature of generalization in language. Adele E. Goldberg - 2009 - Cognitive Linguistics 20 (1):93-127.
    This paper provides a concise overview of Constructions at Work (Goldberg 2006). The book aims to investigate the relevant levels of generalization in adult language, how and why generalizations are learned by children, and how to account for cross-linguistic generalizations.
  • A Computational Model of Early Argument Structure Acquisition. Afra Alishahi & Suzanne Stevenson - 2008 - Cognitive Science 32 (5):789-834.
    How children go about learning the general regularities that govern language, as well as keeping track of the exceptions to them, remains one of the challenging open questions in the cognitive science of language. Computational modeling is an important methodology in research aimed at addressing this issue. We must determine appropriate learning mechanisms that can grasp generalizations from examples of specific usages, and that exhibit patterns of behavior over the course of learning similar to those in children. Early learning of (...)
  • Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic. Thomas L. Griffiths, Falk Lieder & Noah D. Goodman - 2015 - Topics in Cognitive Science 7 (2):217-229.
    Marr's levels of analysis—computational, algorithmic, and implementation—have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the (...)
  • Computational models of semantic memory. T. Rogers - 2008 - In Ron Sun (ed.), The Cambridge handbook of computational psychology. New York: Cambridge University Press. pp. 226-266.
  • The learnability of abstract syntactic principles. Amy Perfors, Joshua B. Tenenbaum & Terry Regier - 2011 - Cognition 118 (3):306-338.
  • A Bayesian framework for word segmentation: Exploring the effects of context. Sharon Goldwater, Thomas L. Griffiths & Mark Johnson - 2009 - Cognition 112 (1):21-54.
  • Using Category Structures to Test Iterated Learning as a Method for Identifying Inductive Biases. Thomas L. Griffiths, Brian R. Christian & Michael L. Kalish - 2008 - Cognitive Science 32 (1):68-107.
    Many of the problems studied in cognitive science are inductive problems, requiring people to evaluate hypotheses in the light of data. The key to solving these problems successfully is having the right inductive biases—assumptions about the world that make it possible to choose between hypotheses that are equally consistent with the observed data. This article explores a novel experimental method for identifying the biases that guide human inductive inferences. The idea behind this method is simple: This article uses the responses (...)
  • Homo Heuristicus: Why Biased Minds Make Better Inferences. Gerd Gigerenzer & Henry Brighton - 2009 - Topics in Cognitive Science 1 (1):107-143.
    Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: the discovery of less-is-more effects; the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; an advancement from vague labels to computational models of heuristics; the (...)
  • Learning to Learn Causal Models. Charles Kemp, Noah D. Goodman & Joshua B. Tenenbaum - 2010 - Cognitive Science 34 (7):1185-1243.
    Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the (...)
  • Précis of Bayesian rationality: The probabilistic approach to human reasoning. Mike Oaksford & Nick Chater - 2009 - Behavioral and Brain Sciences 32 (1):69-84.
    According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining (...)
  • Structured models of semantic cognition. Charles Kemp & Joshua B. Tenenbaum - 2008 - Behavioral and Brain Sciences 31 (6):717-718.
    Rogers & McClelland (R&M) criticize models that rely on structured representations such as categories, taxonomic hierarchies, and schemata, but we suggest that structured models can account for many of the phenomena that they describe. Structured approaches and parallel distributed processing (PDP) approaches operate at different levels of analysis, and may ultimately be compatible, but structured models seem more likely to offer immediate insight into many of the issues that R&M discuss.
  • Exploring the hierarchical structure of human plans via program generation. Carlos G. Correa, Sophia Sanborn, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw & Thomas L. Griffiths - 2025 - Cognition 255 (C):105990.
  • Inference in the Wild: A Framework for Human Situation Assessment and a Case Study of Air Combat. Ken McAnally, Catherine Davey, Daniel White, Murray Stimson, Steven Mascaro & Kevin Korb - 2018 - Cognitive Science 42 (7):2181-2204.
  • Incremental Bayesian Category Learning From Natural Language. Lea Frermann & Mirella Lapata - 2016 - Cognitive Science 40 (6):1333-1381.
    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based on those features. (...)
  • Reconciling intuitive physics and Newtonian mechanics for colliding objects. Adam N. Sanborn, Vikash K. Mansinghka & Thomas L. Griffiths - 2013 - Psychological Review 120 (2):411-437.
  • On computational explanations. Anna-Mari Rusanen & Otto Lappi - 2016 - Synthese 193 (12):3931-3949.
    Computational explanations focus on information processing required in specific cognitive capacities, such as perception, reasoning or decision-making. These explanations specify the nature of the information processing task, what information needs to be represented, and why it should be operated on in a particular manner. In this article, the focus is on three questions concerning the nature of computational explanations: What type of explanations they are, in what sense computational explanations are explanatory and to what extent they involve a special, “independent” (...)