  • Computability by Probabilistic Machines. K. de Leeuw, E. F. Moore, C. E. Shannon & N. Shapiro - 1970 - Journal of Symbolic Logic 35 (3):481-482.
  • Psychological predicates. Hilary Putnam - 1967 - In William H. Capitan & Daniel Davy Merrill (eds.), Art, mind, and religion. [Pittsburgh]: University of Pittsburgh Press. pp. 37-48.
  • The Algebraic Theory of Context-Free Languages. N. Chomsky - 1967 - Journal of Symbolic Logic 32 (3):388-389.
  • Aspects of the Theory of Syntax. Noam Chomsky - 1965 - Cambridge, MA, USA: MIT Press.
    Chomsky proposes a reformulation of the theory of transformational generative grammar that takes recent developments in the descriptive analysis of particular ...
  • Simplicity: A unifying principle in cognitive science? Nick Chater & Paul Vitányi - 2003 - Trends in Cognitive Sciences 7 (1):19-22.
  • Programs as Causal Models: Speculations on Mental Programs and Mental Representation. Nick Chater & Mike Oaksford - 2013 - Cognitive Science 37 (6):1171-1191.
    Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of mental “programs” and mental representation. We argue that programs (consisting of algorithms and data structures) have a causal (counterfactual-supporting) structure; these counterfactuals can reveal the nature of mental representations. Programs can also (...)
  • Probabilistic models of language processing and acquisition. Nick Chater & Christopher D. Manning - 2006 - Trends in Cognitive Sciences 10 (7):335-344.
    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online (...)
  • Random walks on semantic networks can resemble optimal foraging. Joshua T. Abbott, Joseph L. Austerweil & Thomas L. Griffiths - 2015 - Psychological Review 122 (3):558-569.
  • Computing machinery and intelligence. Alan M. Turing - 1950 - Mind 59 (October):433-60.
    I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to (...)
  • Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Judea Pearl - 1988 - Morgan Kaufmann.
    The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
  • Theory-based Bayesian models of inductive learning and reasoning. Joshua B. Tenenbaum, Thomas L. Griffiths & Charles Kemp - 2006 - Trends in Cognitive Sciences 10 (7):309-318.
  • Comprehension of Simple Quantifiers: Empirical Evaluation of a Computational Model. Jakub Szymanik & Marcin Zajenkowski - 2010 - Cognitive Science 34 (3):521-532.
    We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite-automata and push-down automata is psychologically relevant. Our research improves upon hypothesis and (...)
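    (A toy sketch of this finite-automaton vs. push-down distinction appears after the reference list.)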
  • Rational approximations to rational models: Alternative algorithms for category learning. Adam N. Sanborn, Thomas L. Griffiths & Daniel J. Navarro - 2010 - Psychological Review 117 (4):1144-1167.
  • A theory of memory retrieval. Roger Ratcliff - 1978 - Psychological Review 85 (2):59-108.
  • An exemplar-based random walk model of speeded classification. Robert M. Nosofsky & Thomas J. Palmeri - 1997 - Psychological Review 104 (2):266-300.
  • Expectation-based syntactic comprehension. Roger Levy - 2008 - Cognition 106 (3):1126-1177.
  • Why Be Random? Thomas Icard - 2021 - Mind 130 (517):111-139.
    When does it make sense to act randomly? A persuasive argument from Bayesian decision theory legitimizes randomization essentially only in tie-breaking situations. Rational behaviour in humans, non-human animals, and artificial agents, however, often seems indeterminate, even random. Moreover, rationales for randomized acts have been offered in a number of disciplines, including game theory, experimental design, and machine learning. A common way of accommodating some of these observations is by appeal to a decision-maker’s bounded computational resources. Making this suggestion both precise (...)
  • Topics in semantic representation. Thomas L. Griffiths, Mark Steyvers & Joshua B. Tenenbaum - 2007 - Psychological Review 114 (2):211-244.
  • How we ought to describe computation in the brain. Chris Eliasmith - 2010 - Studies in History and Philosophy of Science Part A 41 (3):313-320.
    I argue that of the four kinds of quantitative description relevant for understanding brain function, a control theoretic approach is most appealing. This argument proceeds by comparing computational, dynamical, statistical and control theoretic approaches, and identifying criteria for a good description of brain function. These criteria include providing useful decompositions, simple state mappings, and the ability to account for variability. The criteria are justified by their importance in providing unified accounts of multi-level mechanisms that support intervention. Evaluation of the four (...)
  • Accurate Unlexicalized Parsing. Dan Klein & Christopher D. Manning - unknown
    We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is (...)
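    (A toy sketch of such a state split, using parent annotation, appears after the reference list.)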
  • Psychological Predicates. Hilary Putnam - 2003 - In John Heil (ed.), Philosophy of Mind: A Guide and Anthology. Oxford University Press.
  • Computing Machinery and Intelligence. Alan M. Turing - 2003 - In John Heil (ed.), Philosophy of Mind: A Guide and Anthology. Oxford University Press.
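
The Szymanik & Zajenkowski entry above mentions the computational distinction between quantifiers recognized by finite automata and quantifiers requiring push-down automata. A minimal sketch, assuming a toy encoding of my own (not from the paper): a scene is a string with one symbol per element of the restrictor set A, 'b' if that element is also in B, 'n' otherwise. A parity quantifier like "an even number of" needs only a two-state finite automaton, while a proportional quantifier like "more than half" needs an unbounded counter, i.e. push-down power.

def even_number_of(scene: str) -> bool:
    """'An even number of As are Bs': a two-state finite automaton (parity)."""
    state = 0                  # 0 = even count so far, 1 = odd
    for symbol in scene:
        if symbol == 'b':
            state = 1 - state  # flip parity; constant memory suffices
    return state == 0

def more_than_half(scene: str) -> bool:
    """'More than half of the As are Bs': needs an unbounded counter,
    i.e. push-down (counter-machine) power rather than finitely many states."""
    counter = 0                # +1 for each 'b', -1 for each 'n'
    for symbol in scene:
        counter += 1 if symbol == 'b' else -1
    return counter > 0

# Example: a scene with four As, three of which are Bs.
print(even_number_of("bnbb"))  # False (three is odd)
print(more_than_half("bnbb"))  # True  (3 of 4)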
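
The Klein & Manning entry above describes breaking false independence assumptions in a vanilla treebank grammar via simple state splits. A minimal sketch of one such split, parent annotation, using invented toy counts rather than any data from the paper: splitting NP by its parent category (NP^S for subjects vs. NP^VP for objects) lets relative-frequency estimation learn that subject and object NPs expand differently.

from collections import Counter, defaultdict

# Toy "treebank": (parent category, child category, expansion of that child).
# Invented data, chosen only to make the contrast visible.
toy_treebank = [
    ("S",  "NP", ("PRP",)),          # subject NPs are often bare pronouns
    ("S",  "NP", ("PRP",)),
    ("S",  "NP", ("DT", "NN")),
    ("VP", "NP", ("DT", "NN")),      # object NPs are usually fuller
    ("VP", "NP", ("DT", "NN")),
    ("VP", "NP", ("DT", "JJ", "NN")),
]

def rule_probs(annotate_parent: bool):
    """Relative-frequency rule probabilities, with or without the NP split."""
    counts = defaultdict(Counter)
    for parent, child, rhs in toy_treebank:
        lhs = f"{child}^{parent}" if annotate_parent else child
        counts[lhs][rhs] += 1
    return {lhs: {rhs: n / sum(c.values()) for rhs, n in c.items()}
            for lhs, c in counts.items()}

print(rule_probs(annotate_parent=False))  # one blurred NP distribution
print(rule_probs(annotate_parent=True))   # NP^S and NP^VP differ sharply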