  • A Computational Learning Semantics for Inductive Empirical Knowledge. Kevin T. Kelly - 2014 - In Alexandru Baltag & Sonja Smets (eds.), Johan van Benthem on Logic and Information Dynamics. Cham, Switzerland: Springer International Publishing. pp. 289-337.
    This chapter presents a new semantics for inductive empirical knowledge. The epistemic agent is represented concretely as a learner who processes new inputs through time and who forms new beliefs from those inputs by means of a concrete, computable learning program. The agent’s belief state is represented hyper-intensionally as a set of time-indexed sentences. Knowledge is interpreted as avoidance of error in the limit and as having converged to true belief from the present time onward. Familiar topics are re-examined within (...)
  • The new Tweety puzzle: arguments against monistic Bayesian approaches in epistemology and cognitive science. Matthias Unterhuber & Gerhard Schurz - 2013 - Synthese 190 (8):1407-1435.
    In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can (...)
  • The uncertain reasoner: Bayes, logic, and rationality. Mike Oaksford & Nick Chater - 2009 - Behavioral and Brain Sciences 32 (1):105-120.
    Human cognition requires coping with a complex and uncertain world. This suggests that dealing with uncertainty may be the central challenge for human reasoning. In Bayesian Rationality we argue that probability theory, the calculus of uncertainty, is the right framework in which to understand everyday reasoning. We also argue that probability theory explains behavior, even on experimental tasks that have been designed to probe people's logical reasoning abilities. Most commentators agree on the centrality of uncertainty; some suggest that there is (...)
  • Reasoning about Criminal Evidence: Revealing Probabilistic Reasoning Behind Logical Conclusions. Michelle B. Cowley-Cunningham - 2007 - SSRN E-Library, Maurer School of Law, Law and Society eJournals.
    There are two competing theoretical frameworks with which the cognitive sciences examine how people reason. These frameworks are broadly categorized as logic and probability. This paper reports two applied experiments to test which framework better explains how people reason about evidence in criminal cases. Logical frameworks predict that people derive conclusions from the presented evidence that endorse an absolute value of certainty, such as ‘guilty’ or ‘not guilty’ (e.g., Johnson-Laird, 1999). But probabilistic frameworks predict that people derive conclusions from the presented (...)
  • Intractability and the use of heuristics in psychological explanations. Iris van Rooij, Cory Wright & Todd Wareham - 2012 - Synthese 187 (2):471-487.
    Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
  • A Bayesian generative model for learning semantic hierarchies. Roni Mittelman, Min Sun, Benjamin Kuipers & Silvio Savarese - 2014 - Frontiers in Psychology 5.
  • The learnability of abstract syntactic principles. Amy Perfors, Joshua B. Tenenbaum & Terry Regier - 2011 - Cognition 118 (3):306-338.
  • The Algorithmic Level Is the Bridge Between Computation and Brain. Bradley C. Love - 2015 - Topics in Cognitive Science 7 (2):230-242.
    Every scientist chooses a preferred level of analysis, and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's three levels of analysis and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top–down in that it attempts to build a bridge from the computational to the algorithmic level. Limitations of this approach include insufficient theoretical constraint (...)
  • From Universal Laws of Cognition to Specific Cognitive Models. Nick Chater & Gordon D. A. Brown - 2008 - Cognitive Science 32 (1):36-67.
    The remarkable successes of the physical sciences have been built on highly general quantitative laws, which serve as the basis for understanding an enormous variety of specific physical systems. How far is it possible to construct universal principles in the cognitive sciences, in terms of which specific aspects of perception, memory, or decision making might be modelled? Following Shepard, it is argued that some universal principles may be attainable in cognitive science. Here, 2 examples are proposed: the simplicity (...)
  • Popper's severity of test as an intuitive probabilistic model of hypothesis testing. Fenna H. Poletiek - 2009 - Behavioral and Brain Sciences 32 (1):99-100.
    Severity of Test (SoT) is an alternative to Popper's logical falsification that solves a number of problems of the logical view. It was presented by Popper himself in 1963. SoT is a less sophisticated probabilistic model of hypothesis testing than Oaksford & Chater's (O&C's) information gain model, but it bears a number of striking similarities to it. Moreover, it captures the intuition of everyday hypothesis testing.
  • The Bayesian boom: good thing or bad? Ulrike Hahn - 2014 - Frontiers in Psychology 5.