  • Conditional Learning Through Causal Models.Jonathan Vandenburgh - 2020 - Synthese (1-2):2415-2437.
    Conditional learning, where agents learn a conditional sentence ‘If A, then B,’ is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a causal model of conditional learning. (...)
  • Belief Revision for Growing Awareness.Katie Steele & H. Orri Stefánsson - 2021 - Mind 130 (520):1207–1232.
    The Bayesian maxim for rational learning could be described as conservative change from one probabilistic belief or credence function to another in response to new information. Roughly: ‘Hold fixed any credences that are not directly affected by the learning experience.’ This is precisely articulated for the case when we learn that some proposition that we had previously entertained is indeed true (the rule of conditionalisation). But can this conservative-change maxim be extended to revising one’s credences in response to entertaining propositions or (...)
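The conservative-change maxim described in the abstract above can be made concrete with a toy example (the worlds, names, and numbers here are illustrative assumptions, not taken from the paper): conditionalisation zeroes out the worlds ruled out by the evidence and renormalises, holding fixed the ratios among the worlds that survive.

```python
from fractions import Fraction as F

# Toy prior over four worlds; names and numbers are illustrative.
prior = {"w1": F(1, 4), "w2": F(1, 4), "w3": F(1, 4), "w4": F(1, 4)}
E = {"w1", "w2", "w3"}  # the learned proposition

def conditionalize(p, evidence):
    """Zero out worlds ruled out by the evidence, then renormalise."""
    total = sum(p[w] for w in evidence)
    return {w: (p[w] / total if w in evidence else F(0)) for w in p}

post = conditionalize(prior, E)

# Conservative change: ratios among worlds compatible with E are held fixed.
assert post["w1"] / post["w2"] == prior["w1"] / prior["w2"]
print(post["w1"], post["w4"])  # 1/3 0
```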
  • Bayesian Argumentation and the Value of Logical Validity.Benjamin Eva & Stephan Hartmann - unknown
    According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that (i) utilizes a new class of Bayesian learning methods that are better suited to modelling dynamic and conditional inferences than (...)
  • Learning Conditional Information by Jeffrey Imaging on Stalnaker Conditionals.Mario Günther - 2018 - Journal of Philosophical Logic 47 (5):851-876.
    We propose a method of learning indicative conditional information. An agent learns conditional information by Jeffrey imaging on the minimally informative proposition expressed by a Stalnaker conditional. We show that the predictions of the proposed method align with the intuitions in the benchmark examples of Douven (2012, 239–263). Jeffrey imaging on Stalnaker conditionals can also capture the learning of uncertain conditional information, which we illustrate by generating predictions for the Judy Benjamin Problem.
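As background to Günther's proposal, plain imaging can be sketched in a few lines (his paper uses a Jeffrey-style generalisation for uncertain input; the worlds, selection function, and numbers below are illustrative assumptions): each world ships its probability to the closest antecedent-world picked out by a Stalnaker selection function, with no renormalising.

```python
from fractions import Fraction as F

# Toy belief state; all names and numbers are illustrative.
prior = {"w1": F(1, 2), "w2": F(1, 4), "w3": F(1, 4)}
A_worlds = {"w1", "w2"}                         # worlds where antecedent A holds
closest = {"w1": "w1", "w2": "w2", "w3": "w2"}  # Stalnaker selection for A

def image_on(p, selection):
    """Imaging: each world sends its whole probability to its closest A-world."""
    post = {w: F(0) for w in p}
    for w, mass in p.items():
        post[selection[w]] += mass
    return post

post = image_on(prior, closest)
print(post["w2"], post["w3"])  # 1/2 0: w3's mass moved to its closest A-world
```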
  • New theory about old evidence. A framework for open-minded Bayesianism.Sylvia Wenmackers & Jan-Willem Romeijn - 2016 - Synthese 193 (4).
    We present a conservative extension of a Bayesian account of confirmation that can deal with the problem of old evidence and new theories. So-called open-minded Bayesianism challenges the assumption—implicit in standard Bayesianism—that the correct empirical hypothesis is among the ones currently under consideration. It requires the inclusion of a catch-all hypothesis, which is characterized by means of sets of probability assignments. Upon the introduction of a new theory, the former catch-all is decomposed into a new empirical hypothesis and a new (...)
  • A Puzzle About Stalnaker’s Hypothesis.Igor Douven & Richard Dietz - 2011 - Topoi 30 (1):31-37.
    According to Stalnaker’s Hypothesis, the probability of an indicative conditional, $\Pr(\varphi \rightarrow \psi)$, equals the probability of the consequent conditional on its antecedent, $\Pr(\psi \mid \varphi)$. While the hypothesis is generally taken to have been conclusively refuted by Lewis’ and others’ triviality arguments, its descriptive adequacy has been confirmed in many experimental studies. In this paper, we consider some possible ways of resolving the apparent tension between the analytical and the empirical results relating to Stalnaker’s Hypothesis and we argue (...)
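The tension around Stalnaker's Hypothesis can be seen numerically on a toy probability space (the numbers are illustrative): the probability of the *material* conditional A ⊃ C generally differs from Pr(C | A), so if the Hypothesis holds, the indicative conditional cannot simply be the material one.

```python
from fractions import Fraction as F

# Worlds as (A, C) truth-value pairs with illustrative probabilities.
p = {(True, True): F(1, 10), (True, False): F(4, 10),
     (False, True): F(2, 10), (False, False): F(3, 10)}

pr_A = sum(m for (a, c), m in p.items() if a)                       # 1/2
pr_C_given_A = sum(m for (a, c), m in p.items() if a and c) / pr_A  # 1/5
pr_material = sum(m for (a, c), m in p.items() if (not a) or c)     # A ⊃ C

print(pr_C_given_A, pr_material)  # 1/5 3/5: the two come apart
```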
  • A new resolution of the Judy Benjamin Problem.Igor Douven & Jan-Willem Romeijn - 2011 - Mind 120 (479):637 - 670.
    A paper on how to adapt your probabilistic beliefs when learning a conditional.
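For orientation, the update rule whose verdict the Judy Benjamin Problem is usually taken to challenge, minimising relative information (Kullback-Leibler divergence) subject to the learned constraint, can be computed directly for the standard setup. The prior and constraint values below follow the usual presentation of the problem; the ternary search is just one convenient way to find the minimum.

```python
import math

# Prior: Blue territory 1/2, Red-Headquarters 1/4, Red-Second-Company 1/4.
# Learned: Pr(HQ | Red) = 3/4. Posteriors obeying the constraint can be
# parametrised by r, the posterior probability of Red.
prior = {"blue": 0.5, "hq": 0.25, "second": 0.25}

def kl_to_prior(r):
    """KL divergence from the prior of the constrained posterior with Pr(Red) = r."""
    q = {"blue": 1 - r, "hq": 0.75 * r, "second": 0.25 * r}
    return sum(q[w] * math.log(q[w] / prior[w]) for w in q)

lo, hi = 1e-9, 1 - 1e-9
for _ in range(200):                      # ternary search on the convex KL curve
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if kl_to_prior(m1) < kl_to_prior(m2):
        hi = m2
    else:
        lo = m1
r = (lo + hi) / 2

print(round(1 - r, 3))  # 0.533: Pr(Blue) rises above 1/2, the puzzling verdict
```

The closed-form minimiser is r = 2 / (2 + 3^{3/4}), so the infomin update shifts Pr(Blue) away from 1/2 even though nothing was learned about Blue, which is the intuition the problem trades on.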
  • What conditional probability could not be.Alan Hájek - 2003 - Synthese 137 (3):273-323.
    Kolmogorov's axiomatization of probability includes the familiar ratio formula for conditional probability: $\Pr(A \mid B) = \Pr(A \cap B) / \Pr(B)$, provided $\Pr(B) > 0$. (...)
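The ratio formula from Kolmogorov's axiomatisation, together with the zero-probability proviso that Hájek's paper presses on, can be stated in a few lines (the toy numbers are illustrative):

```python
from fractions import Fraction as F

def conditional_probability(pr_a_and_b, pr_b):
    """The ratio formula: Pr(A | B) = Pr(A and B) / Pr(B), for Pr(B) > 0."""
    if pr_b == 0:
        raise ValueError("Pr(A | B) is undefined by the ratio formula when Pr(B) = 0")
    return pr_a_and_b / pr_b

print(conditional_probability(F(1, 6), F(1, 2)))  # 1/3
```

Hájek's complaint targets exactly the guarded branch: conditional probabilities on probability-zero conditions often seem perfectly well defined, yet the ratio formula has nothing to say about them.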
  • On conditionals.Dorothy Edgington - 1995 - Mind 104 (414):235-329.
  • Causal and Evidential Conditionals.Mario Günther - 2022 - Minds and Machines 32 (4):613-626.
    We put forth an account for when to believe causal and evidential conditionals. The basic idea is to embed a causal model in an agent’s belief state. For the evaluation of conditionals seems to be relative to beliefs about both particular facts and causal relations. Unlike other attempts using causal models, we show that ours can account rather well not only for various causal but also evidential conditionals.
  • Learning from Conditionals.Benjamin Eva, Stephan Hartmann & Soroush Rafiee Rad - 2020 - Mind 129 (514):461-508.
    In this article, we address a major outstanding question of probabilistic Bayesian epistemology: how should a rational Bayesian agent update their beliefs upon learning an indicative conditional? A number of authors have recently contended that this question is fundamentally underdetermined by Bayesian norms, and hence that there is no single update procedure that rational agents are obliged to follow upon learning an indicative conditional. Here we resist this trend and argue that a core set of widely accepted Bayesian norms is (...)
  • Bayesianism and language change.Jon Williamson - 2003 - Journal of Logic, Language and Information 12 (1):53-97.
    Bayesian probability is normally defined over a fixed language or event space. But in practice language is susceptible to change, and the question naturally arises as to how Bayesian degrees of belief should change as language changes. I argue here that this question poses a serious challenge to Bayesianism. The Bayesian may be able to meet this challenge however, and I outline a practical method for changing degrees of belief over changes in finite propositional languages.
  • Conditionals and Testimony.Stephan Hartmann, Peter J. Collins, Karolina Krzyżanowska, Gregory Wheeler & Ulrike Hahn - 2020 - Cognitive Psychology 122.
    Conditionals and conditional reasoning have been a long-standing focus of research across a number of disciplines, ranging from psychology through linguistics to philosophy. But almost no work has concerned itself with the question of how hearing or reading a conditional changes our beliefs. Given that we acquire much—perhaps most—of what we believe through the testimony of others, the simple matter of acquiring conditionals via others’ assertion of a conditional seems integral to any full understanding of the conditional and conditional reasoning. (...)
  • A problem for relative information minimizers in probability kinematics.Bas C. van Fraassen - 1981 - British Journal for the Philosophy of Science 32 (4):375-379.
  • The Stability Theory of Belief.Hannes Leitgeb - 2014 - Philosophical Review 123 (2):131-171.
    This essay develops a joint theory of rational (all-or-nothing) belief and degrees of belief. The theory is based on three assumptions: the logical closure of rational belief; the axioms of probability for rational degrees of belief; and the so-called Lockean thesis, in which the concepts of rational belief and rational degree of belief figure simultaneously. In spite of what is commonly believed, this essay will show that this combination of principles is satisfiable (and indeed nontrivially so) and that the principles (...)
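The tension between the Lockean thesis and logical closure that the stability theory is designed to dissolve can be illustrated with a small lottery case (ticket count and threshold below are illustrative): each individual "ticket i loses" clears the Lockean threshold, yet closure would demand believing their conjunction, which is certainly false.

```python
from fractions import Fraction as F

# Three-ticket fair lottery with Lockean threshold 3/5; numbers are illustrative.
n, threshold = 3, F(3, 5)

cred_loses = F(n - 1, n)                     # credence 2/3 that a given ticket loses
each_loss_believed = cred_loses > threshold  # True: each "ticket i loses" is believed

# Logical closure would then require believing the conjunction "every ticket
# loses", but that proposition is certainly false in a fair lottery:
cred_all_lose = F(0, 1)
conjunction_believable = cred_all_lose > threshold  # False

print(each_loss_believed, conjunction_believable)  # True False
```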
  • Review Essay: Working Without a Net: A Study of Egocentric Epistemology.Marian David & Richard Foley - 1996 - Philosophy and Phenomenological Research 56 (4):943.
  • The relevance effect and conditionals.Niels Skovgaard-Olsen, Henrik Singmann & Karl Christoph Klauer - 2016 - Cognition 150 (C):26-36.
    More than a decade of research has found strong evidence for P(if A, then C) = P(C|A) (“the Equation”). We argue, however, that this hypothesis provides an overly simplified picture due to its inability to account for relevance. We manipulated relevance in the evaluation of the probability and acceptability of indicative conditionals and found that relevance moderates the effect of P(C|A). This corroborates the Default and Penalty Hypothesis put forward in this paper. Finally, the probability and acceptability of concessive conditionals (...)