  • Ramsey’s conditionals. Mario Günther & Caterina Sisti - 2022 - Synthese 200 (2):1-31.
    In this paper, we propose a unified account of conditionals inspired by Frank Ramsey. Most contemporary philosophers agree that Ramsey’s account applies to indicative conditionals only. We observe against this orthodoxy that his account covers subjunctive conditionals as well—including counterfactuals. In light of this observation, we argue that Ramsey’s account of conditionals resembles Robert Stalnaker’s possible worlds semantics supplemented by a model of belief. The resemblance suggests reinterpreting the notion of conditional degree of belief in order to overcome a (...)
  • Causal and Evidential Conditionals. Mario Günther - 2022 - Minds and Machines 32 (4):613-626.
    We put forth an account of when to believe causal and evidential conditionals. The basic idea is to embed a causal model in an agent’s belief state, since the evaluation of conditionals seems to be relative to beliefs about both particular facts and causal relations. Unlike other attempts using causal models, we show that ours accounts rather well not only for various causal conditionals but also for evidential ones.
  • Conditional Learning Through Causal Models. Jonathan Vandenburgh - 2020 - Synthese (1-2):2415-2437.
    Conditional learning, where agents learn a conditional sentence ‘If A, then B,’ is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a causal model of conditional learning. (...)
  • Bayesians Still Don’t Learn from Conditionals. Mario Günther & Borut Trpin - 2022 - Acta Analytica 38 (3):439-451.
    One of the open questions in Bayesian epistemology is how to rationally learn from indicative conditionals (Douven, 2016). Eva et al. (Mind 129(514):461–508, 2020) propose a strategy to resolve this question. They claim that their strategy provides a “uniquely rational response to any given learning scenario”. We show that their updating strategy is neither very general nor always rational. Even worse, we generalize their strategy and show that it still fails. Bad news for the Bayesians.
  • The Logic of Partial Supposition. Benjamin Eva & Stephan Hartmann - 2021 - Analysis (2):215-224.
    According to orthodoxy, there are two basic moods of supposition: indicative and subjunctive. The most popular formalizations of the corresponding norms of suppositional judgement are given by Bayesian conditionalization and Lewisian imaging, respectively. It is well known that Bayesian conditionalization can be generalized (via Jeffrey conditionalization) to provide a model for the norms of partial indicative supposition. This raises the question of whether imaging can likewise be generalized to model the norms of ‘partial subjunctive supposition’. The present article casts doubt (...)
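    The abstract above turns on the generalization of Bayesian conditionalization to Jeffrey conditionalization, where an agent's probability for a partition cell shifts without becoming certain. A minimal sketch of the two-cell case (the function name and numbers are illustrative, not from the paper):

    ```python
    # Jeffrey conditionalization on the two-cell partition {E, not-E}:
    # the new probability of A mixes the old conditional probabilities,
    # weighted by the shifted probability q_e of E.

    def jeffrey_update(p_a_given_e, p_a_given_not_e, q_e):
        """New probability of A after P(E) shifts to q_e.

        P'(A) = q_e * P(A|E) + (1 - q_e) * P(A|not-E)
        """
        return q_e * p_a_given_e + (1 - q_e) * p_a_given_not_e

    # With q_e = 1 this reduces to ordinary Bayesian conditionalization on E:
    assert jeffrey_update(0.8, 0.3, 1.0) == 0.8

    # A partial shift to P'(E) = 0.6 mixes the two conditional probabilities:
    print(jeffrey_update(0.8, 0.3, 0.6))  # 0.6*0.8 + 0.4*0.3, approximately 0.6
    ```

    The article asks whether Lewisian imaging admits an analogous generalization for partial subjunctive supposition; the sketch covers only the indicative (conditionalization) side.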
  • The logic of conditionals. Horacio Arlo-Costa - 2007 - Stanford Encyclopedia of Philosophy.
    Entry for the Stanford Encyclopedia of Philosophy, 2007.