  • Rational Aversion to Information. Sven Neth - forthcoming - British Journal for the Philosophy of Science.
    Is more information always better? Or are there some situations in which more information can make us worse off? Good (1967) argues that expected utility maximizers should always accept more information if the information is cost-free and relevant. But Good's argument presupposes that you are certain you will update by conditionalization. If we relax this assumption and allow agents to be uncertain about updating, these agents can be rationally required to reject free and relevant information. Since there are good reasons (...)
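    A compact statement of Good's result, under the entry's assumptions (cost-free, relevant evidence, update by conditionalization; notation mine): for an experiment with possible outcomes e, choosing an act a after observing the outcome is weakly better in expectation than choosing now,
    $$\sum_e P(e)\,\max_a \sum_s P(s \mid e)\,u(a,s) \;\ge\; \max_a \sum_s P(s)\,u(a,s).$$
    Neth's point is that this inequality can fail once the agent is uncertain whether she will in fact conditionalize.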
  • The Modal Logic of Bayesian Belief Revision. Zalán Gyenis, Miklós Rédei & William Brown - 2019 - Journal of Philosophical Logic 48 (5):809-824.
    In Bayesian belief revision a Bayesian agent revises his prior belief by conditionalizing the prior on some evidence using Bayes’ rule. We define a hierarchy of modal logics that capture the logical features of Bayesian belief revision. Elements in the hierarchy are distinguished by the cardinality of the set of elementary propositions on which the agent’s prior is defined. Inclusions among the modal logics in the hierarchy are determined. By linking the modal logics in the hierarchy to the strongest modal (...)
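    For reference, the conditioning rule these logics model (standard statement): on learning evidence E with p(E) > 0, the agent's new credence in any proposition A is
    $$p_{\mathrm{new}}(A) = p(A \mid E) = \frac{p(A \cap E)}{p(E)}.$$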
  • Blocking an Argument for Emergent Chance. David Kinney - 2021 - Journal of Philosophical Logic 50 (5):1057-1077.
    Several authors have argued that non-extreme probabilities used in special sciences such as chemistry and biology can be objective chances, even if the true microphysical description of the world is deterministic. This article examines an influential version of this argument and shows that it depends on a particular methodology for defining the relationship between coarse-grained and fine-grained events. An alternative methodology for coarse-graining is proposed. This alternative methodology blocks this argument for the existence of emergent chances, and makes better sense (...)
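    A minimal sketch of the coarse-graining at issue (notation mine, for a countable state space): a coarse-grained event A collects fine-grained microstates s, and its probability is inherited from the microphysical measure μ,
    $$P(A) = \sum_{s \in A} \mu(s).$$
    The dispute concerns which map from fine-grained to coarse-grained events is the appropriate one.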
  • The Maxim of Probabilism, with special regard to Reichenbach. Miklós Rédei & Zalán Gyenis - 2021 - Synthese 199 (3-4):8857-8874.
    It is shown that by realizing the isomorphism features of the frequency and geometric interpretations of probability, Reichenbach comes very close to the idea of identifying mathematical probability theory with measure theory in his 1949 work on foundations of probability. Some general features of Reichenbach’s axiomatization of probability theory are pointed out as likely obstacles that prevented him making this conceptual move. The role of isomorphisms of Kolmogorovian probability measure spaces is specified in what we call the “Maxim of Probabilism”, (...)
  • Generalized Learning and Conditional Expectation. Simon M. Huttegger & Michael Nielsen - 2020 - Philosophy of Science 87 (5):868-883.
    Reflection and martingale principles are central to models of rational learning. They can be justified in a variety of ways. In what follows we study martingale and reflection principles in the con...
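    The two principles in their most familiar forms (standard statements, notation mine): the martingale principle says current credence equals the expectation of future credence, and Reflection says current credence defers to future credence,
    $$\mathbb{E}\big[P_{t+1}(A) \mid \mathcal{F}_t\big] = P_t(A), \qquad P\big(A \mid P_{t+1}(A) = x\big) = x.$$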
  • Kolmogorov Conditionalizers Can Be Dutch Booked. Alexander Meehan & Snow Zhang - forthcoming - Review of Symbolic Logic:1-36.
    A vexing question in Bayesian epistemology is how an agent should update on evidence which she assigned zero prior credence. Some theorists have suggested that, in such cases, the agent should update by Kolmogorov conditionalization, a norm based on Kolmogorov’s theory of regular conditional distributions. However, it turns out that in some situations, a Kolmogorov conditionalizer will plan to always assign a posterior credence of zero to the evidence she learns. Intuitively, such a plan is irrational and easily Dutch bookable. (...)
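    In outline (standard definition, not quoted from the paper): a Kolmogorov conditionalizer updates on a sub-σ-algebra F via a regular conditional distribution, a map p(· | F) that is a probability measure in its event argument and satisfies, for every event A and every B ∈ F,
    $$\int_B p(A \mid \mathcal{F})(\omega)\, dP(\omega) = P(A \cap B).$$
    The paper's worry arises when such a map assigns credence zero to the very cell of evidence that was learned.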
  • Jeffrey Meets Kolmogorov: A General Theory of Conditioning. Alexander Meehan & Snow Zhang - 2020 - Journal of Philosophical Logic 49 (5):941-979.
    Jeffrey conditionalization is a rule for updating degrees of belief in light of uncertain evidence. It is usually assumed that the partitions involved in Jeffrey conditionalization are finite and only contain positive-credence elements. But there are interesting examples, involving continuous quantities, in which this is not the case. Can Jeffrey conditionalization be generalized to accommodate continuous cases? Meanwhile, several authors, such as Kenny Easwaran and Michael Rescorla, have been interested in Kolmogorov’s theory of regular conditional distributions as a possible (...)
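    The rule being generalized, for a finite partition {E_i} with new credences q_i summing to 1 (standard statement):
    $$P_{\mathrm{new}}(A) = \sum_i q_i\, P(A \mid E_i).$$
    The continuous cases the paper targets are those where the partition is infinite or contains cells of prior credence zero.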
  • On the Modal Logic of Jeffrey Conditionalization. Zalán Gyenis - 2018 - Logica Universalis 12 (3-4):351-374.
    We continue the investigations initiated in the recent papers where Bayes logics have been introduced to study the general laws of Bayesian belief revision. In Bayesian belief revision a Bayesian agent revises his prior belief by conditionalizing the prior on some evidence using the Bayes rule. In this paper we take the more general Jeffrey formula as a conditioning device and study the corresponding modal logics that we call Jeffrey logics, focusing mainly on the countable case. The containment relations among (...)
  • A Dutch Book Theorem and Converse Dutch Book Theorem for Kolmogorov Conditionalization. Michael Rescorla - 2018 - Review of Symbolic Logic 11 (4):705-735.
    This paper discusses how to update one’s credences based on evidence that has initial probability 0. I advance a diachronic norm, Kolmogorov Conditionalization, that governs credal reallocation in many such learning scenarios. The norm is based upon Kolmogorov’s theory of conditional probability. I prove a Dutch book theorem and converse Dutch book theorem for Kolmogorov Conditionalization. The two theorems establish Kolmogorov Conditionalization as the unique credal reallocation rule that avoids a sure loss in the relevant learning scenarios.
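    A Dutch book, in the standard formalization assumed here (notation mine): a finite set of bets at stakes s_k on events A_k, each priced at the agent's credence p(A_k), whose net payoff is negative in every state ω,
    $$\sum_k s_k\big(\mathbf{1}_{A_k}(\omega) - p(A_k)\big) < 0 \quad \text{for all } \omega.$$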
  • Deterministic Convergence and Strong Regularity. Michael Nielsen - 2018 - British Journal for the Philosophy of Science 71 (4):1461-1491.
    Bayesians since Savage (1972) have appealed to asymptotic results to counter charges of excessive subjectivity. Their claim is that objectionable differences in prior probability judgments will vanish as agents learn from evidence, and individual agents will converge to the truth. Glymour (1980), Earman (1992) and others have voiced the complaint that the theorems used to support these claims tell us, not how probabilities updated on evidence will actually behave in the limit, but merely how Bayesian agents believe they will behave, suggesting (...)
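    The formal backbone of such convergence claims, stated loosely (standard result): by Lévy's upward theorem, conditional credences form a martingale and converge almost surely,
    $$P(H \mid E_1, \ldots, E_n) \to P(H \mid \mathcal{F}_\infty) \quad \text{a.s.},$$
    with the limit equal to the truth value of H when H is settled by the limiting σ-algebra. The complaint recorded above is that the "almost surely" holds by the agent's own prior, not objectively.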
  • Recovering a Prior from a Posterior: Some Parameterizations of Jeffrey Conditioning. Carl G. Wagner - forthcoming - Erkenntnis:1-10.
    Given someone’s fully specified posterior probability distribution q and information about the revision method that they employed to produce q, what can you infer about their prior probabilistic commitments? This question provides an entrée into a thoroughgoing discussion of a class of parameterizations of Jeffrey conditioning in which the parameters furnish information above and beyond that incorporated in q. Our analysis highlights the ubiquity of Bayes factors in the study of probability revision.
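    The Bayes factor in question, for partition cells E_i and E_j (standard definition): the ratio of posterior odds to prior odds,
    $$\beta_{ij} = \frac{q(E_i)/q(E_j)}{p(E_i)/p(E_j)}.$$
    Unlike the posterior values q(E_i) alone, a shift parameterized by Bayes factors carries information about the prior p as well.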
  • Having a look at the Bayes Blind Spot. Miklós Rédei & Zalán Gyenis - 2019 - Synthese 198 (4):3801-3832.
    The Bayes Blind Spot of a Bayesian Agent is, by definition, the set of probability measures on a Boolean σ-algebra that are absolutely continuous with respect to the background probability measure of a Bayesian Agent on the algebra and which the Bayesian Agent cannot learn by a single conditionalization no matter what evidence he has about the elements in the Boolean σ-algebra (...)
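    Unpacking the definition in symbols (notation mine): q is absolutely continuous with respect to the prior p iff p(A) = 0 implies q(A) = 0, and q is learnable by a single conditionalization iff q(·) = p(· | E) for some evidence E with p(E) > 0. The Blind Spot consists of the absolutely continuous q for which no such E exists.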
  • A New Argument for Kolmogorov Conditionalization. Michael Nielsen - 2021 - Review of Symbolic Logic 14 (4):1-16.
    This paper contributes to a recent research program that extends arguments supporting elementary conditionalization to arguments supporting conditionalization with general, measure-theoretic conditional probabilities. I begin by suggesting an amendment to the framework that Rescorla (2018) has used to characterize regular conditional probabilities in terms of avoiding Dutch book. If we wish to model learning scenarios in which an agent gains complete membership knowledge about some subcollection of the events of interest to her, then we should focus on updating policies that (...)
  • How much are bold Bayesians favoured? Pavel Janda - 2022 - Synthese 200 (4):1-20.
    Rédei and Gyenis recently established strong constraints on Bayesian learning. However, they also presented a positive result for Bayesianism. Despite the limited significance of this positive result, it is useful to discuss two possible strengthenings of it, which yield new results and open new questions about the limits of Bayesianism. First, I show that one cannot strengthen the positive result by restricting the evidence to so-called “certain evidence”. Secondly, strengthening the result by restricting the partitions, as parts of one’s evidence, to (...)