  • Bertrand’s Paradox and the Principle of Indifference. Nicholas Shackel - 2024 - Abingdon: Routledge.
    Events between which we have no epistemic reason to discriminate have equal epistemic probabilities. Bertrand’s chord paradox, however, appears to show this to be false, and thereby poses a general threat to probabilities for continuum sized state spaces. Articulating the nature of such spaces involves some deep mathematics and that is perhaps why the recent literature on Bertrand’s Paradox has been almost entirely from mathematicians and physicists, who have often deployed elegant mathematics of considerable sophistication. At the same time, the (...)
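    A quick way to see the threat the book addresses (an illustrative sketch of the standard setup, not code from the book): estimate, for a unit circle, the probability that a "random" chord is longer than the side of the inscribed equilateral triangle, sqrt(3), under Bertrand's three classical selection methods. The three "uniform" choices give three different answers, about 1/2, 1/3 and 1/4.

      # Monte Carlo illustration of Bertrand's chord paradox (unit circle).
      # A chord is "long" if its length exceeds sqrt(3), the side of the
      # inscribed equilateral triangle. Three natural "uniform" selection
      # methods give three different probabilities: 1/2, 1/3 and 1/4.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 1_000_000
      LONG = np.sqrt(3.0)

      # Method 1: random radius point -- pick the chord's distance from the
      # centre uniformly on [0, 1]; the chord is perpendicular to that radius.
      d = rng.uniform(0.0, 1.0, N)
      p_radial = np.mean(2.0 * np.sqrt(1.0 - d**2) > LONG)               # ~ 1/2

      # Method 2: random endpoints -- pick two points uniformly on the circle.
      a, b = rng.uniform(0.0, 2.0 * np.pi, (2, N))
      p_endpoints = np.mean(2.0 * np.abs(np.sin((a - b) / 2.0)) > LONG)  # ~ 1/3

      # Method 3: random midpoint -- pick the chord's midpoint uniformly in the disc.
      r = np.sqrt(rng.uniform(0.0, 1.0, N))
      p_midpoint = np.mean(2.0 * np.sqrt(1.0 - r**2) > LONG)             # ~ 1/4

      print(p_radial, p_endpoints, p_midpoint)

    Which of the three counts as "the" uniform distribution over chords is exactly what the principle of indifference fails to settle.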
  • Uniform probability in cosmology. Sylvia Wenmackers - 2023 - Studies in History and Philosophy of Science Part A 101 (C):48-60.
  • Bertrand's Paradox and the Maximum Entropy Principle. Nicholas Shackel & Darrell P. Rowbottom - 2019 - Philosophy and Phenomenological Research 101 (3):505-523.
    An important suggestion of objective Bayesians is that the maximum entropy principle can replace a principle which is known to get into paradoxical difficulties: the principle of indifference. No one has previously determined whether the maximum entropy principle is better able to solve Bertrand’s chord paradox than the principle of indifference. In this paper I show that it is not. Additionally, the course of the analysis brings to light a new paradox, a revenge paradox of the chords, that is unique (...)
  • The gradation puzzle of intellectual assurance. Xiaoxing Zhang - 2021 - Analysis 81 (3):488-496.
    The Cartesian thesis that some justifications are infallible faces a gradation puzzle. On the one hand, infallible justification tolerates absolutely no possibility for error. On the other hand, infallible justifications can vary in evidential force: e.g. two persons can both be infallible regarding their pains while the one with stronger pain is nevertheless more justified. However, if a type of justification is gradable in strength, how can it always be absolute? This paper explores the potential of this gradation challenge by (...)
  • Conglomerability, disintegrability and the comparative principle. Rush T. Stewart & Michael Nielsen - 2021 - Analysis 81 (3):479-488.
    Our aim here is to present a result that connects some approaches to justifying countable additivity. This result allows us to better understand the force of a recent argument for countable additivity due to Easwaran. We have two main points. First, Easwaran’s argument in favour of countable additivity should have little persuasive force on those permissive probabilists who have already made their peace with violations of conglomerability. As our result shows, Easwaran’s main premiss – the comparative principle – is strictly (...)
  • Some epistemological ramifications of the Borel–Kolmogorov paradox. Michael Rescorla - 2015 - Synthese 192 (3):735-767.
    This paper discusses conditional probability P(A | B), or the probability of A given B. When P(B) > 0, the ratio formula determines P(A | B). When P(B) = 0, the ratio formula breaks down. The Borel–Kolmogorov paradox suggests that conditional probabilities in such cases are indeterminate or ill-posed. To analyze the paradox, I explore the relation between probability and intensionality. I argue that the paradox is a Frege case, similar to those that arise in many (...)
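    The indeterminacy at issue can be made concrete with the standard sphere example (an illustrative sketch, not material from the paper): condition a uniform distribution on the sphere on the same great circle described in two ways, as the equator (latitude 0) and as a meridian (longitude 0), by shrinking a thin band around it. The two limiting procedures yield different distributions along the circle, even though the circle has probability zero either way.

      # Borel-Kolmogorov paradox, Monte Carlo version (illustrative sketch).
      # Sample uniformly on the unit sphere, then condition on lying within
      # eps of a great circle described in two different ways.
      import numpy as np

      rng = np.random.default_rng(1)
      N, eps = 2_000_000, 0.01

      xyz = rng.normal(size=(N, 3))
      xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)   # uniform on the sphere
      lat = np.arcsin(xyz[:, 2])                          # latitude  in [-pi/2, pi/2]
      lon = np.arctan2(xyz[:, 1], xyz[:, 0])              # longitude in (-pi, pi]

      # (a) The circle as the equator: thin latitude band |lat| < eps.
      #     Longitude comes out (approximately) uniform on (-pi, pi].
      eq = np.abs(lat) < eps
      print(np.quantile(lon[eq], [0.25, 0.5, 0.75]))      # ~ [-pi/2, 0, pi/2]

      # (b) The circle as a meridian: thin lune |lon| < eps.
      #     Latitude is not uniform in arc length; its density is
      #     proportional to cos(lat), concentrated near the equator.
      mer = np.abs(lon) < eps
      print(np.quantile(lat[mer], [0.25, 0.5, 0.75]))     # ~ [-pi/6, 0, pi/6], not +/- pi/4

    The ratio formula is silent in both cases, and the two natural limiting constructions disagree; that is the puzzle the paper goes on to diagnose as a Frege case.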
  • A Dutch Book Theorem and Converse Dutch Book Theorem for Kolmogorov Conditionalization. Michael Rescorla - 2018 - Review of Symbolic Logic 11 (4):705-735.
    This paper discusses how to update one’s credences based on evidence that has initial probability 0. I advance a diachronic norm, Kolmogorov Conditionalization, that governs credal reallocation in many such learning scenarios. The norm is based upon Kolmogorov’s theory of conditional probability. I prove a Dutch book theorem and converse Dutch book theorem for Kolmogorov Conditionalization. The two theorems establish Kolmogorov Conditionalization as the unique credal reallocation rule that avoids a sure loss in the relevant learning scenarios.
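    Rescorla's theorems concern learning from evidence whose prior probability is zero, where the ratio formula gives no guidance; the proofs are not reproduced here. For orientation, the sketch below (invented numbers) runs through the classical Lewis-style diachronic Dutch book against an agent who plans to deviate from ordinary ratio-formula conditionalization, the kind of construction that the Kolmogorov version generalizes.

      # Diachronic Dutch book against a planned non-conditionalizer.
      # Prior over the atoms A&E, ~A&E, A&~E, ~A&~E; the agent plans, on
      # learning E, to adopt a credence q in A that differs from p(A|E).
      p_AE, p_nAE, p_AnE, p_nAnE = 0.3, 0.2, 0.25, 0.25
      p_E = p_AE + p_nAE
      r = p_AE / p_E               # p(A|E) = 0.6
      q = 0.4                      # planned posterior in A given E, q < r

      # Bookie's strategy (each trade is fair by the agent's own lights):
      #  t0: sell the agent a conditional bet on A given E at price r
      #      (pays 1 if A&E; the stake r is returned if ~E).
      #  t0: sell the agent a bet paying (r - q) if E, at price (r - q) * p_E.
      #  t1: if E occurs, buy the A-bet back from the agent at the agent's
      #      new fair price q.
      def agent_net(A, E):
          net = -r - (r - q) * p_E          # prices paid at t0
          if E:
              net += q                      # sells the A-bet back at q
              net += (r - q)                # the side bet on E pays off
          else:
              net += r                      # conditional bet called off, stake back
          return net

      for A in (True, False):
          for E in (True, False):
              print(A, E, round(agent_net(A, E), 10))   # always -(r - q)*p_E = -0.1

    The agent loses (r - q)*p_E in every state of the world; it is this sure-loss pattern that the Dutch book and converse Dutch book theorems turn into a characterization of the conditionalization rule.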
  • The Maxim of Probabilism, with special regard to Reichenbach. Miklós Rédei & Zalán Gyenis - 2021 - Synthese 199 (3-4):8857-8874.
    It is shown that by realizing the isomorphism features of the frequency and geometric interpretations of probability, Reichenbach comes very close to the idea of identifying mathematical probability theory with measure theory in his 1949 work on foundations of probability. Some general features of Reichenbach’s axiomatization of probability theory are pointed out as likely obstacles that prevented him making this conceptual move. The role of isomorphisms of Kolmogorovian probability measure spaces is specified in what we call the “Maxim of Probabilism”, (...)
  • Having a look at the Bayes Blind Spot. Miklós Rédei & Zalán Gyenis - 2019 - Synthese 198 (4):3801-3832.
    The Bayes Blind Spot of a Bayesian Agent is, by definition, the set of probability measures on a Boolean σ-algebra that are absolutely continuous with respect to the background probability measure of a Bayesian Agent on the algebra and which the Bayesian Agent cannot learn by a single conditionalization no matter what evidence he has about the elements in the Boolean σ-algebra (...)
  • Obligation, Permission, and Bayesian Orgulity. Michael Nielsen & Rush T. Stewart - 2019 - Ergo: An Open Access Journal of Philosophy 6.
    This essay has two aims. The first is to correct an increasingly popular way of misunderstanding Belot's Orgulity Argument. The Orgulity Argument charges Bayesianism with a defect as a normative epistemology. For concreteness, our argument focuses on Cisewski et al.'s recent rejoinder to Belot. The conditions that underwrite their version of the argument are too strong and Belot does not endorse them on our reading. A more compelling version of the Orgulity Argument than Cisewski et al. present is available, however – a point (...)
  • Determining Maximal Entropy Functions for Objective Bayesian Inductive Logic. Juergen Landes, Soroush Rafiee Rad & Jon Williamson - 2022 - Journal of Philosophical Logic 52 (2):555-608.
    According to the objective Bayesian approach to inductive logic, premisses inductively entail a conclusion just when every probability function with maximal entropy, from all those that satisfy the premisses, satisfies the conclusion. When premisses and conclusion are constraints on probabilities of sentences of a first-order predicate language, however, it is by no means obvious how to determine these maximal entropy functions. This paper makes progress on the problem in the following ways. Firstly, we introduce the concept of a limit in (...)
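    The paper's setting, maximal entropy functions over a first-order predicate language, is far harder than the finite case, but the basic determination step can be previewed there. A toy sketch (invented numbers, not the authors' method): on a finite outcome space with a linear constraint, the maximal entropy function has the exponential form p_i proportional to exp(lam * x_i), and the multiplier lam is pinned down by a one-dimensional root find.

      # Maximum entropy on a finite space under a mean constraint
      # (the 'Brandeis dice' toy problem): maximize -sum p_i log p_i
      # subject to sum p_i = 1 and sum x_i p_i = m.
      import numpy as np
      from scipy.optimize import brentq

      xs = np.arange(1, 7)         # faces of a die
      m = 4.5                      # constrained mean

      def mean_given(lam):
          w = np.exp(lam * xs)     # Lagrangian solution: p_i proportional to exp(lam*x_i)
          return (w / w.sum()) @ xs

      lam = brentq(lambda l: mean_given(l) - m, -10.0, 10.0)
      w = np.exp(lam * xs)
      p = w / w.sum()
      print(np.round(p, 4), p @ xs)   # maximum-entropy distribution, mean 4.5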
  • Reflecting on finite additivity. Leendert Huisman - 2015 - Synthese 192 (6):1785-1797.
    An infinite lottery experiment seems to indicate that Bayesian conditionalization may be inconsistent when the prior credence function is finitely additive because, in that experiment, it conflicts with the principle of reflection. I will show that any other form of updating credences would produce the same conflict, and, furthermore, that the conflict is not between conditionalization and reflection but, instead, between finite additivity and reflection. A correct treatment of the infinite lottery experiment requires a careful treatment of finite additivity. I (...)
  • A Continuum-Valued Logic of Degrees of Probability. Colin Howson - 2014 - Erkenntnis 79 (5):1001-1013.
    Leibniz seems to have been the first to suggest a logical interpretation of probability, but there have always seemed formidable mathematical and interpretational barriers to implementing the idea. De Finetti revived it only, it seemed, to reject it in favour of a purely decision-theoretic approach. In this paper I argue that not only is it possible to view (Bayesian) probability as a continuum-valued logic, but that it has a very close formal kinship with classical propositional logic.
  • Chance and the Continuum Hypothesis. Daniel Hoek - 2021 - Philosophy and Phenomenological Research 103 (3):639-660.
    This paper presents and defends an argument that the continuum hypothesis is false, based on considerations about objective chance and an old theorem due to Banach and Kuratowski. More specifically, I argue that the probabilistic inductive methods standardly used in science presuppose that every proposition about the outcome of a chancy process has a certain chance between 0 and 1. I also argue in favour of the standard view that chances are countably additive. Since it is possible to randomly pick (...)
  • On fair countable lotteries. Casper Storm Hansen - 2017 - Philosophical Studies 174 (11):2787-2794.
    Two reverse supertasks—one new and one invented by Pérez Laraudogoitia—are discussed. Contra Kerkvliet and Pérez Laraudogoitia, it is argued that these supertasks cannot be used to conduct fair infinite lotteries, i.e., lotteries on the set of natural numbers with a uniform probability distribution. The new supertask involves an infinity of gods who collectively select a natural number by each removing one ball from a collection of initially infinitely many balls in a reverse omega-sequence of actions.
  • Fair Countable Lotteries and Reflection. Casper Storm Hansen - 2022 - Acta Analytica 37 (4):595-610.
    The main conclusion is this conditional: If the principle of reflection is a valid constraint on rational credences, then it is not rational to have a uniform credence distribution on a countable outcome space. The argument is a variation on some arguments that are already in the literature, but with crucial differences. The conditional can be used for either a modus ponens or a modus tollens; some reasons for thinking that the former is most reasonable are given.
  • On the Modal Logic of Jeffrey Conditionalization. Zalán Gyenis - 2018 - Logica Universalis 12 (3-4):351-374.
    We continue the investigations initiated in the recent papers where Bayes logics have been introduced to study the general laws of Bayesian belief revision. In Bayesian belief revision a Bayesian agent revises his prior belief by conditionalizing the prior on some evidence using the Bayes rule. In this paper we take the more general Jeffrey formula as a conditioning device and study the corresponding modal logics that we call Jeffrey logics, focusing mainly on the countable case. The containment relations among (...)
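    Since the entry contrasts the Bayes rule with the more general Jeffrey formula, a minimal numerical sketch of the difference (an invented example, not from the paper): Jeffrey conditioning on the partition {E, not-E} shifts the credence in E to a new value q and rescales within each cell, so P_new(A) = q*P(A|E) + (1-q)*P(A|not-E); ordinary conditionalization is the special case q = 1.

      # Jeffrey conditionalization on the partition {E, ~E}.
      # Evidence shifts the credence in E to q without making E certain.
      p = {('A', 'E'): 0.30, ('A', '~E'): 0.25, ('~A', 'E'): 0.20, ('~A', '~E'): 0.25}

      def jeffrey(prior, q):
          """Posterior over the atoms after the credence in E is shifted to q."""
          pE = prior[('A', 'E')] + prior[('~A', 'E')]
          post = {}
          for (a, e), v in prior.items():
              old_cell = pE if e == 'E' else 1.0 - pE    # prior weight of the cell
              new_cell = q if e == 'E' else 1.0 - q      # new weight of the cell
              post[(a, e)] = v / old_cell * new_cell     # ratios within a cell are kept
          return post

      post = jeffrey(p, 0.8)
      print(sum(v for (a, e), v in post.items() if a == 'A'))   # 0.8*0.6 + 0.2*0.5 = 0.58
      print(jeffrey(p, 1.0))       # q = 1 recovers ordinary conditionalization on E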
  • Conditioning using conditional expectations: the Borel–Kolmogorov Paradox. Zalán Gyenis, Gabor Hofer-Szabo & Miklós Rédei - 2017 - Synthese 194 (7):2595-2630.
    The Borel–Kolmogorov Paradox is typically taken to highlight a tension between our intuition that certain conditional probabilities with respect to probability zero conditioning events are well defined and the mathematical definition of conditional probability by Bayes’ formula, which loses its meaning when the conditioning event has probability zero. We argue in this paper that the theory of conditional expectations is the proper mathematical device to conditionalize and that this theory allows conditionalization with respect to probability zero events. The conditional probabilities (...)
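    The conditional-expectation device defended here can at least be previewed in the discrete case (a simplified sketch, not the paper's measure-theoretic construction): for the sigma-algebra generated by a finite partition, E[X | G] is the random variable that is constant on each cell and equal to the cell's probability-weighted average of X, and the conditional probability of an event A is P(A | G) = E[1_A | G]. The philosophically interesting cases involve conditioning events of probability zero, which this finite sketch cannot exhibit.

      # Conditional expectation with respect to the sigma-algebra generated
      # by a finite partition: constant on each cell, equal to the cell average.
      import numpy as np

      omega = np.arange(8)                   # sample space {0, ..., 7}
      prob = np.full(8, 1 / 8)               # a probability measure on it
      X = omega.astype(float)                # a random variable
      cells = [np.array([0, 1, 2, 3]), np.array([4, 5]), np.array([6, 7])]

      def cond_exp(X, prob, cells):
          out = np.empty_like(X)
          for c in cells:
              out[c] = (prob[c] * X[c]).sum() / prob[c].sum()   # cell average
          return out

      print(cond_exp(X, prob, cells))        # [1.5 1.5 1.5 1.5 4.5 4.5 6.5 6.5]
      # Conditional probability of A = {X >= 3}, i.e. E[1_A | G]:
      print(cond_exp((X >= 3).astype(float), prob, cells))   # [0.25 x4, 1.0 x4]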
  • General properties of Bayesian learning as statistical inference determined by conditional expectations. Zalán Gyenis & Miklós Rédei - 2017 - Review of Symbolic Logic 10 (4):719-755.
    We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect to the probability measure. If (...)