  • Bayesian Epistemology. William Talbott - 2006 - Stanford Encyclopedia of Philosophy.
    ‘Bayesian epistemology’ became an epistemological movement in the 20th century, though its two main features can be traced back to the eponymous Reverend Thomas Bayes (c. 1701-61). Those two features are: (1) the introduction of a formal apparatus for inductive logic; (2) the introduction of a pragmatic self-defeat test (as illustrated by Dutch Book Arguments) for epistemic rationality as a way of extending the justification of the laws of deductive logic to include a justification for the laws of inductive logic. (...)
  • A Dilemma for Solomonoff Prediction. Sven Neth - 2023 - Philosophy of Science 90 (2):288-306.
    The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of Universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable (...)
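As a rough illustration of the prior Neth discusses: the Solomonoff prior weights each hypothesis in inverse exponential proportion to its Kolmogorov complexity. A minimal sketch, using program-string length as a stand-in for the uncomputable complexity K (the hypotheses and their "programs" here are invented for illustration):

```python
from fractions import Fraction

# Toy proxy: the "complexity" of a hypothesis is the length of its program
# string. The Solomonoff prior weights a hypothesis h roughly as 2 ** (-K(h)),
# where K is Kolmogorov complexity (uncomputable; length is only a stand-in).
hypotheses = {"0": "print zeros", "01": "alternate", "0110": "more structure"}

def toy_prior(programs):
    weights = {h: Fraction(1, 2 ** len(h)) for h in programs}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

prior = toy_prior(hypotheses)
# Shorter programs get exponentially more prior mass.
assert prior["0"] > prior["01"] > prior["0110"]
```

The relativity problem in the abstract shows up here as the choice of encoding: a different universal machine assigns different program lengths, hence a different prior.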
  • How to be an imprecise impermissivist. Seamus Bradley - manuscript.
    Rational credence should be coherent in the sense that your attitudes should not leave you open to a sure loss. Rational credence should be such that you can learn when confronted with relevant evidence. Rational credence should not be sensitive to irrelevant differences in the presentation of the epistemic situation. We explore the extent to which orthodox probabilistic approaches to rational credence can satisfy these three desiderata and find them wanting. We demonstrate that an imprecise probability approach does better. Along (...)
  • Peirce, Pedigree, Probability. Rush T. Stewart & Tom F. Sterkenburg - 2022 - Transactions of the Charles S. Peirce Society 58 (2):138-166.
    An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent they grant such a conception of probability is viable at all, revert back to pedigree epistemology. A thoroughgoing rejection of pedigree in the (...)
  • Jeffrey Meets Kolmogorov: A General Theory of Conditioning. Alexander Meehan & Snow Zhang - 2020 - Journal of Philosophical Logic 49 (5):941-979.
    Jeffrey conditionalization is a rule for updating degrees of belief in light of uncertain evidence. It is usually assumed that the partitions involved in Jeffrey conditionalization are finite and only contain positive-credence elements. But there are interesting examples, involving continuous quantities, in which this is not the case. Can Jeffrey conditionalization be generalized to accommodate continuous cases? Meanwhile, several authors, such as Kenny Easwaran and Michael Rescorla, have been interested in Kolmogorov’s theory of regular conditional distributions as a possible (...)
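For the finite, positive-credence case that Meehan and Zhang generalize beyond, Jeffrey's rule itself is easy to state: P_new(A) = Σᵢ P(A | Eᵢ)·qᵢ, where the qᵢ are the agent's new credences over the evidence partition. A minimal sketch on an invented four-world toy space:

```python
# Minimal sketch of Jeffrey conditionalization on a finite partition with
# positive-credence cells. The worlds and numbers are invented for illustration.

def jeffrey_update(prior, partition, new_partition_probs, event):
    """P_new(event) = sum_i P(event | E_i) * q_i  (Jeffrey's rule)."""
    posterior = 0.0
    for cell, q in zip(partition, new_partition_probs):
        p_cell = sum(prior[w] for w in cell)
        p_event_and_cell = sum(prior[w] for w in cell if w in event)
        posterior += (p_event_and_cell / p_cell) * q
    return posterior

# Worlds: (H, E) pairs, with H in {h, ~h} and E in {e, ~e}.
prior = {("h", "e"): 0.3, ("h", "~e"): 0.2, ("~h", "e"): 0.1, ("~h", "~e"): 0.4}
E = [{("h", "e"), ("~h", "e")}, {("h", "~e"), ("~h", "~e")}]  # partition by evidence
H = {("h", "e"), ("h", "~e")}                                  # the hypothesis event

# Experience shifts credence in e from 0.4 to 0.8 without reaching certainty.
p_h_new = jeffrey_update(prior, E, [0.8, 0.2], H)
```

As a sanity check, feeding back the prior partition probabilities (0.4, 0.6) returns the prior P(H) = 0.5 unchanged, as it should. The continuous cases the abstract mentions are exactly those where some cells get probability zero and this finite recipe breaks down.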
  • Persistent Disagreement and Polarization in a Bayesian Setting. Michael Nielsen & Rush T. Stewart - 2021 - British Journal for the Philosophy of Science 72 (1):51-78.
    For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
  • Another Approach to Consensus and Maximally Informed Opinions with Increasing Evidence. Rush T. Stewart & Michael Nielsen - 2018 - Philosophy of Science (2):236-254.
    Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. (...)
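The merging phenomenon this abstract describes can be seen in miniature with precise priors: two agents with quite different Beta priors over a coin's bias, fed the same growing stream of flips, end up close in posterior mean. A toy simulation (the priors, bias, and sample sizes are illustrative, not from the paper):

```python
import random

# Sketch of merging of opinions: two Bayesian agents with different Beta
# priors over a coin's bias update on the same growing stream of flips.
random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(10_000)]

def posterior_mean(alpha, beta, data):
    # Beta(alpha, beta) prior + Bernoulli data -> Beta posterior mean.
    heads = sum(data)
    return (alpha + heads) / (alpha + beta + len(data))

# Agent 1: flat Beta(1, 1) prior. Agent 2: opinionated Beta(20, 2) prior.
early_gap = abs(posterior_mean(1, 1, flips[:10]) - posterior_mean(20, 2, flips[:10]))
late_gap = abs(posterior_mean(1, 1, flips) - posterior_mean(20, 2, flips))
assert late_gap < early_gap  # disagreement shrinks with shared evidence
```

Note that this only illustrates the classical precise-probability picture; the paper's contribution concerns merging for sets of probability measures updated by Jeffrey conditioning.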
  • On the role of explanatory and systematic power in scientific reasoning. Peter Brössel - 2015 - Synthese 192 (12):3877-3913.
    The paper investigates measures of explanatory power and how to define the inference schema “Inference to the Best Explanation”. It argues that these measures can also be used to quantify the systematic power of a hypothesis and the inference schema “Inference to the Best Systematization” is defined. It demonstrates that systematic power is a fruitful criterion for theory choice and IBS is truth-conducive. It also shows that even radical Bayesians must admit that systematic power is an integral component of Bayesian (...)
  • Distention for Sets of Probabilities. Rush T. Stewart & Michael Nielsen - 2022 - Philosophy of Science 89 (3):604-620.
    Bayesians often appeal to “merging of opinions” to rebut charges of excessive subjectivity. But what happens in the short run is often of greater interest than what happens in the limit. Seidenfeld and coauthors use this observation as motivation for investigating the counterintuitive short run phenomenon of dilation, since, they allege, dilation is “the opposite” of asymptotic merging of opinions. The measure of uncertainty relevant for dilation, however, is not the one relevant for merging of opinions. We explicitly investigate the (...)
  • A New Argument for Kolmogorov Conditionalization. Michael Nielsen - 2021 - Review of Symbolic Logic 14 (4):1-16.
    This paper contributes to a recent research program that extends arguments supporting elementary conditionalization to arguments supporting conditionalization with general, measure-theoretic conditional probabilities. I begin by suggesting an amendment to the framework that Rescorla (2018) has used to characterize regular conditional probabilities in terms of avoiding Dutch book. If we wish to model learning scenarios in which an agent gains complete membership knowledge about some subcollection of the events of interest to her, then we should focus on updating policies that (...)
  • Speed-Optimal Induction and Dynamic Coherence. Michael Nielsen & Eric Wofsey - 2022 - British Journal for the Philosophy of Science 73 (2):439-455.
    A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence—a long-run success condition—induces dynamic coherence in the short run.
  • Obligation, Permission, and Bayesian Orgulity. Michael Nielsen & Rush T. Stewart - 2019 - Ergo: An Open Access Journal of Philosophy 6.
    This essay has two aims. The first is to correct an increasingly popular way of misunderstanding Belot's Orgulity Argument. The Orgulity Argument charges Bayesianism with a defect as a normative epistemology. For concreteness, our argument focuses on Cisewski et al.'s recent rejoinder to Belot. The conditions that underwrite their version of the argument are too strong, and Belot does not endorse them on our reading. A more compelling version of the Orgulity Argument than Cisewski et al. present is available, however---a point (...)
  • Putnam’s Diagonal Argument and the Impossibility of a Universal Learning Machine. Tom F. Sterkenburg - 2019 - Erkenntnis 84 (3):633-656.
    Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular, resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin proposal is (...)
  • On the truth-convergence of open-minded Bayesianism. Tom F. Sterkenburg & Rianne de Heide - 2022 - Review of Symbolic Logic 15 (1):64-100.
    Wenmackers and Romeijn (2016) formalize ideas going back to Shimony (1970) and Putnam (1963) into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
  • Learning and Pooling, Pooling and Learning. Rush T. Stewart & Ignacio Ojea Quintana - 2018 - Erkenntnis 83 (3):1-21.
    We explore which types of probabilistic updating commute with convex IP pooling. Positive results are stated for Bayesian conditionalization, imaging, and a certain parameterization of Jeffrey conditioning. This last observation is obtained with the help of a slight generalization of a characterization of externally Bayesian pooling operators due to Wagner (2009). These results strengthen the case that pooling should go by imprecise probabilities since no precise pooling method is as versatile.
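A familiar background fact for this entry is that precise linear opinion pooling does not, in general, commute with Bayesian conditionalization, which is part of why pooling sets of probabilities is attractive. A small numerical check on an invented three-world space (all numbers illustrative):

```python
# Numerical check of the familiar fact that precise linear opinion pooling
# does not, in general, commute with Bayesian conditionalization.
# Toy space: three worlds w1, w2, w3; evidence event E = {w1, w2}.

def condition(p, event):
    z = sum(p[w] for w in event)
    return {w: (p[w] / z if w in event else 0.0) for w in p}

def linear_pool(p, q, weight=0.5):
    return {w: weight * p[w] + (1 - weight) * q[w] for w in p}

p = {"w1": 0.6, "w2": 0.2, "w3": 0.2}
q = {"w1": 0.1, "w2": 0.6, "w3": 0.3}
E = {"w1", "w2"}

pool_then_update = condition(linear_pool(p, q), E)
update_then_pool = linear_pool(condition(p, E), condition(q, E))
# The two orders disagree on w1, so the operations fail to commute.
assert abs(pool_then_update["w1"] - update_then_pool["w1"]) > 0.01
```

Pooling the convex set of both credences, rather than a single weighted average, is the kind of imprecise move the abstract's positive results concern.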
  • A Dutch Book Theorem and Converse Dutch Book Theorem for Kolmogorov Conditionalization. Michael Rescorla - 2018 - Review of Symbolic Logic 11 (4):705-735.
    This paper discusses how to update one’s credences based on evidence that has initial probability 0. I advance a diachronic norm, Kolmogorov Conditionalization, that governs credal reallocation in many such learning scenarios. The norm is based upon Kolmogorov’s theory of conditional probability. I prove a Dutch book theorem and converse Dutch book theorem for Kolmogorov Conditionalization. The two theorems establish Kolmogorov Conditionalization as the unique credal reallocation rule that avoids a sure loss in the relevant learning scenarios.
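One way to build intuition for conditioning on probability-zero evidence is as a limit of ordinary conditioning on shrinking neighborhoods of that evidence. A Monte Carlo sketch (the model, threshold, and sample sizes are invented, and this limit construction is an intuition pump, not Rescorla's formal apparatus, which works with regular conditional distributions directly):

```python
import random

# Sketch of conditioning on a probability-zero event via shrinking intervals.
# Kolmogorov's theory gives meaning to P(A | X = x) when P(X = x) = 0; here
# we approximate it by ordinary conditioning on X in (x - eps, x + eps).
random.seed(1)
samples = [(x, x * x) for x in (random.random() for _ in range(200_000))]

def cond_prob(eps, x0=0.5):
    # Estimate P(Y > 0.2 | X near x0) among sampled (X, Y) = (X, X**2) pairs.
    near = [y for x, y in samples if abs(x - x0) < eps]
    return sum(y > 0.2 for y in near) / len(near)

# As eps shrinks, the estimate approaches P(Y > 0.2 | X = 0.5),
# which is 1 here since 0.5**2 = 0.25 > 0.2.
estimates = [cond_prob(eps) for eps in (0.2, 0.05, 0.01)]
```

The subtlety the surrounding literature wrestles with is that such limits, and the conditional distributions they gesture at, depend on how the conditioning sub-σ-algebra is chosen.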
  • Having a look at the Bayes Blind Spot. Miklós Rédei & Zalán Gyenis - 2019 - Synthese 198 (4):3801-3832.
    The Bayes Blind Spot of a Bayesian Agent is, by definition, the set of probability measures on a Boolean σ-algebra that are absolutely continuous with respect to the background probability measure of a Bayesian Agent on the algebra and which the Bayesian Agent cannot learn by a single conditionalization no matter what evidence he has about the elements in the Boolean σ (...)
  • Deterministic Convergence and Strong Regularity. Michael Nielsen - 2018 - British Journal for the Philosophy of Science 71 (4):1461-1491.
    Bayesians since Savage (1972) have appealed to asymptotic results to counter charges of excessive subjectivity. Their claim is that objectionable differences in prior probability judgments will vanish as agents learn from evidence, and individual agents will converge to the truth. Glymour (1980), Earman (1992) and others have voiced the complaint that the theorems used to support these claims tell us, not how probabilities updated on evidence will actually behave in the limit, but merely how Bayesian agents believe they will behave, suggesting (...)
  • Kolmogorov Conditionalizers Can Be Dutch Booked. Alexander Meehan & Snow Zhang - forthcoming - Review of Symbolic Logic:1-36.
    A vexing question in Bayesian epistemology is how an agent should update on evidence which she assigned zero prior credence. Some theorists have suggested that, in such cases, the agent should update by Kolmogorov conditionalization, a norm based on Kolmogorov’s theory of regular conditional distributions. However, it turns out that in some situations, a Kolmogorov conditionalizer will plan to always assign a posterior credence of zero to the evidence she learns. Intuitively, such a plan is irrational and easily Dutch bookable. (...)
  • Are scrutability conditionals rationally deniable? Jens Kipper & Zeynep Soysal - 2021 - Analysis 81 (3):452-461.
    Chalmers has argued that Bayesianism supports the existence of a priori truths, since it entails that scrutability conditionals are not rationally revisable. However, as we argue, Chalmers's arguments leave open that every proposition is rationally deniable, which would be devastating for large parts of his philosophical program. We suggest that Chalmers should appeal to well-known convergence theorems to argue that ideally rational subjects converge on the truth of scrutability conditionals. However, our discussion reveals that showing that these theorems apply in (...)
  • General properties of Bayesian learning as statistical inference determined by conditional expectations. Zalán Gyenis & Miklós Rédei - 2017 - Review of Symbolic Logic 10 (4):719-755.
    We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect to the probability measure. If (...)