  • Updating, supposing, and maxent.Brian Skyrms - 1987 - Theory and Decision 22 (3):225-246.
  • Maximum entropy inference as a special case of conditionalization.Brian Skyrms - 1985 - Synthese 63 (1):55 - 74.
  • Dynamic coherence and probability kinematics.Brian Skyrms - 1987 - Philosophy of Science 54 (1):1-20.
    The question of coherence of rules for changing degrees of belief in the light of new evidence is studied, with special attention being given to cases in which evidence is uncertain. Belief change by the rule of conditionalization on an appropriate proposition and belief change by "probability kinematics" on an appropriate partition are shown to have like status.
  • Languages and Designs for Probability Judgment.Glenn Shafer & Amos Tversky - 1985 - Cognitive Science 9 (3):309-339.
    Theories of subjective probability are viewed as formal languages for analyzing evidence and expressing degrees of belief. This article focuses on two probability languages, the Bayesian language and the language of belief functions (Shafer, 1976). We describe and compare the semantics (i.e., the meaning of the scale) and the syntax (i.e., the formal calculus) of these languages. We also investigate some of the designs for probability judgment afforded by the two languages.
  • Jeffrey's rule of conditioning.Glenn Shafer - 1981 - Philosophy of Science 48 (3):337-362.
    Richard Jeffrey's generalization of Bayes' rule of conditioning follows, within the theory of belief functions, from Dempster's rule of combination and the rule of minimal extension. Both Jeffrey's rule and the theory of belief functions can and should be construed constructively, rather than normatively or descriptively. The theory of belief functions gives a more thorough analysis of how beliefs might be constructed than Jeffrey's rule does. The inadequacy of Bayesian conditioning is much more general than Jeffrey's examples of uncertain perception (...)
  • Changing minds in a changing world.Wolfgang Schwarz - 2012 - Philosophical Studies 159 (2):219-239.
    I defend a general rule for updating beliefs that takes into account both the impact of new evidence and changes in the subject’s location. The rule combines standard conditioning with a shifting operation that moves the center of each doxastic possibility forward to the next point where information arrives. I show that well-known arguments for conditioning lead to this combination when centered information is taken into account. I also discuss how my proposal relates to other recent proposals, what results it (...)
  • A new resolution of the Judy Benjamin Problem.Igor Douven & Jan-Willem Romeijn - 2011 - Mind 120 (479):637 - 670.
    A paper on how to adapt your probabilistic beliefs when learning a conditional.
  • On the proper formulation of conditionalization.Michael Rescorla - 2021 - Synthese 198 (3):1935-1965.
    Conditionalization is a norm that governs the rational reallocation of credence. I distinguish between factive and non-factive formulations of Conditionalization. Factive formulations assume that the conditioning proposition is true. Non-factive formulations allow that the conditioning proposition may be false. I argue that non-factive formulations provide a better foundation for philosophical and scientific applications of Bayesian decision theory. I furthermore argue that previous formulations of Conditionalization, factive and non-factive alike, have almost universally ignored, downplayed, or mishandled a crucial causal aspect of (...)
  • Having a look at the Bayes Blind Spot.Miklós Rédei & Zalán Gyenis - 2019 - Synthese 198 (4):3801-3832.
    The Bayes Blind Spot of a Bayesian Agent is, by definition, the set of probability measures on a Boolean σ-algebra that are absolutely continuous with respect to the background probability measure of a Bayesian Agent on the algebra and which the Bayesian Agent cannot learn by a single conditionalization no matter what evidence he has about the elements in the Boolean σ-algebra (...)
  • Probabilistic Belief Contraction.Raghav Ramachandran, Arthur Ramer & Abhaya C. Nayak - 2012 - Minds and Machines 22 (4):325-351.
    Probabilistic belief contraction has been a much neglected topic in the field of probabilistic reasoning. This is due to the difficulty in establishing a reasonable reversal of the effect of Bayesian conditionalization on a probabilistic distribution. We show that indifferent contraction, a solution proposed by Ramer to this problem through a judicious use of the principle of maximum entropy, is a probabilistic version of a full meet contraction. We then propose variations of indifferent contraction, using both the Shannon entropy measure (...)
  • Open-Minded Orthodox Bayesianism by Epsilon-Conditionalization.Eric Raidl - 2020 - British Journal for the Philosophy of Science 71 (1):139-176.
    Orthodox Bayesianism endorses revising by conditionalization. This paper investigates the zero-raising problem, or equivalently the certainty-dropping problem of orthodox Bayesianism: previously neglected possibilities remain neglected, although the new evidence might suggest otherwise. Yet, one may want to model open-minded agents, that is, agents capable of raising previously neglected possibilities. Different reasons can be given for open-mindedness, one of which is fallibilism. The paper proposes a family of open-minded propositional revisions depending on a parameter ϵ. The basic idea is this: first (...)
  • Radical Pooling and Imprecise Probabilities.Ignacio Ojea Quintana - forthcoming - Erkenntnis:1-28.
    This paper focuses on radical pooling, or the question of how to aggregate credences when there is a fundamental disagreement about which is the relevant logical space for inquiry. The solution advanced is based on the notion of consensus as common ground, where agents can find it by suspending judgment on logical possibilities. This is exemplified with cases of scientific revolution. On a formal level, the proposal uses algebraic joins and imprecise probabilities; which is shown to be compatible with the (...)
  • A note on deterministic updating and van Fraassen’s symmetry argument for conditionalization.Richard Pettigrew - 2021 - Philosophical Studies 178 (2):665-673.
    In a recent paper, Pettigrew argues that the pragmatic and epistemic arguments for Bayesian updating are based on an unwarranted assumption, which he calls deterministic updating, and which says that your updating plan should be deterministic. In that paper, Pettigrew did not consider whether the symmetry arguments due to Hughes and van Fraassen make the same assumption (Scientific inquiry in philosophical perspective. University Press of America, Lanham, pp. 183–223, 1987). In this note, I show that they do.
  • Aggregating incoherent agents who disagree.Richard Pettigrew - 2019 - Synthese 196 (7):2737-2776.
    In this paper, we explore how we should aggregate the degrees of belief of a group of agents to give a single coherent set of degrees of belief, when at least some of those agents might be probabilistically incoherent. There are a number of ways of aggregating degrees of belief, and there are a number of ways of fixing incoherent degrees of belief. When we have picked one of each, should we aggregate first and then fix, or fix first and (...)
  • Updating as Communication.Sarah Moss - 2012 - Philosophy and Phenomenological Research 85 (2):225-248.
    Traditional procedures for rational updating fail when it comes to self-locating opinions, such as your credences about where you are and what time it is. This paper develops an updating procedure for rational agents with self-locating beliefs. In short, I argue that rational updating can be factored into two steps. The first step uses information you recall from your previous self to form a hypothetical credence distribution, and the second step changes this hypothetical distribution to reflect information you have genuinely (...)
  • Simultaneous belief updates via successive Jeffrey conditionalization.Ilho Park - 2013 - Synthese 190 (16):3511-3533.
    This paper discusses simultaneous belief updates. I argue here that modeling such belief updates using the Principle of Minimum Information can be regarded as applying Jeffrey conditionalization successively, and so that, contrary to what many probabilists have thought, the simultaneous belief updates can be successfully modeled by means of Jeffrey conditionalization.
  • Epistemology Formalized.Sarah Moss - 2013 - Philosophical Review 122 (1):1-43.
    This paper argues that just as full beliefs can constitute knowledge, so can properties of your credence distribution. The resulting notion of probabilistic knowledge helps us give a natural account of knowledge ascriptions embedding language of subjective uncertainty, and a simple diagnosis of probabilistic analogs of Gettier cases. Just like propositional knowledge, probabilistic knowledge is factive, safe, and sensitive. And it helps us build knowledge-based norms of action without accepting implausible semantic assumptions or endorsing the claim that knowledge is interest-relative.
  • Jeffrey Meets Kolmogorov: A General Theory of Conditioning.Alexander Meehan & Snow Zhang - 2020 - Journal of Philosophical Logic 49 (5):941-979.
    Jeffrey conditionalization is a rule for updating degrees of belief in light of uncertain evidence. It is usually assumed that the partitions involved in Jeffrey conditionalization are finite and only contain positive-credence elements. But there are interesting examples, involving continuous quantities, in which this is not the case. Q1 Can Jeffrey conditionalization be generalized to accommodate continuous cases? Meanwhile, several authors, such as Kenny Easwaran and Michael Rescorla, have been interested in Kolmogorov’s theory of regular conditional distributions as a possible (...)
  • Probability Kinematics and Probability Dynamics.Lydia McGrew - 2010 - Journal of Philosophical Research 35:89-105.
    Richard Jeffrey developed the formula for probability kinematics with the intent that it would show that strong foundations are epistemologically unnecessary. But the reasons that support strong foundationalism are considerations of dynamics rather than kinematics. The strong foundationalist is concerned with the origin of epistemic force; showing how epistemic force is propagated therefore cannot undermine his position. The weakness of personalism is evident in the difficulty the personalist has in giving a principled answer to the question of when the conditions (...)
  • The principle of maximum entropy and a problem in probability kinematics.Stefan Lukits - 2014 - Synthese 191 (7):1-23.
    Sometimes we receive evidence in a form that standard conditioning (or Jeffrey conditioning) cannot accommodate. The principle of maximum entropy (MAXENT) provides a unique solution for the posterior probability distribution based on the intuition that the information gain consistent with assumptions and evidence should be minimal. Opponents of objective methods to determine these probabilities prominently cite van Fraassen’s Judy Benjamin case to undermine the generality of maxent. This article shows that an intuitive approach to Judy Benjamin’s case supports maxent. This (...)
  • Probability logic, logical probability, and inductive support.Isaac Levi - 2010 - Synthese 172 (1):97-118.
    This paper seeks to defend the following conclusions: The program advanced by Carnap and other necessarians for probability logic has little to recommend it except for one important point. Credal probability judgments ought to be adapted to changes in evidence or states of full belief in a principled manner in conformity with the inquirer’s confirmational commitments—except when the inquirer has good reason to modify his or her confirmational commitment. Probability logic ought to spell out the constraints on rationally coherent confirmational (...)
  • An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy.Hannes Leitgeb & Richard Pettigrew - 2010 - Philosophy of Science 77 (2):236-272.
    One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism (...)
  • Multiple Studies and Evidential Defeat.Matthew Kotzen - 2011 - Noûs 47 (1):154-180.
  • Imprecise Uncertain Reasoning: A Distributional Approach.Gernot D. Kleiter - 2018 - Frontiers in Psychology 9.
  • A defense of imprecise credences in inference and decision making.James M. Joyce - 2010 - Philosophical Perspectives 24 (1):281-323.
  • Alias Smith and Jones: The testimony of the senses. [REVIEW]Richard C. Jeffrey - 1987 - Erkenntnis 26 (3):391 - 399.
  • How much are bold Bayesians favoured?Pavel Janda - 2022 - Synthese 200 (4):1-20.
    Rédei and Gyenis recently displayed strong constraints of Bayesian learning. However, they also presented a positive result for Bayesianism. Despite the limited significance of this positive result, I find it useful to discuss its two possible strengthenings to present new results and open new questions about the limits of Bayesianism. First, I will show that one cannot strengthen the positive result by restricting the evidence to so-called “certain evidence”. Secondly, strengthening the result by restricting the partitions—as parts of one’s evidence—to (...)
  • A Logic For Inductive Probabilistic Reasoning.Manfred Jaeger - 2005 - Synthese 144 (2):181-248.
    Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A” infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system (...)
  • Learning from Simple Indicative Conditionals.Leendert M. Huisman - 2017 - Erkenntnis 82 (3):583-601.
    An agent who receives information in the form of an indicative conditional statement and who trusts her source will modify her credences to bring them in line with the conditional. I will argue that the agent, upon the acquisition of such information, should, in general, expand her prior credence function to an indeterminate posterior one; that is, to a set of credence functions. Two different ways the agent might interpret the conditional will be presented, and the properties of the resulting (...)
  • Theories of probability.Colin Howson - 1995 - British Journal for the Philosophy of Science 46 (1):1-32.
    My title is intended to recall Terence Fine's excellent survey, Theories of Probability [1973]. I shall consider some developments that have occurred in the intervening years, and try to place some of the theories he discussed in what is now a slightly longer perspective. Completeness is not something one can reasonably hope to achieve in a journal article, and any selection is bound to reflect a view of what is salient. In a subject as prone to dispute as this, there (...)
  • Bayesian rules of updating.Colin Howson - 1996 - Erkenntnis 45 (2-3):195 - 208.
    This paper discusses the Bayesian updating rules of ordinary and Jeffrey conditionalisation. Their justification has been a topic of interest for the last quarter century, and several strategies proposed. None has been accepted as conclusive, and it is argued here that this is for a good reason; for by extending the domain of the probability function to include propositions describing the agent's present and future degrees of belief one can systematically generate a class of counterexamples to the rules. Dynamic Dutch (...)
  • Bayesian conditionalization and probability kinematics.Colin Howson & Allan Franklin - 1994 - British Journal for the Philosophy of Science 45 (2):451-466.
  • Probability, logic, and probability logic.Alan Hájek - 2001 - In Lou Goble (ed.), The Blackwell Guide to Philosophical Logic. Oxford, UK: Blackwell. pp. 362-384.
    ‘Probability logic’ might seem like an oxymoron. Logic traditionally concerns matters immutable, necessary and certain, while probability concerns the uncertain, the random, the capricious. Yet our subject has a distinguished pedigree. Ramsey begins his classic “Truth and Probability” with the words: “In this essay the Theory of Probability is taken as a branch of logic. … “speaks of “the logic of the probable.” And more recently, regards probabilities as estimates of truth values, and thus probability theory as a natural outgrowth (...)
  • The coherence argument against conditionalization.Matthias Hild - 1998 - Synthese 115 (2):229-258.
    I re-examine Coherence Arguments (Dutch Book Arguments, No Arbitrage Arguments) for diachronic constraints on Bayesian reasoning. I suggest to replace the usual game–theoretic coherence condition with a new decision–theoretic condition ('Diachronic Sure Thing Principle'). The new condition meets a large part of the standard objections against the Coherence Argument and frees it, in particular, from a commitment to additive utilities. It also facilitates the proof of the Converse Dutch Book Theorem. I first apply the improved Coherence Argument to van Fraassen's (...)
  • Auto-epistemology and updating.Matthias Hild - 1998 - Philosophical Studies 92 (3):321-361.
  • Three models of sequential belief updating on uncertain evidence.James Hawthorne - 2004 - Journal of Philosophical Logic 33 (1):89-123.
    Jeffrey updating is a natural extension of Bayesian updating to cases where the evidence is uncertain. But, the resulting degrees of belief appear to be sensitive to the order in which the uncertain evidence is acquired, a rather un-Bayesian looking effect. This order dependence results from the way in which basic Jeffrey updating is usually extended to sequences of updates. The usual extension seems very natural, but there are other plausible ways to extend Bayesian updating that maintain order-independence. I will (...)
  • On the Modal Logic of Jeffrey Conditionalization.Zalán Gyenis - 2018 - Logica Universalis 12 (3-4):351-374.
    We continue the investigations initiated in the recent papers where Bayes logics have been introduced to study the general laws of Bayesian belief revision. In Bayesian belief revision a Bayesian agent revises his prior belief by conditionalizing the prior on some evidence using the Bayes rule. In this paper we take the more general Jeffrey formula as a conditioning device and study the corresponding modal logics that we call Jeffrey logics, focusing mainly on the countable case. The containment relations among (...)
  • Diachronic Dutch Books and Evidential Import.J. Dmitri Gallow - 2019 - Philosophy and Phenomenological Research 99 (1):49-80.
    A handful of well-known arguments (the 'diachronic Dutch book arguments') rely upon theorems establishing that, in certain circumstances, you are immune from sure monetary loss (you are not 'diachronically Dutch bookable') if and only if you adopt the strategy of conditionalizing (or Jeffrey conditionalizing) on whatever evidence you happen to receive. These theorems require non-trivial assumptions about which evidence you might acquire---in the case of conditionalization, the assumption is that, if you might learn that e, then it is not the (...)
  • Explicating formal epistemology: Carnap's legacy as Jeffrey's radical probabilism.Christopher F. French - 2015 - Studies in History and Philosophy of Science Part A 53:33–42.
  • What is the “Equal Weight View'?Branden Fitelson & David Jehle - 2009 - Episteme 6 (3):280-293.
    In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to (...)
  • Learning from Conditionals.Benjamin Eva, Stephan Hartmann & Soroush Rafiee Rad - 2020 - Mind 129 (514):461-508.
    In this article, we address a major outstanding question of probabilistic Bayesian epistemology: how should a rational Bayesian agent update their beliefs upon learning an indicative conditional? A number of authors have recently contended that this question is fundamentally underdetermined by Bayesian norms, and hence that there is no single update procedure that rational agents are obliged to follow upon learning an indicative conditional. Here we resist this trend and argue that a core set of widely accepted Bayesian norms is (...)
  • Bayesian argumentation and the value of logical validity.Benjamin Eva & Stephan Hartmann - 2018 - Psychological Review 125 (5):806-821.
    According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that utilizes a new class of Bayesian learning methods that are better suited to modelling dynamic and conditional inferences than (...)
  • Bayesianism I: Introduction and Arguments in Favor.Kenny Easwaran - 2011 - Philosophy Compass 6 (5):312-320.
    Bayesianism is a collection of positions in several related fields, centered on the interpretation of probability as something like degree of belief, as contrasted with relative frequency, or objective chance. However, Bayesianism is far from a unified movement. Bayesians are divided about the nature of the probability functions they discuss; about the normative force of this probability function for ordinary and scientific reasoning and decision making; and about what relation (if any) holds between Bayesian and non-Bayesian concepts.
  • The value of cost-free uncertain evidence.Patryk Dziurosz-Serafinowicz & Dominika Dziurosz-Serafinowicz - 2021 - Synthese 199 (5-6):13313-13343.
    We explore the question of whether cost-free uncertain evidence is worth waiting for in advance of making a decision. A classical result in Bayesian decision theory, known as the value of evidence theorem, says that, under certain conditions, when you update your credences by conditionalizing on some cost-free and certain evidence, the subjective expected utility of obtaining this evidence is never less than the subjective expected utility of not obtaining it. We extend this result to a type of update method, (...)
  • Belief revision generalized: A joint characterization of Bayes's and Jeffrey's rules.Franz Dietrich, Christian List & Richard Bradley - 2016 - Journal of Economic Theory 162:352-371.
    We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs (...)
  • On justifying an account of moral goodness to each individual: contractualism, utilitarianism, and prioritarianism.Richard Pettigrew - manuscript
    Many welfarists wish to assign to each possible state of the world a numerical value that measures something like its moral goodness. How are we to determine this quantity? This paper proposes a contractualist approach: a legitimate measure of moral goodness is one that could be justified to each member of the population in question. How do we justify a measure of moral goodness to each individual? Each individual recognises the measure of moral goodness must be a compromise between the (...)
  • Cambridge and Vienna: Frank P. Ramsey and the Vienna Circle.Maria Carla Galavotti (ed.) - 2004 - Dordrecht: Springer Verlag.
    The Institute Vienna Circle held a conference in Vienna in 2003, Cambridge and Vienna (...)
  • Trivalent Conditionals: Stalnaker's Thesis and Bayesian Inference.Paul Égré, Lorenzo Rossi & Jan Sprenger - manuscript
    This paper develops a trivalent semantics for indicative conditionals and extends it to a probabilistic theory of valid inference and inductive learning with conditionals. On this account, (i) all complex conditionals can be rephrased as simple conditionals, connecting our account to Adams's theory of p-valid inference; (ii) we obtain Stalnaker's Thesis as a theorem while avoiding the well-known triviality results; (iii) we generalize Bayesian conditionalization to an updating principle for conditional sentences. The final result is a unified semantic and probabilistic (...)
  • Bayes Nets and Rationality.Stephan Hartmann - 2021 - In The Handbook of Rationality. Boston, Massachusetts, USA.
    Bayes nets are a powerful tool for researchers in statistics and artificial intelligence. This chapter demonstrates that they are also of much use for philosophers and psychologists interested in (Bayesian) rationality. To do so, we outline the general methodology of Bayes nets modeling in rationality research and illustrate it with several examples from the philosophy and psychology of reasoning and argumentation. Along the way, we discuss the normative foundations of Bayes nets modeling and address some of the methodological problems it (...)
  • A Generalization of Aumann's Agreement Theorem.Matthias Hild & Mathias Risse - unknown
    The scope of Aumann’s (1976) Agreement Theorem is needlessly limited by its restriction to Conditioning as the update rule. Here we prove the theorem in a more comprehensive framework, in which the evolution of probabilities is represented directly, without deriving new probabilities from new certainties. The framework allows arbitrary update rules subject only to Goldstein’s (1983) requirement that current expectations agree with current expectations of future expectations.