Orthodox Bayesianism is a highly idealized theory of how we ought to live our epistemic lives. One of the most widely discussed idealizations is that of logical omniscience: the assumption that an agent’s degrees of belief must be probabilistically coherent to be rational. It is widely agreed that this assumption is problematic if we want to reason about bounded rationality, logical learning, or other aspects of non-ideal epistemic agency. Yet, we still lack a satisfying way to avoid logical omniscience within a Bayesian framework. Some proposals merely replace logical omniscience with a different logical idealization; others sacrifice all traits of logical competence on the altar of logical non-omniscience. We think a better strategy is available: by enriching the Bayesian framework with tools that allow us to capture what agents can and cannot infer given their limited cognitive resources, we can avoid logical omniscience while retaining the idea that rational degrees of belief are in an important way constrained by the laws of probability. In this paper, we offer a formal implementation of this strategy, show how the resulting framework solves the problem of logical omniscience, and compare it to orthodox Bayesianism as we know it.
Traditional Bayesianism requires that an agent’s degrees of belief be represented by a real-valued, probabilistic credence function. However, in many cases it seems that our evidence is not rich enough to warrant such precision. In light of this, some have proposed that we instead represent an agent’s degrees of belief as a set of credence functions. This way, we can respect the evidence by requiring that the set, often called the agent’s credal state, includes all credence functions that are in some sense compatible with the evidence. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In this article I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid it without compromising the initial evidentialist motivation. 1 Introduction; 2 Precision and Its Problems; 3 Imprecise Bayesianism and Respecting Ambiguous Evidence; 4 Local Belief Inertia; 5 From Local to Global Belief Inertia; 6 Responding to Global Belief Inertia; 7 Conclusion.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
A Bayesian mind is, at its core, a rational mind. Bayesianism is thus well-suited to predict and explain mental processes that best exemplify our ability to be rational. However, evidence from belief acquisition and change appears to show that we do not acquire and update information in a Bayesian way. Instead, the principles of belief acquisition and updating seem grounded in maintaining a psychological immune system rather than in approximating a Bayesian processor.
Following the standard practice in sociology, cultural anthropology and history, sociologists, historians of science and some philosophers of science define scientific communities as groups with shared beliefs, values and practices. In this paper it is argued that in real cases the beliefs of the members of such communities often vary significantly in important ways. This has rather dire implications for the convergence defense against the charge of the excessive subjectivity of subjective Bayesianism because that defense requires that communities of Bayesian inquirers share a significant set of modal beliefs. The important implication is then that given the actual variation in modal beliefs across individuals, either Bayesians cannot claim that actual theories have been objectively confirmed or they must accept that such theories have been confirmed relative only to epistemically insignificant communities.
Many philosophers in the field of meta-ethics believe that rational degrees of confidence in moral judgments should have a probabilistic structure, in the same way as do rational degrees of belief. The current paper examines this position, termed “moral Bayesianism,” from an empirical point of view. To this end, we assessed the extent to which degrees of moral judgments obey the third axiom of the probability calculus (if P(A ∩ B) = 0, then P(A ∪ B) = P(A) + P(B)), known as finite additivity, as compared to degrees of belief on the one hand and degrees of desire on the other. Results generally converged to show that degrees of moral judgment are more similar to degrees of belief than to degrees of desire in this respect. This supports the adoption of a Bayesian approach to the study of moral judgments. To further support moral Bayesianism, we also demonstrated its predictive power. Finally, we discuss the relevance of our results to the meta-ethical debate between moral cognitivists and moral non-cognitivists.
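The finite-additivity test just described has a simple arithmetic core; a minimal sketch in Python, with hypothetical degree-of-judgment values rather than the study's data:

```python
# Finite additivity: if P(A ∩ B) = 0, then P(A ∪ B) = P(A) + P(B).
# Hypothetical elicited degrees of moral judgment (invented for illustration).
p_a = 0.30        # degree of judgment for proposition A
p_b = 0.45        # degree of judgment for proposition B
p_a_and_b = 0.0   # A and B are treated as mutually exclusive
p_a_or_b = 0.75   # elicited degree of judgment for the disjunction A ∪ B

# Judgments satisfy finite additivity when the disjunction's degree
# matches the sum of the disjuncts' degrees (up to rounding).
additive = p_a_and_b == 0.0 and abs(p_a_or_b - (p_a + p_b)) < 1e-9
print(additive)  # True for these illustrative values
```

A probabilistic pattern of judgment predicts that `additive` holds; systematic departures from it would count against moral Bayesianism on this test.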
Chalmers, responding to Braun, continues his earlier arguments for the conclusion that Bayesian considerations favor the Fregean in the debate over the objects of belief in Frege’s puzzle. This short paper gets to the heart of the disagreement over whether Bayesian considerations can tell us anything about Frege’s puzzle and answers, no, they cannot.
How should we update our beliefs when we learn new evidence? Bayesian confirmation theory provides a widely accepted and well understood answer – we should conditionalize. But this theory has a problem with self-locating beliefs, beliefs that tell you where you are in the world, as opposed to what the world is like. To see the problem, consider your current belief that it is January. You might be absolutely, 100%, sure that it is January. But you will soon believe it is February. This type of belief change cannot be modelled by conditionalization. We need some new principles of belief change for this kind of case, which I call belief mutation. In part 1, I defend the Relevance-Limiting Thesis, which says that a change in a purely self-locating belief of the kind that results in belief mutation should not shift your degree of belief in a non-self-locating belief, which can only change by conditionalization. My method is to give detailed analyses of the puzzles which threaten this thesis: Duplication, Sleeping Beauty, and The Prisoner. This also requires giving my own theory of observation selection effects. In part 2, I argue that when self-locating evidence is learnt from a position of uncertainty, it should be conditionalized on in the normal way. I defend this position by applying it to various cases where such evidence is found. I defend the Halfer position in Sleeping Beauty, and I defend the Doomsday Argument and the Fine-Tuning Argument.
Disagreement is a ubiquitous feature of human life, and philosophers have dutifully attended to it. One important question related to disagreement is epistemological: How does a rational person change her beliefs (if at all) in light of disagreement from others? The typical methodology for answering this question is to endorse a steadfast or conciliatory disagreement norm (and not both) on a priori grounds and selected intuitive cases. In this paper, I argue that this methodology is misguided. Instead, a thoroughgoingly Bayesian strategy is what's needed. Such a strategy provides conciliatory norms in appropriate cases and steadfast norms in appropriate cases. I argue, further, that the few extant efforts to address disagreement in the Bayesian spirit are laudable but uncompelling. A modelling, rather than a functional, approach gets us the right norms and is highly general, allowing the epistemologist to deal with (1) multiple epistemic interlocutors, (2) epistemic superiors and inferiors (i.e. not just epistemic peers), and (3) dependence between interlocutors.
The paper discusses the notion of reasoning with comparative moral judgements (i.e. judgements of the form “act a is morally superior to act b”) from the point of view of several meta-ethical positions. Using a simple formal result, it is argued that only a version of moral cognitivism that is committed to the claim that moral beliefs come in degrees can give a normatively plausible account of such reasoning. Some implications of accepting such a version of moral cognitivism are discussed.
A piece of folklore enjoys some currency among philosophical Bayesians, according to which Bayesian agents that, intuitively speaking, spread their credence over the entire space of available hypotheses are certain to converge to the truth. The goals of the present discussion are to show that the kernel of truth in this folklore is in some ways fairly small and to argue that Bayesian convergence-to-the-truth results are a liability for Bayesianism as an account of rationality, since they render a certain sort of arrogance rationally mandatory.
Rational agents have consistent beliefs. Bayesianism is a theory of consistency for partial belief states. Rational agents also respond appropriately to experience. Dogmatism is a theory of how to respond appropriately to experience. Hence, Dogmatism and Bayesianism are theories of two very different aspects of rationality. It's surprising, then, that in recent years it has become common to claim that Dogmatism and Bayesianism are jointly inconsistent: how can two independently consistent theories with distinct subject matter be jointly inconsistent? In this essay I argue that Bayesianism and Dogmatism are inconsistent only with the addition of a specific hypothesis about how the appropriate responses to perceptual experience are to be incorporated into the formal models of the Bayesian. That hypothesis isn't essential either to Bayesianism or to Dogmatism, and so Bayesianism and Dogmatism are jointly consistent. That leaves the matter of how experiences and credences are related, a...
A group is often construed as one agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated “Groupthink”, Russell et al. (2015) require group credences to undergo Bayesian revision whenever new information is learnt, i.e., whenever individual credences undergo Bayesian revision based on this information. To obtain a fully Bayesian group, one should often extend this requirement to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not representable by any event in the domain where credences are held). I propose a taxonomy of six types of ‘group Bayesianism’. They differ in the information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, etc. Six corresponding theorems establish how individual credences must (not) be aggregated to ensure group Bayesianism of any type, respectively. Aggregating through standard averaging is never permitted; instead, different forms of geometric averaging must be used. One theorem—that for public representable information—is essentially Russell et al.’s central result (with minor corrections). Another theorem—that for public non-representable information—fills a gap in the theory of externally Bayesian opinion pooling.
It is a widespread intuition that the coherence of independent reports provides a powerful reason to believe that the reports are true. Formal results by Huemer (1997, “Probability and Coherence Justification,” Southern Journal of Philosophy 35: 463–72), Olsson (2002, “What is the Problem of Coherence and Truth?” Journal of Philosophy XCIX: 246–72; 2005, Against Coherence: Truth, Probability, and Justification, Oxford University Press), and Bovens and Hartmann (2003, Bayesian Epistemology, Oxford University Press) prove that, under certain conditions, coherence cannot increase the probability of the target claim. These formal results, known as ‘the impossibility theorems’, have been widely discussed in the literature. They are taken to have significant epistemic upshot. In particular, they are taken to show that reports must first individually confirm the target claim before the coherence of multiple reports offers any positive confirmation. In this paper, I dispute this epistemic interpretation. The impossibility theorems are consistent with the idea that the coherence of independent reports provides a powerful reason to believe that the reports are true even if the reports do not individually confirm prior to coherence. Once we see that the formal discoveries do not have this implication, we can recover a model of coherence justification consistent with Bayesianism and these results. This paper, thus, seeks to turn the tide of the negative findings for coherence reasoning by defending coherence as a unique source of confirmation.
Chalmers (Mind 120(479): 587–636, 2011a) presents an argument against “referentialism” (and for his own view) that employs Bayesianism. He aims to make progress in a debate over the objects of belief, which seems to be at a standstill between referentialists and non-referentialists. Chalmers’ argument, in sketch, is that Bayesianism is incompatible with referentialism, and natural attempts to salvage the theory, Chalmers contends, require giving up referentialism. Given the power and success of Bayesianism, the incompatibility is prima facie evidence against referentialism. In this paper, I review Chalmers’ arguments and give some responses on behalf of the referentialist.
In the quantum-Bayesian approach to quantum foundations, a quantum state is viewed as an expression of an agent’s personalist Bayesian degrees of belief, or probabilities, concerning the results of measurements. These probabilities obey the usual probability rules as required by Dutch-book coherence, but quantum mechanics imposes additional constraints upon them. In this paper, we explore the question of deriving the structure of quantum-state space from a set of assumptions in the spirit of quantum Bayesianism. The starting point is the representation of quantum states induced by a symmetric informationally complete measurement or SIC. In this representation, the Born rule takes the form of a particularly simple modification of the law of total probability. We show how to derive key features of quantum-state space from (i) the requirement that the Born rule arises as a simple modification of the law of total probability and (ii) a limited number of additional assumptions of a strong Bayesian flavor.
Ted Poston's book Reason and Explanation: A Defense of Explanatory Coherentism is a book worthy of careful study. Poston develops and defends an explanationist theory of (epistemic) justification on which justification is a matter of explanatory coherence which in turn is a matter of conservativeness, explanatory power, and simplicity. He argues that his theory is consistent with Bayesianism. He argues, moreover, that his theory is needed as a supplement to Bayesianism. There are seven chapters. I provide a chapter-by-chapter summary along with some substantive concerns.
State of the art paper on the problem of induction: how to justify the conclusion that ‘all Fs are Gs’ from the premise that ‘all observed Fs are Gs’. The most prominent theories of the contemporary philosophical literature are discussed and analysed, such as: inductivism, reliabilism, the perspective of laws of nature, rationalism, falsificationism, the material theory of induction and probabilistic approaches, according to Carnap, Reichenbach and Bayesianism. In the end, we discuss Goodman's new problem of induction, raised by the grue predicate.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
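For reference, the rule under discussion has a simple arithmetic core; a minimal sketch, with illustrative numbers that are not from the paper:

```python
def conditionalize(p_h_and_e: float, p_e: float) -> float:
    """Updated credence in H after learning E: P(H | E) = P(H ∧ E) / P(E).

    Assumes the evidence E had positive prior credence; conditionalization
    is undefined on zero-probability evidence.
    """
    if p_e <= 0:
        raise ValueError("cannot conditionalize on zero-credence evidence")
    return p_h_and_e / p_e

# Illustrative prior credences:
p_e = 0.4        # prior credence in the evidence E
p_h_and_e = 0.1  # prior credence in the conjunction H ∧ E
print(conditionalize(p_h_and_e, p_e))  # 0.25
```

The zero-credence guard is one face of the underspecification the paper examines: continuous evidence typically has prior probability zero, so this naive division cannot be applied to it directly.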
Schervish (1985b) showed that every forecasting system is noncalibrated for uncountably many data sequences that it might see. This result is strengthened here: from a topological point of view, failure of calibration is typical and calibration rare. Meanwhile, Bayesian forecasters are certain that they are calibrated; this invites worries about the connection between Bayesianism and rationality.
Richard Bradley and others endorse Reverse Bayesianism as the way to model awareness growth. I raise a problem for Reverse Bayesianism—at least for the general version that Bradley endorses—and argue that there is no plausible way to restrict the principle that will give us the right results. To get the right results, we need to pay attention to the attitudes that agents have towards propositions of which they are unaware. This raises more general questions about how awareness growth should be modelled.
In this paper, I consider the relationship between Inference to the Best Explanation and Bayesianism, both of which are well-known accounts of the nature of scientific inference. In Sect. 2, I give a brief overview of Bayesianism and IBE. In Sect. 3, I argue that IBE in its most prominently defended forms is difficult to reconcile with Bayesianism because not all of the items that feature on popular lists of “explanatory virtues”—by means of which IBE ranks competing explanations—have confirmational import. Rather, some of the items that feature on these lists are “informational virtues”—properties that do not make a hypothesis H more probable than some competitor H′ given evidence E, but that, roughly speaking, give that hypothesis greater informative content. In Sect. 4, I consider as a response to my argument a recent version of compatibilism which argues that IBE can provide further normative constraints on the objectively correct probability function. I argue that this response does not succeed, owing to the difficulty of defending with any generality such further normative constraints. Lastly, in Sect. 5, I propose that IBE should be regarded, not as a theory of scientific inference, but rather as a theory of when we ought to “accept” H, where the acceptability of H is fixed by the goals of science and concerns whether H is worthy of commitment as a research program. In this way, IBE and Bayesianism, as I will show, can be made compatible, and thus the Bayesian and the proponent of IBE can be friends.
The purpose of this book is to explain Quantum Bayesianism (‘QBism’) to “people without easy access to mathematical formulas and equations” (4-5). QBism is an interpretation of quantum mechanics that “doesn’t meddle with the technical aspects of the theory [but instead] reinterprets the fundamental terms of the theory and gives them new meaning” (3). The most important motivation for QBism, enthusiastically stated on the book’s cover, is that QBism provides “a way past quantum theory’s paradoxes and puzzles” such that much of the weirdness associated with quantum theory “dissolves under the lens of QBism”.
An influential suggestion about the relationship between Bayesianism and inference to the best explanation holds that IBE functions as a heuristic to approximate Bayesian reasoning. While this view promises to unify Bayesianism and IBE in a very attractive manner, important elements of the view have not yet been spelled out in detail. I present and argue for a heuristic conception of IBE on which IBE serves primarily to locate the most probable available explanatory hypothesis to serve as a working hypothesis in an agent’s further investigations. Along the way, I criticize what I consider to be an overly ambitious conception of the heuristic role of IBE, according to which IBE serves as a guide to absolute probability values. My own conception, by contrast, requires only that IBE can function as a guide to the comparative probability values of available hypotheses. This is shown to be a much more realistic role for IBE given the nature and limitations of the explanatory considerations with which IBE operates.
Various sexist and racist beliefs ascribe certain negative qualities to people of a given sex or race. Epistemic allies are people who think that in normal circumstances rationality requires the rejection of such sexist and racist beliefs upon learning of many counter-instances, i.e. members of these groups who lack the target negative quality. Accordingly, epistemic allies think that those who give up their sexist or racist beliefs in such circumstances are rationally responding to their evidence, while those who do not are irrational in failing to respond to their evidence by giving up their belief. This is a common view among philosophers and non-philosophers. But epistemic allies face three problems. First, sexist and racist beliefs often involve generic propositions. These sorts of propositions are notoriously resilient in the face of counter-instances since the truth of generic propositions is typically compatible with the existence of many counter-instances. Second, background beliefs can enable one to explain away counter-instances to one’s beliefs. So even when counter-instances might otherwise constitute strong evidence against the truth of the generic, the ability to explain the counter-instances away with relevant background beliefs can make it rational to retain one’s belief in the generic despite the existence of many counter-instances. The final problem is that the kinds of judgements epistemic allies want to make about the irrationality of sexist and racist beliefs upon encountering many counter-instances is at odds with the judgements that we are inclined to make in seemingly parallel cases about the rationality of non-sexist and non-racist generic beliefs. Thus epistemic allies may end up having to give up on plausible normative supervenience principles. All together, these problems pose a significant prima facie challenge to epistemic allies.
In what follows I explain how a Bayesian approach to the relation between evidence and belief can neatly untie these knots. The basic story is one of defeat: Bayesianism explains when one is required to become increasingly confident in chance propositions, and confidence in chance propositions can make belief in corresponding generics irrational.
Enjoying great popularity in decision theory, epistemology, and philosophy of science, Bayesianism as understood here is fundamentally concerned with epistemically ideal rationality. It assumes a tight connection between evidential probability and ideally rational credence, and usually interprets evidential probability in terms of such credence. Timothy Williamson challenges Bayesianism by arguing that evidential probabilities cannot be adequately interpreted as the credences of an ideal agent. From this and his assumption that evidential probabilities cannot be interpreted as the actual credences of human agents either, he concludes that no interpretation of evidential probabilities in terms of credence is adequate. I argue to the contrary. My overarching aim is to show on behalf of Bayesians how one can still interpret evidential probabilities in terms of ideally rational credence and how one can maintain a tight connection between evidential probabilities and ideally rational credence even if the former cannot be interpreted in terms of the latter. By achieving this aim I illuminate the limits and prospects of Bayesianism.
In this paper, I critically evaluate several related, provocative claims made by proponents of data-intensive science and “Big Data” which bear on scientific methodology, especially the claim that scientists will soon no longer have any use for familiar concepts like causation and explanation. After introducing the issue, in Section 2, I elaborate on the alleged changes to scientific method that feature prominently in discussions of Big Data. In Section 3, I argue that these methodological claims are in tension with a prominent account of scientific method, often called “Inference to the Best Explanation”. Later on, in Section 3, I consider an argument against IBE that will be congenial to proponents of Big Data, namely, the argument due to Roche and Sober (Analysis 73: 659–668) that “explanatoriness is evidentially irrelevant.” This argument is based on Bayesianism, one of the most prominent general accounts of theory-confirmation. In Section 4, I consider some extant responses to this argument, especially that of Climenhaga (Philosophy of Science 84: 359–368). In Section 5, I argue that Roche and Sober’s argument does not show that explanatory reasoning is dispensable. In Section 6, I argue that there is good reason to think explanatory reasoning will continue to prove indispensable in scientific practice. Drawing on Cicero’s oft-neglected De Divinatione, I formulate what I call the “Ciceronian Causal-nomological Requirement”, which states, roughly, that causal-nomological knowledge is essential for relying on correlations in predictive inference. I defend a version of the CCR by appealing to the challenge of “spurious correlations,” chance correlations which we should not rely upon for predictive inference. In Section 7, I offer some concluding remarks.
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We, further, argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater’s (2007, 2008) model of conditional inference—contrary to the authors’ theoretical position—has to refer also to a frequency-based probability function.
Following Nancy Cartwright and others, I suggest that most (if not all) theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then the posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. In so far as these proposed solutions fail, I argue that Bayesians must give up Bayesianism or accept the counterintuitive view that no theories that incorporate any idealizations have ever really been confirmed to any extent whatsoever. Moreover, as it appears that the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.
In everyday life and in science we acquire evidence of evidence and based on this new evidence we often change our epistemic states. An assumption underlying such practice is that the following EEE Slogan is correct: 'evidence of evidence is evidence' (Feldman 2007, p. 208). We suggest that evidence of evidence is best understood as higher-order evidence about the epistemic state of agents. In order to model evidence of evidence we introduce a new powerful framework for modelling epistemic states, Dyadic Bayesianism. Based on this framework, we then discuss characterizations of evidence of evidence and argue for one of them. Finally, we show that whether the EEE Slogan holds, depends on the specific kind of evidence of evidence.
In this chapter I analyse an objection to phenomenal conservatism to the effect that phenomenal conservatism is unacceptable because it is incompatible with Bayesianism. I consider a few responses to it and dismiss them as misled or problematic. Then, I argue that this objection doesn’t go through because it rests on an implausible formalization of the notion of seeming-based justification. In the final part of the chapter, I investigate how seeming-based justification and justification based on one’s reflective belief that one has a seeming interact with one another.
Simple random sampling resolutions of the raven paradox relevantly diverge from scientific practice. We develop a stratified random sampling model, yielding a better fit and apparently rehabilitating simple random sampling as a legitimate idealization. However, neither accommodates a second concern, the objection from potential bias. We develop a third model that crucially invokes causal considerations, yielding a novel resolution that handles both concerns. This approach resembles Inference to the Best Explanation (IBE) and relates the generalization’s confirmation to confirmation of an associated law. We give it an objective Bayesian formalization and discuss the compatibility of Bayesianism and IBE.
Supra-Bayesianism is the Bayesian response to learning the opinions of others. Probability pooling constitutes an alternative response. One natural question is whether there are cases where probability pooling gives the supra-Bayesian result. This has been called the problem of Bayes-compatibility for pooling functions. It is known that in a common prior setting, under standard assumptions, linear pooling cannot be non-trivially Bayes-compatible. We show by contrast that geometric pooling can be non-trivially Bayes-compatible. Indeed, we show that, under certain assumptions, geometric and Bayes-compatible pooling are equivalent. Granting supra-Bayesianism its usual normative status, one upshot of our study is thus that, in a certain class of epistemic contexts, geometric pooling enjoys a normative advantage over linear pooling as a social learning mechanism. We discuss the philosophical ramifications of this advantage, which we show to be robust to variations in our statement of the Bayes-compatibility problem.
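The two pooling rules contrasted above can be sketched for a single proposition p; a minimal illustration with equal weights and invented credences (geometric pooling requires renormalizing over p and its negation):

```python
import math

def linear_pool(credences, weights):
    # Weighted arithmetic average of the individual credences in p.
    return sum(w * c for w, c in zip(weights, credences))

def geometric_pool(creds_p, creds_not_p, weights):
    # Weighted geometric average, renormalized so that the pooled
    # credences in p and in not-p sum to 1.
    g_p = math.prod(c ** w for w, c in zip(weights, creds_p))
    g_not = math.prod(c ** w for w, c in zip(weights, creds_not_p))
    return g_p / (g_p + g_not)

# Two agents with illustrative credences in p:
creds = [0.9, 0.6]
weights = [0.5, 0.5]
print(linear_pool(creds, weights))                             # 0.75
print(geometric_pool(creds, [1 - c for c in creds], weights))  # ≈ 0.786
```

The divergence between the two outputs (0.75 versus roughly 0.786) is what makes the choice of pooling rule substantive: only the geometric form, per the results above, can be non-trivially Bayes-compatible.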
Bayesianism is our leading theory of uncertainty. Epistemology is defined as the theory of knowledge. So “Bayesian Epistemology” may sound like an oxymoron. Bayesianism, after all, studies the properties and dynamics of degrees of belief, understood to be probabilities. Traditional epistemology, on the other hand, places the singularly non-probabilistic notion of knowledge at centre stage, and to the extent that it traffics in belief, that notion does not come in degrees. So how can there be a Bayesian epistemology?
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
One guide to an argument's significance is the number and variety of refutations it attracts. By this measure, the Dutch book argument has considerable importance. Of course this measure alone is not a sure guide to locating arguments deserving of our attention—if a decisive refutation has really been given, we are better off pursuing other topics. But the presence of many and varied counterarguments at least suggests that either the refutations are controversial, or that their target admits of more than one interpretation, or both. The main point of this paper is to focus on a way of understanding the Dutch Book argument (DBA) that avoids many of the well-known criticisms, and to consider how it fares against an important criticism that still remains: the objection that the DBA presupposes value-independence of bets.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation right—and thus scientific reasoning in general. _1_ The Bayesian approach to scientific reasoning _2_ Bayesian confirmation theory _3_ The example _4_ The less reliable the source of information, the higher the degree of Bayesian confirmation _5_ Measure sensitivity _6_ A more general version of the problem of old evidence _7_ Conditioning on the entailment relation _8_ The counterfactual strategy _9_ Generalizing the counterfactual strategy _10_ The desired result, and a necessary and sufficient condition for it _11_ Actual degrees of belief _12_ The common knock-down feature, or ‘anything goes’ _13_ The problem of prior probabilities.
It is widely thought in philosophy and elsewhere that parsimony is a theoretical virtue in that if T1 is more parsimonious than T2, then T1 is preferable to T2, other things being equal. This thesis admits of many distinct precisifications. I focus on a relatively weak precisification on which preferability is a matter of probability, and argue that it is false. This is problematic for various alternative precisifications, and even for Inference to the Best Explanation as standardly understood.
Whereas Bayesians have proposed norms such as probabilism, which requires immediate and permanent certainty in all logical truths, I propose a framework on which credences, including credences in logical truths, are rational because they are based on reasoning that follows plausible rules for the adoption of credences. I argue that my proposed framework has many virtues. In particular, it resolves the problem of logical omniscience.
Where E is the proposition that [If H and O were true, H would explain O], William Roche and Elliott Sober have argued that P(H|O&E) = P(H|O). In this paper I argue that not only is this equality not generally true, it is false in the very kinds of cases that Roche and Sober focus on, involving frequency data. In fact, in such cases O raises the probability of H only given that there is an explanatory connection between them.
Much contemporary epistemology is informed by a kind of confirmational holism, and a consequent rejection of the assumption that all confirmation rests on experiential certainties. Another prominent theme is that belief comes in degrees, and that rationality requires apportioning one's degrees of belief reasonably. Bayesian confirmation models based on Jeffrey Conditionalization attempt to bring together these two appealing strands. I argue, however, that these models cannot account for a certain aspect of confirmation that would be accounted for in any adequate holistic confirmation theory. I then survey the prospects for constructing a formal epistemology that better accommodates holistic insights.
Seeing a red hat can (i) increase my credence in ‘the hat is red’, and (ii) introduce a negative dependence between that proposition and potential undermining defeaters such as ‘the light is red’. The rigidity of Jeffrey Conditionalization makes this awkward, as rigidity preserves independence. The picture is less awkward given ‘Holistic Conditionalization’, or so it is claimed. I defend Jeffrey Conditionalization’s consistency with underminable perceptual learning and its superiority to Holistic Conditionalization, arguing that the latter is merely a special case of the former, is itself rigid, and is committed to implausible accounts of perceptual confirmation and of undermining defeat.
We argue in Roche and Sober (2013) that explanatoriness is evidentially irrelevant in that Pr(H | O&EXPL) = Pr(H | O), where H is a hypothesis, O is an observation, and EXPL is the proposition that if H and O were true, then H would explain O. This is a “screening-off” thesis. Here we clarify that thesis, reply to criticisms advanced by Lange (2017), consider alternative formulations of Inference to the Best Explanation, discuss a strengthened screening-off thesis, and consider how it bears on the claim that unification is evidentially relevant.
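The screening-off thesis can be illustrated with a toy joint distribution. The numbers below are hypothetical; the construction simply builds in the conditional independence of EXPL from H given O, from which the equality Pr(H | O&EXPL) = Pr(H | O) follows.

```python
# Toy joint distribution over (H, O, EXPL) illustrating screening off.
# All parameter values are hypothetical; EXPL is made to depend on O only,
# so that EXPL is conditionally independent of H given O.

P_H = 0.3                          # prior probability of hypothesis H
P_O = {True: 0.8, False: 0.2}      # Pr(O | H), Pr(O | not-H)
P_EXPL = {True: 0.6, False: 0.1}   # Pr(EXPL | O), Pr(EXPL | not-O)

def joint(h, o, expl):
    """Pr(H=h, O=o, EXPL=expl) under the independence assumption."""
    p = P_H if h else 1 - P_H
    p *= P_O[h] if o else 1 - P_O[h]
    p *= P_EXPL[o] if expl else 1 - P_EXPL[o]
    return p

# Pr(H | O): marginalize EXPL out of the joint
pr_H_given_O = (sum(joint(True, True, e) for e in (True, False)) /
                sum(joint(h, True, e) for h in (True, False)
                                      for e in (True, False)))

# Pr(H | O & EXPL)
pr_H_given_O_EXPL = (joint(True, True, True) /
                     sum(joint(h, True, True) for h in (True, False)))

print(round(pr_H_given_O, 4), round(pr_H_given_O_EXPL, 4))  # 0.6316 0.6316
```

Learning EXPL on top of O leaves the probability of H untouched in this construction, which is exactly what the screening-off thesis asserts for the cases Roche and Sober describe.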
As I head home from work, I’m not sure whether my daughter’s new bike is green, and I’m also not sure whether I’m on drugs that distort my color perception. One thing that I am sure about is that my attitudes towards those possibilities are evidentially independent of one another, in the sense that changing my confidence in one shouldn’t affect my confidence in the other. When I get home and see the bike it looks green, so I increase my confidence that it is green. But something else has changed: now an increase in my confidence that I’m on color-drugs would undermine my confidence that the bike is green. Jonathan Weisberg and Jim Pryor argue that the preceding story is problematic for standard Bayesian accounts of perceptual learning. Due to the ‘rigidity’ of Conditionalization, a negative probabilistic correlation between two propositions cannot be introduced by updating on one of them. Hence if my beliefs about my own color-sobriety start out independent of my beliefs about the color of the bike, then they must remain independent after I have my perceptual experience and update accordingly. Weisberg takes this to be a reason to reject Conditionalization. I argue that this conclusion is too pessimistic: Conditionalization is only part of the Bayesian story of perceptual learning, and the other part needn’t preserve independence. Hence Bayesian accounts of perceptual learning are perfectly consistent with potential underminers for perceptual beliefs.
Resource rationality may explain suboptimal patterns of reasoning; but what of “anti-Bayesian” effects where the mind updates in a direction opposite the one it should? We present two phenomena — belief polarization and the size-weight illusion — that are not obviously explained by performance- or resource-based constraints, nor by the authors’ brief discussion of reference repulsion. Can resource rationality accommodate them?
In this article, I introduce the term “cognitivism” as a name for the thesis that degrees of belief are equivalent to full beliefs about truth-valued propositions. The thesis (of cognitivism) that degrees of belief are equivalent to full beliefs is equivocal, inasmuch as different sorts of equivalence may be postulated between degrees of belief and full beliefs. The simplest sort of equivalence (and the sort of equivalence that I discuss here) identifies having a given degree of belief with having a full belief with a specific content. This sort of view was proposed in [C. Howson and P. Urbach, Scientific reasoning: the Bayesian approach. Chicago: Open Court (1996)]. In addition to embracing a form of cognitivism about degrees of belief, Howson and Urbach argued for a brand of probabilism. I call a view, such as Howson and Urbach’s, which combines probabilism with cognitivism about degrees of belief “cognitivist probabilism”. In order to address some problems with Howson and Urbach’s view, I propose a view that incorporates several modifications of Howson and Urbach’s version of cognitivist probabilism. The view that I finally propose upholds cognitivism about degrees of belief, but deviates from the letter of probabilism, in allowing that a rational agent’s degrees of belief need not conform to the axioms of probability, in the case where the agent’s cognitive resources are limited.
It is sometimes claimed that the Bayesian framework automatically implements Ockham’s razor—that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. It is shown here that the automatic razor doesn’t in fact cut it for certain mundane curve-fitting problems.
In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that applies regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This happens to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
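The sure-loss vulnerability that the Dutch strategy argument trades on can be seen in a minimal sketch. The example below is a generic Dutch book against an incoherent credence function, not the authors' own construction; the agent and numbers are hypothetical.

```python
# Minimal Dutch-book sketch: incoherent credences guarantee a sure loss.
# The agent's credences in A and not-A sum to more than 1 -- incoherent.
credence = {"A": 0.6, "not-A": 0.6}

# A bookie sells the agent a $1 bet on each proposition, priced at the
# agent's credence in it (a price the agent regards as fair).
price_paid = sum(credence.values())   # 1.2 in total

# Exactly one of A, not-A is true in each possible world, so the agent's
# winnings are $1 no matter what; the net outcome is negative either way.
for world in ("A", "not-A"):
    winnings = 1.0
    net = winnings - price_paid
    print(world, round(net, 2))   # -0.2 in every world: a sure loss
```

The size of the guaranteed loss here tracks how far the credences overshoot 1, which is the intuition behind measuring incoherence by possible sure loss.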
We argued that explanatoriness is evidentially irrelevant in the following sense: Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O & E) = Pr(H | O). We defended this screening-off thesis (SOT) by discussing an example concerning smoking and cancer. Climenhaga argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.
We discuss the cable guy paradox, both as an object of interest in its own right and as something which can be used to illuminate certain issues in the theories of rational choice and belief. We argue that a crucial principle—the Avoid Certain Frustration (ACF) principle—which is used in stating the paradox is false, thus resolving the paradox. We also explain how the paradox gives us new insight into issues related to the Reflection principle. Our general thesis is that principles that base your current opinions on your current opinions about your future opinions need not make reference to the particular times in the future at which you believe you will have those opinions, but they do need to make reference to the particular degrees of belief you believe you will have in the future.