Greaves and Wallace argue that conditionalization maximizes expected accuracy. In this paper I show that their result only applies to a restricted range of cases. I then show that the update procedure that maximizes expected accuracy in general is one in which, upon learning P, we conditionalize, not on P, but on the proposition that we learned P. After proving this result, I provide further generalizations and show that much of the accuracy-first epistemology program is committed to KK-like iteration principles and to the existence of a class of propositions that rational agents will be certain of if and only if they are true.
The applicability of Bayesian conditionalization in setting one’s posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, P_PRI(α|∧E), is available, where ∧E represents one’s complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one’s personal probability in a proposition by conditioning on a proposition that represents a proper subset of one’s complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, P_PRI(α|∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities.
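The standard rule that defeasible conditionalization extends can be sketched numerically. A minimal illustration over a toy four-world space (the names `conditionalize` and the particular numbers are mine, not the article's):

```python
# Bayesian conditionalization on a toy four-world space.
# Worlds are (alpha, E) truth-value pairs; the prior is a joint distribution.

def conditionalize(prior, evidence_worlds):
    """Zero out worlds outside the evidence set and renormalize."""
    p_e = sum(p for w, p in prior.items() if w in evidence_worlds)
    if p_e == 0:
        raise ValueError("cannot conditionalize on a zero-probability event")
    return {w: (p / p_e if w in evidence_worlds else 0.0)
            for w, p in prior.items()}

prior = {(True, True): 0.3, (True, False): 0.2,
         (False, True): 0.1, (False, False): 0.4}
evidence = {w for w in prior if w[1]}      # the worlds where E is true

posterior = conditionalize(prior, evidence)
p_alpha = sum(p for w, p in posterior.items() if w[0])
# P(alpha | E) = 0.3 / (0.3 + 0.1) = 0.75
```

When the relevant prior conditional probability is unavailable, this computation simply cannot be run, which is the gap the abstract's defeasible schema is meant to fill.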
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
Seeing a red hat can (i) increase my credence in ‘the hat is red’, and (ii) introduce a negative dependence between that proposition and potential undermining defeaters such as ‘the light is red’. The rigidity of Jeffrey Conditionalization makes this awkward, as rigidity preserves independence. The picture is less awkward given ‘Holistic Conditionalization’, or so it is claimed. I defend Jeffrey Conditionalization’s consistency with underminable perceptual learning and its superiority to Holistic Conditionalization, arguing that the latter is merely a special case of the former, is itself rigid, and is committed to implausible accounts of perceptual confirmation and of undermining defeat.
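The rigidity property at issue is easy to check numerically: after a Jeffrey shift on a partition, credences conditional on each cell are unchanged. A minimal sketch under toy numbers of my own choosing:

```python
# Jeffrey conditionalization: rescale each partition cell to its new probability.
# Rigidity: P_new(A | cell) = P_old(A | cell) for every cell.

def jeffrey_update(prior, partition, new_probs):
    post = {}
    for cell, q in zip(partition, new_probs):
        mass = sum(prior[w] for w in cell)
        for w in cell:
            post[w] = prior[w] * q / mass
    return post

# Worlds are (A, B) truth-value pairs; the partition is {B, not-B}.
prior = {(True, True): 0.1, (True, False): 0.3,
         (False, True): 0.3, (False, False): 0.3}
b_cell = [w for w in prior if w[1]]
not_b_cell = [w for w in prior if not w[1]]

# Experience shifts P(B) from 0.4 to 0.8.
post = jeffrey_update(prior, [b_cell, not_b_cell], [0.8, 0.2])

p_a_given_b_old = prior[(True, True)] / sum(prior[w] for w in b_cell)
p_a_given_b_new = post[(True, True)] / sum(post[w] for w in b_cell)
# Both conditional credences equal 0.25: the shift leaves P(A | B) untouched.
```

This is the sense in which rigidity "preserves independence": dependence relations within each cell survive the update, which is exactly what makes the negative dependence in (ii) awkward for the rule.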
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
Are counterfactuals with true antecedents and consequents automatically true? That is, is Conjunction Conditionalization: if (X & Y), then (X > Y) valid? Stalnaker and Lewis think so, but many others disagree. We note here that the extant arguments for Conjunction Conditionalization are unpersuasive, before presenting a family of more compelling arguments. These arguments rely on some standard theorems of the logic of counterfactuals as well as a plausible and popular semantic claim about certain semifactuals. Denying Conjunction Conditionalization, then, requires rejecting other aspects of the standard logic of counterfactuals, or else our intuitive picture of semifactuals.
Colin Howson (1995) offers a counter-example to the rule of conditionalization. I will argue that the counter-example doesn't hit its target. The problem is that Howson mis-describes the total evidence the agent has. In particular, Howson overlooks how the restriction that the agent learn 'E and nothing else' interacts with the de se evidence 'I have learnt E'.
This paper shows that any view of future contingent claims that treats such claims as having indeterminate truth values or as simply being false implies probabilistic irrationality. This is because such views of the future imply violations of reflection, special reflection and conditionalization.
This discussion note examines a recent argument for the principle that any counterfactual with true components is itself true. That argument rests upon two widely accepted principles of counterfactual logic to which the paper presents counterexamples. The conclusion speculates briefly upon the wider lessons that philosophers should draw from these examples for the semantics of counterfactuals.
It has been argued that if the rigidity condition is satisfied, a rational agent operating with uncertain evidence should update her subjective probabilities by Jeffrey conditionalization or else a series of bets resulting in a sure loss could be made against her. We show, however, that even if the rigidity condition is satisfied, it is not always safe to update probability distributions by JC because there exist such sequences of non-misleading uncertain observations where it may be foreseen that an agent who updates her subjective probabilities by JC will end up nearly certain that a false hypothesis is true. We analyze the features of JC that lead to this problem, specify the conditions in which it arises and respond to potential objections.
How do temporal and eternal beliefs interact? I argue that acquiring a temporal belief should have no effect on eternal beliefs for an important range of cases. Thus, I oppose the popular view that new norms of belief change must be introduced for cases where the only change is the passing of time. I defend this position from the purported counter-examples of the Prisoner and Sleeping Beauty. I distinguish two importantly different ways in which temporal beliefs can be acquired and draw some general conclusions about their impact on eternal beliefs.
Epistemic decision theory produces arguments with both normative and mathematical premises. I begin by arguing that philosophers should care about whether the mathematical premises (1) are true, (2) are strong, and (3) admit simple proofs. I then discuss a theorem that Briggs and Pettigrew (2020) use as a premise in a novel accuracy-dominance argument for conditionalization. I argue that the theorem and its proof can be improved in a number of ways. First, I present a counterexample that shows that one of the theorem’s claims is false. As a result of this, Briggs and Pettigrew’s argument for conditionalization is unsound. I go on to explore how a sound accuracy-dominance argument for conditionalization might be recovered. In the course of doing this, I prove two new theorems that correct and strengthen the result reported by Briggs and Pettigrew. I show how my results can be combined with various normative premises to produce sound arguments for conditionalization. I also show that my results can be used to support normative conclusions that are stronger than the one that Briggs and Pettigrew’s argument supports. Finally, I show that Briggs and Pettigrew’s proofs can be simplified considerably.
Boghossian’s (2003) proposal to conditionalize concepts as a way to secure their legitimacy in disputable cases applies well, not just to pejoratives – the case for which Boghossian first proposed it – but also to thick ethical concepts. It actually has important advantages when dealing with some worries raised by the application of thick ethical terms, and the truth and facticity of corresponding statements. In this paper, I will try to show, however, that thick ethical concepts present a specific case, whose analysis requires a somewhat different reconstruction from that which Boghossian offers. A proper account of thick ethical concepts should be able to explain how ‘evaluated’ and ‘evaluation’ are connected.
In this paper, I provide an accuracy-based argument for conditionalization (via reflection) that does not rely on norms of maximizing expected accuracy. (This is a draft of a paper that I wrote in 2013. It stalled for no very good reason. I still believe the content is right.)
According to an increasingly popular epistemological view, people need outright beliefs in addition to credences to simplify their reasoning. Outright beliefs simplify reasoning by allowing thinkers to ignore small error probabilities. What is outright believed can change between contexts. It has been claimed that thinkers manage shifts in their outright beliefs and credences across contexts by an updating procedure resembling conditionalization, which I call pseudo-conditionalization (PC). But conditionalization is notoriously complicated. The claim that thinkers manage their beliefs via PC is thus in tension with the view that the function of beliefs is to simplify our reasoning. I propose to resolve this puzzle by rejecting the view that thinkers employ PC. Based on this solution, I furthermore argue for a descriptive and a normative claim. The descriptive claim is that the available strategies for managing beliefs and credences across contexts that are compatible with the simplifying function of outright beliefs can generate synchronic and diachronic incoherence in a thinker’s attitudes. Moreover, I argue that the view of outright belief as a simplifying heuristic is incompatible with the view that there are ideal norms of coherence or consistency governing outright beliefs that are too complicated for human thinkers to comply with.
The externalist says that your evidence could fail to tell you what evidence you do or do not have. In that case, it could be rational for you to be uncertain about what your evidence is. This is a kind of uncertainty which orthodox Bayesian epistemology has difficulty modeling. For, if externalism is correct, then the orthodox Bayesian learning norms of conditionalization and reflection are inconsistent with each other. I recommend that an externalist Bayesian reject conditionalization. In its stead, I provide a new theory of rational learning for the externalist. I defend this theory by arguing that its advice will be followed by anyone whose learning dispositions maximize expected accuracy. I then explore some of this theory’s consequences for the rationality of epistemic akrasia, peer disagreement, undercutting defeat, and uncertain evidence.
A handful of well-known arguments (the 'diachronic Dutch book arguments') rely upon theorems establishing that, in certain circumstances, you are immune from sure monetary loss (you are not 'diachronically Dutch bookable') if and only if you adopt the strategy of conditionalizing (or Jeffrey conditionalizing) on whatever evidence you happen to receive. These theorems require non-trivial assumptions about which evidence you might acquire---in the case of conditionalization, the assumption is that, if you might learn that e, then it is not the case that you might learn something else that is consistent with e. These assumptions may not be relaxed. When they are, not only will non-(Jeffrey) conditionalizers be immune from diachronic Dutch bookability, but (Jeffrey) conditionalizers will themselves be diachronically Dutch bookable. I argue: 1) that there are epistemic situations in which these assumptions are violated; 2) that this reveals a conflict between the premise that susceptibility to sure monetary loss is irrational, on the one hand, and the view that rational belief revision is a function of your prior beliefs and the acquired evidence alone, on the other; and 3) that this inconsistency demonstrates that diachronic Dutch book arguments for (Jeffrey) conditionalization are invalid.
In this paper I present a new way of understanding Dutch Book Arguments: the idea is that an agent is shown to be incoherent iff he would accept as fair a set of bets that would result in a loss under any interpretation of the claims involved. This draws on a standard definition of logical inconsistency. On this new understanding, the Dutch Book Arguments for the probability axioms go through, but the Dutch Book Argument for Reflection fails. The question of whether we have a Dutch Book Argument for Conditionalization is left open.
I advocate Time-Slice Rationality, the thesis that the relationship between two time-slices of the same person is not importantly different, for purposes of rational evaluation, from the relationship between time-slices of distinct persons. The locus of rationality, so to speak, is the time-slice rather than the temporally extended agent. This claim is motivated by consideration of puzzle cases for personal identity over time and by a very moderate form of internalism about rationality. Time-Slice Rationality conflicts with two proposed principles of rationality, Conditionalization and Reflection. Conditionalization is a diachronic norm saying how your current degrees of belief should fit with your old ones, while Reflection is a norm enjoining you to defer to the degrees of belief that you expect to have in the future. But they are independently problematic and should be replaced by improved, time-slice-centric principles. Conditionalization should be replaced by a synchronic norm saying what degrees of belief you ought to have given your current evidence and Reflection should be replaced by a norm which instructs you to defer to the degrees of belief of agents you take to be experts. These replacement principles do all the work that the old principles were supposed to do while avoiding their problems. In this way, Time-Slice Rationality puts the theory of rationality on firmer foundations and yields better norms than alternative, non-time-slice-centric approaches.
In this essay, I cast doubt on an apparent truism: namely, that if evidence is available for gathering and use at a negligible cost, then it's always instrumentally rational for us to gather that evidence and use it for making decisions. Call this thesis Value of Information. I show that Value of Information conflicts with two other plausible theses. The first is the view that an agent's evidence can entail non-trivial propositions about the external world. The second is the view that epistemic rationality requires us to update our credences by conditionalization. These two theses, given some plausible assumptions, make room for rationally biased inquiries where Value of Information fails. I go on to argue that this is bad news for defenders of Value of Information.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
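The contrast between the two kinds of averaging can be exhibited numerically: geometric pooling commutes with conditionalization, while arithmetic pooling generally does not. A sketch under stipulated toy credences (all numbers and function names are mine):

```python
# Geometric vs. arithmetic credence pooling over a three-world space,
# checked for commutation with conditionalization on the event E = {0, 1}.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def geometric_pool(p, q):
    # Equal-weight geometric average, renormalized.
    return normalize([(a * b) ** 0.5 for a, b in zip(p, q)])

def arithmetic_pool(p, q):
    return [(a + b) / 2 for a, b in zip(p, q)]

def condition(p, event):
    return normalize([x if i in event else 0.0 for i, x in enumerate(p)])

p1, p2 = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
E = {0, 1}

# Route A: pool first, then conditionalize on E.
# Route B: conditionalize each credence on E, then pool.
geo_a = condition(geometric_pool(p1, p2), E)
geo_b = geometric_pool(condition(p1, E), condition(p2, E))
arith_a = condition(arithmetic_pool(p1, p2), E)
arith_b = arithmetic_pool(condition(p1, E), condition(p2, E))

geo_commutes = all(abs(a - b) < 1e-9 for a, b in zip(geo_a, geo_b))
arith_commutes = all(abs(a - b) < 1e-9 for a, b in zip(arith_a, arith_b))
# geo_commutes is True; arith_commutes is False for these credences.
```

Taking conditionalization as a basic constraint, as the abstract proposes, is therefore a substantive restriction: it favors rules like geometric averaging whose output does not depend on whether the group pools before or after updating.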
Dutch Book arguments have been presented for static belief systems and for belief change by conditionalization. An argument is given here that a rule for belief change which under certain conditions violates probability kinematics will leave the agent open to a Dutch Book.
The Reflection Principle can be defended with a Diachronic Dutch Book Argument (DBA), but it is also defeated by numerous compelling counter-examples. It seems then that Diachronic DBAs can lead us astray. Should we reject them en masse—including Lewis’s Diachronic DBA for Conditionalization? Rachael Briggs’s “suppositional test” is supposed to differentiate between Diachronic DBAs that we can safely ignore (including the DBA for Reflection) and Diachronic DBAs that we should find compelling (including the DBA for Conditionalization). I argue that Briggs’s suppositional test is wrong: it sets the bar for coherence too high and places certain cases of self-doubt on the wrong side of the divide. Given that the suppositional test is unsatisfactory, we are left without any justification for discriminating between Diachronic DBAs and ought to reject them all—including the DBA for Conditionalization.
Weisberg introduces a phenomenon he terms perceptual undermining. He argues that it poses a problem for Jeffrey conditionalization, and Bayesian epistemology in general. This is Weisberg’s paradox. Weisberg argues that perceptual undermining also poses a problem for ranking theory and for Dempster-Shafer theory. In this note I argue that perceptual undermining does not pose a problem for any of these theories: for true conditionalizers Weisberg’s paradox is a false alarm.
Accuracy-first accounts of rational learning attempt to vindicate the intuitive idea that, while rationally-formed belief need not be true, it is nevertheless likely to be true. To this end, they attempt to show that the Bayesian's rational learning norms are a consequence of the rational pursuit of accuracy. Existing accounts fall short of this goal, for they presuppose evidential norms which are not and cannot be vindicated in terms of the single-minded pursuit of accuracy. I propose an alternative account, according to which learning experiences rationalize changes in the way you value accuracy, which in turn rationalize changes in belief. I show that this account is capable of vindicating the Bayesian's rational learning norms in terms of the single-minded pursuit of accuracy, so long as accuracy is rationally valued.
The Sleeping Beauty problem has attracted considerable attention in the literature as a paradigmatic example of how self-locating uncertainty creates problems for the Bayesian principles of Conditionalization and Reflection. Furthermore, it is also thought to raise serious issues for diachronic Dutch Book arguments. I show that, contrary to what is commonly accepted, it is possible to represent the Sleeping Beauty problem within a standard Bayesian framework. Once the problem is correctly represented, the ‘thirder’ solution satisfies standard rationality principles, vindicating why it is not vulnerable to diachronic Dutch Book arguments. Moreover, the diachronic Dutch Books against the ‘halfer’ solutions fail to undermine the standard arguments for Conditionalization. The main upshot that emerges from my discussion is that the disagreement between different solutions does not challenge the applicability of Bayesian reasoning to centered settings, nor the commitment to Conditionalization, but is instead an instance of the familiar problem of choosing the priors.
Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever you take yourself to learn something with certainty, it's true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla's new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla's version follows as a special case of that. Second, I want to show how to generalise Briggs and Pettigrew's Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs & Pettigrew 2018).
This develops a framework for second-order conditionalization on statements about one's own epistemic reliability. It is the generalization of the framework of "Second-Guessing" (2009) to the case where the subject is uncertain about her reliability. See also "Epistemic Self-Doubt" (2017).
Suppositions can be introduced in either the indicative or subjunctive mood. The introduction of either type of supposition initiates judgments that may be either qualitative, binary judgments about whether a given proposition is acceptable or quantitative, numerical ones about how acceptable it is. As such, accounts of qualitative/quantitative judgment under indicative/subjunctive supposition have been developed in the literature. We explore these four different types of theories by systematically explicating the relationships between canonical representatives of each. Our representative qualitative accounts of indicative and subjunctive supposition are based on the belief change operations provided by AGM revision and KM update respectively; our representative quantitative ones are offered by conditionalization and imaging. This choice is motivated by the familiar approach of understanding supposition as ‘provisional belief revision’ wherein one temporarily treats the supposition as true and forms judgments by making appropriate changes to their other opinions. To compare the numerical judgments recommended by the quantitative theories with the binary ones recommended by the qualitative accounts, we rely on a suitably adapted version of the Lockean thesis. Ultimately, we establish a number of new results that we interpret as vindicating the often-repeated claim that conditionalization is a probabilistic version of revision, while imaging is a probabilistic version of update.
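The two quantitative rules compared here come apart even in tiny examples. A sketch, assuming a stipulated three-world space and a stipulated closest-world map for imaging (both toy choices are mine, not the authors'):

```python
# Conditionalization vs. imaging on the supposition A = {w0, w1}.
# Conditionalization renormalizes within A; imaging moves each non-A
# world's mass to its closest A-world (here closest(w2) = w1 by stipulation).

def conditionalize(prior, A):
    p_a = sum(prior[w] for w in A)
    return {w: (prior[w] / p_a if w in A else 0.0) for w in prior}

def image(prior, A, closest):
    post = {w: 0.0 for w in prior}
    for w, p in prior.items():
        post[w if w in A else closest[w]] += p
    return post

prior = {"w0": 0.2, "w1": 0.3, "w2": 0.5}
A = {"w0", "w1"}

cond = conditionalize(prior, A)        # {"w0": 0.4, "w1": 0.6, "w2": 0.0}
img = image(prior, A, {"w2": "w1"})    # {"w0": 0.2, "w1": 0.8, "w2": 0.0}
# Conditionalization preserves the ratio P(w0):P(w1) within A; imaging need not.
```

The divergence mirrors the qualitative split the abstract describes: renormalizing within the supposition behaves like revision, while shifting worlds to their closest A-counterparts behaves like update.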
This paper develops an information sensitive theory of the semantics and probability of conditionals and statements involving epistemic modals. The theory validates a number of principles linking probability and modality, including the principle that the probability of a conditional 'If A, then C' equals the probability of C, updated with A. The theory avoids so-called triviality results, which are standardly taken to show that principles of this sort cannot be validated. To achieve this, we deny that rational agents update their credences via conditionalization. We offer a new rule of update, Hyperconditionalization, which agrees with Conditionalization whenever nonmodal statements are at stake, but differs for modal and conditional sentences.
Our senses provide us with information about the world, but what exactly do they tell us? I argue that in order to optimally respond to sensory stimulations, an agent’s doxastic space may have an extra, “imaginary” dimension of possibility; perceptual experiences confer certainty on propositions in this dimension. To some extent, the resulting picture vindicates the old-fashioned empiricist idea that all empirical knowledge is based on a solid foundation of sense-datum propositions, but it avoids most of the problems traditionally associated with that idea. The proposal might also explain why experiences appear to have a non-physical phenomenal character, even if the world is entirely physical.
It seems obvious that when higher-order evidence makes it rational for one to doubt that one’s own belief on some matter is rational, this can undermine the rationality of that belief. This is known as higher-order defeat. However, despite its intuitive plausibility, it has proved puzzling how higher-order defeat works, exactly. To highlight two prominent sources of puzzlement, higher-order defeat seems to defy being understood in terms of conditionalization; and higher-order defeat can sometimes place agents in what seem like epistemic dilemmas. This chapter draws attention to an overlooked aspect of higher-order defeat, namely that it can undermine the resilience of one’s beliefs. The notion of resilience was originally devised to understand how one should reflect the ‘weight’ of one’s evidence. But it can also be applied to understand how one should reflect one’s higher-order evidence. The idea is particularly useful for understanding cases where one’s higher-order evidence indicates that one has failed in correctly assessing the evidence, without indicating whether one has over- or underestimated the degree of evidential support for a proposition. But it is exactly in such cases that the puzzles of higher-order defeat seem most compelling.
What sort of doxastic response is rational to learning that one disagrees with an epistemic peer who has evaluated the same evidence? I argue that even weak general recommendations run the risk of being incompatible with a pair of real epistemic phenomena, what I call evidential attenuation and evidential amplification. I focus on a popular and intuitive view of disagreement, the equal weight view. I take it to state that in cases of peer disagreement, a subject ought to end up equally confident that her own opinion is correct and that her peer’s opinion is. I say why we should regard the equal weight view as a synchronic constraint on (prior) credence functions. I then spell out a trilemma for the view: it violates what are intuitively correct updates (also leading to violations of conditionalisation), it poses implausible restrictions on prior credence functions, or it is non-substantive. The sorts of reasons why the equal weight view fails apply to other views as well: there is no blanket answer to the question of how a subject should adjust her opinions in cases of peer disagreement.
We introduce a family of rules for adjusting one's credences in response to learning the credences of others. These rules have a number of desirable features. 1. They yield the posterior credences that would result from updating by standard Bayesian conditionalization on one's peers' reported credences if one's likelihood function takes a particular simple form. 2. In the simplest form, they are symmetric among the agents in the group. 3. They map neatly onto the familiar Condorcet voting results. 4. They preserve shared agreement about independence in a wide range of cases. 5. They commute with conditionalization and with multiple peer updates. Importantly, these rules have a surprising property that we call synergy - peer testimony of credences can provide mutually supporting evidence raising an individual's credence higher than any peer's initial prior report. At first, this may seem to be a strike against them. We argue, however, that synergy is actually a desirable feature and the failure of other updating rules to yield synergy is a strike against them.
Jeff Paris proves a generalized Dutch Book theorem. If a belief state is not a generalized probability then one faces ‘sure loss’ books of bets. In Williams I showed that Joyce’s accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that both results are easy corollaries of the core result that Paris appeals to in proving his Dutch Book theorem. We see that every point of accuracy-domination defines a Dutch Book, but we only have a partial converse.
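The accuracy-domination side of the correspondence is easy to exhibit in the simplest case. A sketch using the Brier score on a two-cell partition {p, not-p} (the toy credences are mine):

```python
# A non-probabilistic credence over {p, not-p} is Brier-dominated by a
# probabilistic one: (0.5, 0.5) is strictly more accurate than (0.6, 0.6)
# whichever cell turns out true.

def brier(credence, true_world):
    """Sum of squared distances from the truth-values at true_world."""
    return sum((c - (1.0 if i == true_world else 0.0)) ** 2
               for i, c in enumerate(credence))

incoherent = (0.6, 0.6)        # sums to 1.2, so not a probability
coherent = (0.5, 0.5)

dominated = all(brier(coherent, w) < brier(incoherent, w) for w in (0, 1))
# brier(incoherent, w) = 0.52 in both worlds; brier(coherent, w) = 0.50.
```

On the note's picture, a dominating point like this one also determines a book of bets against the incoherent agent, though (as the abstract says) the converse direction holds only partially.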
Dutch Book Arguments (DBAs) have been invoked to support various requirements of rationality. Some are plausible: probabilism and conditionalization. Others are less so: credal transparency and reflection. Anna Mahtani has argued for a new understanding of DBAs which, she claims, allows us to keep the DBAs for probabilism (and perhaps conditionalization) and reject the DBAs for credal transparency and reflection. I argue that Mahtani’s new account fails as (a) it does not support highly plausible requirements of rational coherence and (b) it does not, even setting aside the first objection, succeed in undermining the DBAs for credal transparency or reflection.
Weisberg (2009) provides an argument that neither conditionalization nor Jeffrey conditionalization is capable of accommodating the holist’s claim that beliefs acquired directly from experience can suffer undercutting defeat. I diagnose this failure as stemming from the fact that neither conditionalization nor Jeffrey conditionalization gives any advice about how to rationally respond to theory-dependent evidence, and I propose a novel updating procedure that does tell us how to respond to evidence like this. This holistic updating rule yields conditionalization as a special case in which our evidence is entirely theory independent.
Epistemic decision theory (EDT) employs the mathematical tools of rational choice theory to justify epistemic norms, including probabilism, conditionalization, and the Principal Principle, among others. Practitioners of EDT endorse two theses: (1) epistemic value is distinct from subjective preference, and (2) belief and epistemic value can be numerically quantified. We argue that the first thesis, which we call epistemic puritanism, undermines the second.
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
Knowledge-first evidentialism combines the view that it is rational to believe what is supported by one's evidence with the view that one's evidence is what one knows. While there is much to be said for the view, it is widely perceived to fail in the face of cases of reasonable error—particularly extreme ones like new Evil Demon scenarios (Wedgwood, 2002). One reply has been to say that even in such cases what one knows supports the target rational belief (Lord, 201x, this volume). I spell out two versions of the strategy. The direct one uses what one knows as the input to principles of rationality such as conditionalization, dominance avoidance, etc. I argue that it fails in hybrid cases that are Good with respect to one belief and Bad with respect to another. The indirect strategy uses what one knows to determine a body of supported propositions that is in turn the input to principles of rationality. I sketch a simple formal implementation of the indirect strategy and show that it avoids the difficulty. I conclude that the indirect strategy offers the most promising way for knowledge-first evidentialists to deal with the New Evil Demon problem.
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
A number of Bayesians claim that, if one has no evidence relevant to a proposition P, then one's credence in P should be spread over the interval [0, 1]. Against this, I argue: first, that it is inconsistent with plausible claims about comparative levels of confidence; second, that it precludes inductive learning in certain cases. Two motivations for the view are considered and rejected. A discussion of alternatives leads to the conjecture that there is an in-principle limitation on formal representations of belief: they cannot be both fully accurate and maximally specific.
This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason’s sensitivity and selectivity to the claim, one’s prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but also show how these formal and informal normative approaches can in fact align.
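One way to read the probabilistic treatment of the RSA conditions is as a Bayes update followed by a threshold check. The sketch below is my own gloss, not the paper's formalism: "sensitivity" and "selectivity" are rendered as the likelihoods P(R|C) and P(R|¬C), and all numbers are illustrative.

```python
from fractions import Fraction

# Bayes update of commitment to a claim C on receipt of a reason R.
# sensitivity  = P(R | C); selectivity = P(R | not-C)  (gloss assumed here)
def posterior(prior, sensitivity, selectivity):
    """P(C|R) = P(R|C)P(C) / [P(R|C)P(C) + P(R|not-C)P(not-C)]."""
    num = sensitivity * prior
    return num / (num + selectivity * (1 - prior))

# Illustrative values: an initially undecided audience, a reason that is
# four times likelier given the claim than given its negation.
prior = Fraction(1, 2)
p = posterior(prior, Fraction(4, 5), Fraction(1, 5))

# Sufficiency-style check: the reason suffices for the claim only if the
# posterior clears a contextually determined acceptance threshold.
threshold = Fraction(3, 4)
print(p, p >= threshold)  # 4/5 True
```

On this reading, relevance corresponds to the gap between sensitivity and selectivity, and sufficiency to the posterior clearing the contextual threshold.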
The Sleeping Beauty problem—first presented by A. Elga in a philosophical context—has captured much attention. The problem, we contend, is more aptly regarded as a paradox: apparently, there are cases where one ought to change one’s credence in an event’s taking place even though one gains no new information or evidence, or alternatively, one ought to have a credence other than 1/2 in the outcome of a future coin toss even though one knows that the coin is fair. In this paper we argue for two claims. First, that Sleeping Beauty does gain potentially new relevant information upon waking up on Monday. Second, that his credence shift is warranted provided it accords with a calculation that is a result of conditionalization on the relevant information: “this day is an experiment waking day” (a day within the experiment on which one is woken up). Since Sleeping Beauty knows what days d could refer to, he can calculate the probability that the referred-to waking day is a Monday or a Tuesday, providing an adequate resolution of the paradox.
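The conditionalization described here can be sketched with a toy model (my own illustrative reconstruction, not the authors' formalism): a fair coin, a current day that is Monday or Tuesday with equal probability, and a waking on every coin/day combination except Heads on Tuesday.

```python
from fractions import Fraction

# Centered worlds (coin, day), each with prior probability 1/2 * 1/2.
# Beauty is awake unless the coin landed Heads and it is Tuesday.
half = Fraction(1, 2)
worlds = []  # (coin, day, awake, probability)
for coin in ("H", "T"):
    for day in ("Mon", "Tue"):
        awake = not (coin == "H" and day == "Tue")
        worlds.append((coin, day, awake, half * half))

# Conditionalize on "this day is an experiment waking day".
p_awake = sum(p for _, _, awake, p in worlds if awake)
p_heads_given_awake = sum(
    p for coin, _, awake, p in worlds if awake and coin == "H"
) / p_awake
print(p_heads_given_awake)  # 1/3
```

In this toy model, updating on the waking-day proposition yields the familiar "thirder" credence, a shift away from 1/2 driven by genuine conditionalization rather than by no evidence at all.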
As I head home from work, I’m not sure whether my daughter’s new bike is green, and I’m also not sure whether I’m on drugs that distort my color perception. One thing that I am sure about is that my attitudes towards those possibilities are evidentially independent of one another, in the sense that changing my confidence in one shouldn’t affect my confidence in the other. When I get home and see the bike it looks green, so I increase my confidence that it is green. But something else has changed: now an increase in my confidence that I’m on color-drugs would undermine my confidence that the bike is green. Jonathan Weisberg and Jim Pryor argue that the preceding story is problematic for standard Bayesian accounts of perceptual learning. Due to the ‘rigidity’ of Conditionalization, a negative probabilistic correlation between two propositions cannot be introduced by updating on one of them. Hence if my beliefs about my own color-sobriety start out independent of my beliefs about the color of the bike, then they must remain independent after I have my perceptual experience and update accordingly. Weisberg takes this to be a reason to reject Conditionalization. I argue that this conclusion is too pessimistic: Conditionalization is only part of the Bayesian story of perceptual learning, and the other part needn’t preserve independence. Hence Bayesian accounts of perceptual learning are perfectly consistent with potential underminers for perceptual beliefs.
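The rigidity point can be checked numerically. In the toy computation below (illustrative numbers, not from the papers discussed), G ("the bike is green") and D ("I'm on color-distorting drugs") start out independent, and a Jeffrey update on the partition {G, ¬G} leaves them independent, so no negative correlation appears:

```python
from fractions import Fraction
from itertools import product

# Prior joint credence over G and D, chosen independent: P(G,D) = P(G)P(D).
p_g, p_d = Fraction(3, 5), Fraction(1, 10)
prior = {
    (g, d): (p_g if g else 1 - p_g) * (p_d if d else 1 - p_d)
    for g, d in product([True, False], repeat=2)
}

def jeffrey(prior, q_g):
    """Jeffrey-update on {G, not-G}: P'(g,d) = q(g) * P(g,d) / P(g)."""
    p_g_prior = sum(p for (g, _), p in prior.items() if g)
    return {
        (g, d): (q_g if g else 1 - q_g) * p / (p_g_prior if g else 1 - p_g_prior)
        for (g, d), p in prior.items()
    }

# The perceptual experience boosts credence in G to 9/10.
post = jeffrey(prior, Fraction(9, 10))
p_g_post = sum(p for (g, _), p in post.items() if g)
p_d_post = sum(p for (_, d), p in post.items() if d)

# Rigidity: the conditional probabilities P(D|G) and P(D|not-G) are
# untouched, so prior independence survives the update.
print(post[(True, True)] == p_g_post * p_d_post)  # True
```

This is exactly the behavior the abstract's undermining story seems to demand that a rational update *not* have, which is why the rigidity of Conditionalization does the argumentative work.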
The new paradigm in the psychology of reasoning draws on Bayesian formal frameworks, and some advocates of the new paradigm think of these formal frameworks as providing a computational-level theory of rational human inference. I argue that Bayesian theories should not be seen as providing a computational-level theory of rational human inference, where by “Bayesian theories” I mean theories that claim that all rational credal states are probabilistically coherent and that rational adjustments of degrees of belief in the light of new evidence must be in accordance with some sort of conditionalization. The problems with the view I am criticizing can best be seen when we look at chains of inferences, rather than single-step inferences. Chains of inferences have been neglected almost entirely within the new paradigm.
We present a minimal pragmatic restriction on the interpretation of the weights in the “Equal Weight View” regarding peer disagreement and show that the view cannot respect it. Based on this result we argue against the view. The restriction is the following one: if an agent, i, assigns an equal or higher weight to another agent, j, he must be willing—in exchange for a positive and certain payment—to accept an offer to let a completely rational and sympathetic j choose for him whether to accept a bet with positive expected utility. If i assigns a lower weight to j than to himself, he must not be willing to pay any positive price for letting j choose for him. Respecting the constraint entails, we show, that the impact of disagreement on one’s degree of belief is not independent of what the disagreement is discovered to be.
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Ian Hacking (1967), I develop a version of Bayesianism that permits logical ignorance. This includes an account of the synchronic norms that govern a logically ignorant individual at any given time, as well as an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
Does postulating skeptical theism undermine the claim that evil strongly confirms atheism over theism? According to Perrine and Wykstra, it does undermine the claim, because evil is no more likely on atheism than on skeptical theism. According to Draper, it does not undermine the claim, because evil is much more likely on atheism than on theism in general. I show that the probability facts alone do not resolve their disagreement, which ultimately rests on which updating procedure – conditionalizing or updating on a conditional – fits both the evidence and how we ought to take that evidence into account.
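The contrast between the two updating procedures is easy to exhibit with a toy joint distribution (generic illustrative numbers, not the credences at issue in the exchange): conditionalizing on evidence E and conditionalizing on the material conditional H ⊃ E can move a hypothesis H by different amounts.

```python
from fractions import Fraction

# Illustrative joint credence over (H, E), summing to 1.
joint = {
    (True, True): Fraction(1, 10),
    (True, False): Fraction(2, 10),
    (False, True): Fraction(4, 10),
    (False, False): Fraction(3, 10),
}

def credence_in_h_after(pred):
    """Posterior P(H | learned proposition), where pred picks out the
    worlds in which the learned proposition is true."""
    num = sum(p for w, p in joint.items() if pred(w) and w[0])
    den = sum(p for w, p in joint.items() if pred(w))
    return num / den

p_h_given_e = credence_in_h_after(lambda w: w[1])                  # update on E
p_h_given_cond = credence_in_h_after(lambda w: (not w[0]) or w[1])  # update on H ⊃ E
print(p_h_given_e, p_h_given_cond)  # 1/5 1/8
```

Since the two procedures disagree even on the same joint credences, the probability facts alone cannot settle which posterior is the rational one, which is the abstract's point.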