Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever you take yourself to learn something with certainty, it's true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla's new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla's version follows as a special case of that. Second, I want to show how to generalise Briggs and Pettigrew's Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs & Pettigrew 2018).
Resource rationality may explain suboptimal patterns of reasoning; but what of “anti-Bayesian” effects where the mind updates in a direction opposite the one it should? We present two phenomena — belief polarization and the size-weight illusion — that are not obviously explained by performance- or resource-based constraints, nor by the authors’ brief discussion of reference repulsion. Can resource rationality accommodate them?
This paper considers a problem for Bayesian epistemology and proposes a solution to it. On the traditional Bayesian framework, an agent updates her beliefs by Bayesian conditioning, a rule that tells her how to revise her beliefs whenever she gets evidence that she holds with certainty. In order to extend the framework to a wider range of cases, Jeffrey (1965) proposed a more liberal version of this rule that has Bayesian conditioning as a special case. Jeffrey conditioning is a rule that tells the agent how to revise her beliefs whenever she gets evidence that she holds with any degree of confidence. The problem? While Bayesian conditioning has a foundationalist structure, this foundationalism disappears once we move to Jeffrey conditioning. If Bayesian conditioning is a special case of Jeffrey conditioning, then they should have the same normative structure. The solution? To reinterpret Bayesian updating as a form of diachronic coherentism.
Recently, some have challenged the idea that there are genuine norms of diachronic rationality. Part of this challenge has involved offering replacements for diachronic principles. Skeptics about diachronic rationality believe that we can provide an error theory for it by appealing to synchronic updating rules that, over time, mimic the behavior of diachronic norms. In this paper, I argue that the most promising attempts to develop this position within the Bayesian framework are unsuccessful. I sketch a new synchronic surrogate that draws upon some of the features of each of these earlier attempts. At the heart of this discussion is the question of what exactly it means to say that one norm is a surrogate for another. I argue that surrogacy, in the given context, can be taken as a proxy for the degree to which formal and traditional epistemology can be made compatible.
Accuracy-first epistemology says that the rational update rule is the rule that maximizes expected accuracy. Externalism says, roughly, that we do not always know what our total evidence is. It’s been argued in recent years that the externalist faces a dilemma: Either deny that Bayesian Conditionalization is the rational update rule, thereby rejecting traditional Bayesian epistemology, or else deny that the rational update rule is the rule that maximizes expected accuracy, thereby rejecting the accuracy-first program. Call this the Bayesian Dilemma. Here is roughly how the argument goes. Schoenfield (2017) has shown that following Metaconditionalization maximizes expected accuracy. But if externalism is true, Metaconditionalization is not Bayesian Conditionalization. Therefore, the externalist must choose between the rule that maximizes expected accuracy (Metaconditionalization) and Bayesian Conditionalization. I am not convinced by this argument; once we make the premises fully explicit, we see that it relies on assumptions that the externalist has every reason to reject. Still, I think that the Bayesian Dilemma is a genuine dilemma. I give a new argument—I call it the continuity argument—that doesn’t make any assumptions the externalist rejects. Roughly, I show that if you're sufficiently confident that you will correctly identify your evidence, then you'll expect adopting a rule that I call Accurate Metaconditionalization to be more accurate than adopting Bayesian Conditionalization.
The externalist says that your evidence could fail to tell you what evidence you do or do not have. In that case, it could be rational for you to be uncertain about what your evidence is. This is a kind of uncertainty which orthodox Bayesian epistemology has difficulty modeling. For, if externalism is correct, then the orthodox Bayesian learning norms of conditionalization and reflection are inconsistent with each other. I recommend that an externalist Bayesian reject conditionalization. In its stead, I provide a new theory of rational learning for the externalist. I defend this theory by arguing that its advice will be followed by anyone whose learning dispositions maximize expected accuracy. I then explore some of this theory’s consequences for the rationality of epistemic akrasia, peer disagreement, undercutting defeat, and uncertain evidence.
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
We introduce a family of rules for adjusting one's credences in response to learning the credences of others. These rules have a number of desirable features. 1. They yield the posterior credences that would result from updating by standard Bayesian conditionalization on one's peers' reported credences if one's likelihood function takes a particular simple form. 2. In the simplest form, they are symmetric among the agents in the group. 3. They map neatly onto the familiar Condorcet voting results. 4. They preserve shared agreement about independence in a wide range of cases. 5. They commute with conditionalization and with multiple peer updates. Importantly, these rules have a surprising property that we call synergy: peer testimony of credences can provide mutually supporting evidence, raising an individual's credence higher than any peer's initial prior report. At first, this may seem to be a strike against them. We argue, however, that synergy is actually a desirable feature and the failure of other updating rules to yield synergy is a strike against them. (shrink)
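The synergy effect is easy to see in a minimal odds-based sketch. The following is my own illustration of a multiplicative (odds-multiplying) pooling rule of the kind described, with made-up numbers, not necessarily the authors' exact formulation:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

def pool_odds(prior, peer_credences):
    """Multiplicative pooling: posterior odds are the prior odds
    times the product of each peer's reported odds."""
    o = odds(prior)
    for c in peer_credences:
        o *= odds(c)
    return prob(o)

# Two peers each report 0.8 against a uniform 0.5 prior: the pooled
# credence (16/17, about 0.941) exceeds either peer's report.
pooled = pool_odds(0.5, [0.8, 0.8])
```

The synergy arises because each peer's report is treated as independent evidence, so two moderate reports compound rather than merely average.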
Debunking arguments in ethics contend that our moral beliefs have dubious evolutionary, cultural, or psychological origins—hence concluding that we should doubt such beliefs. Debates about debunking are often couched in coarse-grained terms—about whether our moral beliefs are justified or not, for instance. In this paper, I propose a more detailed Bayesian analysis of debunking arguments, which proceeds in the fine-grained framework of rational confidence. Such analysis promises several payoffs: it highlights how debunking arguments don’t affect all agents, but rather only those agents who updated on their intuitions using a specific range of evidentiary weights; it underscores how the debunkers shouldn’t conclude that we should reduce confidence beyond some threshold, but rather only that we should reduce confidence by some amount; and it proposes a method of integrating different kinds of evidence—about the kinds of epistemic flaws at play, about the different possible origins of our moral beliefs, about the background normative assumptions we’re entitled to make—in order to arrive at a rational moral credence in light of debunking.
The Sleeping Beauty problem has attracted considerable attention in the literature as a paradigmatic example of how self-locating uncertainty creates problems for the Bayesian principles of Conditionalization and Reflection. Furthermore, it is also thought to raise serious issues for diachronic Dutch Book arguments. I show that, contrary to what is commonly accepted, it is possible to represent the Sleeping Beauty problem within a standard Bayesian framework. Once the problem is correctly represented, the ‘thirder’ solution satisfies standard rationality principles, vindicating why it is not vulnerable to diachronic Dutch Book arguments. Moreover, the diachronic Dutch Books against the ‘halfer’ solutions fail to undermine the standard arguments for Conditionalization. The main upshot that emerges from my discussion is that the disagreement between different solutions does not challenge the applicability of Bayesian reasoning to centered settings, nor the commitment to Conditionalization, but is instead an instance of the familiar problem of choosing the priors.
We model scientific theories as Bayesian networks. Nodes carry credences and function as abstract representations of propositions within the structure. Directed links carry conditional probabilities and represent connections between those propositions. Updating is Bayesian across the network as a whole. Evidence at one point within a scientific theory can have a very different impact on the network than evidence of the same strength at a different point. A Bayesian model allows us to envisage and analyze the differential impact of evidence and credence change at different points within a single network and across different theoretical structures.
Can a group be an orthodox rational agent? This requires the group's aggregate preferences to follow expected utility (static rationality) and to evolve by Bayesian updating (dynamic rationality). Group rationality is possible, but the only preference aggregation rules which achieve it (and are minimally Paretian and continuous) are the linear-geometric rules, which combine individual values linearly and combine individual beliefs geometrically. Linear-geometric preference aggregation contrasts with classic linear-linear preference aggregation, which combines both values and beliefs linearly, but achieves only static rationality. Our characterisation of linear-geometric preference aggregation has two corollaries: a characterisation of linear aggregation of values (Harsanyi's Theorem) and a characterisation of geometric aggregation of beliefs. (shrink)
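The contrast between geometric and linear belief aggregation can be sketched numerically. Below is my own toy illustration (not the paper's formal apparatus) of why geometric pooling sits well with dynamic rationality: it commutes with Bayesian conditionalization, whereas linear pooling does not.

```python
import math

def geometric_pool(profiles, weights):
    """Weighted geometric mean of probability vectors, renormalised."""
    raw = [math.prod(p[s] ** w for p, w in zip(profiles, weights))
           for s in range(len(profiles[0]))]
    total = sum(raw)
    return [r / total for r in raw]

def linear_pool(profiles, weights):
    """Weighted arithmetic mean of probability vectors."""
    return [sum(w * p[s] for p, w in zip(profiles, weights))
            for s in range(len(profiles[0]))]

def bayes_update(p, likelihood):
    """Conditionalize a probability vector on evidence with given likelihoods."""
    raw = [pi * li for pi, li in zip(p, likelihood)]
    total = sum(raw)
    return [r / total for r in raw]

p1, p2 = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]   # two agents, three states
w = [0.5, 0.5]
like = [0.9, 0.5, 0.1]                       # likelihoods of some evidence

# Pool then update vs. update then pool:
geo_a = bayes_update(geometric_pool([p1, p2], w), like)
geo_b = geometric_pool([bayes_update(p1, like), bayes_update(p2, like)], w)
lin_a = bayes_update(linear_pool([p1, p2], w), like)
lin_b = linear_pool([bayes_update(p1, like), bayes_update(p2, like)], w)
# geo_a == geo_b (commutation holds); lin_a != lin_b (it fails).
```

The commutation holds because exponents distribute over products: updating multiplies each credence by the likelihood, and the weighted geometric mean of those products is the product of the pooled credence with the same likelihood (when the weights sum to one).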
In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that applies regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This happens to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
Bayesian epistemologists propose norms of rationality based on the probability calculus. 'Probabilism' states that agents must hold credences that are consistent with the axioms of probability. 'Conditionalization' states that credences must be updated using Bayesian conditionalization. These norms are supported using 'maximization arguments' such as Dutch book and accuracy arguments. These arguments presuppose that rationality requires agents to maximize (practical or epistemic) value in every doxastic state, whose evaluation is done from a subjective point of view. Accuracy arguments also presuppose that agents are opinionated. The first assumptions are reasonable, but not mandatory for the notion of rationality. The assumption of opinionation is questionable. In this paper, I investigate whether these norms (or opinionation) are supported by a maximization argument without these assumptions. I have designed AI agents based on the Bayesian model and a nonmonotonic framework and tested how they perform in an epistemic version of the Wumpus World. The nonmonotonic agent, who is not opinionated and fails probabilism and conditionalization, outperforms the Bayesian in some conditions, which suggests a negative answer to the question.
As I head home from work, I’m not sure whether my daughter’s new bike is green, and I’m also not sure whether I’m on drugs that distort my color perception. One thing that I am sure about is that my attitudes towards those possibilities are evidentially independent of one another, in the sense that changing my confidence in one shouldn’t affect my confidence in the other. When I get home and see the bike it looks green, so I increase my confidence that it is green. But something else has changed: now an increase in my confidence that I’m on color-drugs would undermine my confidence that the bike is green. Jonathan Weisberg and Jim Pryor argue that the preceding story is problematic for standard Bayesian accounts of perceptual learning. Due to the ‘rigidity’ of Conditionalization, a negative probabilistic correlation between two propositions cannot be introduced by updating on one of them. Hence if my beliefs about my own color-sobriety start out independent of my beliefs about the color of the bike, then they must remain independent after I have my perceptual experience and update accordingly. Weisberg takes this to be a reason to reject Conditionalization. I argue that this conclusion is too pessimistic: Conditionalization is only part of the Bayesian story of perceptual learning, and the other part needn’t preserve independence. Hence Bayesian accounts of perceptual learning are perfectly consistent with potential underminers for perceptual beliefs. (shrink)
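The rigidity point is simple to verify numerically. Here is a minimal sketch (my own illustration, with made-up numbers) of why a Jeffrey-style shift on the perceptual partition cannot by itself break the initial independence:

```python
def jeffrey_shift(p_d_given_g, p_d_given_not_g, q_g):
    """New credence in D after a shift that sets P(G) = q_g.
    Rigidity: the conditionals P(D|G) and P(D|not-G) are held fixed."""
    return q_g * p_d_given_g + (1 - q_g) * p_d_given_not_g

# D = "I'm on colour-drugs", G = "the bike is green", initially independent,
# so P(D|G) = P(D|not-G) = P(D):
p_d = 0.1
p_d_given_g = p_d_given_not_g = p_d

# However strongly the experience shifts P(G), P(D) stays put, and
# P(D|G) still equals P(D): no negative correlation appears.
posteriors = [jeffrey_shift(p_d_given_g, p_d_given_not_g, q)
              for q in (0.3, 0.7, 0.95)]
```

Any undermining correlation must therefore come from the other part of the story: how the experience fixes the new conditionals in the first place, not from the update rule itself.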
Despite the harmful impact of conspiracy theories on the public discourse, there is little agreement about their exact nature. Rather than define conspiracy theories as such, we focus on the notion of conspiracy belief. We analyse three recent proposals that identify belief in conspiracy theories as an effect of irrational reasoning. Although these views are sometimes presented as competing alternatives, they share the main commitment that conspiracy beliefs are epistemically flawed because they resist revision given disconfirming evidence. However, the three views currently lack the formal detail necessary for an adequate comparison. In this paper, we bring these views closer together by exploring the rationality of conspiracy belief under a probabilistic framework. By utilising Michael Strevens’ Bayesian treatment of auxiliary hypotheses, we question the claim that the irrationality associated with conspiracy belief is due to a failure of belief revision given disconfirming evidence. We argue that maintaining a core conspiracy belief can be perfectly Bayes-rational when such beliefs are embedded in networks of auxiliary beliefs, which can be sacrificed to protect the more central ones. We propose that the irrationality associated with conspiracy belief lies not in a flawed updating method according to subjective standards but in a failure to converge towards well-confirmed stable belief networks in the long run. We discuss a set of initial reasoning biases as a possible reason for such a failure. Our approach reconciles previously disjointed views, while at the same time offering a formal platform for their further development.
In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to begin with or fail to update his credences by conditionalization.
Probability updating via Bayes' rule often entails extensive informational and computational requirements. In consequence, relatively few practical applications of Bayesian adaptive control techniques have been attempted. This paper discusses an alternative approach to adaptive control, Bayesian in spirit, which shifts attention from the updating of probability distributions via transitional probability assessments to the direct updating of the criterion function itself via transitional utility assessments. Results are illustrated in terms of an adaptive reinvestment two-armed bandit problem.
Jeffrey conditioning allows updating in Bayesian style when the evidence is uncertain: the new credence is, essentially, a weighted average over the results of classically updating on each alternative. Unlike classical Bayesian conditioning, this allows learning to be unlearned. (shrink)
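Both points can be shown in a few lines. A sketch with my own toy numbers: the weighted-average form of the update, and the fact that a later Jeffrey shift back to the original credence in the evidence restores the prior, which strict conditionalization (sending the evidence to certainty) could never undo.

```python
def jeffrey(p_h_given_e, p_h_given_not_e, q_e):
    """Jeffrey conditioning on the partition {E, not-E}: the new credence
    in H is a weighted average of the two classical posteriors, weighted
    by the new credence q_e in E."""
    return q_e * p_h_given_e + (1 - q_e) * p_h_given_not_e

p_h_e, p_h_ne = 0.9, 0.2   # classical posteriors, rigid under Jeffrey shifts
p_e = 0.5                  # initial credence in E

before = jeffrey(p_h_e, p_h_ne, p_e)   # initial P(H) = 0.55
after  = jeffrey(p_h_e, p_h_ne, 0.8)   # experience pushes P(E) to 0.8: P(H) = 0.76
undone = jeffrey(p_h_e, p_h_ne, p_e)   # shifting P(E) back to 0.5 restores P(H)
```

Since `undone == before`, the earlier learning has been unlearned; classical conditioning is the special case `q_e = 1`, from which there is no way back.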
In times of populist mistrust towards experts, it is important, and it is the aim of this paper, to ascertain the rationality of arguments from expert opinion, to reconstruct their rational foundations, and to determine their limits. The foundational approach chosen is probabilistic. However, there are at least three correct probabilistic reconstructions of such argumentations: statistical inferences, Bayesian updating, and interpretive arguments. To solve this competition problem, the paper proposes a recourse to the justification strengths the arguments can achieve in the respective situation.
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Hacking (1967), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given time; an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence; and an account of when logical ignorance is irrational and when it isn’t. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
Approximate coherentism suggests that imperfectly rational agents should hold approximately coherent credences. This norm is intended as a generalization of ordinary coherence. I argue that it may be unable to play this role by considering its application under learning experiences. While it is unclear how imperfect agents should revise their beliefs, I suggest a plausible route is through Bayesian updating. However, Bayesian updating can take an incoherent agent from relatively more coherent credences to relatively less coherent credences, depending on the data observed. Thus, comparative rationality judgments among incoherent agents are unduly sensitive to luck.
In a series of papers over the past twenty years, and in a new book, Igor Douven has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey these worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.
We argue that social deliberation may increase an agent’s confidence and credence under certain circumstances. An agent considers a proposition H and assigns a probability to it. However, she is not fully confident that she herself is reliable in this assignment. She then endorses H during deliberation with another person, expecting him to raise serious objections. To her surprise, however, the other person does not raise any objections to H. How should her attitudes toward H change? It seems plausible that she should increase the credence she assigns to H and, at the same time, increase the reliability she assigns to herself concerning H. A Bayesian model helps us to investigate under what conditions, if any, this is rational.
A Bayesian mind is, at its core, a rational mind. Bayesianism is thus well-suited to predict and explain mental processes that best exemplify our ability to be rational. However, evidence from belief acquisition and change appears to show that we do not acquire and update information in a Bayesian way. Instead, the principles of belief acquisition and updating seem grounded in maintaining a psychological immune system rather than in approximating a Bayesian processor.
Does postulating skeptical theism undermine the claim that evil strongly confirms atheism over theism? According to Perrine and Wykstra, it does undermine the claim, because evil is no more likely on atheism than on skeptical theism. According to Draper, it does not undermine the claim, because evil is much more likely on atheism than on theism in general. I show that the probability facts alone do not resolve their disagreement, which ultimately rests on which updating procedure – conditionalizing or updating on a conditional – fits both the evidence and how we ought to take that evidence into account.
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models. (shrink)
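The phenomenon itself can be reproduced in a few lines. Below is my own rendering of the oft-cited fair-coin case: H is a fair coin toss, E is an event of completely unknown probability judged stochastically independent of H, and A is defined as (H and E) or (not-H and not-E). Each member of the credal set fixes some value p = P(E) in [0, 1].

```python
def p_A(p_e):
    """Unconditional probability of A: 0.5*p + 0.5*(1-p) = 0.5 for every
    member of the credal set, so the prior estimate is the sharp point 1/2."""
    return 0.5 * p_e + 0.5 * (1 - p_e)

def p_A_given_H(p_e):
    """Conditional on H: since A∩H = E∩H and E is independent of H,
    P(A|H) = P(E) = p_e, which varies over the whole credal set."""
    return p_e

grid = [i / 100 for i in range(101)]                 # sweep P(E) over [0, 1]
unconditional = {round(p_A(p), 10) for p in grid}    # collapses to {0.5}
conditional = [p_A_given_H(p) for p in grid]         # spans [0, 1]
# Whatever the toss shows, the point-valued estimate P(A) = 1/2 dilates
# to the vacuous interval [0, 1]: more evidence, strictly less precision.
```

Conditioning on not-H dilates symmetrically (there P(A|not-H) = 1 - p), so the estimate becomes vacuous no matter how the experiment turns out, which is exactly the dilation pattern defined above.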
In this paper, I present the fundamental ideas of a new theory of justification strength. This theory is based on the epistemological approach to argumentation. Even a thesis supported by a valid justification can be false for various reasons. The theory outlined here identifies such possible errors. Justification strength is equated with the degree to which such possible errors are excluded. The natural expression of this kind of justification strength is the (rational) degree of certainty of the belief in the thesis.
The applicability of Bayesian conditionalization in setting one’s posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, P_PRI(α|∧E), is available, where ∧E represents one’s complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one’s personal probability in a proposition by conditioning on a proposition that represents a proper subset of one’s complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, P_PRI(α|∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities.
Barnett provides an interesting new challenge for Dogmatist accounts of perceptual justification. The challenge is that such accounts, by accepting that a perceptual experience can provide a distinctive kind of boost to one’s credences, would lead to a form of diachronic irrationality in cases where one has already learnt in advance that one will have such an experience. I show that this challenge rests on a misleading feature of using the 0–1 interval to express probabilities and show that if we switch to using Odds or Log-Odds, the misleading appearance that there is only ‘a little room’ for one’s credences to increase evaporates. Moreover, there are familiar, independent reasons for taking the Log-Odds scale to provide a clearer picture of the confirmatory effect of evidence. Thus the Dogmatist can after all escape the charge of diachronic irrationality. (shrink)
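The scale effect is easy to illustrate with my own numbers (not Barnett's case): a shift that looks cramped near the top of the 0-1 interval is a large move in odds and log-odds.

```python
import math

def odds(p):
    """Probability to odds."""
    return p / (1 - p)

def log_odds(p):
    """Probability to log-odds."""
    return math.log(odds(p))

# On the 0-1 scale, moving from 0.90 to 0.99 looks like "little room",
# but in odds it is a jump from 9:1 to 99:1: a Bayes factor of 11, the
# same evidential boost that takes an even-money 0.5 to 11/12.
bayes_factor = odds(0.99) / odds(0.90)     # 11
lifted_odds = bayes_factor * odds(0.5)     # 11:1
p_lifted = lifted_odds / (1 + lifted_odds) # 11/12, about 0.917

# Equal log-odds increments correspond to equal Bayes factors:
delta = log_odds(0.99) - log_odds(0.90)    # log(11)
```

On the log-odds scale the interval (0, 1) is stretched to the whole real line, so there is always unbounded "room" for confirmation, which is the point of the reply to the challenge.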
How should we update our beliefs when we learn new evidence? Bayesian confirmation theory provides a widely accepted and well understood answer – we should conditionalize. But this theory has a problem with self-locating beliefs, beliefs that tell you where you are in the world, as opposed to what the world is like. To see the problem, consider your current belief that it is January. You might be absolutely, 100%, sure that it is January. But you will soon believe it is February. This type of belief change cannot be modelled by conditionalization. We need some new principles of belief change for this kind of case, which I call belief mutation. In part 1, I defend the Relevance-Limiting Thesis, which says that a change in a purely self-locating belief of the kind that results in belief mutation should not shift your degree of belief in a non-self-locating belief, which can only change by conditionalization. My method is to give detailed analyses of the puzzles which threaten this thesis: Duplication, Sleeping Beauty, and The Prisoner. This also requires giving my own theory of observation selection effects. In part 2, I argue that when self-locating evidence is learnt from a position of uncertainty, it should be conditionalized on in the normal way. I defend this position by applying it to various cases where such evidence is found. I defend the Halfer position in Sleeping Beauty, and I defend the Doomsday Argument and the Fine-Tuning Argument.
Crupi et al. propose a generalization of Bayesian confirmation theory that they claim to adequately deal with confirmation by uncertain evidence. Consider a series of points of time t0, …, ti, …, tn such that the agent’s subjective probability for an atomic proposition E changes from Pr0 at t0 to … to Pri at ti to … to Prn at tn. It is understood that the agent’s subjective probabilities change for E and no logically stronger proposition, and that the agent updates her subjective probabilities by Jeffrey conditionalization. For this specific scenario the authors propose to take the difference between Pr0 and Pri as the degree to which E confirms H for the agent at time ti, C0,i. This proposal is claimed to be adequate, because…
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
The thesis that agents should calibrate their beliefs in the face of higher-order evidence—i.e., should adjust their first-order beliefs in response to evidence suggesting that the reasoning underlying those beliefs is faulty—is sometimes thought to be in tension with Bayesian approaches to belief update: in order to obey Bayesian norms, it's claimed, agents must remain steadfast in the face of higher-order evidence. But I argue that this claim is incorrect. In particular, I motivate a minimal constraint on a reasonable treatment of the evolution of self-locating beliefs over time and show that calibrationism is compatible with any generalized Bayesian approach that respects this constraint. I then use this result to argue that remaining steadfast isn't the response to higher-order evidence that maximizes expected accuracy.
Predictable polarization is everywhere: we can often predict how people's opinions—including our own—will shift over time. Extant theories either neglect the fact that we can predict our own polarization, or explain it through irrational mechanisms. We needn't. Empirical studies suggest that polarization is predictable when evidence is ambiguous, i.e. when the rational response is not obvious. I show how Bayesians should model such ambiguity, and then prove that—assuming rational updates are those which obey the value of evidence (Blackwell 1953; Good 1967)—ambiguity is necessary and sufficient for the rationality of predictable polarization. The main theoretical result is that there can be a series of such updates, each of which is individually expected to make you more accurate, but which together will predictably polarize you. Polarization results from asymmetric increases in accuracy. This mechanism is not only theoretically possible, but empirically plausible. I argue that cognitive search—searching a cognitively-accessible space for a particular item—often yields asymmetrically ambiguous evidence; I present an experiment supporting its polarizing effects; and I use simulations to show how it can help explain two of the core causes of polarization: confirmation bias and the group polarization effect.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
We often learn the credences of others without getting to hear the evidence on which they're based. And, in these cases, it is often infeasible or overly onerous to update on this social evidence by conditionalizing on it. How, then, should we respond to it? We consider four methods for aggregating your credences with the credences of others: arithmetic, geometric, multiplicative, and harmonic pooling. Each performs well for some purposes and poorly for others. We describe these in Sections 1-4. In Section 5, we explore three specific applications of our general results: How should we understand cases in which each individual raises their credences in response to learning the credences of the others (Section 5.1)? How do the updating rules used by individuals affect the epistemic performance of the group as a whole (Section 5.2)? How does a population that obeys the Uniqueness Thesis perform compared to one that doesn't (Section 5.3)?
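The four pooling rules can be sketched directly. A minimal equal-weight Python illustration for a single proposition; the renormalization over {p, 1 − p} used for the geometric and multiplicative rules follows one common convention and is an assumption here, not necessarily the paper's exact definitions:

```python
from math import prod

def arithmetic_pool(ps):
    """Straight average of the individual credences."""
    return sum(ps) / len(ps)

def geometric_pool(ps):
    """Equal-weight geometric mean, renormalized over {p, 1 - p}."""
    n = len(ps)
    yes = prod(p ** (1 / n) for p in ps)
    no = prod((1 - p) ** (1 / n) for p in ps)
    return yes / (yes + no)

def multiplicative_pool(ps):
    """Straight product, renormalized over {p, 1 - p}."""
    yes = prod(ps)
    no = prod(1 - p for p in ps)
    return yes / (yes + no)

def harmonic_pool(ps):
    """Harmonic mean of the individual credences."""
    return len(ps) / sum(1 / p for p in ps)

credences = [0.6, 0.8]
# Multiplicative pooling is the most extreme of the four on this input:
# it treats the two opinions as independent pieces of evidence for p.
```

The contrast the abstract gestures at shows up already in this toy case: arithmetic pooling stays at the average, while multiplicative pooling pushes the group credence above every individual credence, which is the kind of behavior relevant to Section 5.1's mutual-raising cases.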
While recent discussions of contextualism have mostly focused on other issues, some influential early statements of the view emphasized the possibility of its providing an alternative to both coherentism and traditional versions of foundationalism. In this essay, I will pick up on this strand of contextualist thought, and argue that contextualist versions of foundationalism promise to solve some problems that their non-contextualist cousins cannot. In particular, I will argue that adopting contextualist versions of foundationalism can let us reconcile Bayesian accounts of belief updating with a version of the holist claim that all beliefs are defeasible.
Communication facilitates coordination, but coordination might fail if there's too much uncertainty. I discuss a scenario in which vagueness-driven uncertainty undermines the possibility of publicly sharing a belief. I then show that asserting an epistemic modal sentence, 'Might p', can reveal the speaker's uncertainty, and that this may improve the chances of coordination despite the lack of a common epistemic ground. This provides a game-theoretic rationale for epistemic modality. The account draws on a standard relational semantics for epistemic modality, Stalnaker's theory of assertion as informative update, and a Bayesian framework for reasoning under uncertainty.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
Confirmational holism is at odds with Jeffrey conditioning, the orthodox Bayesian policy for accommodating uncertain learning experiences. Two of the great insights of holist epistemology are that the effects of experience ought to be mediated by one's background beliefs, and that the support provided by one's learning experience can be, and often is, undercut by subsequent learning. Jeffrey conditioning fails to vindicate either of these insights. My aim is to describe and defend a new updating policy that does better. In addition to showing that this new policy is more holism-friendly than Jeffrey conditioning, I will also show that it has an accuracy-centered justification.
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian "convergence to the truth" for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
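The classical strict-conditioning case that such results generalize is easy to simulate: two agents with quite different priors over the same hypotheses, fed the same evidence stream, end up nearly agreeing. The hypotheses, priors, and likelihoods below are invented for illustration, and this sketch shows only the classical phenomenon, not the paper's Jeffrey-conditioning generalization:

```python
import random

def bayes_update(prior, lik, datum):
    """One step of strict conditionalization on a binary datum.
    prior: dict hypothesis -> credence; lik: hypothesis -> P(datum=1 | h)."""
    post = {h: prior[h] * (lik[h] if datum else 1 - lik[h]) for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

lik = {"H1": 0.8, "H2": 0.3}      # hypothetical likelihoods
alice = {"H1": 0.9, "H2": 0.1}    # sharply divergent priors
bob = {"H1": 0.2, "H2": 0.8}

random.seed(0)
for _ in range(200):              # shared evidence stream; H1 is true
    datum = 1 if random.random() < 0.8 else 0
    alice = bayes_update(alice, lik, datum)
    bob = bayes_update(bob, lik, datum)

# After the shared stream, the two posteriors nearly coincide on H1.
```

The initial 0.9-versus-0.2 disagreement is swamped because the accumulated likelihood ratio grows exponentially in the number of shared observations, which is the formal core of the "merely transient subjectivity" line.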
One is occasionally reminded of Foucault's proclamation in a 1970 interview that "perhaps, one day this century will be known as Deleuzian." Less often is one compelled to update and restart with a supplementary counter-proclamation of the mathematician, David Lindley: "the twenty-first century would be a Bayesian era..." The verb tenses of both are conspicuous. // To critically attend to what is today often feared and demonized, but also revered, deployed, and commonly referred to as algorithm(s), one cannot avoid the mathematical and philosophical legacies of probability. // But attending to these probabilistic or Bayesian legacies must include an undeniable theological legacy in which they remain entangled. // We are not, today, discovering quirky theological metaphors in contemporary technics. It's the other way around. The technologies are mere metaphors of past theologies.
Our main aim in this paper is to discuss and criticise the core thesis of a position that has become known as phenomenal conservatism. According to this thesis, its seeming to one that p provides enough justification for a belief in p to be prima facie justified (a thesis we label Standard Phenomenal Conservatism). This thesis captures the special kind of epistemic import that seemings are claimed to have. To get clearer on this thesis, we embed it, first, in a probabilistic framework in which updating on new evidence happens by Bayesian conditionalization, and, second, in a framework in which updating happens by Jeffrey conditionalization. We spell out problems for both views, and then generalize some of these to non-probabilistic frameworks. The main theme of our discussion is that the epistemic import of a seeming (or experience) should depend on its content in a plethora of ways that phenomenal conservatism is insensitive to.
We discuss a well-known puzzle about the lexicalization of logical operators in natural language, in particular connectives and quantifiers. Of the many logically possible operators, only few appear in the lexicon of natural languages: the connectives in English, for example, are conjunction and, disjunction or, and negated disjunction nor; the lexical quantifiers are all, some and no. The logically possible nand and nall are not expressed by lexical entries in English, nor in any natural language. Moreover, the lexicalized operators are all upward or downward monotone, an observation known as the Monotonicity Universal. We propose a logical explanation of lexical gaps and of the Monotonicity Universal, based on the dynamic behaviour of connectives and quantifiers. We define update potentials for logical operators as procedures to modify the context, under the assumption that an update with a given sentence depends on that sentence's logical form and on the speech act performed: assertion or rejection. We conjecture that the adequacy of update potentials determines the limits of lexicalizability for logical operators in natural language. Finally, we show that on this framework the Monotonicity Universal follows from the logical properties of the updates that correspond to each operator.
Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of *evidence*—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected-accuracy-based justification for planning to *conditionalize* on this evidence. This planning-oriented justification extends to some cases where you do not have transparent evidence, in the sense that your beliefs are not perfectly discriminating with respect to any non-trivial facts. In other cases, accuracy considerations do not tell you to plan to conditionalize on any information at all, but rather to plan to follow a different updating rule. Even in the absence of evidence, accuracy considerations can guide your doxastic plan.
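The expected-accuracy justification for planning to conditionalize can be checked numerically in the transparent-evidence case. A Python sketch with an invented four-world model and the Brier score (the worlds, prior, and partition are assumptions for illustration): the plan "conditionalize on the cell you learn" has strictly lower expected inaccuracy than the steadfast plan "keep the prior".

```python
PRIOR = [0.3, 0.2, 0.4, 0.1]     # credences over worlds w1..w4 (invented)
CELLS = [[0, 1], [2, 3]]         # evidence partition: {w1, w2} and {w3, w4}

def brier(credence, world):
    """Brier inaccuracy of a credence function at a world."""
    return sum((c - (i == world)) ** 2 for i, c in enumerate(credence))

def conditionalize(prior, cell):
    """Strict conditionalization of the prior on a partition cell."""
    z = sum(prior[i] for i in cell)
    return [prior[i] / z if i in cell else 0.0 for i in range(len(prior))]

def expected_inaccuracy(plan):
    """Expected Brier score, by the prior, of a plan mapping cells to credences."""
    total = 0.0
    for cell in CELLS:
        for w in cell:
            total += PRIOR[w] * brier(plan(cell), w)
    return total

cond = expected_inaccuracy(lambda cell: conditionalize(PRIOR, cell))
stay = expected_inaccuracy(lambda cell: PRIOR)
# cond < stay: by the prior's own lights, planning to conditionalize
# is expected to be more accurate than remaining steadfast.
```

This is only the classical case where the evidence partition is perfectly discriminating; the abstract's point is that in non-transparent cases the accuracy-optimal plan can be a different rule altogether.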
Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning to pain that inherently accommodate uncertainty and action, which, we shall propose, are essential to understanding learning in both acute and persistent cases of pain.