

Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the (...) 
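
The entropy-based picture of test selection sketched above can be made concrete. The following is a minimal sketch, with hypothetical numbers (the priors and likelihoods are invented for illustration): a diagnostic test is scored by its expected reduction in Shannon entropy over the candidate diseases.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical numbers: prior over three candidate diseases, and the
# likelihood of a positive test result under each disease.
prior = [0.5, 0.3, 0.2]
p_pos_given_disease = [0.9, 0.2, 0.1]

# Marginal probability of a positive result.
p_pos = sum(p * l for p, l in zip(prior, p_pos_given_disease))

# Posteriors after a positive and after a negative result (Bayes' rule).
post_pos = [p * l / p_pos for p, l in zip(prior, p_pos_given_disease)]
post_neg = [p * (1 - l) / (1 - p_pos) for p, l in zip(prior, p_pos_given_disease)]

# The test's expected information gain is the prior entropy minus the
# expected posterior entropy; a good test maximizes this quantity.
expected_entropy = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
gain = entropy(prior) - expected_entropy
print(round(gain, 3))
```

On this model, choosing among several available tests means computing `gain` for each and picking the largest.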

Two compelling principles, the Reasonable Range Principle and the Preservation of Irrelevant Evidence Principle, are necessary conditions that any response to peer disagreements ought to abide by. The Reasonable Range Principle maintains that a resolution to a peer disagreement should not fall outside the range of views expressed by the peers in their dispute, whereas the Preservation of Irrelevant Evidence Principle maintains that a resolution strategy should be able to preserve unanimous judgments of evidential irrelevance among the peers. No standard (...) 

The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution (410–414 [45]; Bordley, Management Science, 28, 1137–1148 [5]; Genest et al., The Annals of Statistics, 487–501 [21]; Genest and Zidek, Statistical Science, 114–135 [23]; Mongin, Journal of Economic Theory, 66, 313–351 [46]; Clemen and (...) 

The purpose of this paper is to show that if one adopts conditional probabilities as the primitive concept of probability, one must deal with the fact that even in very ordinary circumstances at least some probability values may be imprecise, and that some probability questions may fail to have numerically precise answers. 

The basic Bayesian model of credence states, where each individual’s belief state is represented by a single probability measure, has been criticized as psychologically implausible, unable to represent the intuitive distinction between precise and imprecise probabilities, and normatively unjustifiable due to a need to adopt arbitrary, unmotivated priors. These arguments are often used to motivate a model on which imprecise credal states are represented by sets of probability measures. I connect this debate with recent work in Bayesian cognitive science, where (...) 

In this paper I offer an alternative, the ‘dispositional account’, to the standard account of imprecise probabilism. Whereas for the imprecise probabilist, an agent’s credal state is modelled by a set of credence functions, on the dispositional account an agent’s credal state is modelled by a set of sets of credence functions. On the face of it, the dispositional account looks less elegant than the standard account – so why should we be interested? I argue that the dispositional (...) 

Direct inferences identify certain probabilistic credences or confirmation-function likelihoods with values of objective chances or relative frequencies. The best known version of a direct inference principle is David Lewis’s Principal Principle. Certain kinds of statements undermine direct inferences. Lewis calls such statements inadmissible. We show that on any Bayesian account of direct inference several kinds of intuitively innocent statements turn out to be inadmissible. This may pose a significant challenge to Bayesian accounts of direct inference. We suggest some ways in which (...) 

It is often suggested that when opinions differ among individuals in a group, the opinions should be aggregated to form a compromise. This paper compares two approaches to aggregating opinions, linear pooling and what I call opinion agglomeration. In evaluating both strategies, I propose a pragmatic criterion, No Regrets, entailing that an aggregation strategy should prevent groups from buying and selling bets on events at prices regretted by their members. I show that only opinion agglomeration is able to satisfy the (...) 
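
Linear pooling, one of the two aggregation strategies compared above, can be sketched in a few lines. This is an illustrative example with invented agents and numbers, not the paper's own formalism: the group credence in each event is a weighted average of the members' credences.

```python
def linear_pool(credences, weights):
    """Linear opinion pooling: the group credence in each event is the
    weighted average of the individual agents' credences in that event."""
    return {
        event: sum(w * cred[event] for w, cred in zip(weights, credences))
        for event in credences[0]
    }

# Hypothetical example: two peers disagree about rain, pooled with
# equal weights.
alice = {"rain": 0.8, "no rain": 0.2}
bob = {"rain": 0.4, "no rain": 0.6}
pooled = linear_pool([alice, bob], weights=[0.5, 0.5])
print(pooled["rain"])  # 0.6
```

Because each pooled value is a convex combination of coherent credences, the pooled function is itself a probability function over the same events.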

A prominent pillar of Bayesian philosophy is that, relative to just a few constraints, priors “wash out” in the limit. Bayesians often appeal to such asymptotic results as a defense against charges of excessive subjectivity. But, as Seidenfeld and coauthors observe, what happens in the short run is often of greater interest than what happens in the limit. They use this point as one motivation for investigating the counterintuitive short run phenomenon of dilation since, it is alleged, “dilation contrasts with (...) 
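
Dilation, the short-run phenomenon mentioned above, admits a compact numerical illustration. The setup below is the standard textbook example (not drawn from this particular paper): a credal set in which every member assigns the same precise probability to an event X, yet conditioning on either outcome of a fair coin toss widens the set of posteriors to the whole unit interval.

```python
from fractions import Fraction

def credal_member(a):
    """Joint distribution over (H, X) with P(H) = P(X) = 1/2 and
    P(H & X) = a, for a in [0, 1/2]. H is heads on a fair coin; the
    correlation between H and X is left completely open."""
    half = Fraction(1, 2)
    return {(1, 1): a, (1, 0): half - a, (0, 1): half - a, (0, 0): a}

# A grid of members spanning the admissible values of a.
members = [credal_member(Fraction(a, 100)) for a in range(0, 51)]

def cond_x_given_h(p):
    """P(X | H) under a single joint distribution p."""
    return p[(1, 1)] / (p[(1, 1)] + p[(1, 0)])

# Unconditionally, every member agrees: P(X) = 1/2, a precise value.
unconditional = {p[(1, 1)] + p[(0, 1)] for p in members}

# After learning H, the set of posteriors dilates to span [0, 1].
posteriors = [cond_x_given_h(p) for p in members]
print(min(posteriors), max(posteriors))  # 0 1
```

The same dilation occurs conditional on not-H, which is what makes the phenomenon counterintuitive: whatever the coin shows, the agent's opinion about X becomes maximally imprecise.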

The theory of lower previsions is designed around the principles of coherence and sure-loss avoidance, and thus steers clear of all the updating anomalies highlighted in Gong and Meng's "Judicious Judgment Meets Unsettling Updating: Dilation, Sure Loss, and Simpson's Paradox" except dilation. In fact, the traditional problem with the theory of imprecise probability is that coherent inference is too complicated rather than unsettling. Progress has been made in simplifying coherent inference by demoting sets of probabilities from fundamental building blocks to secondary representations (...) 

For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No. 

According to the Imprecise Credence Framework (ICF), a rational believer's doxastic state should be modelled by a set of probability functions rather than a single probability function, namely, the set of probability functions allowed by the evidence (Joyce [2005]). Roger White ([2010]) has recently given an arresting argument against the ICF, which has garnered a number of responses. In this article, I attempt to cast doubt on his argument. First, I point out that it's not an (...) 

A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence—a long-run success condition—induces dynamic coherence in the short run. 

We explore which types of probabilistic updating commute with convex IP pooling. Positive results are stated for Bayesian conditionalization, imaging, and a certain parameterization of Jeffrey conditioning. This last observation is obtained with the help of a slight generalization of a characterization of externally Bayesian pooling operators due to Wagner (2009, pp. 336–345). These results strengthen the case that pooling should go by imprecise probabilities since no precise pooling method is as versatile. 

There is a growing interest in the foundations as well as the application of imprecise probability in contemporary epistemology. This dissertation is concerned with the application. In particular, the research presented concerns ways in which imprecise probability, i.e. sets of probability measures, may helpfully address certain philosophical problems pertaining to rational belief. The issues I consider are disagreement among epistemic peers, complete ignorance, and inductive reasoning with imprecise priors. For each of these topics, it is assumed that belief can be (...) 

The principle of indifference has fallen from grace in contemporary philosophy, yet some papers have recently sought to vindicate its plausibility. This paper follows suit. In it, I articulate a version of the principle and provide what appears to be a novel argument in favour of it. The argument relies on a thought experiment where, intuitively, an agent’s confidence in any particular outcome being true should decrease with the addition of outcomes to the relevant space of possible outcomes. Put simply: (...) 
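
The intuition the thought experiment trades on can be stated numerically. Under the version of the principle that assigns each of n mutually exclusive, exhaustive outcomes an equal credence of 1/n, enlarging the outcome space strictly lowers the credence in any particular outcome; this minimal sketch (not the paper's own formulation) just checks that monotonicity.

```python
def indifferent_credence(n_outcomes):
    """Credence the principle of indifference assigns to each of
    n mutually exclusive, exhaustive outcomes."""
    return 1 / n_outcomes

# As outcomes are added to the space, the credence in any particular
# outcome strictly decreases: 1/2, 1/3, 1/4, ...
creds = [indifferent_credence(n) for n in range(2, 7)]
print(creds)
```
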

Does rationality require imprecise credences? Many hold that it does: imprecise evidence requires correspondingly imprecise credences. I argue that this is false. The imprecise view faces the same arbitrariness worries that were meant to motivate it in the first place. It faces these worries because it incorporates a certain idealization. But doing away with this idealization effectively collapses the imprecise view into a particular kind of precise view. On this alternative, our attitudes should reflect a kind of normative uncertainty: uncertainty (...) 

In this paper I focus on a recently discussed phenomenon illustrated by sentences containing predicates of taste: the phenomenon of "perspectival plurality", whereby sentences containing two or more predicates of taste have readings according to which each predicate pertains to a different perspective. This phenomenon has been shown to be problematic for (at least certain versions of) relativism. My main aim is to further the discussion by showing that the phenomenon extends to other perspectival expressions than predicates (...) 

It is natural to think of precise probabilities as being special cases of imprecise probabilities, the special case being when one’s lower and upper probabilities are equal. I argue, however, that it is better to think of the two models as representing two different aspects of our credences, which are often vague to some degree. I show that by combining the two models into one model, and understanding that model as a model of vague credence, a natural interpretation arises that (...) 
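
The "special case" picture the abstract starts from is easy to exhibit. In this illustrative sketch (with invented events and numbers), a credal state is a set of probability functions, lower and upper probabilities are the minimum and maximum over that set, and a precise credence is the case where the two coincide.

```python
def lower(credal_set, event):
    """Lower probability: the infimum of P(event) over the credal set."""
    return min(p[event] for p in credal_set)

def upper(credal_set, event):
    """Upper probability: the supremum of P(event) over the credal set."""
    return max(p[event] for p in credal_set)

# Hypothetical credal states over a single event.
imprecise = [{"rain": 0.3}, {"rain": 0.5}, {"rain": 0.7}]
precise = [{"rain": 0.5}]

print(lower(imprecise, "rain"), upper(imprecise, "rain"))  # 0.3 0.7
# Precise credence as the special case where lower equals upper:
print(lower(precise, "rain") == upper(precise, "rain"))  # True
```
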

Pedersen and Wheeler (2014) and Pedersen and Wheeler (2015) offer a wide-ranging and in-depth exploration of the phenomenon of dilation. We find that these studies raise many interesting and important points. However, purportedly general characterizations of dilation are reported in them that, unfortunately, admit counterexamples. The purpose of this note is to show in some detail that these characterization results are false. 