Citations of:


Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the (...) 
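The modeling idea can be made concrete with a toy calculation (the numbers below are made up for illustration and are not from the paper): Shannon entropy quantifies uncertainty over disease hypotheses, and a candidate test can be scored by its expected entropy reduction under Bayes' rule.

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Hypothetical prior over three candidate diseases.
prior = {"flu": 0.5, "cold": 0.3, "covid": 0.2}

# Hypothetical test likelihoods: P(positive result | disease).
likelihood = {"flu": 0.9, "cold": 0.1, "covid": 0.8}

def posterior(prior, like, positive):
    """Posterior over diseases given the test outcome, plus the outcome's probability."""
    joint = {d: prior[d] * (like[d] if positive else 1 - like[d]) for d in prior}
    z = sum(joint.values())
    return {d: p / z for d, p in joint.items()}, z

pos, p_pos = posterior(prior, likelihood, True)
neg, p_neg = posterior(prior, likelihood, False)

# Expected entropy reduction (information gain) from running the test:
# positive for an informative test, bounded above by the prior entropy.
gain = entropy(prior) - (p_pos * entropy(pos) + p_neg * entropy(neg))
print(round(gain, 3))
```

On this scoring rule, the best test is the one whose expected posterior entropy is lowest; alternative uncertainty measures discussed in this literature would rank tests differently.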

For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No. 

Bayesians often appeal to “merging of opinions” to rebut charges of excessive subjectivity. But what happens in the short run is often of greater interest than what happens in the limit. Seidenfeld and coauthors use this observation as motivation for investigating the counterintuitive short-run phenomenon of dilation, since, they allege, dilation is “the opposite” of asymptotic merging of opinions. The measure of uncertainty relevant for dilation, however, is not the one relevant for merging of opinions. We explicitly investigate the (...) 
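Dilation itself is easy to exhibit with a toy credal set (a standard textbook-style construction, not drawn from the paper above): every member of the set assigns B probability exactly 1/2, yet conditioning on a second event A spreads the probability of B over the whole unit interval.

```python
from fractions import Fraction

# Toy credal set exhibiting dilation. Each member P_p of the set has
# P(A) = 1/2, P(B | A) = p, and P(B | not-A) = 1 - p, for p on a grid.
ps = [Fraction(i, 10) for i in range(11)]

# Total probability: P(B) = 1/2 * p + 1/2 * (1 - p) = 1/2 for every member.
unconditional_B = [Fraction(1, 2) * p + Fraction(1, 2) * (1 - p) for p in ps]

# Bayes: P(B | A) = P(A and B) / P(A) = p, which varies over the whole grid.
conditional_B_given_A = list(ps)

print(set(unconditional_B))                                    # {Fraction(1, 2)}
print(min(conditional_B_given_A), max(conditional_B_given_A))  # 0 1
```

Before learning A, the credal set is precise about B; after learning A (or its negation), the interval for B dilates to [0, 1].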

We explore which types of probabilistic updating commute with convex IP pooling. Positive results are stated for Bayesian conditionalization, imaging, and a certain parameterization of Jeffrey conditioning. This last observation is obtained with the help of a slight generalization of a characterization of externally Bayesian pooling operators due to Wagner (2009, pp. 336–345). These results strengthen the case that pooling should go by imprecise probabilities since no precise pooling method is as versatile. 

Direct inferences identify certain probabilistic credences or confirmation-function likelihoods with values of objective chances or relative frequencies. The best-known version of a direct inference principle is David Lewis’s Principal Principle. Certain kinds of statements undermine direct inferences. Lewis calls such statements inadmissible. We show that on any Bayesian account of direct inference several kinds of intuitively innocent statements turn out to be inadmissible. This may pose a significant challenge to Bayesian accounts of direct inference. We suggest some ways in which (...) 



This paper is about a tension between two theses. The first is Value of Evidence: roughly, the thesis that it is always rational for an agent to gather and use cost-free evidence for making decisions. The second is Rationality of Imprecision: the thesis that an agent can be rationally required to adopt doxastic states that are imprecise, i.e., not representable by a single credence function. While others have noticed this tension, I offer a new diagnosis of it. I show that (...) 

The theory of lower previsions is designed around the principles of coherence and sure-loss avoidance, and thus steers clear of all the updating anomalies highlighted in Gong and Meng's "Judicious Judgment Meets Unsettling Updating: Dilation, Sure Loss, and Simpson's Paradox" except dilation. In fact, the traditional problem with the theory of imprecise probability is that coherent inference is too complicated rather than unsettling. Progress has been made in simplifying coherent inference by demoting sets of probabilities from fundamental building blocks to secondary representations (...) 

We provide counterexamples to some purported characterizations of dilation due to Pedersen and Wheeler (2014, pp. 1305–1342; ISIPTA ’15: Proceedings of the 9th International Symposium on Imprecise Probability: Theories and Applications, 2015). 

A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence, a long-run success condition, induces dynamic coherence in the short run. 

Rational credence should be coherent in the sense that your attitudes should not leave you open to a sure loss. Rational credence should be such that you can learn when confronted with relevant evidence. Rational credence should not be sensitive to irrelevant differences in the presentation of the epistemic situation. We explore the extent to which orthodox probabilistic approaches to rational credence can satisfy these three desiderata and find them wanting. We demonstrate that an imprecise probability approach does better. Along (...) 

Imprecise probabilities are an increasingly popular way of reasoning about rational credence. However, they appear to fail to display convincing inductive learning. This paper demonstrates that a small modification to the update rule for IP allows us to overcome this problem, albeit at the cost of satisfying only a weaker concept of coherence. 

Two compelling principles, the Reasonable Range Principle and the Preservation of Irrelevant Evidence Principle, are necessary conditions that any response to peer disagreements ought to abide by. The Reasonable Range Principle maintains that a resolution to a peer disagreement should not fall outside the range of views expressed by the peers in their dispute, whereas the Preservation of Irrelevant Evidence Principle maintains that a resolution strategy should be able to preserve unanimous judgments of evidential irrelevance among the peers. No standard (...) 

We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of nonmeasurable sets. Nonmeasurable propositions cannot receive precise probabilities, but there is a natural way (...) 

This paper considers a puzzling conflict between two positions that are each compelling: it is irrational for an agent to pay to avoid 'free' evidence before making a decision, and rational agents may have imprecise beliefs and/or desires. Indeed, we show that Good's theorem concerning the invariable choiceworthiness of free evidence does not generalise to the imprecise realm, given the plausible existing decision theories for handling imprecision. A key ingredient in the analysis, and a potential source of controversy, is the (...) 

According to the Imprecise Credence Framework (ICF), a rational believer's doxastic state should be modelled by a set of probability functions rather than a single probability function, namely, the set of probability functions allowed by the evidence (Joyce [2005]). Roger White ([2010]) has recently given an arresting argument against the ICF, which has garnered a number of responses. In this article, I attempt to cast doubt on his argument. First, I point out that it's not an (...) 

The purpose of this paper is to show that if one adopts conditional probabilities as the primitive concept of probability, one must deal with the fact that even in very ordinary circumstances at least some probability values may be imprecise, and that some probability questions may fail to have numerically precise answers. 

The principle of indifference has fallen from grace in contemporary philosophy, yet some papers have recently sought to vindicate its plausibility. This paper follows suit. In it, I articulate a version of the principle and provide what appears to be a novel argument in favour of it. The argument relies on a thought experiment where, intuitively, an agent’s confidence in any particular outcome being true should decrease with the addition of outcomes to the relevant space of possible outcomes. Put simply: (...) 

It is often suggested that when opinions differ among individuals in a group, the opinions should be aggregated to form a compromise. This paper compares two approaches to aggregating opinions, linear pooling and what I call opinion agglomeration. In evaluating both strategies, I propose a pragmatic criterion, No Regrets, entailing that an aggregation strategy should prevent groups from buying and selling bets on events at prices regretted by their members. I show that only opinion agglomeration is able to satisfy the (...) 
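Linear pooling itself is simple to state: the group credence in each event is a weighted average of the members' credences. The sketch below (with hypothetical agents and weights) shows that the pooled function stays between the members' opinions and is still a probability function; it does not attempt to model the opinion-agglomeration alternative or the No Regrets criterion.

```python
# Linear pooling sketch: group credence as a weighted average of member credences.
def linear_pool(credences, weights):
    """credences: list of dicts over the same events; weights: nonnegative, summing to 1."""
    events = credences[0].keys()
    return {e: sum(w * c[e] for w, c in zip(weights, credences)) for e in events}

# Hypothetical members of the group.
alice = {"rain": 0.2, "no_rain": 0.8}
bob   = {"rain": 0.6, "no_rain": 0.4}

group = linear_pool([alice, bob], [0.5, 0.5])
# The pooled credence in "rain" lies midway between Alice's and Bob's,
# and the pooled credences still sum to 1.
print(group["rain"])
print(sum(group.values()))
```

With unequal weights the pooled value shifts toward the more heavily weighted member, but it can never leave the interval spanned by the members' credences.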

In this paper I focus on a recently discussed phenomenon illustrated by sentences containing predicates of taste: the phenomenon of "perspectival plurality", whereby sentences containing two or more predicates of taste have readings according to which each predicate pertains to a different perspective. This phenomenon has been shown to be problematic for (at least certain versions of) relativism. My main aim is to further the discussion by showing that the phenomenon extends to other perspectival expressions than predicates (...) 

There is a growing interest in the foundations as well as the application of imprecise probability in contemporary epistemology. This dissertation is concerned with the application. In particular, the research presented concerns ways in which imprecise probability, i.e. sets of probability measures, may helpfully address certain philosophical problems pertaining to rational belief. The issues I consider are disagreement among epistemic peers, complete ignorance, and inductive reasoning with imprecise priors. For each of these topics, it is assumed that belief can be (...) 

The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution (pp. 410–414 [45]; Bordley, Management Science, 28:1137–1148 [5]; Genest et al., The Annals of Statistics, 487–501 [21]; Genest and Zidek, Statistical Science, 114–135 [23]; Mongin, Journal of Economic Theory, 66:313–351 [46]; Clemen and (...) 

In this paper I offer an alternative, the ‘dispositional account’, to the standard account of imprecise probabilism. Whereas for the imprecise probabilist, an agent’s credal state is modelled by a set of credence functions, on the dispositional account an agent’s credal state is modelled by a set of sets of credence functions. On the face of it, the dispositional account looks less elegant than the standard account, so why should we be interested? I argue that the dispositional (...) 

A widely shared view in the cognitive sciences is that discovering and assessing explanations of cognitive phenomena whose production involves uncertainty should be done in a Bayesian framework. One assumption supporting this modelling choice is that Bayes provides the best approach for representing uncertainty. However, it is unclear that Bayes possesses special epistemic virtues over alternative modelling frameworks, since a systematic comparison has yet to be attempted. Currently, then, it is premature to assert that cognitive phenomena involving uncertainty are best (...) 




Does rationality require imprecise credences? Many hold that it does: imprecise evidence requires correspondingly imprecise credences. I argue that this is false. The imprecise view faces the same arbitrariness worries that were meant to motivate it in the first place. It faces these worries because it incorporates a certain idealization. But doing away with this idealization effectively collapses the imprecise view into a particular kind of precise view. On this alternative, our attitudes should reflect a kind of normative uncertainty: uncertainty (...) 

It is natural to think of precise probabilities as being special cases of imprecise probabilities, the special case being when one’s lower and upper probabilities are equal. I argue, however, that it is better to think of the two models as representing two different aspects of our credences, which are often vague to some degree. I show that by combining the two models into one model, and understanding that model as a model of vague credence, a natural interpretation arises that (...) 

