Citations of:


Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the (...) 
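The entropy-based account of test selection described above can be illustrated with a short sketch. All probabilities below are invented for illustration: a hypothetical prior over three candidate diseases and hypothetical likelihoods P(positive | disease) for a single diagnostic test; the test's value is its expected reduction in Shannon entropy.

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

# Hypothetical prior over candidate diseases (illustrative numbers).
prior = {"flu": 0.5, "cold": 0.3, "covid": 0.2}
# Hypothetical test likelihoods: P(positive | disease).
likelihood = {"flu": 0.9, "cold": 0.1, "covid": 0.8}

p_pos = sum(prior[d] * likelihood[d] for d in prior)
post_pos = {d: prior[d] * likelihood[d] / p_pos for d in prior}
post_neg = {d: prior[d] * (1 - likelihood[d]) / (1 - p_pos) for d in prior}

# Expected posterior entropy, weighted by the probability of each outcome.
expected_posterior_entropy = (
    p_pos * entropy(post_pos.values())
    + (1 - p_pos) * entropy(post_neg.values())
)
# Expected information gain of performing the test, in bits.
information_gain = entropy(prior.values()) - expected_posterior_entropy
```

On this model, a clinician choosing between tests would prefer the one with the larger expected information gain.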

Barnett provides an interesting new challenge for Dogmatist accounts of perceptual justification. The challenge is that such accounts, by accepting that a perceptual experience can provide a distinctive kind of boost to one’s credences, would lead to a form of diachronic irrationality in cases where one has already learnt in advance that one will have such an experience. I show that this challenge rests on a misleading feature of using the 0–1 interval to express probabilities and show that if we (...) 

We show that as a chain of confirmation becomes longer, confirmation dwindles under screening-off. For example, if E confirms H1, H1 confirms H2, and H1 screens off E from H2, then the degree to which E confirms H2 is less than the degree to which E confirms H1. Although there are many measures of confirmation, our result holds on any measure that satisfies the Weak Law of Likelihood. We apply our result to testimony cases, relate it to the Data-Processing Inequality (...)
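The dwindling effect can be checked numerically. The sketch below uses an illustrative toy model (numbers not from the paper) in which H1 screens off E from H2, and measures confirmation by the difference measure d(H, E) = P(H | E) − P(H):

```python
# Toy model with H1 -> E and H1 -> H2, so H1 screens off E from H2.
# All numbers are illustrative.
p_h1 = 0.3
p_e_given_h1, p_e_given_not_h1 = 0.8, 0.2
p_h2_given_h1, p_h2_given_not_h1 = 0.9, 0.3

p_e = p_h1 * p_e_given_h1 + (1 - p_h1) * p_e_given_not_h1
p_h1_given_e = p_h1 * p_e_given_h1 / p_e

# Screening-off: P(H2 | E) is obtained by marginalising over H1.
p_h2 = p_h1 * p_h2_given_h1 + (1 - p_h1) * p_h2_given_not_h1
p_h2_given_e = (p_h1_given_e * p_h2_given_h1
                + (1 - p_h1_given_e) * p_h2_given_not_h1)

# Difference measure of confirmation: d(H, E) = P(H | E) - P(H).
d_h1 = p_h1_given_e - p_h1
d_h2 = p_h2_given_e - p_h2
# E confirms both, but confirmation of H2 is weaker: 0 < d_h2 < d_h1.
```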

This paper develops axiomatic foundations for a probabilistic-interventionist theory of causal strength. Transferring methods from Bayesian confirmation theory, I proceed in three steps: I develop a framework for defining and comparing measures of causal strength; I argue that no single measure can satisfy all natural constraints; I prove two representation theorems for popular measures of causal strength: Pearl's causal effect measure and Eells' difference measure. In other words, I demonstrate that these two measures can be derived from a set of plausible (...)

Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called “Reverse Matthew Effect” (“RME” for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). (...)
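The setup can be made concrete with illustrative numbers (not from the paper): both hypotheses raise the probability of E by the same factor, but the initially more probable one receives at least as much confirmation on standard measures, which is a Matthew effect rather than a reverse one.

```python
# Illustrative numbers: H1 and H2 predict E equally well,
# but H1 starts out less probable than H2.
p_h1, p_h2 = 0.2, 0.4
p_e = 0.3
p_e_given_h1 = p_e_given_h2 = 0.6   # so p(E|Hi)/p(E) = 2 for both

p_h1_given_e = p_h1 * p_e_given_h1 / p_e
p_h2_given_e = p_h2 * p_e_given_h2 / p_e

# The ratio measure treats the two hypotheses alike...
r1 = p_h1_given_e / p_h1
r2 = p_h2_given_e / p_h2
# ...while the difference measure rewards the initially more
# probable H2: a Matthew effect, not a reverse one.
d1 = p_h1_given_e - p_h1
d2 = p_h2_given_e - p_h2
```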

Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus to decide (...) 

Bayesian confirmation theory is rife with confirmation measures. Zalabardo focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate, but each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other (...) 

Is evidential support transitive? The answer is negative when evidential support is understood as confirmation so that X evidentially supports Y if and only if p(Y | X) > p(Y). I call evidential support so understood “support” (for short) and set out three alternative ways of understanding evidential support: support_t (support plus a sufficiently high probability), support_t* (support plus a substantial degree of support), and support_tt* (support plus both a sufficiently high probability and a substantial degree of support). I also (...)
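The failure of transitivity for support in the p(Y | X) > p(Y) sense is easy to exhibit. The following sketch uses a toy probability space of five equiprobable worlds and illustrative propositions (my construction, not the paper's example):

```python
from fractions import Fraction

# Five equiprobable worlds; propositions modelled as sets of worlds.
worlds = {1, 2, 3, 4, 5}
X, Y, Z = {2, 3}, {3, 4}, {4, 5}

def p(a):
    """Probability of a proposition (a set of worlds)."""
    return Fraction(len(a), len(worlds))

def p_cond(a, b):
    """Conditional probability p(a | b)."""
    return Fraction(len(a & b), len(b))

assert p_cond(Y, X) > p(Y)       # X supports Y:  1/2 > 2/5
assert p_cond(Z, Y) > p(Z)       # Y supports Z:  1/2 > 2/5
assert not p_cond(Z, X) > p(Z)   # yet X does not support Z: 0 < 2/5
```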



There are numerous (Bayesian) confirmation measures in the literature. Festa provides a formal characterization of a certain class of such measures. He calls the members of this class “incremental measures”. Festa then introduces six rather interesting properties called “Matthew properties” and puts forward two theses, hereafter “T1” and “T2”, concerning which of the various extant incremental measures have which of the various Matthew properties. Festa’s discussion is potentially helpful with the problem of measure sensitivity. I argue that, while Festa’s discussion (...)

According to influential accounts of scientific method, such as critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. But despite the popularity of hypothesis tests in statistical inference and science in general, their philosophical foundations remain shaky. In particular, the interpretation of non-significant results—those that do not reject the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis, or provide a reason to accept it? Popper sought measures of corroboration that could (...)

Tacking by conjunction is a deep problem for Bayesian confirmation theory. It is based on the insight that to each hypothesis h that is confirmed by a piece of evidence e one can ‘tack’ an irrelevant hypothesis h′ so that h∧h′ is also confirmed by e. This seems counterintuitive. Existing Bayesian solution proposals try to soften the negative impact of this result by showing that although h∧h′ is confirmed by e, it is so only to a lower degree. In this (...) 
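The "confirmed, but to a lower degree" point mentioned above can be verified numerically. The sketch uses illustrative probabilities (not from the paper), an irrelevant h′ independent of both h and e, and the difference measure of confirmation:

```python
# Illustrative probabilities; h' is irrelevant, i.e. probabilistically
# independent of both h and e.
p_h = 0.3
p_e_given_h, p_e_given_not_h = 0.9, 0.2
p_h_prime = 0.5

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
p_h_given_e = p_h * p_e_given_h / p_e

# Independence of h' gives p(e | h & h') = p(e | h), so the
# conjunction is still confirmed, but starting from a smaller prior.
p_conj = p_h * p_h_prime
p_conj_given_e = p_h_given_e * p_h_prime

d_h = p_h_given_e - p_h            # confirmation of h
d_conj = p_conj_given_e - p_conj   # confirmation of h & h'
# h & h' is confirmed by e, but to a lower degree: 0 < d_conj < d_h.
```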

Analyses of the Sleeping Beauty Problem are polarised between those advocating the “1/2 view” and those endorsing the “1/3 view”. The disagreement concerns the evidential relevance of self-locating information. Unlike halfers, thirders regard self-locating information as evidentially relevant in the Sleeping Beauty Problem. In the present study, we systematically manipulate the kind of information available in different formulations of the Sleeping Beauty Problem. Our findings indicate that patterns of judgment on different formulations of the Sleeping Beauty Problem do not fit (...)

Striving for a probabilistic explication of coherence, scholars proposed a distinction between agreement and striking agreement. In this paper I argue that only the former should be considered a genuine concept of coherence. In a second step the relation between coherence and reliability is assessed. I show that it is possible to concur with common intuitions regarding the impact of coherence on reliability in various types of witness scenarios by means of an agreement measure of coherence. Highlighting the need to (...) 

Probabilistic dependence and independence are among the key concepts of Bayesian epistemology. This paper focuses on the study of one specific quantitative notion of probabilistic dependence. More specifically, section 1 introduces Keynes’s coefficient of dependence and shows how it is related to pivotal aspects of scientific reasoning such as confirmation, coherence, the explanatory and unificatory power of theories, and the diversity of evidence. The intimate connection between Keynes’s coefficient of dependence and scientific reasoning raises the question of how Keynes’s coefficient (...) 
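Keynes's coefficient of dependence is standardly written p(x ∧ y) / (p(x) p(y)); the link to confirmation mentioned above is that, rewritten, it is just the ratio measure p(H | E) / p(H). A minimal sketch with illustrative numbers:

```python
# Keynes's coefficient of dependence: k(x, y) = p(x & y) / (p(x) p(y)).
# Toy joint probabilities for two propositions (illustrative numbers).
p_h, p_e, p_he = 0.4, 0.3, 0.18

k = p_he / (p_h * p_e)                    # > 1 means positive dependence
ratio_confirmation = (p_he / p_e) / p_h   # p(H | E) / p(H)
# The coefficient and the ratio confirmation measure are the same
# quantity, merely rewritten.
```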

What sort of evidence can confer the strongest support to a hypothesis? A natural answer is that the evidence entails the hypothesis. Roush claims that the likelihood ratio measure of degree of incremental support can deliver this intuitively natural result, and regards it as unifying “[the] account of induction and deduction in the only way that makes sense”. In this paper, we highlight a difficulty in the treatment of this case, and question the great significance that is attached to this (...) 

The present paper investigates the first step of rational belief acquisition. It thus focuses on justificatory relations between perceptual experiences and perceptual beliefs, and between their contents, respectively. In particular, the paper aims at outlining how it is possible to reason from the content of perceptual experiences to the content of perceptual beliefs. The paper approaches this aim by combining a formal epistemology perspective with an eye towards recent advances in the philosophy of cognition. Furthermore, the paper restricts its focus, (...)




This paper proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater is the amount of information we acquire, the better is our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the (...) 
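Two of the interpretations mentioned above coincide numerically: the divergence form of MI equals the reduction-in-uncertainty form H(X) − H(X | Y). The sketch below checks this on a toy joint distribution (illustrative numbers):

```python
from math import log2

# Toy joint distribution of two binary variables (illustrative numbers).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(v for (a, _), v in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in joint.items() if b == y) for y in (0, 1)}

# Divergence form: I(X;Y) = sum_{x,y} p(x,y) log p(x,y) / (p(x) p(y)).
mi_div = sum(v * log2(v / (px[x] * py[y])) for (x, y), v in joint.items())

# Uncertainty-reduction form: I(X;Y) = H(X) - H(X|Y).
h_x = -sum(p * log2(p) for p in px.values())
h_x_given_y = -sum(v * log2(v / py[y]) for (x, y), v in joint.items())
# The two forms agree (up to floating-point error).
```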

One of the integral parts of Bayesian coherentism is the view that the relation of ‘being no less coherent than’ is fully determined by the probabilistic features of the sets of propositions to be ordered. In the last decade and a half, a variety of probabilistic measures of coherence have been put forward. However, there is large disagreement as to which of these measures best captures the pre-theoretic notion of coherence. This paper contributes to the debate on coherence measures (...)

The current state of inductive logic is puzzling. Survey presentations are recurrently offered, and a very rich and extensive handbook was entirely dedicated to the topic just a few years ago [23]. Among the contributions to this very volume, however, one finds forceful arguments to the effect that inductive logic is not needed and that the belief in its existence is itself a misguided illusion, while other distinguished observers have eventually come to see at least the label as “slightly (...)

Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than (...) 



The proposition that Tweety is a bird coheres better with the proposition that Tweety has wings than with the proposition that Tweety cannot fly. This relationship of contrastive coherence is the focus of the present paper. Based on recent work in formal epistemology we consider various possibilities to model this relationship by means of probability theory. In a second step we consider different applications of these models. Among others, we offer a coherentist interpretation of the conjunction fallacy. 
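The Tweety contrast can be modelled with one simple probabilistic device: a deviation-from-independence (Shogenji-type) measure p(A ∧ B) / (p(A) p(B)), equivalently p(B | A) / p(B). The probabilities below are illustrative assumptions, not the paper's numbers:

```python
# Illustrative probabilities for the Tweety example.
p_wings_given_bird = 0.95     # almost all birds have wings
p_wings = 0.55
p_cantfly_given_bird = 0.05   # few birds cannot fly
p_cantfly = 0.6

def dev(p_b_given_a, p_b):
    """Deviation measure p(A & B) / (p(A) p(B)), as p(B|A) / p(B)."""
    return p_b_given_a / p_b

c_wings = dev(p_wings_given_bird, p_wings)        # > 1: coherence
c_cantfly = dev(p_cantfly_given_bird, p_cantfly)  # < 1: incoherence
# "Tweety is a bird" coheres with "Tweety has wings" but not with
# "Tweety cannot fly": c_wings > 1 > c_cantfly.
```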

