Citations of:


Bayesian confirmation theory is rife with confirmation measures. Zalabardo focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate, but that each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other (...) 
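The four measures just listed can be made concrete with a short computation. The sketch below is mine, not Zalabardo's; the input probabilities are illustrative assumptions. It derives p(H | E) from the prior and the two likelihoods via the law of total probability and Bayes' theorem, then evaluates each measure.

```python
from fractions import Fraction

def confirmation_measures(p_h, p_e_given_h, p_e_given_not_h):
    """The four confirmation measures for hypothesis H and evidence E."""
    # Total probability: p(E) = p(H) p(E|H) + p(~H) p(E|~H)
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
    # Bayes' theorem: p(H|E) = p(H) p(E|H) / p(E)
    p_h_given_e = p_h * p_e_given_h / p_e
    return {
        "probability difference": p_h_given_e - p_h,
        "probability ratio": p_h_given_e / p_h,
        "likelihood difference": p_e_given_h - p_e_given_not_h,
        "likelihood ratio": p_e_given_h / p_e_given_not_h,
    }

# Illustrative values: prior 0.3, E four times likelier under H than under ~H.
m = confirmation_measures(Fraction(3, 10), Fraction(8, 10), Fraction(2, 10))
```

On these numbers all four measures agree that E confirms H (difference measures positive, ratio measures above 1), though they disagree on the magnitudes, which is the sensitivity the abstract's adequacy debate turns on.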

Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done by focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent over time and more accurate than (...) 



According to influential accounts of scientific method, such as critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. But despite the popularity of hypothesis tests in statistical inference and science in general, their philosophical foundations remain shaky. In particular, the interpretation of nonsignificant results—those that do not reject the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis, or provide a reason to accept it? Popper sought measures of corroboration that could (...) 

This paper develops axiomatic foundations for a probabilistic-interventionist theory of causal strength. Transferring methods from Bayesian confirmation theory, I proceed in three steps: I develop a framework for defining and comparing measures of causal strength; I argue that no single measure can satisfy all natural constraints; and I prove two representation theorems for popular measures of causal strength: Pearl's causal effect measure and Eells' difference measure. In other words, I demonstrate that these two measures can be derived from a set of plausible (...) 

One of the integral parts of Bayesian coherentism is the view that the relation of ‘being no less coherent than’ is fully determined by the probabilistic features of the sets of propositions to be ordered. Over the last decade and a half, a variety of probabilistic measures of coherence have been put forward. However, there is considerable disagreement as to which of these measures best captures the pretheoretic notion of coherence. This paper contributes to the debate on coherence measures (...) 

Tacking by conjunction is a deep problem for Bayesian confirmation theory. It is based on the insight that to each hypothesis h that is confirmed by a piece of evidence e one can ‘tack’ an irrelevant hypothesis h′ so that h∧h′ is also confirmed by e. This seems counterintuitive. Existing Bayesian solution proposals try to soften the negative impact of this result by showing that although h∧h′ is confirmed by e, it is so only to a lower degree. In this (...) 
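The "confirmed, but to a lower degree" response described here can be illustrated with a toy computation. This sketch is mine, not any of the cited authors'; it uses the probability difference measure and assumes the tacked-on h′ is probabilistically independent of h, e, and their combination, with all numbers illustrative.

```python
from fractions import Fraction

# Illustrative probabilities for h and the evidence e.
p_h = Fraction(3, 10)
p_h_given_e = Fraction(6, 10)
p_hp = Fraction(1, 2)            # prior of the irrelevant, tacked-on h'

# Difference-measure confirmation of h by e: d(h, e) = p(h|e) - p(h)
d_h = p_h_given_e - p_h

# By the independence assumption, p(h & h' | e) = p(h|e) p(h') and
# p(h & h') = p(h) p(h'), so:
d_conj = p_hp * p_h_given_e - p_hp * p_h
```

The conjunction h∧h′ is still confirmed (d_conj > 0), but only half as strongly as h alone, since the difference measure scales by p(h′). Note that this softening is measure-dependent: the probability ratio measure, for instance, would assign h and h∧h′ equal confirmation under the same independence assumption.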

The concepts of coherence and confirmation are closely intertwined: according to a prominent proposal coherence is nothing but mutual confirmation. Accordingly, it should come as no surprise that both are confronted with similar problems. As regards Bayesian confirmation measures these are illustrated by the problem of tacking by conjunction. On the other hand, Bayesian coherence measures face the problem of belief individuation. In this paper we want to outline the benefit of an approach to coherence and confirmation based on content (...) 

Striving for a probabilistic explication of coherence, scholars have proposed a distinction between agreement and striking agreement. In this paper I argue that only the former should be considered a genuine concept of coherence. In a second step the relation between coherence and reliability is assessed. I show that it is possible to concur with common intuitions regarding the impact of coherence on reliability in various types of witness scenarios by means of an agreement measure of coherence. Highlighting the need to (...) 

The proposition that Tweety is a bird coheres better with the proposition that Tweety has wings than with the proposition that Tweety cannot fly. This relationship of contrastive coherence is the focus of the present paper. Based on recent work in formal epistemology we consider various possibilities to model this relationship by means of probability theory. In a second step we consider different applications of these models. Among others, we offer a coherentist interpretation of the conjunction fallacy. 





Proposals for rigorously explicating the concept of confirmation in probabilistic terms abound. To foster discussions on the formal properties of the proposed measures, recent years have seen a surge of representation theorems that uniquely determine a confirmation measure based on a number of desiderata. However, the results that have been presented so far focus exclusively on the concept of incremental confirmation. This leaves open the question whether similar results can be obtained for the concept of absolute confirmation. (...) 

Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called “Reverse Matthew Effect” (“RME” for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). (...) 
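The scenario set up in this abstract is easy to check numerically. The sketch below is mine, with illustrative numbers chosen so that a consistent probability model exists: both hypotheses raise E's probability by the same factor, but H1 starts out less probable.

```python
from fractions import Fraction

p_e = Fraction(1, 2)
p_h1, p_h2 = Fraction(1, 5), Fraction(2, 5)   # p(H1) < p(H2)
r = Fraction(3, 2)                            # p(E|Hi)/p(E), same for both

# Bayes: p(Hi|E) = p(Hi) * p(E|Hi) / p(E) = p(Hi) * r
post1, post2 = p_h1 * r, p_h2 * r

ratio1, ratio2 = post1 / p_h1, post2 / p_h2   # probability ratio measure
diff1, diff2 = post1 - p_h1, post2 - p_h2     # probability difference measure
```

On these numbers the ratio measure ties the two hypotheses and the difference measure favors the initially more probable H2; neither awards more confirmation to the initially less probable H1, which is the sense in which standard measures run counter to the RME.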

This paper proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater is the amount of information we acquire, the better is our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the (...) 
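Two of the interpretations mentioned here can be shown to coincide numerically: MI as reduction in uncertainty, H(X) − H(X|Y), and MI as divergence, i.e. the Kullback–Leibler divergence of the joint distribution from the product of its marginals. The sketch below is mine; the 2×2 joint distribution is an illustrative assumption.

```python
from math import log2

joint = [[0.4, 0.1],
         [0.1, 0.4]]                       # p(X=i, Y=j)
px = [sum(row) for row in joint]           # marginal of X
py = [sum(col) for col in zip(*joint)]     # marginal of Y

def h(dist):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in dist if p > 0)

# Interpretation 1 -- reduction in uncertainty: H(X) - H(X|Y)
h_x_given_y = sum(py[j] * h([joint[i][j] / py[j] for i in range(2)])
                  for j in range(2))
mi_uncertainty = h(px) - h_x_given_y

# Interpretation 2 -- divergence: KL(joint || px x py)
mi_divergence = sum(joint[i][j] * log2(joint[i][j] / (px[i] * py[j]))
                    for i in range(2) for j in range(2))
```

The two quantities agree (both come out to about 0.278 bits here); the paper's point is that agreeing on the number does not settle which reading correctly captures the epistemic value of information.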

Is evidential support transitive? The answer is negative when evidential support is understood as confirmation so that X evidentially supports Y if and only if p(Y | X) > p(Y). I call evidential support so understood “support” (for short) and set out three alternative ways of understanding evidential support: support_t (support plus a sufficiently high probability), support_t* (support plus a substantial degree of support), and support_tt* (support plus both a sufficiently high probability and a substantial degree of support). I also (...) 
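The negative answer for plain support can be verified with a toy model. The sketch is mine, not the paper's: ten equiprobable outcomes, with three overlapping events chosen so that the support chain breaks.

```python
from fractions import Fraction

OUTCOMES = set(range(1, 11))   # ten equiprobable outcomes
X = {1, 2, 3, 4}
Y = {3, 4, 5, 6}
Z = {5, 6, 7, 8}

def p(event):
    return Fraction(len(event), len(OUTCOMES))

def supports(a, b):
    """Does A support B, i.e. is p(B | A) > p(B)?"""
    p_b_given_a = Fraction(len(a & b), len(a))
    return p_b_given_a > p(b)
```

Here X supports Y (p(Y|X) = 1/2 > 2/5) and Y supports Z (p(Z|Y) = 1/2 > 2/5), yet X does not support Z; X and Z are disjoint, so X actually disconfirms Z.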

We show that as a chain of confirmation becomes longer, confirmation dwindles under screening-off. For example, if E confirms H1, H1 confirms H2, and H1 screens off E from H2, then the degree to which E confirms H2 is less than the degree to which E confirms H1. Although there are many measures of confirmation, our result holds on any measure that satisfies the Weak Law of Likelihood. We apply our result to testimony cases, relate it to the Data-Processing Inequality (...) 
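The dwindling effect can be seen in one concrete chain. The sketch below is mine, using the probability difference measure and illustrative numbers; H1 screens off E from H2 by construction, since the joint factorises as p(e, h1, h2) = p(h1) p(e | h1) p(h2 | h1).

```python
from fractions import Fraction

p_h1 = Fraction(1, 2)
p_e_given = {True: Fraction(9, 10), False: Fraction(1, 10)}    # p(E | H1)
p_h2_given = {True: Fraction(9, 10), False: Fraction(1, 10)}   # p(H2 | H1)

def p_marg(cond):
    """Marginalise a {True, False}-conditional table over H1."""
    return p_h1 * cond[True] + (1 - p_h1) * cond[False]

p_e, p_h2 = p_marg(p_e_given), p_marg(p_h2_given)
p_h1_given_e = p_h1 * p_e_given[True] / p_e               # Bayes

# Screening-off: p(H2 | E) = sum over h1 of p(H2 | h1) p(h1 | E)
p_h2_given_e = (p_h2_given[True] * p_h1_given_e
                + p_h2_given[False] * (1 - p_h1_given_e))

conf_h1 = p_h1_given_e - p_h1    # difference-measure confirmation of H1 by E
conf_h2 = p_h2_given_e - p_h2    # ... and of H2 by E, one link further down
```

E still confirms H2 (conf_h2 > 0), but less than it confirms H1 (2/5 versus 8/25 on these numbers), matching the dwindling the abstract describes.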

Barnett provides an interesting new challenge for Dogmatist accounts of perceptual justification. The challenge is that such accounts, by accepting that a perceptual experience can provide a distinctive kind of boost to one’s credences, would lead to a form of diachronic irrationality in cases where one has already learnt in advance that one will have such an experience. I show that this challenge rests on a misleading feature of using the 0–1 interval to express probabilities and show that if we (...) 

What sort of evidence can confer the strongest support to a hypothesis? A natural answer is that the evidence entails the hypothesis. Roush claims that the likelihood ratio measure of degree of incremental support can deliver this intuitively natural result, and regards it as unifying “[the] account of induction and deduction in the only way that makes sense”. In this paper, we highlight a difficulty in the treatment of this case, and question the great significance that is attached to this (...) 

After Karl Popper’s original work, several approaches were developed to provide a sound explication of the notion of verisimilitude. With few exceptions, these contributions have assumed that the truth to be approximated is deterministic. This collection of ten papers addresses the more general problem of approaching probabilistic truths. They include attempts to find appropriate measures for the closeness to probabilistic truth and to evaluate claims about such distances on the basis of empirical evidence. The papers employ multiple analytical approaches, and (...) 

We explore the grammar of Bayesian confirmation by focusing on some likelihood principles, including the Weak Law of Likelihood. We show that none of the likelihood principles proposed so far is satisfied by all incremental measures of confirmation, and we argue that some of these measures indeed obey new, prima facie strange, anti-likelihood principles. To prove this, we introduce a new measure that violates the Weak Law of Likelihood while satisfying a strong anti-likelihood condition. We conclude by hinting at some (...) 



The current state of inductive logic is puzzling. Survey presentations are recurrently offered and a very rich and extensive handbook was entirely dedicated to the topic just a few years ago [23]. Among the contributions to this very volume, however, one finds forceful arguments to the effect that inductive logic is not needed and that the belief in its existence is itself a misguided illusion, while other distinguished observers have eventually come to see at least the label as “slightly (...) 

Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the (...) 
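The entropy-based model of test selection described here can be sketched in a few lines. The code below is mine, with illustrative probabilities: it scores a binary diagnostic test by the expected reduction in Shannon entropy over the disease hypotheses.

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in dist if p > 0)

prior = [0.5, 0.5]      # p(disease A), p(disease B) -- illustrative
p_pos = [0.9, 0.2]      # p(test positive | disease) -- illustrative

p_positive = sum(pr * lk for pr, lk in zip(prior, p_pos))

def posterior(positive):
    """Posterior over diseases given the test outcome, by Bayes' theorem."""
    lks = p_pos if positive else [1 - l for l in p_pos]
    joint = [pr * lk for pr, lk in zip(prior, lks)]
    total = sum(joint)
    return [j / total for j in joint]

# Expected posterior entropy, averaged over the two possible outcomes.
expected_posterior_entropy = (
    p_positive * entropy(posterior(True))
    + (1 - p_positive) * entropy(posterior(False)))

# The test's score: expected reduction in uncertainty (information gain).
information_gain = entropy(prior) - expected_posterior_entropy
```

On these numbers the test is expected to remove roughly 0.4 of the 1 bit of prior uncertainty; a test-selection model of this kind ranks candidate tests by this quantity, and the abstract's question is whether Shannon entropy is the right uncertainty measure to plug in.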

Analyses of the Sleeping Beauty Problem are polarised between those advocating the “1/2 view” and those endorsing the “1/3 view”. The disagreement concerns the evidential relevance of self-locating information. Unlike halfers, thirders regard self-locating information as evidentially relevant in the Sleeping Beauty Problem. In the present study, we systematically manipulate the kind of information available in different formulations of the Sleeping Beauty Problem. Our findings indicate that patterns of judgment on different formulations of the Sleeping Beauty Problem do not fit (...) 

The present paper investigates the first step of rational belief acquisition. It thus focuses on justificatory relations between perceptual experiences and perceptual beliefs, and between their contents, respectively. In particular, the paper aims at outlining how it is possible to reason from the content of perceptual experiences to the content of perceptual beliefs. The paper thereby approaches this aim by combining a formal epistemology perspective with an eye towards recent advances in philosophy of cognition. Furthermore, the paper restricts its focus, (...) 

Probabilistic dependence and independence are among the key concepts of Bayesian epistemology. This paper focuses on the study of one specific quantitative notion of probabilistic dependence. More specifically, section 1 introduces Keynes’s coefficient of dependence and shows how it is related to pivotal aspects of scientific reasoning such as confirmation, coherence, the explanatory and unificatory power of theories, and the diversity of evidence. The intimate connection between Keynes’s coefficient of dependence and scientific reasoning raises the question of how Keynes’s coefficient (...) 

Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus to decide (...) 

A characterization of epistemic rationality, or epistemic justification, is typically taken to require a process of conceptual clarification, and is seen as comprising the core of a theory of (epistemic) rationality. I propose to explicate the concept of rationality. It is essential, I argue, that the normativity of rationality, and the purpose, or goal, for which the particular theory of rationality is being proposed, is taken into account when explicating the concept of rationality. My position thus amounts to an (...) 

In everyday life and in science we acquire evidence of evidence and based on this new evidence we often change our epistemic states. An assumption underlying such practice is that the following EEE Slogan is correct: 'evidence of evidence is evidence' (Feldman 2007, p. 208). We suggest that evidence of evidence is best understood as higher-order evidence about the epistemic state of agents. In order to model evidence of evidence we introduce a new powerful framework for modelling epistemic states, Dyadic (...) 

Jan Sprenger and Stephan Hartmann offer a fresh approach to central topics in philosophy of science, including causation, explanation, evidence, and scientific models. Their Bayesian approach uses the concept of degrees of belief to explain and to elucidate manifold aspects of scientific reasoning. 

There are numerous (Bayesian) confirmation measures in the literature. Festa provides a formal characterization of a certain class of such measures. He calls the members of this class “incremental measures”. Festa then introduces six rather interesting properties called “Matthew properties” and puts forward two theses, hereafter “T1” and “T2”, concerning which of the various extant incremental measures have which of the various Matthew properties. Festa’s discussion is potentially helpful with the problem of measure sensitivity. I argue that, while Festa’s discussion (...) 