An important question in the current debate on the epistemic significance of peer disagreement is whether evidence of evidence is evidence. Fitelson argues that, at least on some renderings of the thesis that evidence of evidence is evidence, there are cases where evidence of evidence is not evidence. I introduce a condition and show that under this condition evidence of evidence is evidence.
We show that as a chain of confirmation becomes longer, confirmation dwindles under screening-off. For example, if E confirms H1, H1 confirms H2, and H1 screens off E from H2, then the degree to which E confirms H2 is less than the degree to which E confirms H1. Although there are many measures of confirmation, our result holds on any measure that satisfies the Weak Law of Likelihood. We apply our result to testimony cases, relate it to the Data-Processing Inequality in information theory, and extend it in two respects so that it covers a broader range of cases.
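The dwindling of confirmation along a screened-off chain can be illustrated numerically. The sketch below (Python; the particular probabilities are illustrative assumptions, not taken from the paper) builds a joint distribution over E, H1, and H2 as a chain, so that H1 screens off E from H2 by construction, and then checks that E confirms H2 less than it confirms H1 on the difference measure:

```python
from itertools import product

# Chain factorization E - H1 - H2: H2 depends on E only via H1,
# so H1 screens off E from H2 by construction.
# All numbers are made up for illustration.
p_h1 = 0.5
p_e_given = {True: 0.8, False: 0.2}    # Pr(E | H1), Pr(E | not-H1)
p_h2_given = {True: 0.9, False: 0.1}   # Pr(H2 | H1), Pr(H2 | not-H1)

joint = {}
for e, h1, h2 in product([True, False], repeat=3):
    p1 = p_h1 if h1 else 1 - p_h1
    pe = p_e_given[h1] if e else 1 - p_e_given[h1]
    ph2 = p_h2_given[h1] if h2 else 1 - p_h2_given[h1]
    joint[(e, h1, h2)] = p1 * pe * ph2

def pr(pred):
    # Probability of the event picked out by pred over the eight atoms.
    return sum(p for w, p in joint.items() if pred(*w))

pr_e = pr(lambda e, h1, h2: e)
pr_h1_given_e = pr(lambda e, h1, h2: e and h1) / pr_e
pr_h2_given_e = pr(lambda e, h1, h2: e and h2) / pr_e

# E confirms both hypotheses, but confirmation dwindles along the chain:
print(round(pr_h1_given_e - p_h1, 2))                       # 0.3
print(round(pr_h2_given_e - pr(lambda e, h1, h2: h2), 2))   # 0.24
```

With these numbers Pr(H1 | E) = 0.8 and Pr(H2 | E) = 0.74, so E raises H1's probability by 0.3 but H2's by only 0.24, as the dwindling result predicts.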
It is well known that the probabilistic relation of confirmation is not transitive in that even if E confirms H1 and H1 confirms H2, E may not confirm H2. In this paper we distinguish four senses of confirmation and examine additional conditions under which confirmation in different senses becomes transitive. We conduct this examination both in the general case where H1 confirms H2 and in the special case where H1 also logically entails H2. Based on these analyses, we argue that the Screening-Off Condition is the most important condition for transitivity in confirmation because of its generality and ease of application. We illustrate our point with the example of Moore’s “proof” of the existence of a material world, where H1 logically entails H2, the Screening-Off Condition holds, and confirmation in all four senses turns out to be transitive.
Probabilistic support is not transitive. There are cases in which x probabilistically supports y, i.e., Pr(y | x) > Pr(y), y, in turn, probabilistically supports z, and yet it is not the case that x probabilistically supports z. Tomoji Shogenji, though, establishes a condition for transitivity in probabilistic support, that is, a condition such that, for any x, y, and z, if Pr(y | x) > Pr(y), Pr(z | y) > Pr(z), and the condition in question is satisfied, then Pr(z | x) > Pr(z). I argue for a second and weaker condition for transitivity in probabilistic support. This condition, or the principle involving it, makes it easier (than does the condition Shogenji provides) to establish claims of probabilistic support, and has the potential to play an important role in at least some areas of philosophy.
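A concrete counterexample makes the failure of transitivity vivid. The sketch below (Python; the card example is our illustration, not Shogenji's) exhibits propositions x, y, z over a uniform 52-card deck where x supports y and y supports z, yet x fails to support z:

```python
from fractions import Fraction

suits = ["spades", "clubs", "hearts", "diamonds"]
ranks = list(range(1, 14))
deck = [(r, s) for s in suits for r in ranks]  # 52 equiprobable cards

def pr(event):
    # Unconditional probability of an event over the uniform deck.
    return Fraction(sum(1 for c in deck if event(c)), len(deck))

def pr_given(event, given):
    # Conditional probability Pr(event | given).
    return Fraction(sum(1 for c in deck if event(c) and given(c)),
                    sum(1 for c in deck if given(c)))

x = lambda c: c == (2, "clubs")            # the card is the two of clubs
y = lambda c: c[1] in ("spades", "clubs")  # the card is black
z = lambda c: c[1] == "spades"             # the card is a spade

print(pr_given(y, x) > pr(y))  # True: x supports y (1 > 1/2)
print(pr_given(z, y) > pr(z))  # True: y supports z (1/2 > 1/4)
print(pr_given(z, x) > pr(z))  # False: x does not support z (0 < 1/4)
```

Learning that the card is the two of clubs guarantees it is black, and blackness supports being a spade, yet the two of clubs rules out being a spade entirely.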
Igor Douven establishes several new intransitivity results concerning evidential support. I add to Douven’s very instructive discussion by establishing two further intransitivity results and a transitivity result.
We argue elsewhere that explanatoriness is evidentially irrelevant. Let H be some hypothesis, O some observation, and E the proposition that H would explain O if H and O were true. Then O screens off E from H: Pr(H | O & E) = Pr(H | O). This thesis, hereafter “SOT”, is defended by appeal to a representative case. The case concerns smoking and lung cancer. McCain and Poston grant that SOT holds in cases, like our case concerning smoking and lung cancer, that involve frequency data. However, McCain and Poston contend that there is a wider sense of evidential relevance—wider than the sense at play in SOT—on which explanatoriness is evidentially relevant even in cases involving frequency data. This is their main point, but they also contend that SOT does not hold in certain cases not involving frequency data. We reply to each of these points and conclude with some general remarks on screening-off as a test of evidential relevance.
Is evidential support transitive? The answer is negative when evidential support is understood as confirmation so that X evidentially supports Y if and only if p(Y | X) > p(Y). I call evidential support so understood “support” (for short) and set out three alternative ways of understanding evidential support: support-t (support plus a sufficiently high probability), support-t* (support plus a substantial degree of support), and support-tt* (support plus both a sufficiently high probability and a substantial degree of support). I also set out two screening-off conditions (under which support is transitive): SOC1 and SOC2. It has already been shown that support-t is non-transitive in the general case (where it is not required that SOC1 holds and it is not required that SOC2 holds), in the special case where SOC1 holds, and in the special case where SOC2 holds. I introduce two rather weak adequacy conditions on support measures and argue that on any support measure meeting those conditions it follows that neither support-t* nor support-tt* is transitive in the general case, in the special case where SOC1 holds, or in the special case where SOC2 holds. I then relate some of the results to Douven’s evidential support theory of conditionals along with a few rival theories.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
This article proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI: by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater the amount of information we acquire, the better our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the problem of measure sensitivity and fails to justify the use of MI in giving definitive answers to questions of information. We propose a fourth interpretation of MI by reduction in expected inaccuracy, where inaccuracy is measured by a strictly proper monotonic scoring rule. It is shown that the answers to questions of information given by MI are definitive whenever this interpretation is appropriate, and that it is appropriate in a wide range of applications with epistemic implications.
Andrew Cling presents a new version of the epistemic regress problem, and argues that intuitionist foundationalism, social contextualism, holistic coherentism, and infinitism fail to solve it. Cling’s discussion is quite instructive, and deserving of careful consideration. But, I argue, Cling’s discussion is not in all respects decisive. I argue that Cling’s dilemma argument against holistic coherentism fails.
We argued that explanatoriness is evidentially irrelevant in the following sense: Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O & E) = Pr(H | O). We defended this screening-off thesis (“SOT”) by discussing an example concerning smoking and cancer. Climenhaga argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.
Coherentists on epistemic justification claim that all justification is inferential, and that beliefs, when justified, get their justification together (not in isolation) as members of a coherent belief system. Some recent work in formal epistemology shows that “individual credibility” is needed for “witness agreement” to increase the probability of truth and generate a high probability of truth. It can seem that, from this result in formal epistemology, it follows that coherentist justification is not truth-conducive, that it is not the case that, under the requisite conditions, coherentist justification increases the probability of truth and generates a high probability of truth. I argue that this does not follow.
I argue that coherence is truth-conducive in that coherence implies an increase in the probability of truth. Central to my argument is a certain principle for transitivity in probabilistic support. I then address a question concerning the truth-conduciveness of coherence as it relates to (something else I argue for) the truth-conduciveness of consistency, and consider how the truth-conduciveness of coherence bears on coherentist theories of justification.
It is widely thought in philosophy and elsewhere that parsimony is a theoretical virtue in that if T1 is more parsimonious than T2, then T1 is preferable to T2, other things being equal. This thesis admits of many distinct precisifications. I focus on a relatively weak precisification on which preferability is a matter of probability, and argue that it is false. This is problematic for various alternative precisifications, and even for Inference to the Best Explanation as standardly understood.
Some recent work in formal epistemology shows that “witness agreement” by itself implies neither an increase in the probability of truth nor a high probability of truth—the witnesses need to have some “individual credibility.” It can seem that, from this formal epistemological result, it follows that coherentist justification (i.e., doxastic coherence) is not truth-conducive. I argue that this does not follow. Central to my argument is the thesis that, though coherentists deny that there can be noninferential justification, coherentists do not deny that there can be individual credibility.
Recently there have been several attempts in formal epistemology to develop an adequate probabilistic measure of coherence. There is much to recommend probabilistic measures of coherence. They are quantitative and render formally precise a notion—coherence—notorious for its elusiveness. Further, some of them do very well, intuitively, on a variety of test cases. Siebel, however, argues that there can be no adequate probabilistic measure of coherence. Take some set of propositions A, some probabilistic measure of coherence, and a probability distribution such that all the probabilities on which A’s degree of coherence depends (according to the measure in question) are defined. Then, the argument goes, the degree to which A is coherent depends solely on the details of the distribution in question and not at all on the explanatory relations, if any, standing between the propositions in A. This is problematic, the argument continues, because, first, explanation matters for coherence, and, second, explanation cannot be adequately captured solely in terms of probability. We argue that Siebel’s argument falls short.
This is an excellent collection of essays on introspection and consciousness. There are fifteen essays in total (all new except for Sydney Shoemaker’s essay). There is also an introduction where the editors explain the impetus for the collection and provide a helpful overview. The essays contain a wealth of new and challenging material sure to excite specialists and shape future research. Below we extract a skeptical argument from Fred Dretske’s essay and relate the remaining essays to that argument. Due to space limitations we focus in detail on just a few of the essays. We regret that we cannot give them all the attention they merit.
Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called “Reverse Matthew Effect” (“RME” for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). Then by RME it follows that the degree to which E confirms H1 is greater than the degree to which it confirms H2. But by all the standard confirmation measures in the literature, in contrast, it follows that the degree to which E confirms H1 is less than or equal to the degree to which it confirms H2. It might seem, then, that RME should be rejected as implausible. Festa (2012), however, argues that there are scientific contexts in which RME holds. If Festa’s argument is sound, it follows that there are scientific contexts in which none of the standard confirmation measures in the literature is adequate. Festa’s argument is thus interesting, important, and deserving of careful examination. I consider five distinct respects in which E can be related to H, use them to construct five distinct ways of understanding confirmation measures, which I call “Increase in Probability”, “Partial Dependence”, “Partial Entailment”, “Partial Discrimination”, and “Popper Corroboration”, and argue that each such way runs counter to RME. The result is that it is not at all clear that there is a place in Bayesian confirmation theory for RME.
Dretske’s theory of self-knowledge is interesting but peculiar and can seem implausible. He denies that we can know by introspection that we have thoughts, feelings, and experiences. But he allows that we can know by introspection what we think, feel, and experience. We consider two puzzles. The first puzzle, PUZZLE 1, is interpretive. Is there a way of understanding Dretske’s theory on which the knowledge affirmed by its positive side is different from the knowledge denied by its negative side? The second puzzle, PUZZLE 2, is substantive. Each of the following theses has some prima facie plausibility: there is introspective knowledge of thoughts, knowledge requires evidence, and there are no experiences of thoughts. It is unclear, though, that these claims form a consistent set. These puzzles are not unrelated. Dretske’s theory of self-knowledge is a potential solution to PUZZLE 2 in that Dretske’s theory is meant to show how all three theses can be true. We provide a solution to PUZZLE 1 by appeal to Dretske’s early work in the philosophy of language on contrastive focus. We then distinguish between “Closure” and “Transmissibility”, and raise and answer a worry to the effect that Dretske’s theory of self-knowledge runs counter to Transmissibility. These results help to secure Dretske’s theory as a viable solution to PUZZLE 2.
There are many scientific and everyday cases where each of Pr(H1 | E) and Pr(H2 | H1) is high and it seems that Pr(H2 | E) is high. But high probability is not transitive and so it might be in such cases that each of Pr(H1 | E) and Pr(H2 | H1) is high and in fact Pr(H2 | E) is not high. There is no issue in the special case where the following condition, which I call “C1”, holds: H1 entails H2. This condition is sufficient for transitivity in high probability. But many of the scientific and everyday cases referred to above are cases where it is not the case that H1 entails H2. I consider whether there are additional conditions sufficient for transitivity in high probability. I consider three candidate conditions. I call them “C2”, “C3”, and “C2&3”. I argue that C2&3, but neither C2 nor C3, is sufficient for transitivity in high probability. I then set out some further results and relate the discussion to the Bayesian requirement of coherence.
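The failure of transitivity in high probability can be exhibited with a toy model. In the sketch below (Python; the particular sets of worlds are our illustrative assumptions, not from the paper), Pr(H1 | E) and Pr(H2 | H1) are both 9/10, yet Pr(H2 | E) is 0:

```python
from fractions import Fraction

# 100 equiprobable possible worlds, with propositions as sets of worlds.
E  = set(range(1, 11))    # worlds 1-10
H1 = set(range(2, 92))    # worlds 2-91
H2 = set(range(11, 92))   # worlds 11-91

def pr_given(a, b):
    # Pr(a | b) under the uniform distribution over worlds.
    return Fraction(len(a & b), len(b))

print(pr_given(H1, E))   # 9/10: high
print(pr_given(H2, H1))  # 9/10: high (81 of H1's 90 worlds)
print(pr_given(H2, E))   # 0: not high at all
```

Note that H1 does not entail H2 here (world 2 is in H1 but not H2), so the example does not fall under the special case covered by C1, on which transitivity is guaranteed.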
It is standard practice, when distinguishing between the foundationalist and the coherentist, to construe the coherentist as an internalist. The coherentist, the construal goes, says that justification is solely a matter of coherence, and that coherence, in turn, is solely a matter of internal relations between beliefs. The coherentist, so construed, is an internalist (in the sense I have in mind) in that the coherentist, so construed, says that whether a belief is justified hinges solely on what the subject is like mentally. I argue that this practice is fundamentally misguided, by arguing that the foundationalism/coherentism debate and the internalism/externalism debate are about two very different things, so that there is nothing, qua coherentist, precluding the coherentist from siding with the externalist. I then argue that this spells trouble for two of the three most pressing and widely known objections to coherentism: the Alternative-Systems Objection and the Isolation Objection.
If a subject’s belief system is inconsistent, does it follow that the subject’s beliefs (all of them) are unjustified? It seems not. But, coherentist theories of justification (at least some of them) imply otherwise, and so, it seems, are open to counterexample. This is the “Problem of Justified Inconsistent Beliefs”. I examine two main versions of the Problem of Justified Inconsistent Beliefs, and argue that coherentists can give at least a promising line of response to each of them.
Dretske is a “conciliatory skeptic” (CS) on self-knowledge. Take some subject S such that S thinks that P and S knows that she has thoughts. Dretske’s theory can be put as follows: S has a privileged way of knowing what she thinks, but she has no privileged way of knowing that she thinks it. There is much to be said on behalf of conciliatory skepticism and Dretske’s defense of it. We aim to show, however, that Dretske’s defense fails, in that if his defense of CS’s skeptical half succeeds, then his defense of CS’s conciliatory half fails. We then suggest a potential way forward. We suggest in particular that the correct way of being a Dretskean conciliatory skeptic is to deny that S has a privileged way of knowing about her thoughts, but to grant that she is nonetheless an authority on her thoughts.
The disjunction problem and the distality problem each presents a challenge that any theory of mental content must address. Here we consider their bearing on purely probabilistic causal (ppc) theories. In addition to considering these problems separately, we consider a third challenge – that a theory must solve both. We call this “the hard problem.” We consider 8 basic ppc theories along with 240 hybrids of them, and show that some can handle the disjunction problem and some can handle the distality problem, but none can handle the hard problem. This is our main result. We then discuss three possible responses to that result, and argue that though the first two fail, the third has some promise.
There is a long-standing debate in epistemology on the structure of justification. Some recent work in formal epistemology promises to shed some new light on that debate. I have in mind here some recent work by David Atkinson and Jeanne Peijnenburg, hereafter “A&P”, on infinite regresses of probabilistic support. A&P show that there are probability distributions defined over an infinite set of propositions {E0, E1, E2, ...} such that Ei is probabilistically supported by Ei+1 for all i and E0 has a high probability. Let this result be “APR”. A&P oftentimes write as though they believe that APR runs counter to foundationalism. This makes sense, since there is some prima facie plausibility in the idea that APR runs counter to foundationalism, and since some prominent foundationalists argue for theses inconsistent with APR. I argue, though, that in fact APR does not run counter to foundationalism. I further argue that there is a place in foundationalism for infinite regresses of probabilistic support.
Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content that that state has, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-free. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, which are two key problems concerning what it is for a belief state to have one semantic content rather than another.
We argue in Roche and Sober (2013) that explanatoriness is evidentially irrelevant in that Pr(H | O&EXPL) = Pr(H | O), where H is a hypothesis, O is an observation, and EXPL is the proposition that if H and O were true, then H would explain O. This is a “screening-off” thesis. Here we clarify that thesis, reply to criticisms advanced by Lange (2017), consider alternative formulations of Inference to the Best Explanation, discuss a strengthened screening-off thesis, and consider how it bears on the claim that unification is evidentially relevant.
We conceptualize observation selection effects (OSEs) by considering how a shift from one process of observation to another affects discrimination-conduciveness, by which we mean the degree to which possible observations discriminate between hypotheses, given the observation process at work. OSEs in this sense come in degrees and are causal, where the cause is the shift in process, and the effect is a change in degree of discrimination-conduciveness. We contrast our understanding of OSEs with others that have appeared in the literature. After describing conditions of adequacy that an acceptable measure of degree of discrimination-conduciveness must satisfy, we use those conditions of adequacy to evaluate several possible measures. We also discuss how the effect of shifting from one observation process to another might be measured. We apply our framework to several examples, including the ravens paradox and the phenomenon of publication bias.
Hempel’s Converse Consequence Condition (CCC), Entailment Condition (EC), and Special Consequence Condition (SCC) have some prima facie plausibility when taken individually. Hempel, though, shows that they have no plausibility when taken together, for together they entail that E confirms H for any propositions E and H. This is “Hempel’s paradox”. It turns out that Hempel’s argument would fail if one or more of CCC, EC, and SCC were modified in terms of explanation. This opens up the possibility that Hempel’s paradox can be solved by modifying one or more of CCC, EC, and SCC in terms of explanation. I explore this possibility by modifying CCC and SCC in terms of explanation and considering whether CCC and SCC so modified are correct. I also relate that possibility to Inference to the Best Explanation.
Bayesian confirmation theory is rife with confirmation measures. Zalabardo focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate, but each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other three measures. Glass and McCartney, hereafter “G&M,” accept the conclusion of Zalabardo’s argument along with each of the premises in it. They nonetheless try to improve on Zalabardo’s argument by replacing his third adequacy condition with a weaker condition. They do this because of a worry to the effect that Zalabardo’s third adequacy condition runs counter to the idea behind his first adequacy condition. G&M have in mind confirmation in the sense of increase in probability: the degree to which E confirms H is a matter of the degree to which E increases H’s probability. I call this sense of confirmation “IP.” I set out four ways of precisifying IP. I call them “IP1,” “IP2,” “IP3,” and “IP4.” Each of them is based on the assumption that the degree to which E increases H’s probability is a matter of the distance between p(H | E) and a certain other probability involving H. I then evaluate G&M’s argument in light of them.