Karl Popper discovered in 1938 that the unconditional probability of a conditional of the form 'If A, then B' normally exceeds the conditional probability of B given A, provided that 'If A, then B' is taken to mean the same as 'Not (A and not B)'. So it was clear (but presumably only to him at that time) that the conditional probability of B given A cannot be reduced to the unconditional probability of the material conditional 'If A, then B'. I describe how this insight was developed in Popper's writings and I add to this historical study a logical one, in which I compare laws of excess in Kolmogorov probability theory with laws of excess in Popper probability theory.
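For readers who want the inequality spelled out: the Kolmogorov-style law of excess behind this claim follows directly from the ratio definition of conditional probability (a minimal derivation, assuming P(A) > 0):

\[
P(\neg(A \wedge \neg B)) - P(B \mid A)
= P(\neg A) + P(B \mid A)\,P(A) - P(B \mid A)
= P(\neg A)\bigl(1 - P(B \mid A)\bigr) \;\ge\; 0,
\]

with equality only when P(A) = 1 or P(B | A) = 1, which is why the material conditional "normally" exceeds the corresponding conditional probability.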
Conditional probability is often used to represent the probability of the conditional. However, triviality results suggest that the thesis that the probability of the conditional always equals conditional probability leads to untenable conclusions. In this paper, I offer an interpretation of this thesis in a possible worlds framework, arguing that the triviality results make assumptions at odds with the use of conditional probability. I argue that these assumptions come from a theory called the operator theory and that the rival restrictor theory can avoid these problematic assumptions. In doing so, I argue that recent extensions of the triviality arguments to restrictor conditionals fail, making assumptions which are only justified on the operator theory.
This paper argues that the technical notion of conditional probability, as given by the ratio analysis, is unsuitable for dealing with our pretheoretical and intuitive understanding of both conditionality and probability. What we propose instead is an ontological account of conditionals, one that includes an irreducible dispositional connection between the antecedent and consequent conditions and in which the conditional has to be treated as an indivisible whole rather than as compositional. The relevant type of conditionality is found in some well-defined group of conditional statements. As an alternative, therefore, we briefly offer grounds for what we would call an ontological reading: for both conditionality and conditional probability in general. It is not offered as a fully developed theory of conditionality but can be used, we claim, to explain why calculations according to the RATIO scheme do not coincide with our intuitive notion of conditional probability. What it shows us is that for an understanding of the whole range of conditionals we will need what John Heil (2003), in response to Quine (1953), calls an ontological point of view.
Studies of several languages, including Swahili [swa], suggest that realis (actual, realizable) and irrealis (unlikely, counterfactual) meanings vary along a scale (e.g., 0.0–1.0). T-values (True, False) and P-values (probability) account for this pattern. However, logic cannot describe or explain (a) epistemic stances toward beliefs, (b) deontic and dynamic stances toward states-of-being and actions, and (c) context-sensitivity in conditional interpretations. (a)–(b) are deictic properties (positions, distance) of 'embodied' Frames of Reference (FoRs)—space-time loci in which agents perceive and from which they contextually act (Rohrer 2007a, b). I argue that the embodied FoR describes and explains (a)–(c) better than T-values and P-values alone. In this cognitive-functional-descriptive study, I represent these embodied FoRs using Unified Modeling Language (UML) mental spaces in analyzing Swahili conditional constructions to show how necessary, sufficient, and contributing conditions obtain on the embodied FoR networks level.
Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey's classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian reasoning with idealized models in science.
The logic of indicative conditionals remains the topic of deep and intractable philosophical disagreement. I show that two influential epistemic norms—the Lockean theory of belief and the Ramsey test for conditional belief—are jointly sufficient to ground a powerful new argument for a particular conception of the logic of indicative conditionals. Specifically, the argument demonstrates, contrary to the received historical narrative, that there is a real sense in which Stalnaker's semantics for the indicative did succeed in capturing the logic of the Ramseyan indicative conditional.
A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we argue that the temptation should be resisted. Applying lessons from this analysis, we demonstrate (using methods similar to those of Zurek's envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In doing so, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.
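For reference (my gloss, not part of the abstract), the rule whose derivation is at issue says that for a state expanded in the measurement basis, the outcome probabilities are the squared amplitudes:

\[
|\psi\rangle = \sum_i c_i\,|i\rangle
\quad\Longrightarrow\quad
P(\text{outcome } i) = |c_i|^2 = |\langle i \mid \psi\rangle|^2 .
\]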
Laurence BonJour has recently proposed a novel and interesting approach to the problem of induction. He grants that it is contingent, and so not a priori, that our patterns of inductive inference are reliable. Nevertheless, he claims, it is necessary and a priori that those patterns are highly likely to be reliable, and that is enough to ground an a priori justification of induction. This paper examines an important defect in BonJour's proposal. Once we make sense of the claim that inductive inference is "necessarily highly likely" to be reliable, we find that it is not knowable a priori after all.
The standard treatment of conditional probability leaves conditional probability undefined when the conditioning proposition has zero probability. Nonetheless, some find the option of extending the scope of conditional probability to include zero-probability conditions attractive or even compelling. This article reviews some of the pitfalls associated with this move, and concludes that, for the most part, probabilities conditional on zero-probability propositions are more trouble than they are worth.
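For reference, the standard (ratio) treatment at issue defines conditional probability only when the condition has positive probability:

\[
P(B \mid A) \;=\; \frac{P(A \wedge B)}{P(A)}, \qquad \text{undefined when } P(A) = 0 .
\]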
In his recent book Warranted Christian Belief, Alvin Plantinga argues that the defender of naturalistic evolution is faced with a defeater for his position: as products of naturalistic evolution, we have no way of knowing if our cognitive faculties are in fact reliably aimed at the truth. This defeater is successfully avoided by the theist in that, given theism, we can be reasonably secure that our cognitive faculties are indeed reliable. I argue that Plantinga's argument is ultimately based on a faulty comparison: he is comparing naturalistic evolution generally to one particular model of theism. In light of this analysis, the two models either stand or fall together with respect to the defeater that Plantinga offers.
I discuss Richard Swinburne’s account of religious experience in his probabilistic case for theism. I argue, pace Swinburne, that even if cosmological considerations render theism not too improbable, religious experience does not render it more probable than not.
The history of science is often conceptualized through 'paradigm shifts,' where the accumulation of evidence leads to abrupt changes in scientific theories. Experimental evidence suggests that this kind of hypothesis revision occurs in more mundane circumstances, such as when children learn concepts and when adults engage in strategic behavior. In this paper, I argue that the model of hypothesis testing can explain how people learn certain complex, theory-laden propositions such as conditional sentences ('If A, then B') and probabilistic constraints ('The probability that A is p'). Theories are formalized as probability distributions over a set of possible outcomes and theory change is triggered by a constraint which is incompatible with the initial theory. This leads agents to consult a higher order probability function, or a 'prior over priors,' to choose the most likely alternative theory which satisfies the constraint. The hypothesis testing model is applied to three examples: a simple probabilistic constraint involving coin bias, the sundowners problem for conditional learning, and the Judy Benjamin problem for learning conditional probability constraints. The model of hypothesis testing is contrasted with the more conservative learning theory of relative information minimization, which dominates current approaches to learning conditional and probabilistic information.
This Open Access book addresses the age-old problem of infinite regresses in epistemology. How can we ever come to know something if knowing requires having good reasons, and reasons can only be good if they are backed by good reasons in turn? The problem has puzzled philosophers ever since antiquity, giving rise to what is often called Agrippa's Trilemma. The current volume approaches the old problem in a provocative and thoroughly contemporary way. Taking seriously the idea that good reasons are typically probabilistic in character, it develops and defends a new solution that challenges venerable philosophical intuitions and explains why they were mistakenly held. Key to the new solution is the phenomenon of fading foundations, according to which distant reasons are less important than those that are nearby. The phenomenon takes the sting out of Agrippa's Trilemma; moreover, since the theory that describes it is general and abstract, it is readily applicable outside epistemology, notably to debates on infinite regresses in metaphysics.
The expression conditional fallacy identifies a family of arguments deemed to entail odd and false consequences for notions defined in terms of counterfactuals. The antirealist notion of truth is typically defined in terms of what a rational enquirer or a community of rational enquirers would believe if they were suitably informed. This notion is deemed to entail, via the conditional fallacy, odd and false propositions, for example that there necessarily exists a rational enquirer. If these consequences do indeed follow from the antirealist notion of truth, alethic antirealism should probably be rejected. In this paper we analyse the conditional fallacy from a semantic (i.e. model-theoretic) point of view. This allows us to identify with precision the philosophical commitments that ground the validity of this type of argument. We show that the conditional fallacy arguments against alethic antirealism are valid only if controversial metaphysical assumptions are accepted. We suggest that the antirealist is not committed to the conditional fallacy because she is not committed to some of these assumptions.
Polysemy seems to be a relatively neglected phenomenon within philosophy of language as well as in many quarters in linguistic semantics. Not all variations in a word's contribution to truth-conditional contents are to be thought of as expressions of the phenomenon of polysemy, but it can be argued that many are. Polysemous terms are said to contribute senses or aspects to truth-conditional contents. In this paper, I will make use of the notion of aspect to argue that some apparently wild variations in an utterance's truth conditions are instead quite systematic. In particular, I will focus on Travis' much debated green leaves case and explain it in terms of the polysemy of the noun, and in particular in terms of the as-it-is and the as-it-looks aspects associated with kind words.
This book explores a question central to philosophy--namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it--roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
Michael Fara's 'habitual analysis' of disposition ascriptions is equivalent to a kind of ceteris paribus conditional analysis which has no evident advantage over Martin's well known and simpler analysis. I describe an unsatisfactory hypothetical response to Martin's challenge, which is lacking in just the same respect as the analysis considered by Martin; Fara's habitual analysis is equivalent to this hypothetical analysis. The feature of the habitual analysis that is responsible for this cannot be harmlessly excised, for the resulting analysis would be subject to familiar counter-examples.
Many philosophers argue that Keynes's concept of the "weight of arguments" is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of "Evidential Probability" to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness of this measure by employing it to develop an answer to Popper's Paradox of Ideal Evidence.
How were reliable predictions made before Pascal and Fermat's discovery of the mathematics of probability in 1654? What methods in law, science, commerce, philosophy, and logic helped us to get at the truth in cases where certainty was not attainable? The book examines how judges, witch inquisitors, and juries evaluated evidence; how scientists weighed reasons for and against scientific theories; and how merchants counted shipwrecks to determine insurance rates. Also included are the problem of induction before Hume, design arguments for the existence of God, and theories on how to evaluate scientific and historical hypotheses. It is explained how Pascal and Fermat's work on chance arose out of legal thought on aleatory contracts. The book interprets pre-Pascalian unquantified probability in a generally objective Bayesian or logical probabilist sense.
This paper argues for the importance of Chapter 33 of Book 2 of Locke's _Essay Concerning Human Understanding_ ("Of the Association of Ideas") both for Locke's own philosophy and for its subsequent reception by Hume. It is argued that in the 4th edition of the Essay of 1700, in which the chapter was added, Locke acknowledged that many beliefs, particularly in religion, are not voluntary and cannot be eradicated through reason and evidence. The author discusses the origins of the chapter in Locke's own earlier writings on madness and in discussions of Enthusiasm in religion. While recognizing, as Locke argued, that the association of ideas derived through custom and habit is the source of prejudice, Hume went on to show how it is also the basis for what Locke himself called "the highest degree of probability", namely "constant and never-failing Experience in like cases" and our belief in "steady and regular Causes."
A common objection to probabilistic theories of causation is that there are prima facie causes that lower the probability of their effects. Among the many replies to this objection, little attention has been given to Mellor's (1995) indirect strategy to deny that probability-lowering factors are bona fide causes. According to Mellor, such factors do not satisfy the evidential, explanatory, and instrumental connotations of causation. The paper argues that the evidential connotation only entails an epistemically relativized form of causal attribution, not causation itself, and that there are clear cases of explanation and instrumental reasoning that must appeal to negatively relevant factors. In the end, it suggests a more liberal interpretation of causation that restores its connotations.
Most contractualist ethical theories have a subjunctivist structure. This means that they attempt to make sense of right and wrong in terms of a set of principles which would be accepted in some idealized, non-actual circumstances. This makes these views vulnerable to the so-called conditional fallacy objection. The moral principles that are appropriate for the idealized circumstances fail to give a correct account of what is right and wrong in the ordinary situations. This chapter uses two versions of contractualism to illustrate this problem: Nicholas Southwood's and a standard contractualist theory inspired by T.M. Scanlon's contractualism. It then develops a version of Scanlon's view that can avoid the problem. This solution is based on the idea that we also need to compare different inculcation elements of moral codes in the contractualist framework. This idea also provides a new solution to the problem of the level of social acceptance at which principles should be compared.
Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.
I argue that taking the Practical Conditionals Thesis seriously demands a new understanding of the semantics of such conditionals. Practical Conditionals Thesis: A practical conditional [if A][ought] expresses B's conditional preferability given A. Paul Weirich has argued that the conditional utility of a state of affairs B on A is to be identified as the degree to which it is desired under indicative supposition that A. Similarly, exploiting the PCT, I will argue that the proper analysis of indicative practical conditionals is in terms of what is planned, desired, or preferred, given suppositional changes to an agent's information. Implementing such a conception of conditional preference in a semantic analysis of indicative practical conditionals turns out to be incompatible with any approach which treats the indicative conditional as expressing non-vacuous universal quantification over some domain of relevant antecedent-possibilities. Such analyses, I argue, encode a fundamental misunderstanding of what it is to be best, given some condition. The analysis that does the best vis-à-vis the PCT is, instead, one that blends a Context-Shifty account of indicative antecedents with an Expressivistic, or non-propositional, treatment of their practical consequents.
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object's weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are not determined in this way—these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the "problem of the priors."
Given a few assumptions, the probability of a conjunction is raised, and the probability of its negation is lowered, by conditionalising upon one of the conjuncts. This simple result appears to bring Bayesian confirmation theory into tension with the prominent dogmatist view of perceptual justification – a tension often portrayed as a kind of 'Bayesian objection' to dogmatism. In a recent paper, David Jehle and Brian Weatherson observe that, while this crucial result holds within classical probability theory, it fails within intuitionistic probability theory. They conclude that the dogmatist who is willing to take intuitionistic logic seriously can make a convincing reply to the Bayesian objection. In this paper, I argue that this conclusion is premature – the Bayesian objection can survive the transition from classical to intuitionistic probability, albeit in a slightly altered form. I shall conclude with some general thoughts about what the Bayesian objection to dogmatism does and doesn't show.
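Spelled out in classical probability (a minimal sketch of the "simple result", assuming P(A) > 0):

\[
P(A \wedge B \mid A) = \frac{P(A \wedge B)}{P(A)} \;\ge\; P(A \wedge B),
\qquad
P(\neg(A \wedge B) \mid A) = 1 - P(A \wedge B \mid A) \;\le\; P(\neg(A \wedge B)),
\]

with both inequalities strict whenever P(A) < 1 and P(A ∧ B) > 0.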
Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a sequence of outcomes, and the latter as a property of the process generating those outcomes. A discussion follows of the nature and limits of the relationship between the two notions, with largely negative verdicts on the prospects for any reduction of one to the other, although the existence of an apparently random sequence of outcomes is good evidence for the involvement of a genuinely chancy process.
I examine what the mathematical theory of random structures can teach us about the probability of Plenitude, a thesis closely related to David Lewis's modal realism. Given some natural assumptions, Plenitude is reasonably probable a priori, but in principle it can be (and plausibly it has been) empirically disconfirmed—not by any general qualitative evidence, but rather by our de re evidence.
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent's doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
Effects associated in quantum mechanics with a divisible probability wave are explained as physically real consequences of the equal but opposite reaction of the apparatus as a particle is measured. Taking as illustration a Mach-Zehnder interferometer operating by refraction, it is shown that this reaction must comprise a fluctuation in the reradiation field of complementary effect to the changes occurring in the photon as it is projected into one or other path. The evolution of this fluctuation through the experiment will explain the alternative states of the particle discerned in self-interference, while the maintenance of equilibrium in the face of such fluctuations becomes the source of the Born probabilities. In this scheme, the probability wave is a mathematical artifact, epistemic rather than ontic, and akin in this respect to the simplifying constructions of geometrical optics.
Logical Probability (LP) is strictly distinguished from Statistical Probability (SP). To measure semantic information or confirm hypotheses, we need to use the sampling distribution (conditional SP function) to test or confirm the fuzzy truth function (conditional LP function). The Semantic Information Measure (SIM) proposed is compatible with Shannon's information theory and Fisher's likelihood method. It can ensure that the smaller the LP of a predicate and the larger the truth value of the proposition, the more information there is. So the SIM can be used as Popper's information criterion for falsification or test. The SIM also allows us to optimize the truth value of counterexamples or degrees of disbelief in a hypothesis to get the optimized degree of belief, i.e. the Degree of Confirmation (DOC). To explain confirmation, this paper 1) provides the calculation method of the DOC of universal hypotheses; 2) discusses how to resolve the Raven Paradox with the new DOC and its increment; 3) derives the DOC of rapid HIV tests: DOC of "+" = 1 - (1 - specificity)/sensitivity, which is similar to the Likelihood Ratio (= sensitivity/(1 - specificity)) but has the upper limit 1; 4) discusses negative DOC for excessive affirmations, wrong hypotheses, or lies; and 5) discusses the DOC of general hypotheses with GPS as an example.
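A small numerical sketch of the confirmation formula quoted above, compared with the classical likelihood ratio; the sensitivity and specificity figures are made-up illustration values, not taken from the paper:

```python
# Degree of Confirmation (DOC) of a positive test result, as quoted in the
# abstract: DOC("+") = 1 - (1 - specificity) / sensitivity, compared with the
# positive likelihood ratio LR = sensitivity / (1 - specificity).
# The numbers below are illustrative only.

def doc_positive(sensitivity: float, specificity: float) -> float:
    """Degree of confirmation of a positive result (upper limit 1)."""
    return 1 - (1 - specificity) / sensitivity

def likelihood_ratio(sensitivity: float, specificity: float) -> float:
    """Classical positive likelihood ratio (unbounded above)."""
    return sensitivity / (1 - specificity)

if __name__ == "__main__":
    sens, spec = 0.98, 0.995   # hypothetical rapid-test characteristics
    print(f"DOC(+) = {doc_positive(sens, spec):.4f}")    # bounded by 1
    print(f"LR(+)  = {likelihood_ratio(sens, spec):.1f}")  # can grow without bound
```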
Brogaard and Salerno (2005, Nous, 39, 123–139) have argued that antirealism resting on a counterfactual analysis of truth is flawed because it commits a conditional fallacy by entailing the absurdity that there is necessarily an epistemic agent. Brogaard and Salerno's argument relies on a formal proof built upon the criticism of two parallel proofs given by Plantinga (1982, "Proceedings and Addresses of the American Philosophical Association", 56, 47–70) and Rea (2000, "Nous", 34, 291–301). If this argument were conclusive, antirealism resting on a counterfactual analysis of truth should probably be abandoned. I argue however that the antirealist is not committed to a controversial reading of counterfactuals presupposed in Brogaard and Salerno's proof, and that the antirealist can in principle adopt an alternative reading that makes this proof invalid. My conclusion is that no reductio of antirealism resting on a counterfactual analysis of truth has yet been provided.
This dissertation is devoted to empirically contrasting the Suppositional Theory of conditionals, which holds that indicative conditionals serve the purpose of engaging in hypothetical thought, and Inferentialism, which holds that indicative conditionals express reason relations. Throughout a series of experiments, probabilistic and truth-conditional variants of Inferentialism are investigated using new stimulus materials, which manipulate previously overlooked relevance conditions. These studies are some of the first published studies to directly investigate the central claims of Inferentialism empirically. In contrast, the Suppositional Theory of conditionals has an impressive track record through more than a decade of intensive testing. The evidence for the Suppositional Theory encompasses three sources. Firstly, direct investigations of the probability of indicative conditionals, which substantiate "the Equation" (P(if A, then C) = P(C|A)). Secondly, the pattern of results known as the "defective truth table" effect, which corroborates the de Finetti truth table. And thirdly, indirect evidence from the uncertain and-to-if inference task. Through four studies, each of these sources of evidence is scrutinized anew under the application of novel stimulus materials that factorially combine all permutations of prior and relevance levels of two conjoined sentences. The results indicate that the Equation only holds under positive relevance (P(C|A) – P(C|¬A) > 0) for indicative conditionals. In the case of irrelevance (P(C|A) – P(C|¬A) = 0), or negative relevance (P(C|A) – P(C|¬A) < 0), the strong relationship between P(if A, then C) and P(C|A) is disrupted. This finding suggests that participants tend to view natural language conditionals as defective under irrelevance and negative relevance (Chapter 2). Furthermore, most of the participants turn out only to be probabilistically coherent above chance levels for the uncertain and-to-if inference in the positive relevance condition, when applying the Equation (Chapter 3). Finally, the results on the truth table task indicate that the de Finetti truth table is at most descriptive for about a third of the participants (Chapter 4). Conversely, strong evidence for a probabilistic implementation of Inferentialism could be obtained from assessments of P(if A, then C) across relevance levels (Chapter 2) and the participants' performance on the uncertain and-to-if inference task (Chapter 3). Yet the results from the truth table task suggest that these findings could not be extended to truth-conditional Inferentialism (Chapter 4). On the contrary, strong dissociations could be found between the presence of an effect of the reason relation reading on the probability and acceptability evaluations of indicative conditionals (and connate sentences), and the lack of an effect of the reason relation reading on the truth evaluation of the same sentences. A bird's eye view on these surprising results is taken in the final chapter, and it is discussed which perspectives these results open up for future research.
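To make the notation concrete, here is a small illustration (my own sketch, not the dissertation's stimulus materials or analysis code) of the two quantities used above: the Equation's right-hand side P(C|A) and the relevance measure ΔP = P(C|A) − P(C|¬A), computed from a toy joint distribution:

```python
# P(C|A) and relevance delta_p = P(C|A) - P(C|not-A) from a joint distribution
# over the four truth-value combinations of A and C. Numbers are illustrative.

joint = {  # P(A=a, C=c); must sum to 1
    (True, True): 0.30, (True, False): 0.10,
    (False, True): 0.20, (False, False): 0.40,
}

def p_c_given(a_value: bool) -> float:
    """Estimate P(C | A = a_value) from the joint table."""
    p_a = sum(p for (a, _), p in joint.items() if a == a_value)
    return joint[(a_value, True)] / p_a

p_c_given_a = p_c_given(True)          # candidate value for P(if A, then C)
delta_p = p_c_given_a - p_c_given(False)

print(f"P(C|A) = {p_c_given_a:.2f}, delta_p = {delta_p:+.2f}")
# delta_p > 0: positive relevance; = 0: irrelevance; < 0: negative relevance.
```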
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs's account, (ii) Branden Fitelson's account, (iii) Erik Olsson's account, and (iv) Tomoji Shogenji's account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the "Problem of Justified Inconsistent Beliefs."
Recently there have been several attempts in formal epistemology to develop an adequate probabilistic measure of coherence. There is much to recommend probabilistic measures of coherence. They are quantitative and render formally precise a notion—coherence—notorious for its elusiveness. Further, some of them do very well, intuitively, on a variety of test cases. Siebel, however, argues that there can be no adequate probabilistic measure of coherence. Take some set of propositions A, some probabilistic measure of coherence, and a probability distribution such that all the probabilities on which A's degree of coherence depends (according to the measure in question) are defined. Then, the argument goes, the degree to which A is coherent depends solely on the details of the distribution in question and not at all on the explanatory relations, if any, standing between the propositions in A. This is problematic, the argument continues, because, first, explanation matters for coherence, and, second, explanation cannot be adequately captured solely in terms of probability. We argue that Siebel's argument falls short.
*This work is no longer under development* Two major themes in the literature on indicative conditionals are that the content of indicative conditionals typically depends on what is known [1], and that conditionals are intimately related to conditional probabilities [2]. In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true [3]. This aspect has recently been brought to the fore, and defended, by Tim Williamson, who uses it to characterize alethic necessity by exploiting such equivalences as □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are epistemically impossible being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature [4]. This allows one to characterize an epistemic modal by the equivalence □A ⇔ (¬A → A). For simplicity, in what follows we write □A as KA and think of it as expressing that subject S knows that A [5]. The connection to probability has received much attention. Stalnaker suggested, as a way of articulating the 'Ramsey Test', the following very general schema for indicative conditionals relative to some probability function P: P(A → B) = P(B | A). The plausibility of this characterization will depend on the exact sense of 'epistemically possible' in play—if it is compatibility with what a single subject knows, then Kp can be read 'the relevant subject knows that p'. If it is more delicately formulated, we might be able to read K as the epistemic modal 'must'. [Notes: 1. For example, Nolan; Weatherson; Gillies. 2. For example, Stalnaker; McGee; Adams. 3. Lewis; see Nolan for criticism. 4. 'Epistemically possible' here means incompatible with what is known. 5. This idea was suggested to me in conversation by John Hawthorne; I do not know of it being explored in print.]
If we add as an extra premise that, if the agent does know H, then it is possible for her to know E → H, we get the conclusion that the agent does not really know H. But even without that closure premise, or something like it, the conclusion seems quite dramatic. One possible response to the argument, floated by both Descartes and Hume, is to accept the conclusion and embrace scepticism. We cannot know anything that goes beyond our evidence, so we do not know very much at all. This is a remarkably sceptical conclusion, so we should resist it if at all possible. A more modern response, associated with externalists like John McDowell and Timothy Williamson, is to accept the conclusion but deny it is as sceptical as it first appears. The Humean argument, even if it works, only shows that our evidence and our knowledge are more closely linked than we might have thought. Perhaps that's true because we have a lot of evidence, not because we have very little knowledge. There's something right about this response, I think. We have more evidence than Descartes or Hume thought we had. But I think we still need the idea of ampliative knowledge. It stretches the concept of evidence to breaking point to suggest that all of our knowledge, including knowledge about the future, is part of our evidence. So the conclusion really is unacceptable. Or, at least, I think we should try to see what an epistemology that rejects the conclusion looks like.
Dutch Book arguments have been presented for static belief systems and for belief change by conditionalization. An argument is given here that a rule for belief change which under certain conditions violates probability kinematics will leave the agent open to a Dutch Book.
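For reference (my addition, not part of the abstract): probability kinematics here refers to Jeffrey conditionalization, on which an experience that redistributes credence over a partition {E_i} updates credence in any proposition A by

\[
P_{\text{new}}(A) \;=\; \sum_i P_{\text{old}}(A \mid E_i)\, P_{\text{new}}(E_i),
\]

which reduces to ordinary conditionalization when some P_new(E_k) = 1.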
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively "subjective" and "objective" interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an "objective" probability.
We provide a 'verisimilitudinarian' analysis of the well-known Linda paradox or conjunction fallacy, i.e., the fact that most people judge the conjunctive statement "Linda is a bank teller and is active in the feminist movement" (B & F) as more probable than the isolated statement "Linda is a bank teller" (B), contrary to an uncontroversial principle of probability theory. The basic idea is that experimental participants may judge B & F a better hypothesis about Linda as compared to B because they evaluate B & F as more verisimilar than B. In fact, the hypothesis "feminist bank teller", while less likely to be true than "bank teller", may well be a better approximation to the truth about Linda.
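The uncontroversial principle in question is the conjunction rule: a conjunction can never be more probable than either of its conjuncts, since

\[
P(B \wedge F) \;=\; P(B)\,P(F \mid B) \;\le\; P(B).
\]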
Bayesian confirmation theory is rife with confirmation measures. Zalabardo focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate, but that each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other three measures. Glass and McCartney, hereafter "G&M," accept the conclusion of Zalabardo's argument along with each of the premises in it. They nonetheless try to improve on Zalabardo's argument by replacing his third adequacy condition with a weaker condition. They do this because of a worry to the effect that Zalabardo's third adequacy condition runs counter to the idea behind his first adequacy condition. G&M have in mind confirmation in the sense of increase in probability: the degree to which E confirms H is a matter of the degree to which E increases H's probability. I call this sense of confirmation "IP." I set out four ways of precisifying IP. I call them "IP1," "IP2," "IP3," and "IP4." Each of them is based on the assumption that the degree to which E increases H's probability is a matter of the distance between P(H | E) and a certain other probability involving H. I then evaluate G&M's argument in light of them.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
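As a reading aid, the four measure names are standardly taken to abbreviate probability difference, probability ratio, likelihood difference, and likelihood ratio; the formulations below are those standard glosses, assumed here for illustration rather than quoted from Zalabardo:

```python
# Standard glosses of the four confirmation measures named above, computed
# from a toy joint distribution over hypothesis H and evidence E.
# The joint probabilities are illustrative values summing to 1.

joint = {  # P(H=h, E=e)
    (True, True): 0.24, (True, False): 0.06,
    (False, True): 0.14, (False, False): 0.56,
}

def p(h=None, e=None):
    """Marginal or joint probability, with None meaning 'summed out'."""
    return sum(q for (hh, ee), q in joint.items()
               if (h is None or hh == h) and (e is None or ee == e))

p_h = p(h=True)
p_h_given_e = p(h=True, e=True) / p(e=True)
p_e_given_h = p(h=True, e=True) / p_h
p_e_given_not_h = p(h=False, e=True) / p(h=False)

PD = p_h_given_e - p_h                # probability difference
PR = p_h_given_e / p_h                # probability ratio
LD = p_e_given_h - p_e_given_not_h    # likelihood difference
LR = p_e_given_h / p_e_given_not_h    # likelihood ratio

print(f"PD={PD:.3f}  PR={PR:.2f}  LD={LD:.3f}  LR={LR:.2f}")
```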
A theory of cognitive systems individuation is presented and defended. The approach has some affinity with Leonard Talmy's Overlapping Systems Model of Cognitive Organization, and the paper's first section explores aspects of Talmy's view that are shared by the view developed herein. According to the view on offer -- the conditional probability of co-contribution account (CPC) -- a cognitive system is a collection of mechanisms that contribute, in overlapping subsets, to a wide variety of forms of intelligent behavior. Central to this approach is the idea of an integrated system. A formal characterization of integration is laid out in the form of a conditional-probability-based measure of the clustering of causal contributors to the production of intelligent behavior. I relate the view to the debate over extended and embodied cognition and respond to objections that have been raised in print by Andy Clark, Colin Klein, and Felipe de Brigard.
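Purely to illustrate the kind of measure being described (a hypothetical sketch of my own, not the paper's actual CPC formalism): given records of which mechanisms contributed to which behaviors, one can estimate pairwise conditional probabilities of co-contribution and average them as a crude integration score.

```python
# Hypothetical illustration of a conditional-probability-of-co-contribution
# style measure: NOT the paper's formalism, just the general idea.
# Each entry lists the mechanisms that contributed to one observed behavior.
contributions = [
    {"memory", "attention", "motor"},
    {"memory", "attention"},
    {"attention", "motor"},
    {"memory", "motor"},
]

mechanisms = sorted(set().union(*contributions))

def p_co_contribution(m1: str, m2: str) -> float:
    """Estimated P(m2 contributes | m1 contributes), from the toy records."""
    with_m1 = [b for b in contributions if m1 in b]
    return sum(m2 in b for b in with_m1) / len(with_m1)

# Average pairwise co-contribution probability as an integration score:
# higher values indicate a more tightly clustered set of contributors.
pairs = [(a, b) for a in mechanisms for b in mechanisms if a != b]
integration = sum(p_co_contribution(a, b) for a, b in pairs) / len(pairs)
print(f"integration score = {integration:.2f}")
```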
In a series of pre-registered studies, we explored (a) the difference between people's intuitions about indeterministic scenarios and their intuitions about deterministic scenarios, (b) the difference between people's intuitions about indeterministic scenarios and their intuitions about neurodeterministic scenarios (that is, scenarios where the determinism is described at the neurological level), (c) the difference between people's intuitions about neutral scenarios (e.g., walking a dog in the park) and their intuitions about negatively valenced scenarios (e.g., murdering a stranger), and (d) the difference between people's intuitions about free will and responsibility in response to first-person scenarios and third-person scenarios. We predicted that once we focused participants' attention on the two different abilities to do otherwise available to agents in indeterministic and deterministic scenarios, their intuitions would support natural incompatibilism—the view that laypersons judge that free will and moral responsibility are incompatible with determinism. This prediction was borne out by our findings.
Many epistemologists hold that an agent can come to justifiably believe that p is true by seeing that it appears that p is true, without having any antecedent reason to believe that visual impressions are generally reliable. Certain reliabilists think this, at least if the agent's vision is generally reliable. And it is a central tenet of dogmatism (as described by Pryor (2000) and Pryor (2004)) that this is possible. Against these positions it has been argued (e.g. by Cohen (2005) and White (2006)) that this violates some principles from probabilistic learning theory. To see the problem, let's note what the dogmatist thinks we can learn by paying attention to how things appear. (The reliabilist says the same things, but we'll focus on the dogmatist.) Suppose an agent receives an appearance that p, and comes to believe that p. Letting Ap be the proposition that it appears to the agent that p, and → be the material implication, we can say that the agent learns that p, and hence is in a position to infer Ap → p, once they receive the evidence Ap. This is surprising, because we can prove the following.
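The abstract cuts off before stating the result, but the standard probabilistic fact driving this style of objection (my gloss, not a quotation) is that conditionalizing on Ap can never raise the probability of the material conditional Ap → p:

\[
P(A_p \to p \mid A_p) \;=\; P(p \mid A_p)
\;\le\; P(\neg A_p) + P(A_p \wedge p) \;=\; P(A_p \to p),
\]

since the difference between the two sides is \(P(\neg A_p)\,\bigl(1 - P(p \mid A_p)\bigr) \ge 0\) whenever \(P(A_p) > 0\).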
NOTE: This paper is a reworking of some aspects of an earlier paper – 'What else justification could be' – and also an early draft of chapter 2 of Between Probability and Certainty. I'm leaving it online as it has a couple of citations and there is some material here which didn't make it into the book (and which I may yet try to develop elsewhere). My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one's risk of error – so one is justified in believing something just in case it is sufficiently likely, given one's evidence, to be true. This view is motivated by an admittedly natural thought: If we want to be fallibilists about justification then we shouldn't demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty then what could it possibly require if not epistemic likelihood? When all is said and done, I'm not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I'll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.
This paper is a response to Tyler Wunder's 'The modality of theism and probabilistic natural theology: a tension in Alvin Plantinga's philosophy' (this journal). In his article, Wunder argues that if the proponent of the Evolutionary Argument Against Naturalism (EAAN) holds theism to be non-contingent and frames the argument in terms of objective probability, then the EAAN is either unsound or theism is necessarily false. I argue that a modest revision of the EAAN renders Wunder's objection irrelevant, and that this revision actually widens the scope of the argument.
Inductive logic would be the logic of arguments that are not valid, but nevertheless justify belief in something like the way in which valid arguments would. Maybe we could describe it as the logic of "almost valid" arguments. There is a sort of transitivity to valid arguments: valid arguments can be chained together to form arguments, and such arguments are themselves valid. One wants to distinguish the "almost valid" arguments by noting that chains of "almost valid" arguments are weaker than the links that form them. But it is not clear that this is so. I have an apparent counterexample to the claim. Though, as is typical in these sorts of situations, it is hard to tell where the problem lies.