The epistemic modal auxiliaries must and might are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, must is a maximally strong epistemic operator and might is a bare possibility one. A competing account—popular amongst proponents of a probabilistic turn—says that, given a body of evidence, must φ entails that Pr(φ) is high but non-maximal and might φ that Pr(φ) is significantly greater than 0. Drawing on several observations concerning the behavior of must, might and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retraction scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which must φ entails that Pr(φ) = 1 and might φ that Pr(φ) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, must and might are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions—some of which we represent as defeasible—which capture our general expectations about the world. I will show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated ‘grammatical’ approach to the computation of scalar implicatures. Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilistic semantic framework.
In this paper I expound an argument which seems to establish that probabilism and special relativity are incompatible. I examine the argument critically, and consider its implications for interpretative problems of quantum theory, and for theoretical physics as a whole.
We investigate a basic probabilistic dynamic semantics for a fragment containing conditionals, probability operators, modals, and attitude verbs, with the aim of shedding light on the prospects for adding probabilistic structure to models of the conversational common ground.
Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified procedurally, but not epistemically, while the other two pooling methods can be justified epistemically. The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
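The three pooling rules just described can be sketched concretely. The following is a minimal illustration under our own assumptions (the function names, weights, and example distributions are invented here, not taken from the article): each rule maps two experts' probability distributions over a fixed set of mutually exclusive events to a collective distribution.

```python
# Sketch of the three pooling rules, applied to two hypothetical experts'
# probability distributions over three mutually exclusive events.
# Function names and numbers are illustrative, not from the article.

def linear_pool(dists, weights):
    """Weighted arithmetic average, event by event."""
    return [sum(w * d[i] for w, d in zip(weights, dists))
            for i in range(len(dists[0]))]

def geometric_pool(dists, weights):
    """Weighted geometric average, renormalized to sum to 1."""
    raw = [1.0] * len(dists[0])
    for w, d in zip(weights, dists):
        raw = [r * (p ** w) for r, p in zip(raw, d)]
    total = sum(raw)
    return [r / total for r in raw]

def multiplicative_pool(dists):
    """Straight product of the individual probabilities, renormalized."""
    raw = [1.0] * len(dists[0])
    for d in dists:
        raw = [r * p for r, p in zip(raw, d)]
    total = sum(raw)
    return [r / total for r in raw]

experts = [[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]]
weights = [0.5, 0.5]
print(linear_pool(experts, weights))
print(geometric_pool(experts, weights))
print(multiplicative_pool(experts))
```

Note how multiplicative pooling concentrates mass on the event that both experts rate relatively highly, whereas linear pooling simply averages the disagreement away.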
How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events -- the agenda -- is a sigma-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are sigma-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we compare our results with existing results on binary judgment aggregation and Arrovian preference aggregation. This paper is the first of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
Are special relativity and probabilism compatible? Dieks argues that they are. But the possible universe he specifies, designed to exemplify both probabilism and special relativity, either incorporates a universal "now" (and is thus incompatible with special relativity), or amounts to a many world universe (which I have discussed, and rejected as too ad hoc to be taken seriously), or fails to have any one definite overall Minkowskian-type space-time structure (and thus differs drastically from special relativity as ordinarily understood). Probabilism and special relativity appear to be incompatible after all. What is at issue is not whether "the flow of time" can be reconciled with special relativity, but rather whether explicitly probabilistic versions of quantum theory should be rejected because of incompatibility with special relativity.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been neglected somewhat. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
Nick Shea’s Representation in Cognitive Science commits him to representations in perceptual processing that are about probabilities. This commentary concerns how to adjudicate between this view and an alternative that locates the probabilities rather in the representational states’ associated “attitudes”. As background and motivation, evidence for probabilistic representations in perceptual processing is adduced, and it is shown how, on either conception, one can address a specific challenge Ned Block has raised to this evidence.
The explanatory role of natural selection is one of the long-term debates in evolutionary biology. Nevertheless, the consensus has been slippery because of conceptual confusions and the absence of a unified, formal causal model that integrates different explanatory scopes of natural selection. In this study we attempt to examine two questions: (i) What can the theory of natural selection explain? and (ii) Is there a causal or explanatory model that integrates all natural selection explananda? For the first question, we argue that five explananda have been assigned to the theory of natural selection and that four of them may actually be considered explananda of natural selection. For the second question, we claim that a probabilistic conception of causality and the statistical relevance concept of explanation are both good models for understanding the explanatory role of natural selection. We review the biological and philosophical disputes about the explanatory role of natural selection and formalize some explananda in probabilistic terms using classical results from population genetics. Most of these explananda have been discussed in philosophical terms but some of them have been mixed up and confused. We analyze and set the limits of these problems.
According to a standard assumption in epistemology, if one only partially believes that p, then one cannot thereby have knowledge that p. For example, if one only partially believes that it is raining outside, one cannot know that it is raining outside; and if one only partially believes that it is likely that it will rain outside, one cannot know that it is likely that it will rain outside. Many epistemologists will agree that epistemic agents are capable of partial beliefs in addition to full beliefs and that partial beliefs can be epistemically assessed along some dimensions. However, it has been generally assumed that such doxastic attitudes cannot possibly amount to knowledge. In Probabilistic Knowledge, Moss challenges this standard assumption and provides a formidable defense of the claim that probabilistic beliefs—a class of doxastic attitudes including credences and degrees of belief—can amount to knowledge too. Call this the probabilistic knowledge claim. Throughout the book, Moss goes to great lengths to show that probabilistic knowledge can be fruitfully applied to a variety of debates in epistemology and beyond. My goal in this essay is to explore a further application for probabilistic knowledge. I want to look at the role of probabilistic knowledge within a “knowledge-centered” psychology—a kind of psychology that assigns knowledge a central stage in explanations of intentional behavior. My suggestion is that Moss’s notion of probabilistic knowledge considerably helps further both a knowledge-centered psychology and a broadly intellectualist picture of action and know-how that naturally goes along with it. At the same time, though, it raises some interesting issues about the notion of explanation afforded by the resulting psychology.
We often have some reason to do actions insofar as they promote outcomes or states of affairs, such as the satisfaction of a desire. But what is it to promote an outcome? I defend a new version of 'probabilism about promotion'. According to Minimal Probabilistic Promotion, we promote some outcome when we make that outcome more likely than it would have been if we had done something else. This makes promotion easy and reasons cheap.
This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason’s sensitivity and selectivity to the claim, one’s prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but also show how these formal and informal normative approaches can in fact align.
This paper examines a promising probabilistic theory of singular causation developed by David Lewis. I argue that Lewis' theory must be made more sophisticated to deal with certain counterexamples involving pre-emption. These counterexamples appear to show that in the usual case singular causation requires an unbroken causal process to link cause with effect. I propose a new probabilistic account of singular causation, within the framework developed by Lewis, which captures this intuition.
The best accuracy arguments for probabilism apply only to credence functions with finite domains, that is, credence functions that assign credence to at most finitely many propositions. This is a significant limitation. It reveals that the support for the accuracy-first program in epistemology is a lot weaker than it seems at first glance, and it means that accuracy arguments cannot yet accomplish everything that their competitors, the pragmatic (Dutch book) arguments, can. In this paper, I investigate the extent to which this limitation can be overcome. Building on the best arguments in finite domains, I present two accuracy arguments for probabilism that are perfectly general—they apply to credence functions with arbitrary domains. I then discuss how the arguments’ premises can be challenged. We will see that it is particularly difficult to characterize admissible accuracy measures in infinite domains.
We report a series of experiments examining whether people ascribe knowledge for true beliefs based on probabilistic evidence. Participants were less likely to ascribe knowledge for beliefs based on probabilistic evidence than for beliefs based on perceptual evidence or testimony providing causal information. Denial of knowledge for beliefs based on probabilistic evidence did not arise because participants viewed such beliefs as unjustified, nor because such beliefs leave open the possibility of error. These findings rule out traditional philosophical accounts for why probabilistic evidence does not produce knowledge. The experiments instead suggest that people deny knowledge because they distrust drawing conclusions about an individual based on reasoning about the population to which it belongs, a tendency previously identified by “judgment and decision making” researchers. Consistent with this, participants were more willing to ascribe knowledge for beliefs based on probabilistic evidence that is specific to a particular case.
A common view among nontheists combines the de jure objection that theism is epistemically unacceptable with agnosticism about the de facto objection that theism is false. Following Plantinga, we can call this a “proper” de jure objection—a de jure objection that does not depend on any de facto objection. In his Warranted Christian Belief, Plantinga has produced a general argument against all proper de jure objections. Here I first show that this argument is logically fallacious (it makes subtle probabilistic fallacies disguised by scope ambiguities), and proceed to lay the groundwork for the construction of actual proper de jure objections.
Comparativism is the view that comparative confidences (e.g., being more confident that P than that Q) are more fundamental than degrees of belief (e.g., believing that P with some strength x). In this paper, I outline the basis for a new, non-probabilistic version of comparativism inspired by a suggestion made by Frank Ramsey in `Probability and Partial Belief'. I show how, and to what extent, `Ramseyan comparativism' might be used to weaken the (unrealistically strong) probabilistic coherence conditions that comparativism traditionally relies on.
Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilised in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, we suggest that the distinction between these is contextually sensitive. Compositionality is equated with a joint probability distribution modeling how the constituent concepts in the combination are interpreted. Marginal selectivity is introduced as a pivotal probabilistic constraint for the application of the Bell/CH and CHSH systems of inequalities. Non-compositionality is equated with a failure of marginal selectivity, or violation of either system of inequalities in the presence of marginal selectivity. This means that the conceptual combination cannot be modeled in a joint probability distribution, the variables of which correspond to how the constituent concepts are being interpreted. The formal analysis methods are demonstrated by applying them to an empirical illustration of twenty-four non-lexicalised conceptual combinations.
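The CHSH part of this test can be made concrete with a small sketch. The code below is ours and the correlation values are invented for illustration (in the article they would come from experiments on how subjects interpret the constituent concepts of a combination): given the four pairwise correlations between two binary interpretation variables, each measured under two conditions, any model in which all four variables admit a joint probability distribution bounds every CHSH expression by 2.

```python
import math

# Toy CHSH test. E maps (a, b) in {1,2} x {1,2} to correlations in [-1, 1]
# between two binary variables measured under conditions a and b.

def chsh_holds(E, tol=1e-9):
    """True iff all four CHSH expressions are bounded by 2 in absolute value.

    If the four variables have a joint probability distribution (the
    compositional case, given marginal selectivity), this bound must hold.
    """
    total = sum(E[(a, b)] for a in (1, 2) for b in (1, 2))
    # Each CHSH expression S_ij flips the sign of exactly one correlation:
    # S_ij = total - 2 * E[(i, j)]
    return all(abs(total - 2 * E[(i, j)]) <= 2 + tol
               for i in (1, 2) for j in (1, 2))

# A classical pattern of correlations passes the test ...
print(chsh_holds({(1, 1): 0.5, (1, 2): 0.5, (2, 1): 0.5, (2, 2): 0.5}))  # True
# ... while a Tsirelson-style pattern violates it: no joint distribution
# over the four interpretation variables can reproduce these correlations.
v = 1 / math.sqrt(2)
print(chsh_holds({(1, 1): v, (1, 2): v, (2, 1): v, (2, 2): -v}))         # False
```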
In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
The starting point in the development of probabilistic analyses of token causation has usually been the naïve intuition that, in some relevant sense, a cause raises the probability of its effect. But there are well-known examples both of non-probability-raising causation and of probability-raising non-causation. Sophisticated extant probabilistic analyses treat many such cases correctly, but only at the cost of excluding the possibilities of direct non-probability-raising causation, failures of causal transitivity, action-at-a-distance, prevention, and causation by absence and omission. I show that an examination of the structure of these problem cases suggests a different treatment, one which avoids the costs of extant probabilistic analyses.
How can different individuals' probability functions on a given sigma-algebra of events be aggregated into a collective probability function? Classic approaches to this problem often require 'event-wise independence': the collective probability for each event should depend only on the individuals' probabilities for that event. In practice, however, some events may be 'basic' and others 'derivative', so that it makes sense first to aggregate the probabilities for the former and then to let these constrain the probabilities for the latter. We formalize this idea by introducing a 'premise-based' approach to probabilistic opinion pooling, and show that, under a variety of assumptions, it leads to linear or neutral opinion pooling on the 'premises'. This paper is the second of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
Justification logics are constructive analogues of modal logics. They are often used as epistemic logics, particularly as models of evidentialist justification. However, in this role, justification logics are defective insofar as they represent justification with a necessity-like operator, whereas actual evidentialist justification is usually probabilistic. This paper first examines and rejects extant candidates for solving this problem: Milnikel’s Logic of Uncertain Justifications, Ghari’s Hájek–Pavelka-Style Justification Logics and a version of probabilistic justification logic developed by Kokkinis et al. It then proposes a new solution to the problem in the form of a justification logic that incorporates the essential features of both a fuzzy logic and a probabilistic logic.
Coherentism in epistemology has long suffered from lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
Jeff Paris proves a generalized Dutch Book theorem. If a belief state is not a generalized probability then one faces ‘sure loss’ books of bets. In Williams I showed that Joyce’s accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that both results are easy corollaries of the core result that Paris appeals to in proving his Dutch Book theorem. We see that every point of accuracy-domination defines a Dutch Book, but we only have a partial converse.
In mathematics, any form of probabilistic proof obtained through the application of a probabilistic method is not considered as a legitimate way of gaining mathematical knowledge. In a series of papers, Don Fallis has defended the thesis that there are no epistemic reasons justifying mathematicians’ rejection of probabilistic proofs. This paper identifies such an epistemic reason. More specifically, it is argued here that if one adopts a conception of mathematical knowledge in which an epistemic subject can know a mathematical proposition based solely on a probabilistic proof, one is then forced to admit that such an epistemic subject can know several lottery propositions based solely on probabilistic evidence. Insofar as knowledge of lottery propositions on the basis of probabilistic evidence alone is denied by the vast majority of epistemologists, it is concluded that this constitutes an epistemic reason for rejecting probabilistic proofs as a means of acquiring mathematical knowledge.
Are probabilism and special relativity compatible? Dieks argues that they are. But the possible universe he specifies, designed to exemplify both probabilism and special relativity, either incorporates a universal “now”, or amounts to a many world universe, or fails to have any one definite overall Minkowskian-type space-time structure. Probabilism and special relativity appear to be incompatible after all. What is at issue is not whether “the flow of time” can be reconciled with special relativity, but rather whether explicitly probabilistic versions of quantum theory should be rejected because of incompatibility with special relativity.
In their article 'Causes and Explanations: A Structural-Model Approach. Part I: Causes', Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of 'actual cause'. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
The success of the Bayesian approach to perception suggests probabilistic perceptual representations. But if perceptual representation is probabilistic, why doesn't normal conscious perception reflect the full probability distributions that the probabilistic point of view endorses? For example, neurons in MT/V5 that respond to the direction of motion are broadly tuned: a patch of cortex that is tuned to vertical motion also responds to horizontal motion, but when we see vertical motion, foveally, in good conditions, it does not look at all horizontal. This article argues that the best Bayesian approach to this problem does not require probabilistic representation.
In this paper, we discuss three probabilistic arguments for the existence of multiple universes. First, we provide an analysis of total evidence and use that analysis to defend Roger White's "this universe" objection to a standard fine-tuning argument for multiple universes. Second, we explain why Rodney Holder's recent cosmological argument for multiple universes is unconvincing. Third, we develop a "Cartesian argument" for multiple universes. While this argument is not open to the objections previously noted, we show that, given certain highly plausible assumptions about evidence and epistemic probability, the proposition which it treats as evidence cannot coherently be regarded as evidence for anything. This raises the question of whether to reject the assumptions or accept that such a proposition cannot be evidence.
Is it right to convict a person of a crime on the basis of purely statistical evidence? Many who have considered this question agree that it is not, posing a direct challenge to legal probabilism – the claim that the criminal standard of proof should be understood in terms of a high probability threshold. Some defenders of legal probabilism have, however, held their ground: Schoeman (1987) argues that there are no clear epistemic or moral problems with convictions based on purely statistical evidence, and speculates that our aversion to such convictions may be nothing more than an irrational bias. More recently, Hedden and Colyvan (2019, section VI) describe our reluctance to convict on the basis of purely statistical evidence as an ‘intuition’, but suggest that there may be no ‘in principle’ problem with such convictions (see also Papineau, forthcoming, section 6). In this paper, I argue that there is, in some cases, an in principle problem with a conviction based upon statistical evidence alone – namely, it commits us to a precedent which, if consistently followed through, could lead to the deliberate conviction of an innocent person. I conclude with some reflections on the idea that the criminal justice system should strive to maximise the accuracy of its verdicts – and the related idea that we should each strive to maximise the accuracy of our beliefs.
What sort of entities are electrons, photons and atoms given their wave-like and particle-like properties? Is nature fundamentally deterministic or probabilistic? Orthodox quantum theory (OQT) evades answering these two basic questions by being a theory about the results of performing measurements on quantum systems. But this evasion results in OQT being a seriously defective theory. A rival, somewhat ignored strategy is to conjecture that the quantum domain is fundamentally probabilistic. This means quantum entities, interacting with one another probabilistically, must differ radically from the entities of deterministic classical physics, the classical wave or particle. It becomes possible to conceive of quantum entities as a new kind of fundamentally probabilistic entity, the “propensiton”, neither wave nor particle. A fully micro realistic, testable rival to OQT results.
I defend an analog of probabilism that characterizes rationally coherent estimates for chances. Specifically, I demonstrate the following accuracy-dominance result for stochastic theories in the C*-algebraic framework: supposing an assignment of chance values is possible if and only if it is given by a pure state on a given algebra, your estimates for chances avoid accuracy-dominance if and only if they are given by a state on that algebra. When your estimates avoid accuracy-dominance (roughly: when you cannot guarantee that other estimates would be more accurate), I say that they are sufficiently coherent. In formal epistemology and quantum foundations, the notion of rational coherence that gets more attention requires that you never allow for a sure loss (or “Dutch book”) in a given sort of betting game; I call this notion full coherence. I characterize when these two notions of rational coherence align, and I show that there is a quantum state giving estimates that are sufficiently coherent, but not fully coherent.
Recent work by Peijnenburg, Atkinson, and Herzberg suggests that infinitists who accept a probabilistic construal of justification can overcome significant challenges to their position by attending to mathematical treatments of infinite probabilistic regresses. In this essay, it is argued that care must be taken when assessing the significance of these formal results. Though valuable lessons can be drawn from these mathematical exercises (many of which are not disputed here), the essay argues that it is entirely unclear that the form of infinitism that results meets a basic requirement: namely, providing an account of infinite chains of propositions qua reasons made available to agents.
In this article, I introduce the term “cognitivism” as a name for the thesis that degrees of belief are equivalent to full beliefs about truth-valued propositions. The thesis (of cognitivism) that degrees of belief are equivalent to full beliefs is equivocal, inasmuch as different sorts of equivalence may be postulated between degrees of belief and full beliefs. The simplest sort of equivalence (and the sort of equivalence that I discuss here) identifies having a given degree of belief with having a full belief with a specific content. This sort of view was proposed in [C. Howson and P. Urbach, Scientific reasoning: the Bayesian approach. Chicago: Open Court (1996)]. In addition to embracing a form of cognitivism about degrees of belief, Howson and Urbach argued for a brand of probabilism. I call a view, such as Howson and Urbach’s, which combines probabilism with cognitivism about degrees of belief “cognitivist probabilism”. In order to address some problems with Howson and Urbach’s view, I propose a view that incorporates several modifications of Howson and Urbach’s version of cognitivist probabilism. The view that I finally propose upholds cognitivism about degrees of belief, but deviates from the letter of probabilism, in allowing that a rational agent’s degrees of belief need not conform to the axioms of probability, in the case where the agent’s cognitive resources are limited.
The best and most popular argument for probabilism is the accuracy-dominance argument, which purports to show that alethic considerations alone support the view that an agent’s degrees of belief should always obey the axioms of probability. I argue that extant versions of the accuracy-dominance argument face a problem. In order for the mathematics of the argument to function as advertised, we must assume that every omniscient credence function is classically consistent; there can be no worlds in the set of dominance-relevant worlds that obey some other logic. This restriction cannot be motivated on alethic grounds unless we’re also willing to accept that rationality requires belief in every metaphysical necessity, as the distinction between a priori logical necessities and a posteriori metaphysical ones is not an alethic distinction. To justify the restriction to classically consistent worlds, non-alethic motivation is required. And thus, if there is a version of the accuracy-dominance argument in support of probabilism, it isn’t one that is grounded in alethic considerations alone.
All extant purely probabilistic measures of explanatory power satisfy the following technical condition: if Pr(E | H1) > Pr(E | H2) and Pr(E | ~H1) < Pr(E | ~H2), then H1’s explanatory power with respect to E is greater than H2’s explanatory power with respect to E. We argue that any measure satisfying this condition faces three serious problems – the Problem of Temporal Shallowness, the Problem of Negative Causal Interactions, and the Problem of Non-Explanations. We further argue that many such measures face a fourth problem – the Problem of Explanatory Irrelevance.
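The condition above can be checked concretely. The sketch below uses the Schupbach–Sprenger measure, one extant purely probabilistic measure of explanatory power; the numeric likelihoods and the equal priors are illustrative assumptions of mine, not taken from the article.

```python
def power(p_e_h, p_e_noth, p_h):
    """Schupbach-Sprenger explanatory power:
    (P(H|E) - P(H|~E)) / (P(H|E) + P(H|~E)), with posteriors via Bayes."""
    p_e = p_e_h * p_h + p_e_noth * (1 - p_h)          # total probability of E
    p_h_e = p_e_h * p_h / p_e                          # P(H|E)
    p_h_note = (1 - p_e_h) * p_h / (1 - p_e)           # P(H|~E)
    return (p_h_e - p_h_note) / (p_h_e + p_h_note)

# H1 with P(E|H1)=0.9, P(E|~H1)=0.1 versus H2 with P(E|H2)=0.6, P(E|~H2)=0.4,
# equal priors of 0.5: the condition's antecedent holds, and the measure
# indeed ranks H1's explanatory power above H2's.
e1 = power(0.9, 0.1, 0.5)  # ≈ 0.8
e2 = power(0.6, 0.4, 0.5)  # ≈ 0.2
assert e1 > e2
```

Any of the problems listed in the abstract would be directed at measures exhibiting exactly this ranking behavior.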
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
In my article, I present a new version of a probabilistic truth-prescribing semantics for natural language indicative conditionals. The proposed truth conditions can be paraphrased as follows: an indicative conditional is true if either (i) the corresponding conditional probability is high and the antecedent is positively probabilistically relevant to the consequent, or (ii) the probability of the antecedent equals 0. In the paper, the truth conditions are defended and some of the logical properties of the proposed semantics are described.
To understand something involves some sort of commitment to a set of propositions comprising an account of the understood phenomenon. Some take this commitment to be a species of belief; others, such as Elgin and I, take it to be a kind of cognitive policy. This paper takes a step back from debates about the nature of understanding and asks when this commitment involved in understanding is epistemically appropriate, or ‘acceptable’ in Elgin’s terminology. In particular, appealing to lessons from the lottery and preface paradoxes, it is argued that this type of commitment is sometimes acceptable even when it would be rational to assign arbitrarily low probabilities to the relevant propositions. This strongly suggests that the relevant type of commitment is sometimes acceptable in the absence of epistemic justification for belief, which in turn implies that understanding does not require justification in the traditional sense. The paper goes on to develop a new probabilistic model of acceptability, based on the idea that the maximally informative accounts of the understood phenomenon should be optimally probable. Interestingly, this probabilistic model ends up being similar in important ways to Elgin’s proposal to analyze the acceptability of such commitments in terms of ‘reflective equilibrium’.
Actual causes (e.g., Suzy's being exposed to asbestos) often bring about their effects (e.g., Suzy's suffering mesothelioma) probabilistically. I use probabilistic causal models to tackle one of the thornier difficulties for traditional accounts of probabilistic actual causation: namely, probabilistic preemption.
Moss (2018) argues that rational agents are best thought of not as having degrees of belief in various propositions but as having beliefs in probabilistic contents, or probabilistic beliefs. Probabilistic contents are sets of probability functions. Probabilistic belief states, in turn, are modeled by sets of probabilistic contents, or sets of sets of probability functions. We argue that this Mossean framework is of considerable interest quite independently of its role in Moss’ account of probabilistic knowledge or her semantics for epistemic modals and probability operators. It is an extremely general model of uncertainty. Indeed, it is at least as general and expressively powerful as every other current imprecise probability framework, including lower probabilities, lower previsions, sets of probabilities, sets of desirable gambles, and choice functions. In addition, we partially answer an important question that Moss leaves open, viz., why should rational agents have consistent probabilistic beliefs? We show that an important subclass of Mossean believers avoid Dutch bookability iff they have consistent probabilistic beliefs.
In recent years, a number of theorists have claimed that beliefs about probability are transparent. To believe probably p is simply to have a high credence that p. In this paper, I prove a variety of triviality results for theses like the above. I show that such claims are inconsistent with the thesis that probabilistic modal sentences have propositions or sets of worlds as their meaning. Then I consider the extent to which a dynamic semantics for probabilistic modals can capture theses connecting belief, certainty, credence, and probability. I show that although a dynamic semantics for probabilistic modals does allow one to validate such theses, it can only do so at a cost. I prove that such theses can only be valid if probabilistic modals do not satisfy the axioms of the probability calculus.
There are numerous formal systems that allow inference of new conditionals based on a conditional knowledge base. Many of these systems have been analysed theoretically and some have been tested against human reasoning in psychological studies, but experiments evaluating the performance of such systems are rare. In this article, we extend the experiments in [19] in order to evaluate the inferential properties of c-representations in comparison to the well-known Systems P and Z. Since it is known that System Z and c-representations mainly differ in the sorts of inheritance inferences they allow, we discuss subclass inheritance and present experimental data for this type of inference in particular.
Both Representation Theorem Arguments and Dutch Book Arguments support taking probabilistic coherence as an epistemic norm. Both depend on connecting beliefs to preferences, which are not clearly within the epistemic domain. Moreover, these connections are standardly grounded in questionable definitional/metaphysical claims. The paper argues that these definitional/metaphysical claims are insupportable. It offers a way of reconceiving Representation Theorem arguments which avoids the untenable premises. It then develops a parallel approach to Dutch Book Arguments, and compares the results. In each case preference defects serve as a diagnostic tool, indicating purely epistemic defects.
This paper calls for a re-appraisal of McGee's analysis of the semantics, logic and probabilities of indicative conditionals presented in his 1989 paper 'Conditional probabilities and compounds of conditionals'. The probabilistic measures introduced by McGee are given a new axiomatisation built on the principle that the antecedent of a conditional is probabilistically independent of the conditional, and a more transparent method of constructing such measures is provided. McGee's Dutch book argument is restructured to more clearly reveal that it introduces a novel contribution to the epistemology of semantic indeterminacy, and it is shown that its more controversial implications are unavoidable if we want to maintain the Ramsey Test along with the standard laws of probability. Importantly, it is shown that the counterexamples that have been levelled at McGee's analysis (generating a rather wide consensus that it yields 'unintuitive' or 'wrong' probabilities for compounds) fail to strike at their intended target; for to honour the intuitions of the counterexamples one must give up either the Ramsey Test or the standard laws of probability. It will be argued that we need to give up neither if we take the counterexamples as further evidence that the indicative conditional sometimes allows for a non-epistemic 'causal' interpretation alongside its usual epistemic interpretation.
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the “Problem of Justified Inconsistent Beliefs”.
The legal scholar Henry Wigmore asserted that cross-examination is ‘the greatest legal engine ever invented for the discovery of truth.’ Was Wigmore right? Instead of addressing this question upfront, this paper offers a conceptual ground clearing. It is difficult to say whether Wigmore was right or wrong without becoming clear about what we mean by cross-examination, how it operates at trial, and what it is intended to accomplish. Despite the growing importance of legal epistemology, there is virtually no philosophical work that discusses cross-examination, its scope and function at trial. This paper makes a first attempt at clearing the ground by articulating an analysis of cross-examination using probability theory and Bayesian networks. This analysis relies on the distinction between undercutting and rebutting evidence. A preliminary assessment of the truth-seeking function of cross-examination is offered at the end of the paper.
There is wide support in logic, philosophy, and psychology for the hypothesis that the probability of the indicative conditional of natural language, $P(\textit{if } A \textit{ then } B)$, is the conditional probability of $B$ given $A$, $P(B|A)$. We identify a conditional which is such that $P(\textit{if } A \textit{ then } B)= P(B|A)$ with de Finetti's conditional event, $B|A$. An objection to making this identification in the past was that it appeared unclear how to form compounds and iterations of conditional events. In this paper, we illustrate how to overcome this objection with a probabilistic analysis, based on coherence, of these compounds and iterations. We interpret the compounds and iterations as conditional random quantities which, given some logical dependencies, may reduce to conditional events. We show how the inference to $B|A$ from $A$ and $B$ can be extended to compounds and iterations of both conditional events and biconditional events. Moreover, we determine the respective uncertainty propagation rules. Finally, we make some comments on extending our analysis to counterfactuals.
Probabilistic support is not transitive. There are cases in which x probabilistically supports y, i.e., Pr(y|x) > Pr(y), y, in turn, probabilistically supports z, and yet it is not the case that x probabilistically supports z. Tomoji Shogenji, though, establishes a condition for transitivity in probabilistic support, that is, a condition such that, for any x, y, and z, if Pr(y|x) > Pr(y), Pr(z|y) > Pr(z), and the condition in question is satisfied, then Pr(z|x) > Pr(z). I argue for a second and weaker condition for transitivity in probabilistic support. This condition, or the principle involving it, makes it easier (than does the condition Shogenji provides) to establish claims of probabilistic support, and has the potential to play an important role in at least some areas of philosophy.
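The failure of transitivity noted above can be witnessed with a tiny example. The sample space and events below are my own illustrative choices (not drawn from the abstract): a uniform three-point space in which x supports y and y supports z, yet x rules z out entirely.

```python
from fractions import Fraction

# Uniform probability over a three-point sample space.
omega = {1, 2, 3}
x, y, z = {1}, {1, 2}, {2}

def pr(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event & omega), len(omega))

def pr_given(a, b):
    """Conditional probability Pr(a|b) under the uniform measure."""
    return Fraction(len(a & b), len(b))

# x supports y: Pr(y|x) = 1 > Pr(y) = 2/3
assert pr_given(y, x) > pr(y)
# y supports z: Pr(z|y) = 1/2 > Pr(z) = 1/3
assert pr_given(z, y) > pr(z)
# ...yet x does not support z: Pr(z|x) = 0 < Pr(z) = 1/3
assert pr_given(z, x) < pr(z)
```

Any sufficient condition for transitivity, Shogenji's or a weaker one, must therefore exclude configurations like this one.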