This paper argues that the technical notion of conditional probability, as given by the ratio analysis, is unsuitable for dealing with our pretheoretical and intuitive understanding of both conditionality and probability. This is an ontological account of conditionals that includes an irreducible dispositional connection between the antecedent and consequent conditions and where the conditional has to be treated as an indivisible whole rather than as compositional. The relevant type of conditionality is found in some well-defined group of conditional statements. As an alternative, therefore, we briefly offer grounds for what we would call an ontological reading of both conditionality and conditional probability in general. It is not offered as a fully developed theory of conditionality but can be used, we claim, to explain why calculations according to the RATIO scheme do not coincide with our intuitive notion of conditional probability. What it shows us is that for an understanding of the whole range of conditionals we will need what John Heil (2003), in response to Quine (1953), calls an ontological point of view.
Karl Popper discovered in 1938 that the unconditional probability of a conditional of the form ‘If A, then B’ normally exceeds the conditional probability of B given A, provided that ‘If A, then B’ is taken to mean the same as ‘Not (A and not B)’. So it was clear (but presumably only to him at that time) that the conditional probability of B given A cannot be reduced to the unconditional probability of the material conditional ‘If A, then B’. I describe how this insight was developed in Popper’s writings and I add to this historical study a logical one, in which I compare laws of excess in Kolmogorov probability theory with laws of excess in Popper probability theory.
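In modern notation, the excess Popper noted can be stated compactly (this is a standard derivation, not the paper's own presentation; A and B are arbitrary propositions with P(A) > 0):

\[
P(\neg(A \wedge \neg B)) - P(B \mid A) \;=\; P(\neg A) + P(A \wedge B) - \frac{P(A \wedge B)}{P(A)} \;=\; P(\neg A)\,\bigl(1 - P(B \mid A)\bigr) \;\ge\; 0,
\]

with equality only when P(A) = 1 or P(B | A) = 1, so the material conditional is always at least as probable as the corresponding conditional probability.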
Conditional probability is often used to represent the probability of the conditional. However, triviality results suggest that the thesis that the probability of the conditional always equals conditional probability leads to untenable conclusions. In this paper, I offer an interpretation of this thesis in a possible worlds framework, arguing that the triviality results make assumptions at odds with the use of conditional probability. I argue that these assumptions come from a theory called the operator theory and that the rival restrictor theory can avoid these problematic assumptions. In doing so, I argue that recent extensions of the triviality arguments to restrictor conditionals fail, making assumptions which are only justified on the operator theory.
Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian reasoning with idealized models in science.
The standard treatment of conditional probability leaves conditional probability undefined when the conditioning proposition has zero probability. Nonetheless, some find the option of extending the scope of conditional probability to include zero-probability conditions attractive or even compelling. This article reviews some of the pitfalls associated with this move, and concludes that, for the most part, probabilities conditional on zero-probability propositions are more trouble than they are worth.
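For reference, the standard ratio treatment at issue defines

\[
P(A \mid B) = \frac{P(A \wedge B)}{P(B)},
\]

which is simply undefined whenever P(B) = 0; the extensions discussed above aim to assign values in exactly those cases.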
Studies of several languages, including Swahili [swa], suggest that realis (actual, realizable) and irrealis (unlikely, counterfactual) meanings vary along a scale (e.g., 0.0–1.0). T-values (True, False) and P-values (probability) account for this pattern. However, logic cannot describe or explain (a) epistemic stances toward beliefs, (b) deontic and dynamic stances toward states-of-being and actions, and (c) context-sensitivity in conditional interpretations. (a)–(b) are deictic properties (positions, distance) of ‘embodied’ Frames of Reference (FoRs)—space-time loci in which agents perceive and from which they contextually act (Rohrer 2007a, b). I argue that the embodied FoR describes and explains (a)–(c) better than T-values and P-values alone. In this cognitive-functional-descriptive study, I represent these embodied FoRs using Unified Modeling Language (UML) mental spaces in analyzing Swahili conditional constructions to show how necessary, sufficient, and contributing conditions obtain on the embodied FoR networks level.
Dutch Book arguments have been presented for static belief systems and for belief change by conditionalization. An argument is given here that a rule for belief change which under certain conditions violates probability kinematics will leave the agent open to a Dutch Book.
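For reference, the rule of probability kinematics (Jeffrey conditioning) that the argument concerns: when experience shifts the probabilities of a partition {E_1, ..., E_n} to new values q_i, the updated probability of any proposition A is

\[
P_{\text{new}}(A) \;=\; \sum_i P_{\text{old}}(A \mid E_i)\, q_i .
\]

The paper's claim is that, under certain conditions, a rule for belief change that violates this schema leaves the agent open to a Dutch Book.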
Stalnaker's Thesis about indicative conditionals is, roughly, that the probability one ought to assign to an indicative conditional equals the probability that one ought to assign to its consequent conditional on its antecedent. The thesis seems right. If you draw a card from a standard 52-card deck, how confident are you that the card is a diamond if it's a red card? To answer this, you calculate the proportion of red cards that are diamonds -- that is, you calculate the probability of drawing a diamond conditional on drawing a red card. Skyrms' Thesis about counterfactual conditionals is, roughly, that the probability that one ought to assign to a counterfactual equals one's rational expectation of the chance, at a relevant past time, of its consequent conditional on its antecedent. This thesis also seems right. If you decide not to enter a 100-ticket lottery, how confident are you that you would have won had you bought a ticket? To answer this, you calculate the prior chance -- that is, the chance just before your decision not to buy a ticket -- of winning conditional on entering the lottery. The central project of this article is to develop a new uniform theory of conditionals that allows us to derive a version of Skyrms' Thesis from a version of Stalnaker's Thesis, together with a chance-deference norm relating rational credence to beliefs about objective chance.
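Spelling out the card example as arithmetic: since every diamond is a red card,

\[
P(\text{diamond} \mid \text{red}) = \frac{P(\text{diamond} \wedge \text{red})}{P(\text{red})} = \frac{13/52}{26/52} = \frac{1}{2},
\]

and on Skyrms' Thesis the analogous lottery judgment would be the prior chance of winning conditional on entering, presumably 1/100 for a fair lottery with a single winning ticket.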
The logic of indicative conditionals remains the topic of deep and intractable philosophical disagreement. I show that two influential epistemic norms—the Lockean theory of belief and the Ramsey test for conditional belief—are jointly sufficient to ground a powerful new argument for a particular conception of the logic of indicative conditionals. Specifically, the argument demonstrates, contrary to the received historical narrative, that there is a real sense in which Stalnaker’s semantics for the indicative did succeed in capturing the logic of the Ramseyan indicative conditional.
The history of science is often conceptualized through 'paradigm shifts,' where the accumulation of evidence leads to abrupt changes in scientific theories. Experimental evidence suggests that this kind of hypothesis revision occurs in more mundane circumstances, such as when children learn concepts and when adults engage in strategic behavior. In this paper, I argue that the model of hypothesis testing can explain how people learn certain complex, theory-laden propositions such as conditional sentences ('If A, then B') and probabilistic constraints ('The probability that A is p'). Theories are formalized as probability distributions over a set of possible outcomes and theory change is triggered by a constraint which is incompatible with the initial theory. This leads agents to consult a higher order probability function, or a 'prior over priors,' to choose the most likely alternative theory which satisfies the constraint. The hypothesis testing model is applied to three examples: a simple probabilistic constraint involving coin bias, the sundowners problem for conditional learning, and the Judy Benjamin problem for learning conditional probability constraints. The model of hypothesis testing is contrasted with the more conservative learning theory of relative information minimization, which dominates current approaches to learning conditional and probabilistic information.
*This work is no longer under development* Two major themes in the literature on indicative conditionals are that the content of indicative conditionals typically depends on what is known;1 that conditionals are intimately related to conditional probabilities.2 In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true.3 This aspect has recently been brought to the fore, and defended by Tim Williamson, who uses it to characterize alethic necessity by exploiting such equivalences as: □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are epistemically impossible being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature.4 This allows one to characterize an epistemic modal □ by the equivalence □A ⇔ (¬A → A). For simplicity, in what follows we write □A as KA and think of it as expressing that subject S knows that A.5 The connection to probability has received much attention. Stalnaker suggested, as a way of articulating the ‘Ramsey Test’, the following very general schema for indicative conditionals relative to some probability function P: P(A → B) = P(B | A). [Footnotes: 1. For example, Nolan; Weatherson; Gillies. 2. For example, Stalnaker; McGee; Adams. 3. Lewis. See Nolan for criticism. 4. ‘Epistemically possible’ here means incompatible with what is known. 5. This idea was suggested to me in conversation by John Hawthorne. I do not know of it being explored in print.] The plausibility of this characterization will depend on the exact sense of ‘epistemically possible’ in play—if it is compatibility with what a single subject knows, then □ can be read ‘the relevant subject knows that p’. If it is more delicately formulated, we might be able to read □ as the epistemic modal ‘must’.
Inductive logic would be the logic of arguments that are not valid, but nevertheless justify belief in something like the way in which valid arguments would. Maybe we could describe it as the logic of “almost valid” arguments. There is a sort of transitivity to valid arguments. Valid arguments can be chained together to form longer arguments, and such arguments are themselves valid. One wants to distinguish the “almost valid” arguments by noting that chains of “almost valid” arguments are weaker than the links that form them. But it is not clear that this is so. I have an apparent counterexample to the claim. Though, as is typical in these sorts of situations, it is hard to tell where the problem lies.
Logical Probability (LP) is strictly distinguished from Statistical Probability (SP). To measure semantic information or confirm hypotheses, we need to use the sampling distribution (conditional SP function) to test or confirm the fuzzy truth function (conditional LP function). The Semantic Information Measure (SIM) proposed is compatible with Shannon’s information theory and Fisher’s likelihood method. It can ensure that the less the LP of a predicate is and the larger the true value of the proposition is, the more information there is. So the SIM can be used as Popper's information criterion for falsification or test. The SIM also allows us to optimize the true-value of counterexamples or degrees of disbelief in a hypothesis to get the optimized degree of belief, i.e. the Degree of Confirmation (DOC). To explain confirmation, this paper 1) provides the calculation method for the DOC of universal hypotheses; 2) discusses how to resolve the Raven Paradox with the new DOC and its increment; 3) derives the DOC of rapid HIV tests: DOC of “+” = 1 − (1 − specificity)/sensitivity, which is similar to the Likelihood Ratio (= sensitivity/(1 − specificity)) but has the upper limit 1; 4) discusses negative DOC for excessive affirmations, wrong hypotheses, or lies; and 5) discusses the DOC of general hypotheses with GPS as an example.
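As an illustrative aside, the two quantities in item 3) can be compared numerically; the sensitivity and specificity values below are hypothetical, chosen only to show that the DOC is bounded by 1 while the likelihood ratio is not.

```python
# Minimal numeric sketch of the two formulas quoted above (hypothetical inputs).

def doc_positive(sensitivity: float, specificity: float) -> float:
    """Degree of Confirmation of a positive result: 1 - (1 - specificity) / sensitivity."""
    return 1 - (1 - specificity) / sensitivity

def likelihood_ratio_positive(sensitivity: float, specificity: float) -> float:
    """Classical positive likelihood ratio: sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

sens, spec = 0.997, 0.985  # hypothetical test characteristics, for illustration only
print(round(doc_positive(sens, spec), 4))               # ~0.985, never exceeds 1
print(round(likelihood_ratio_positive(sens, spec), 1))  # ~66.5, unbounded above
```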
In “Process Reliabilism and the Value Problem” I argue that Erik Olsson and Alvin Goldman's conditional probability solution to the value problem in epistemology is unsuccessful and that it makes significant internalist concessions. In “Kinds of Learning and the Likelihood of Future True Beliefs” Olsson and Martin Jönsson try to show that my argument does “not in the end reduce the plausibility” of Olsson and Goldman's account. Here I argue that, while Olsson and Jönsson clarify and amend the conditional probability approach in a number of helpful ways, my case against it remains intact. I conclude with a constructive proposal as to how their account may be steered in a more promising direction.
We present a puzzle about knowledge, probability and conditionals. We show that in certain cases some basic and plausible principles governing our reasoning come into conflict. In particular, we show that there is a simple argument that a person may be in a position to know a conditional the consequent of which has a low probability conditional on its antecedent, contra Adams’ Thesis. We suggest that the puzzle motivates a very strong restriction on the inference of a conditional from a disjunction.
We investigate a basic probabilistic dynamic semantics for a fragment containing conditionals, probability operators, modals, and attitude verbs, with the aim of shedding light on the prospects for adding probabilistic structure to models of the conversational common ground.
In this paper I present a precise version of Stalnaker's thesis and show that it is consistent and that it predicts our intuitive judgments about the probabilities of conditionals. The thesis states that someone whose total evidence is E should have the same credence in the proposition expressed by 'if A then B' in a context where E is salient as they have conditional credence in the proposition B expresses given the proposition A expresses in that context. The thesis is formalised rigorously and two models are provided that demonstrate that the new thesis is indeed tenable within a standard possible world semantics based on selection functions. Unlike the Stalnaker-Lewis semantics, the selection functions cannot be understood in terms of similarity. A probabilistic account of selection is defended in its place. I end the paper by suggesting that this approach overcomes some of the objections often leveled at accounts of indicatives based on the notion of similarity.
Does postulating skeptical theism undermine the claim that evil strongly confirms atheism over theism? According to Perrine and Wykstra, it does undermine the claim, because evil is no more likely on atheism than on skeptical theism. According to Draper, it does not undermine the claim, because evil is much more likely on atheism than on theism in general. I show that the probability facts alone do not resolve their disagreement, which ultimately rests on which updating procedure – conditionalizing or updating on a conditional – fits both the evidence and how we ought to take that evidence into account.
I argue against the Ramsey test connecting indicative conditionals with conditional probability, by means of examples in which conditional probability is high but the conditional is intuitively implausible. At the end of the paper, I connect these issues to patterns of belief revision.
Abstract In this paper I consider an easier-to-read and, to a certain extent, improved version of the causal chance-based analysis of counterfactuals that I proposed and argued for in my A Theory of Counterfactuals. Sections 2, 3 and 4 form Part I: In it, I survey the analysis of the core counterfactuals (in which, very roughly, the antecedent is compatible with history prior to it). In section 2 I go through the three main aspects of this analysis, which are the following. First, it is a causal analysis, in that it requires that intermediate events to which the antecedent event is not a cause be preserved in the main truth-condition schema. Second, it highlights the notion central to the semantics of counterfactuals on the account presented here -- the notion of the counterfactual probability of a given counterfactual, which is the probability of the consequent given the following: the antecedent, the prior history, and the preserved intermediate events. Third, it considers the truth conditions for counterfactuals of this sort as consisting in this counterfactual probability being higher than a threshold. In section 3, I re-formulate the analysis of preservational counterfactuals in terms of the notion of being a cause, which ends up being quite compact. In section 4 I illustrate this analysis by showing how it handles two examples that have been considered puzzling – Morgenbesser's counterfactual and Edgington's counterfactual. Sections 5 and on constitute Part II: Its main initial thrust is provided in section 5, where I present the main lines of the extension of the theory from the core counterfactuals (analyzed in Part I) to counterfactuals (roughly) whose antecedents are not compatible with their prior history. In this Part II, I elaborate on counterfactuals that don't belong to the core, and more specifically on so-called reconstructional counterfactuals (as opposed to the preservational counterfactuals, which constitute the core counterfactual-type). The heart of the analysis is formulated in terms of processes leading to the antecedent (event/state), and more specifically in terms of processes likely to have led to the antecedent, a notion which is analyzed entirely in terms of chance. It covers so-called reconstructional counterfactuals as opposed to the core, so-called preservational counterfactuals, which are analyzed in sections 2 and 3 of Part I. The counterfactual probability of such reconstructional counterfactuals is determined via the probability of possible processes leading to the antecedent weighed, primarily and roughly, by the conditional probability of the antecedent given such a process: The counterfactual probability is thus, very roughly, a weighted sum for all processes most likely to have led to the antecedent, diverging at a fixed time. In section 6 I explain and elaborate further on the main points in section 5. In section 7 I illustrate the reconstructional analysis. I specify counterfactuals which are so-called process-pointers, since their consequent specifies stages in processes likely to have led to their antecedent. I argue that so-called backtracking counterfactuals are process-pointers counterfactuals, which fit into the reconstructional analysis, and do not call for a separate reading. I then illustrate cases where a speaker unwittingly employs a certain counterfactual while charitably construable as intending to assert (or ‘having in mind’) another.
Here I also cover the issue of how to construe what one can take as back-tracking counterfactuals, or counterfactuals of the reconstructional sort, and more specifically, which divergence point they should be taken as alluding to (prior to which the history is held fixed). Some such cases also give rise to what one can take as a dual reading of a counterfactual between preservational and reconstructional readings. Such cases may yield an ambiguity, where in many cases one construal is dominant. In section 8 I illustrate the analysis by applying it to the famous Bizet-Verdi counterfactuals. This detailed analysis of counterfactuals (designed for the indeterministic case) has three main distinctive elements: its being chance-based, its causal aspect, and the use it makes of processes most likely to have led to the antecedent-event. This analysis is couched in a very different conceptual base from, and is an alternative account to, analyses in terms of the standard notion of closeness or distance of possible worlds, which is the main feature of the Stalnaker-Lewis-type analyses of counterfactuals. This notion of closeness or distance plays no role whatsoever in the analysis presented here. (This notion of closeness has been left open by Stalnaker, and to a significant extent also by Lewis's second account.)
A theory of cognitive systems individuation is presented and defended. The approach has some affinity with Leonard Talmy's Overlapping Systems Model of Cognitive Organization, and the paper's first section explores aspects of Talmy's view that are shared by the view developed herein. According to the view on offer -- the conditional probability of co-contribution account (CPC) -- a cognitive system is a collection of mechanisms that contribute, in overlapping subsets, to a wide variety of forms of intelligent behavior. Central to this approach is the idea of an integrated system. A formal characterization of integration is laid out in the form of a conditional-probability-based measure of the clustering of causal contributors to the production of intelligent behavior. I relate the view to the debate over extended and embodied cognition and respond to objections that have been raised in print by Andy Clark, Colin Klein, and Felipe de Brigard.
David Builes presents a paradox concerning how confident you should be that any given member of an infinite collection of fair coins landed heads, conditional on the information that they were all flipped and only finitely many of them landed heads. We argue that if you should have any conditional credence at all, it should be 1/2.
There is a long tradition in formal epistemology and in the psychology of reasoning to investigate indicative conditionals. In psychology, the propositional calculus was taken for granted to be the normative standard of reference. Experimental tasks, evaluation of the participants’ responses, and psychological model building were inspired by the semantics of the material conditional. Recent empirical work on indicative conditionals focuses on uncertainty. Consequently, the normative standard of reference has changed. I argue why neither logic nor standard probability theory provides appropriate rationality norms for uncertain conditionals. I advocate coherence-based probability logic as an appropriate framework for investigating uncertain conditionals. Detailed proofs of the probabilistic non-informativeness of a paradox of the material conditional illustrate the approach from a formal point of view. I survey selected data on human reasoning about uncertain conditionals which additionally support the plausibility of the approach from an empirical point of view.
A study is reported testing two hypotheses about a close parallel relation between indicative conditionals, if A then B, and conditional bets, I bet you that if A then B. The first is that both the indicative conditional and the conditional bet are related to the conditional probability, P(B|A). The second is that de Finetti's three-valued truth table has psychological reality for both types of conditional – true, false, or void for indicative conditionals and win, lose or void for conditional bets. The participants were presented with an array of chips in two different colours and two different shapes, and an indicative conditional or a conditional bet about a random chip. They had to make judgments in two conditions: either about the chances of making the indicative conditional true or false or about the chances of winning or losing the conditional bet. The observed distributions of responses in the two conditions were generally related to the conditional probability, supporting the first hypothesis. In addition, a majority of participants in further conditions chose the third option, “void”, when the antecedent of the conditional was false, supporting the second hypothesis.
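The de Finetti evaluation referred to above can be written down as a small three-valued rule; the sketch below only illustrates that rule, not the study's materials.

```python
# Minimal sketch of de Finetti's three-valued evaluation of "if A then B":
# true when A and B hold, false when A holds and B fails, void when A fails.

from typing import Optional

def de_finetti(a: bool, b: bool) -> Optional[bool]:
    """Return True, False, or None ('void') for the conditional 'if a then b'."""
    if not a:
        return None  # antecedent false: the conditional / the bet is void
    return b

assert de_finetti(True, True) is True     # conditional true, bet won
assert de_finetti(True, False) is False   # conditional false, bet lost
assert de_finetti(False, True) is None    # void: bet is called off
assert de_finetti(False, False) is None   # void: bet is called off
```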
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We further argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater’s (2007, 2008) model of conditional inference—contrary to the authors’ theoretical position—has to refer also to a frequency-based probability function.
This paper discusses and relates two puzzles for indicative conditionals: a puzzle about indeterminacy and a puzzle about triviality. Both puzzles arise because of Ramsey's Observation, which states that the probability of a conditional is equal to the conditional probability of its consequent given its antecedent. The puzzle of indeterminacy is the problem of reconciling this fact about conditionals with the fact that they seem to lack truth values at worlds where their antecedents are false. The puzzle of triviality is the problem of reconciling Ramsey's Observation with various triviality proofs which establish that Ramsey's Observation cannot hold in full generality. In the paper, I argue for a solution to the indeterminacy puzzle and then apply the resulting theory to the triviality puzzle. On the theory I defend, the truth conditions of indicative conditionals are highly context dependent and such that an indicative conditional may be indeterminate in truth value at each possible world throughout some region of logical space and yet still have a nonzero probability throughout that region.
The Equation (TE) states that the probability of A → B is the probability of B given A (Jeffrey, 1964: 702–703). Lewis has shown that the acceptance of TE implies that the probability of A → B is the probability of B, which is implausible: the probability of a conditional cannot plausibly be the same as the probability of its consequent, e.g., the probability that the match will light given that it is struck is not intuitively the same as the probability that it will light (Lewis, 1976: 299–300). Here I want to counter Lewis’ claim. My aim is to argue that: (1) (TE) doesn’t track the probability of A → B, but instead our willingness to employ it on a modus ponens; (2) the triviality result doesn’t strike us as implausible if our willingness to employ A → B on a modus ponens implies a similar result; (3) (TE) is still inadequate in this limited role given that some conditionals are only employable on a modus tollens or can’t be employed on a modus ponens; (4) (TE) does not have the logical significance that is usually attributed to it, since inferential disposition is a pragmatic phenomenon.
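For reference, a minimal version of the triviality computation alluded to above (a standard reconstruction, not the paper's own text), assuming (TE) still holds after conditioning on B and on ¬B, and that P(A ∧ B) > 0 and P(A ∧ ¬B) > 0:

\[
\begin{aligned}
P(A \to B) &= P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B) \\
           &= P(B \mid A \wedge B)\,P(B) + P(B \mid A \wedge \neg B)\,P(\neg B) \\
           &= 1 \cdot P(B) + 0 \cdot P(\neg B) \;=\; P(B).
\end{aligned}
\]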
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object’s weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are not determined in this way—these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the “problem of the priors”.
Abstract The Preface Paradox, first introduced by David Makinson (1961), presents a plausible scenario where an agent is evidentially certain of each of a set of propositions without being evidentially certain of the conjunction of the set of propositions. Given reasonable assumptions about the nature of evidential certainty, this appears to be a straightforward contradiction. We solve the paradox by appeal to stake size sensitivity, which is the claim that evidential probability is sensitive to stake size. The argument is that because the informational content in the conjunction is greater than the sum of the informational content of the conjuncts, the stake size in the conjunction is higher than the sum of the stake sizes in the conjuncts. We present a theory of evidential probability that identifies knowledge with value and allows for coherent stake-sensitive beliefs. An agent’s beliefs are represented two-dimensionally as a bid-ask spread, which gives a bid price and an ask price for bets at each stake size. The bid-ask spread gets wider when there is less valuable evidence relative to the stake size, and narrower when there is more valuable evidence, according to a simple formula. The bid-ask spread can represent the uncertainty in the first-order probabilistic judgement. According to the theory it can be coherent to be evidentially certain at low stakes, but less than certain at high stakes, and therefore there is no contradiction in the Preface. The theory not only solves the paradox, but also gives a good model of decisions under risk that overcomes many of the problems associated with classic expected utility theory.
I set up two axiomatic theories of inductive support within the framework of Kolmogorovian probability theory. I call these theories ‘Popperian theories of inductive support’ because I think that their specific axioms express the core meaning of the word ‘inductive support’ as used by Popper (and, presumably, by many others, including some inductivists). As is to be expected from Popperian theories of inductive support, the main theorem of each of them is an anti-induction theorem, the stronger one of them saying, in fact, that the relation of inductive support is identical with the empty relation. It seems to me that an axiomatic treatment of the idea(s) of inductive support within orthodox probability theory could be worthwhile for at least three reasons. Firstly, an axiomatic treatment demands that the builder of a theory of inductive support state clearly, in the form of specific axioms, what he means by ‘inductive support’. Perhaps the discussion of the new anti-induction proofs of Karl Popper and David Miller would have been more fruitful if they had given an explicit definition of what inductive support is or should be. Secondly, an axiomatic treatment of the idea(s) of inductive support within Kolmogorovian probability theory might be accommodating to those philosophers who do not completely trust Popperian probability theory for having theorems which orthodox Kolmogorovian probability theory lacks; a transparent derivation of anti-induction theorems within a Kolmogorovian frame might bring additional persuasive power to the original anti-induction proofs of Popper and Miller, developed within the framework of Popperian probability theory. Thirdly, one of the main advantages of the axiomatic method is that it facilitates criticism of its products: the axiomatic theories. On the one hand, it is much easier than usual to check whether those statements which have been distinguished as theorems really are theorems of the theory under examination. On the other hand, after we have convinced ourselves that these statements are indeed theorems, we can take a critical look at the axioms—especially if we have a negative attitude towards one of the theorems. Since anti-induction theorems are not popular at all, the adequacy of some of the axioms they are derived from will certainly be doubted. If doubt should lead to a search for alternative axioms, sheer negative attitudes might develop into constructive criticism and even lead to new discoveries. I proceed as follows. In section 1, I start with a small but sufficiently strong axiomatic theory of deductive dependence, closely following Popper and Miller (1987). In section 2, I extend that starting theory to an elementary Kolmogorovian theory of unconditional probability, which I extend, in section 3, to an elementary Kolmogorovian theory of conditional probability, which in its turn gets extended, in section 4, to a standard theory of probabilistic dependence, which also gets extended, in section 5, to a standard theory of probabilistic support, the main theorem of which will be a theorem about the incompatibility of probabilistic support and deductive independence. In section 6, I extend the theory of probabilistic support to a weak Popperian theory of inductive support, which I extend, in section 7, to a strong Popperian theory of inductive support. In section 8, I reconsider Popper's anti-inductivist theses in the light of the anti-induction theorems.
I conclude the paper with a short discussion of possible objections to our anti-induction theorems, paying special attention to the topic of deductive relevance, which has so far been neglected in the discussion of the anti-induction proofs of Popper and Miller.
One thousand fair causally isolated coins will be independently flipped tomorrow morning and you know this fact. I argue that the probability, conditional on your knowledge, that any coin will land tails is almost 1 if that coin in fact lands tails, and almost 0 if it in fact lands heads. I also show that the coin flips are not probabilistically independent given your knowledge. These results are uncomfortable for those, like Timothy Williamson, who take these probabilities to play a central role in their theorizing.
The question I am addressing in this paper is the following: how is it possible to empirically test, or confirm, counterfactuals? After motivating this question in Section 1, I will look at two approaches to counterfactuals, and at how counterfactuals can be empirically tested, or confirmed, if at all, on these accounts in Section 2. I will then digress into the philosophy of probability in Section 3. The reason for this digression is that I want to use the way observable absolute and relative frequencies, two empirical notions, are used to empirically test, or confirm, hypotheses about objective chances, a metaphysical notion, as a role-model. Specifically, I want to use this probabilistic account of the testing of chance hypotheses as a role-model for the account of the testing of counterfactuals, another metaphysical notion, that I will present in Sections 4 to 8. I will conclude by comparing my proposal to one non-probabilistic and one probabilistic alternative in Section 9.
Philosophers typically rely on intuitions when providing a semantics for counterfactual conditionals. However, intuitions regarding counterfactual conditionals are notoriously shaky. The aim of this paper is to provide a principled account of the semantics of counterfactual conditionals. This principled account is provided by what I dub the Royal Rule, a deterministic analogue of the Principal Principle relating chance and credence. The Royal Rule says that an ideal doxastic agent’s initial grade of disbelief in a proposition A, given that the counterfactual distance in a given context to the closest A-worlds equals n, and no further information that is not admissible in this context, should equal n. Under the two assumptions that the presuppositions of a given context are admissible in this context, and that the theory of deterministic alethic or metaphysical modality is admissible in any context, it follows that the counterfactual distance distribution in a given context has the structure of a ranking function. The basic conditional logic V is shown to be sound and complete with respect to the resulting rank-theoretic semantics of counterfactuals.
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z.
In this paper, new evidence is presented for the assumption that the reason-relation reading of indicative conditionals ('if A, then C') reflects a conventional implicature. In four experiments, it is investigated whether relevance effects found for the probability assessment of indicative conditionals (Skovgaard-Olsen, Singmann, and Klauer, 2016a) can be classified as being produced by a) a conversational implicature, b) a (probabilistic) presupposition failure, or c) a conventional implicature. After considering several alternative hypotheses and the accumulating evidence from other studies as well, we conclude that the evidence is most consistent with the Relevance Effect being the outcome of a conventional implicature. This finding indicates that the reason-relation reading is part of the semantic content of indicative conditionals, albeit not part of their primary truth-conditional content.
In a quantum universe with a strong arrow of time, it is standard to postulate that the initial wave function started in a particular macrostate---the special low-entropy macrostate selected by the Past Hypothesis. Moreover, there is an additional postulate about statistical mechanical probabilities according to which the initial wave function is a “typical” choice in the macrostate. Together, they support a probabilistic version of the Second Law of Thermodynamics: typical initial wave functions will increase in entropy. Hence, there are two sources of randomness in such a universe: the quantum-mechanical probabilities of the Born rule and the statistical mechanical probabilities of the Statistical Postulate. I propose a new way to understand time's arrow in a quantum universe. It is based on what I call the Thermodynamic Theories of Quantum Mechanics. According to this perspective, there is a natural choice for the initial quantum state of the universe, which is given not by a wave function but by a density matrix. The density matrix plays a microscopic role: it appears in the fundamental dynamical equations of those theories. The density matrix also plays a macroscopic/thermodynamic role: it is exactly the projection operator onto the Past Hypothesis subspace. Thus, given an initial subspace, we obtain a unique choice of the initial density matrix. I call this property "the conditional uniqueness" of the initial quantum state. The conditional uniqueness provides a new and general strategy to eliminate statistical mechanical probabilities in the fundamental physical theories, by which we can reduce the two sources of randomness to only the quantum mechanical one. I also explore the idea of an absolutely unique initial quantum state, in a way that might realize Penrose's idea of a strongly deterministic universe.
The value of knowledge can vary in that knowledge of important facts is more valuable than knowledge of trivialities. This variation in the value of knowledge is mirrored by a variation in evidential standards. Matters of greater importance require greater evidential support. But all knowledge, however trivial, needs to be evidentially certain. So on one hand we have a variable evidential standard that depends on the value of the knowledge, and on the other, we have the invariant standard of evidential certainty. This paradox in the concept of knowledge runs deep in the history of philosophy. We approach this paradox by proposing a bet settlement theory of knowledge. Degrees of belief can be measured by the expected value of a bet divided by stake size, with the highest degree of belief being probability 1, or certainty. Evidence sufficient to settle the bet makes the expectation equal to the stake size and therefore has evidential probability 1. This gives us the invariant evidential certainty standard for knowledge. The value of knowledge relative to a bet is given by the stake size. We propose that evidential probability can vary with stake size, so that evidential certainty at low stakes does not entail evidential certainty at high stakes. This solves the paradox by allowing that certainty is necessary for knowledge at any stakes, but that the evidential standards for knowledge vary according to what is at stake. We give a Stake Size Variation Principle that calculates evidential probability from the value of evidence and the stakes. Stake size variant degrees of belief are probabilistically coherent and explain a greater range of preferences than orthodox expected utility theory, namely the Ellsberg and Allais preferences. The resulting theory of knowledge gives an empirically adequate, rationally grounded, unified account of evidence, value and probability.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
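As an illustrative aside, here is a textbook-style dilation example (not drawn from the paper): a fair coin event H with known probability 1/2, an event G whose probability g is completely unknown and independent of H under every prior in the set, and the partition {F, not-F} with F = "H if and only if G". The precise estimate [1/2, 1/2] for H dilates to [0, 1] conditional on either cell.

```python
# Minimal sketch: dilation of P(H) = 1/2 upon conditioning on F = (H iff G),
# where g = P(G) is unknown and H, G are independent under every prior in the set.

def p_F(g: float) -> float:
    """P(F) = P(H)P(G) + P(not H)P(not G) = 1/2 for every g."""
    return 0.5 * g + 0.5 * (1 - g)

def p_H_given_F(g: float) -> float:
    """P(H | F) = P(H and G) / P(F) = g."""
    return (0.5 * g) / p_F(g)

def p_H_given_not_F(g: float) -> float:
    """P(H | not F) = P(H and not G) / P(not F) = 1 - g."""
    return (0.5 * (1 - g)) / (1 - p_F(g))

grid = [i / 10 for i in range(11)]  # sweep the unknown g over [0, 1]
print(min(map(p_H_given_F, grid)), max(map(p_H_given_F, grid)))          # 0.0 1.0
print(min(map(p_H_given_not_F, grid)), max(map(p_H_given_not_F, grid)))  # 0.0 1.0
```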
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
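For readers unfamiliar with the update rule at issue, the sketch below implements a single step of Jeffrey conditioning for a finite hypothesis space; the names and toy numbers are illustrative only, and the merging result itself concerns what happens to sets of such priors over repeated updates.

```python
# Minimal sketch of one Jeffrey-conditioning step:
# P_new(A) = sum_i P_old(A | E_i) * q_i, for a partition {E_i} with new weights q_i.

def jeffrey_update(prior, likelihoods, new_partition_probs):
    """
    prior: dict hypothesis -> P_old(hypothesis)
    likelihoods: dict (evidence_cell, hypothesis) -> P_old(evidence_cell | hypothesis)
    new_partition_probs: dict evidence_cell -> q_i, the exogenously shifted probabilities
    """
    p_E = {e: sum(likelihoods[(e, a)] * prior[a] for a in prior)
           for e in new_partition_probs}  # P_old(E_i) by total probability
    return {a: sum(likelihoods[(e, a)] * prior[a] / p_E[e] * q  # P_old(A | E_i) * q_i
                   for e, q in new_partition_probs.items())
            for a in prior}

# Toy example: two hypotheses, evidence partition {E, notE} shifted to (0.8, 0.2).
prior = {"A1": 0.5, "A2": 0.5}
lik = {("E", "A1"): 0.9, ("E", "A2"): 0.3, ("notE", "A1"): 0.1, ("notE", "A2"): 0.7}
print(jeffrey_update(prior, lik, {"E": 0.8, "notE": 0.2}))  # {'A1': 0.625, 'A2': 0.375}
```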
KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations has recently emerged in its favour. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form ‘If I don’t know it, p’. Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. I survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either deny the possibility of harmony between knowledge and belief, or deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.
How should we account for the contextual variability of knowledge claims? Many philosophers favour an invariantist account on which such contextual variability is due entirely to pragmatic factors, leaving no interesting context-sensitivity in the semantic meaning of ‘know that.’ I reject this invariantist division of labor by arguing that pragmatic invariantists have no principled account of embedded occurrences of ‘S knows/doesn’t know that p’: Occurrences embedded within larger linguistic constructions such as conditional sentences, attitude verbs, expressions of probability, comparatives, and many others, I argue, give rise to a threefold problem of embedded implicatures.
The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.
Consider two epistemic experts—for concreteness, let them be two weather forecasters. Suppose that you aren’t certain that they will issue identical forecasts, and you would like to proportion your degrees of belief to theirs in the following way: first, conditional on either’s forecast of rain being x, you’d like your own degree of belief in rain to be x. Secondly, conditional on them issuing different forecasts of rain, you’d like your own degree of belief in rain to be some weighted average of the forecast of each. Finally, you’d like your degrees of belief to be given by an orthodox probability measure. Moderate ambitions, all. But you can’t always get what you want.
This paper outlines a formal recursive wager resolution calculus (WRC) that provides a novel conceptual framework for sentential logic via bridge rules that link wager resolution with truth values. When paired with a traditional truth-centric criterion of logical soundness WRC generates a sentential logic that is broadly truth-conditional but not truth-functional, supports the rules of proof employed in standard mathematics, and is immune to the most vexing features of their traditional implementation. WRC also supports a novel probabilistic criterion of logical soundness, the fair betting probability criterion (FBP). It guarantees that the conclusion of an FBP-valid argument is at least as credible as a conjunction of premises, and also that the conclusion is true if the premises are. In addition, WRC provides a platform for a novel non-probabilistic, computationally simpler criterion of logical soundness – the criterion of Super-validity – that issues the same logical appraisals as FBP, and hence the same guarantees.
The logical basis for information theory is the newly developed logic of partitions that is dual to the usual Boolean logic of subsets. The key concept is a "distinction" of a partition, an ordered pair of elements in distinct blocks of the partition. The logical concept of entropy based on partition logic is the normalized counting measure of the set of distinctions of a partition on a finite set--just as the usual logical notion of probability based on the Boolean logic of subsets is the normalized counting measure of the subsets (events). Thus logical entropy is a measure on the set of ordered pairs, and all the compound notions of entropy (joint entropy, conditional entropy, and mutual information) arise in the usual way from the measure (e.g., the inclusion-exclusion principle)--just like the corresponding notions of probability. The usual Shannon entropy of a partition is developed by replacing the normalized count of distinctions (dits) by the average number of binary partitions (bits) necessary to make all the distinctions of the partition.
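As an illustrative aside, the "normalized counting measure of distinctions" can be computed directly; the partition in the toy example below is hypothetical, and the result agrees with the familiar form 1 − Σ p_i² for block proportions p_i.

```python
# Minimal sketch of logical entropy: h(pi) = |dit(pi)| / |U|^2, where dit(pi) is the
# set of ordered pairs of elements of U lying in distinct blocks of the partition pi.

from itertools import product

def logical_entropy(partition):
    universe = set().union(*partition)
    n = len(universe)
    block_of = {u: i for i, block in enumerate(partition) for u in block}
    dits = sum(1 for u, v in product(universe, repeat=2) if block_of[u] != block_of[v])
    return dits / (n * n)

# U = {1, 2, 3, 4} split into two blocks of equal size:
print(logical_entropy([{1, 2}, {3, 4}]))  # 0.5
print(1 - (0.5 ** 2 + 0.5 ** 2))          # 0.5, the 1 - sum(p_i^2) form
```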
This dissertation is devoted to empirically contrasting the Suppositional Theory of conditionals, which holds that indicative conditionals serve the purpose of engaging in hypothetical thought, and Inferentialism, which holds that indicative conditionals express reason relations. Throughout a series of experiments, probabilistic and truth-conditional variants of Inferentialism are investigated using new stimulus materials, which manipulate previously overlooked relevance conditions. These studies are some of the first published studies to directly investigate the central claims of Inferentialism empirically. In contrast, the Suppositional Theory of conditionals has an impressive track record through more than a decade of intensive testing. The evidence for the Suppositional Theory encompasses three sources. Firstly, direct investigations of the probability of indicative conditionals, which substantiate “the Equation” (P(if A, then C) = P(C|A)). Secondly, the pattern of results known as “the defective truth table” effect, which corroborates the de Finetti truth table. And thirdly, indirect evidence from the uncertain and-to-if inference task. Through four studies, each of these sources of evidence is scrutinized anew under the application of novel stimulus materials that factorially combine all permutations of prior and relevance levels of two conjoined sentences. The results indicate that the Equation only holds under positive relevance (P(C|A) – P(C|¬A) > 0) for indicative conditionals. In the case of irrelevance (P(C|A) – P(C|¬A) = 0), or negative relevance (P(C|A) – P(C|¬A) < 0), the strong relationship between P(if A, then C) and P(C|A) is disrupted. This finding suggests that participants tend to view natural language conditionals as defective under irrelevance and negative relevance (Chapter 2). Furthermore, most of the participants turn out only to be probabilistically coherent above chance levels for the uncertain and-to-if inference in the positive relevance condition, when applying the Equation (Chapter 3). Finally, the results on the truth table task indicate that the de Finetti truth table is at most descriptive for about a third of the participants (Chapter 4). Conversely, strong evidence for a probabilistic implementation of Inferentialism could be obtained from assessments of P(if A, then C) across relevance levels (Chapter 2) and the participants’ performance on the uncertain and-to-if inference task (Chapter 3). Yet the results from the truth table task suggest that these findings could not be extended to truth-conditional Inferentialism (Chapter 4). On the contrary, strong dissociations could be found between the presence of an effect of the reason relation reading on the probability and acceptability evaluations of indicative conditionals (and connate sentences), and the lack of an effect of the reason relation reading on the truth evaluation of the same sentences. A bird’s eye view on these surprising results is taken in the final chapter and it is discussed which perspectives these results open up for future research.
Logical information theory is the quantitative version of the logic of partitions just as logical probability theory is the quantitative version of the dual Boolean logic of subsets. The resulting notion of information is about distinctions, differences and distinguishability and is formalized using the distinctions of a partition. All the definitions of simple, joint, conditional and mutual entropy of Shannon information theory are derived by a uniform transformation from the corresponding definitions at the logical level. The purpose of this paper is to give the direct generalization to quantum logical information theory that similarly focuses on the pairs of eigenstates distinguished by an observable, i.e., qudits of an observable. The fundamental theorem for quantum logical entropy and measurement establishes a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates that are distinguished by the measurement. Both the classical and quantum versions of logical entropy have simple interpretations as “two-draw” probabilities for distinctions. The conclusion is that quantum logical entropy is the simple and natural notion of information for quantum information theory focusing on the distinguishing of quantum states.