The standard treatment of conditional probability leaves conditional probability undefined when the conditioning proposition has zero probability. Nonetheless, some find the option of extending the scope of conditional probability to include zero-probability conditions attractive or even compelling. This article reviews some of the pitfalls associated with this move, and concludes that, for the most part, probabilities conditional on zero-probability propositions are more trouble than they are worth.
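For orientation, here is the standard ratio definition at issue, stated as background rather than quoted from the article:

    \[ P(A \mid B) = \frac{P(A \wedge B)}{P(B)}, \qquad \text{defined only when } P(B) > 0. \]

When P(B) = 0 the ratio has a zero denominator, so the standard treatment leaves the conditional probability undefined; the proposals reviewed here are attempts to assign values in exactly these cases.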
Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian reasoning with idealized models in science.
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object’s weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are not determined in this way—these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the “problem of the priors.”
In a quantum universe with a strong arrow of time, it is standard to postulate that the initial wave function started in a particular macrostate---the special low-entropy macrostate selected by the Past Hypothesis. Moreover, there is an additional postulate about statistical mechanical probabilities according to which the initial wave function is a "typical" choice in the macrostate. Together, they support a probabilistic version of the Second Law of Thermodynamics: typical initial wave functions will increase in entropy. Hence, there are two sources of randomness in such a universe: the quantum-mechanical probabilities of the Born rule and the statistical mechanical probabilities of the Statistical Postulate. I propose a new way to understand time's arrow in a quantum universe. It is based on what I call the Thermodynamic Theories of Quantum Mechanics. According to this perspective, there is a natural choice for the initial quantum state of the universe, which is given not by a wave function but by a density matrix. The density matrix plays a microscopic role: it appears in the fundamental dynamical equations of those theories. The density matrix also plays a macroscopic / thermodynamic role: it is exactly the projection operator onto the Past Hypothesis subspace. Thus, given an initial subspace, we obtain a unique choice of the initial density matrix. I call this property "the conditional uniqueness" of the initial quantum state. The conditional uniqueness provides a new and general strategy to eliminate statistical mechanical probabilities in the fundamental physical theories, by which we can reduce the two sources of randomness to only the quantum mechanical one. I also explore the idea of an absolutely unique initial quantum state, in a way that might realize Penrose's idea of a strongly deterministic universe.
IBE ('Inference to the best explanation' or abduction) is a popular and highly plausible theory of how we should judge the evidence for claims of past events based on present evidence. It has been notably developed and supported recently by Meyer following Lipton. I believe this theory is essentially correct. This paper supports IBE from a probability perspective, and argues that the retrodictive probabilities involved in such inferences should be analysed in terms of predictive probabilities and a priori probability ratios of initial events. The key point is to separate these two features. Disagreements over evidence can be traced to disagreements over either the a priori probability ratios or predictive conditional ratios. In many cases, in real science, judgements of the former are necessarily subjective. The principles of iterated evidence are also discussed. The Sceptic's position is criticised as ignoring iteration of evidence, and characteristically failing to adjust a priori probability ratios in response to empirical evidence.
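A standard identity (not drawn from the paper itself) makes the separation described above explicit. In odds form, Bayes' theorem factors the retrodictive comparison of two initial-event hypotheses H1 and H2 on present evidence E into a predictive (likelihood) ratio and an a priori probability ratio:

    \[ \frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}. \]

Disagreement about the left-hand side can then be traced to disagreement about either factor on the right, which is the diagnosis the abstract offers.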
Starting from a recent paper by S. Kaufmann, we introduce a notion of conjunction of two conditional events and then we analyze it in the setting of coherence. We give a representation of the conjoined conditional and we show that this new object is a conditional random quantity, whose set of possible values normally contains the probabilities assessed for the two conditional events. We examine some cases of logical dependencies, where the conjunction is a conditional event; moreover, we give the lower and upper bounds on the conjunction. We also examine an apparent paradox concerning stochastic independence which can actually be explained in terms of uncorrelation. We briefly introduce the notions of disjunction and iterated conditioning and we show that the usual probabilistic properties still hold.
The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
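For readers unfamiliar with the update rule at issue, Jeffrey conditioning (stated here in its textbook form, not as a quotation from the paper) revises a prior P to a posterior Q when experience shifts the probabilities over a partition E_1, ..., E_n to new values q_1, ..., q_n:

    \[ Q(A) \;=\; \sum_{i=1}^{n} P(A \mid E_i)\, q_i. \]

Ordinary conditionalization is the special case in which some q_i = 1, so a merging theorem for Jeffrey conditioning generalizes the classical merging-of-opinions results.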
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z.
In a recent article, David Kyle Johnson has claimed to have provided a ‘refutation’ of skeptical theism. Johnson’s refutation raises several interesting issues. But in this note, I focus on only one—an implicit principle Johnson uses in his refutation to update probabilities after receiving new evidence. I argue that this principle is false. Consequently, Johnson’s refutation, as it currently stands, is undermined.
A popular informal argument suggests that statistics about the preponderance of criminal involvement among particular demographic groups partially justify others in making defensive mistakes against members of the group. One could worry that evidence-relative accounts of moral rights vindicate this argument. After constructing the strongest form of this objection, I offer several replies: most demographic statistics face an unmet challenge from reference class problems; even those that meet it fail to ground non-negligible conditional probabilities; even if they did, they introduce new costs likely to cancel out any justificatory contribution of the statistic; but even if they didn’t, demographic facts are the wrong sort to make a moral difference to agents’ negative rights. I conclude that the popular argument should be rejected, and that evidence-relative theories do not have the worrisome implication.
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
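The following toy construction (my own illustration, not one of the paper's examples; the identifiers are hypothetical) shows dilation numerically with a credal set of two measures on four outcomes.

    # Toy illustration of dilation with a credal set of two probability
    # measures on the outcomes 1..4, with E = {1, 3} and F = {1, 2}.
    P1 = {1: 0.4, 2: 0.1, 3: 0.1, 4: 0.4}
    P2 = {1: 0.1, 2: 0.4, 3: 0.4, 4: 0.1}
    credal_set = [P1, P2]

    E, F, not_F = {1, 3}, {1, 2}, {3, 4}

    def prob(P, A):
        """Unconditional probability of event A under measure P."""
        return sum(P[w] for w in A)

    def cond_prob(P, A, B):
        """Conditional probability P(A | B); assumes P(B) > 0."""
        return prob(P, A & B) / prob(P, B)

    print([prob(P, E) for P in credal_set])              # [0.5, 0.5]
    print([cond_prob(P, E, F) for P in credal_set])      # [0.8, 0.2]
    print([cond_prob(P, E, not_F) for P in credal_set])  # [0.2, 0.8]

Unconditionally the set assigns E the sharp value 0.5; conditional on F, and also on not-F, the estimates spread over [0.2, 0.8], which properly includes 0.5, so the estimate becomes less precise however the experiment turns out.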
The article focuses on the prosecutor's fallacy and the interrogator's fallacy, two kinds of error in reasoning about a suspect's guilt. The prosecutor's fallacy conflates two conditional probabilities, an error encouraged by the prosecutor's interest in establishing strong evidence that will indict the defendant. The article provides a comprehensive discussion of Gerd Gigerenzer's discourse on a criminal case in Germany, explaining the perils of the prosecutor's fallacy in his application of probability to practical problems. It also discusses the interrogator's fallacy, introduced by Robert A. J. Matthews as the error of assuming that confessional evidence can never reduce the probability of guilt.
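As a hedged numerical illustration (my own, not Gigerenzer's case): suppose a forensic match has probability 1 in 10,000 for a randomly chosen innocent person, and the perpetrator is known to be one of 100,000 people. Besides the guilty person, about 10 innocent people are expected to match, so

    \[ P(\text{match} \mid \text{innocent}) = 0.0001, \qquad P(\text{innocent} \mid \text{match}) \approx \tfrac{10}{11} \approx 0.91. \]

The prosecutor's fallacy is to quote the first, tiny number as if it were the second.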
In this paper I present a precise version of Stalnaker's thesis and show that it is consistent and that it predicts our intuitive judgments about the probabilities of conditionals. The thesis states that someone whose total evidence is E should have the same credence in the proposition expressed by 'if A then B' in a context where E is salient as they have conditional credence in the proposition B expresses given the proposition A expresses in that context. The thesis is formalised rigorously, and two models are provided that demonstrate that the new thesis is indeed tenable within a standard possible world semantics based on selection functions. Unlike the Stalnaker-Lewis semantics, the selection functions cannot be understood in terms of similarity. A probabilistic account of selection is defended in its place. I end the paper by suggesting that this approach overcomes some of the objections often leveled at accounts of indicatives based on the notion of similarity.
When an agent learns of an expert's credence in a proposition about which they are an expert, the agent should defer to the expert and adopt that credence as their own. This is a popular thought about how agents ought to respond to (ideal) experts. In a Bayesian framework, it is often modelled by endowing the agent with a set of priors that achieves this result. But this model faces a number of challenges, especially when applied to non-ideal agents (who nevertheless interact with ideal experts). I outline these problems, and use them as desiderata for the development of a new model. Taking inspiration from Richard Jeffrey's development of Jeffrey conditioning, I develop a model in which expert reports are taken as exogenous constraints on the agent's posterior probabilities. I show how this model can handle a much wider class of expert reports (for example reports of conditional probabilities), and can be naturally extended to cover propositions for which the agent has no prior.
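The deference norm in the opening sentences is often written, in a standard formulation that the paper does not itself depend on, as a constraint on the agent's prior P:

    \[ P\big(A \mid \mathrm{cr}_X(A) = x\big) = x, \]

where cr_X(A) is the (ideal) expert X's credence in A. The alternative sketched in the abstract treats the report cr_X(A) = x not as a proposition to conditionalize on but as an exogenous constraint the posterior must satisfy, in the spirit of Jeffrey conditioning.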
*This work is no longer under development* Two major themes in the literature on indicative conditionals are that the content of indicative conditionals typically depends on what is known (for example, Nolan; Weatherson; Gillies), and that conditionals are intimately related to conditional probabilities (for example, Stalnaker; McGee; Adams). In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true (Lewis; see Nolan for criticism). This aspect has recently been brought to the fore, and defended, by Tim Williamson, who uses it to characterize alethic necessity by exploiting such equivalences as □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are epistemically impossible (that is, incompatible with what is known) being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature. This allows one to characterize an epistemic modal, here written K, by the equivalence KA ⇔ (¬A → A), where → is the indicative conditional; for simplicity, in what follows we think of KA as expressing that subject S knows that A. (This idea was suggested to me in conversation by John Hawthorne; I do not know of it being explored in print.) The connection to probability has received much attention: Stalnaker suggested, as a way of articulating the ‘Ramsey Test’, the very general schema P(A → B) = P(B | A) for indicative conditionals relative to some probability function P. The plausibility of this characterization will depend on the exact sense of ‘epistemically possible’ in play—if it is compatibility with what a single subject knows, then KA can be read ‘the relevant subject knows that p’. If it is more delicately formulated, we might be able to read K as the epistemic modal ‘must’.
A large number of essays address the Sleeping Beauty problem, which has been taken to undermine the validity of Bayesian inference and Bas van Fraassen's 'Reflection Principle'. In this study a straightforward analysis of the problem based on probability theory is presented. The key difference from previous works is that, apart from the random experiment imposed by the problem's description, a different one is also considered, in order to dispel the confusion about the conditional probabilities involved. The results of the analysis indicate that no inconsistency takes place, and that both Bayesian inference and the 'Reflection Principle' are valid.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
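For orientation, here is a standard stripped-down version of the probabilistic core (a textbook reconstruction, not a quotation of this paper). Suppose P(A → B) = P(B | A) holds for every function in a class closed under conditionalization, and pick A, B with P(A ∧ B) > 0 and P(A ∧ ¬B) > 0. Conditionalizing on B and on ¬B and applying the thesis again gives P(A → B | B) = P(B | A ∧ B) = 1 and P(A → B | ¬B) = P(B | A ∧ ¬B) = 0, so by total probability

    \[ P(B \mid A) = P(A \to B) = P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B) = P(B), \]

making every such A and B probabilistically independent, which is absurd. The minimal-core versions discussed in the abstract weaken the assumptions needed to run this style of argument.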
One thousand fair causally isolated coins will be independently flipped tomorrow morning and you know this fact. I argue that the probability, conditional on your knowledge, that any coin will land tails is almost 1 if that coin in fact lands tails, and almost 0 if it in fact lands heads. I also show that the coin flips are not probabilistically independent given your knowledge. These results are uncomfortable for those, like Timothy Williamson, who take these probabilities to play a central role in their theorizing.
Logical information theory is the quantitative version of the logic of partitions just as logical probability theory is the quantitative version of the dual Boolean logic of subsets. The resulting notion of information is about distinctions, differences and distinguishability and is formalized using the distinctions of a partition. All the definitions of simple, joint, conditional and mutual entropy of Shannon information theory are derived by a uniform transformation from the corresponding definitions at the logical level. The purpose of this paper is to give the direct generalization to quantum logical information theory that similarly focuses on the pairs of eigenstates distinguished by an observable, i.e., qudits of an observable. The fundamental theorem for quantum logical entropy and measurement establishes a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates that are distinguished by the measurement. Both the classical and quantum versions of logical entropy have simple interpretations as “two-draw” probabilities for distinctions. The conclusion is that quantum logical entropy is the simple and natural notion of information for quantum information theory focusing on the distinguishing of quantum states.
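As background (the standard definition from this literature, not a quotation of the paper), the logical entropy of a partition π with block probabilities p_1, ..., p_n is

    \[ h(\pi) = 1 - \sum_{i=1}^{n} p_i^{2} = \sum_{i \neq j} p_i\, p_j, \]

the probability that two independent draws land in different blocks, i.e. yield a distinction; this is the "two-draw" reading mentioned above. The quantum version replaces the blocks of a partition with the eigenspaces distinguished by an observable.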
This paper explores the possibility that causal decision theory can be formulated in terms of probabilities of conditionals. It is argued that a generalized Stalnaker semantics in combination with an underlying branching time structure not only provides the basis for a plausible account of the semantics of indicative conditionals, but also that the resulting conditionals have properties that make them well-suited as a basis for formulating causal decision theory. “Decision theory (at least if we omit the frills) is not an esoteric science, however unfamiliar it may seem to an outsider. Rather it is a systematic exposition of the consequences of certain well-chosen platitudes about belief, desire, preference and choice. It is the very core of our common-sense theory of persons, dissected out and elegantly systematized.” (David Lewis, Synthese 23:331–344, 1974, p. 337). “A small distortion in the analysis of the conditional may create spurious problems with the analysis of other concepts. So if the facts about usage favor one among a number of subtly different theories, it may be important to determine which one it is.” (Robert Stalnaker, A Defense of Conditional Excluded Middle, pp. 87–104, 1980, p. 87).
According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the evidence calls for. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In the first essay I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid without compromising the initial evidentialist motivation. The second variation replaces descriptive claims with moral claims as the objects of credence. I consider three standard arguments for probabilism with respect to descriptive uncertainty—representation theorem arguments, Dutch book arguments, and accuracy arguments—in order to examine whether such arguments can also be used to establish probabilism with respect to moral uncertainty. In the second essay, I argue that by and large they can, with some caveats. First, I don’t examine whether these arguments can be given sound non-cognitivist readings, and any conclusions therefore only hold conditional on cognitivism. Second, decision-theoretic representation theorems are found to be less convincing in the moral case, because there they implausibly commit us to thinking that intertheoretic comparisons of value are always possible. Third and finally, certain considerations may lead one to think that imprecise probabilism provides a more plausible model of moral epistemology. The third variation considers whether, in addition to conditionalization, agents may also change their minds by becoming aware of propositions they had not previously entertained, and therefore not previously assigned any probability. More specifically, I argue that if we wish to make room for reflective equilibrium in a probabilistic moral epistemology, we must allow for awareness growth. In the third essay, I sketch the outline of such a Bayesian account of reflective equilibrium. Given that this account gives a central place to awareness growth, and that the rationality constraints on belief change by awareness growth are much weaker than those on belief change by conditionalization, it follows that the rationality constraints on the credences of agents who are seeking reflective equilibrium are correspondingly weaker.
Many have argued that a rational agent's attitude towards a proposition may be better represented by a probability range than by a single number. I show that in such cases an agent will have unstable betting behaviour, and so will behave in an unpredictable way. I use this point to argue against a range of responses to the ‘two bets’ argument for sharp probabilities.
This chapter will review selected aspects of the terrain of discussions about probabilities in statistical mechanics (with no pretensions to exhaustiveness, though the major issues will be touched upon), and will argue for a number of claims. None of the claims to be defended is entirely original, but all deserve emphasis. The first, and least controversial, is that probabilistic notions are needed to make sense of statistical mechanics. The reason for this is the same reason that convinced Maxwell, Gibbs, and Boltzmann that probabilities would be needed, namely, that the second law of thermodynamics, which in its original formulation says that certain processes are impossible, must, on the kinetic theory, be replaced by a weaker formulation according to which what the original version deems impossible is merely improbable. Second is that we ought not take the standard measures invoked in equilibrium statistical mechanics as giving, in any sense, the correct probabilities about microstates of the system. We can settle for a much weaker claim: that the probabilities for outcomes of experiments yielded by the standard distributions are effectively the same as those yielded by any distribution that we should take as representing probabilities over microstates. Lastly (and most controversially): in asking about the status of probabilities in statistical mechanics, the familiar dichotomy between epistemic probabilities (credences, or degrees of belief) and ontic (physical) probabilities is insufficient; the concept of probability that is best suited to the needs of statistical mechanics is one that combines epistemic and physical considerations.
Lewis (1973) gave a short argument against conditional excluded middle, based on his treatment of ‘might’ counterfactuals. Bennett (2003), with much of the recent literature, gives an alternative take on ‘might’ counterfactuals. But Bennett claims the might-argument against CEM still goes through. This turns on a specific claim I call Bennett’s Hypothesis. I argue that, independently of issues to do with the proper analysis of might-counterfactuals, Bennett’s Hypothesis is inconsistent with CEM. But Bennett’s Hypothesis is independently objectionable, so we should resolve this tension by dropping the Hypothesis, not by dropping CEM.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
In this paper, I will discuss the various ways in which intentions can be said to be conditional, with particular attention to the internal conditions on the intentions’ content. I will first consider what it takes to carry out a conditional intention. I will then discuss how the distinctive norms of intention apply to conditional intentions and whether conditional intentions are a weaker sort of commitment than unconditional ones. This discussion will lead to the idea of what I call the ‘deep structure’ of intentions. Roughly, this is the idea that the conditional nature of our intentions is only partially made explicit in the expressions we use to communicate our intentions and in the explicit form of our thinking about and reasoning with them. Most conditions that qualify our intentions are part of a deep functional structure that can be evinced by observing the actual psychological functioning of intentions and by considering the rational requirements that they engage. I will argue that the deep structure of intentions is characteristically conditional. Genuinely unconditional intentions are only limiting instances of conditional intentions, and their contribution to agency can only be understood in light of this fact. I will conclude by showing that the characteristic conditional structure of intentions is intimately related to distinctive features of human agency, especially to its unity over time.
Enjoying great popularity in decision theory, epistemology, and philosophy of science, Bayesianism as understood here is fundamentally concerned with epistemically ideal rationality. It assumes a tight connection between evidential probability and ideally rational credence, and usually interprets evidential probability in terms of such credence. Timothy Williamson challenges Bayesianism by arguing that evidential probabilities cannot be adequately interpreted as the credences of an ideal agent. From this and his assumption that evidential probabilities cannot be interpreted as the actual credences of human agents either, he concludes that no interpretation of evidential probabilities in terms of credence is adequate. I argue to the contrary. My overarching aim is to show on behalf of Bayesians how one can still interpret evidential probabilities in terms of ideally rational credence and how one can maintain a tight connection between evidential probabilities and ideally rational credence even if the former cannot be interpreted in terms of the latter. By achieving this aim I illuminate the limits and prospects of Bayesianism.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation, and thus scientific reasoning in general, right. Contents: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or ‘anything goes’; The problem of prior probabilities.
Non-Archimedean probability functions allow us to combine regularity with perfect additivity. We discuss the philosophical motivation for a particular choice of axioms for a non-Archimedean probability theory and answer some philosophical objections that have been raised against infinitesimal probabilities in general.
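A minimal illustration of what the combination buys, under the usual fair-lottery assumption: if a real-valued probability function gives every ticket n in a countably infinite lottery the same weight c, countable additivity forces

    \[ \sum_{n \in \mathbb{N}} c = 1, \]

which fails for c = 0 and for every real c > 0, so real-valued theories must give up either regularity (every non-empty event gets positive probability) or uniform fairness. Assigning each ticket a positive infinitesimal instead is the kind of move the non-Archimedean framework is designed to license.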
I argue that taking the Practical Conditionals Thesis seriously demands a new understanding of the semantics of such conditionals. Practical Conditionals Thesis: a practical conditional [if A][ought(B)] expresses B’s conditional preferability given A. Paul Weirich has argued that the conditional utility of a state of affairs B on A is to be identified as the degree to which it is desired under indicative supposition that A. Similarly, exploiting the PCT, I will argue that the proper analysis of indicative practical conditionals is in terms of what is planned, desired, or preferred, given suppositional changes to an agent’s information. Implementing such a conception of conditional preference in a semantic analysis of indicative practical conditionals turns out to be incompatible with any approach which treats the indicative conditional as expressing non-vacuous universal quantification over some domain of relevant antecedent-possibilities. Such analyses, I argue, encode a fundamental misunderstanding of what it is to be best, given some condition. The analysis that does the best vis-à-vis the PCT is, instead, one that blends a Context-Shifty account of indicative antecedents with an Expressivistic, or non-propositional, treatment of their practical consequents.
This paper outlines an account of conditionals, the evidential account, which rests on the idea that a conditional is true just in case its antecedent supports its consequent. As we will show, the evidential account exhibits some distinctive logical features that deserve careful consideration. On the one hand, it departs from the material reading of ‘if then’ exactly in the way we would like it to depart from that reading. On the other, it significantly differs from the non-material accounts which hinge on the Ramsey Test, advocated by Adams, Stalnaker, Lewis, and others.
The conditional analysis of dispositions is widely rejected, mainly due to counterexamples in which dispositions are either “finkish” or “masked.” David Lewis proposed a reformed conditional analysis. This view avoids the problem of finkish dispositions, but it fails to solve the problem of masking. I will propose a reformulation of Lewis’ analysis, and I will argue that this reformulation can easily be modified so that it avoids the problem of masking. In the final section, I will address the challenge that some dispositions appear to lack any stimulus condition, and I will briefly turn to the issue of reductionism.
Ranking theory is a formal epistemology that has been developed, over more than 600 pages, in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of belief as an alternative to current probabilistic approaches. It has long been received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's ranking-theoretic approach to conditionals.
Michael Fara's ‘habitual analysis’ of disposition ascriptions is equivalent to a kind of ceteris paribus conditional analysis which has no evident advantage over Martin's well known and simpler analysis. I describe an unsatisfactory hypothetical response to Martin's challenge, which is lacking in just the same respect as the analysis considered by Martin; Fara's habitual analysis is equivalent to this hypothetical analysis. The feature of the habitual analysis that is responsible for this cannot be harmlessly excised, for the resulting analysis would be subject to familiar counter-examples.
Children approach counterfactual questions about stories with a reasoning strategy that falls short of adults’ Counterfactual Reasoning (CFR). It was dubbed “Basic Conditional Reasoning” (BCR) in Rafetseder et al. (Child Dev 81(1):376–389, 2010). In this paper we provide a characterisation of the differences between BCR and CFR using a distinction between permanent and nonpermanent features of stories and Lewis/Stalnaker counterfactual logic. The critical difference pertains to how consistency between a story and a conditional antecedent incompatible with a nonpermanent feature of the story is achieved. Basic conditional reasoners simply drop all nonpermanent features of the story. Counterfactual reasoners preserve as much of the story as possible while accommodating the antecedent.
According to the Lockean thesis, a proposition is believed just in case it is highly probable. While this thesis enjoys strong intuitive support, it is known to conflict with seemingly plausible logical constraints on our beliefs. One way out of this conflict is to make probability 1 a requirement for belief, but most have rejected this option for entailing what they see as an untenable skepticism. Recently, two new solutions to the conflict have been proposed that are alleged to be non-skeptical. We compare these proposals with each other and with the Lockean thesis, in particular with regard to the question of how much we gain by adopting any one of them instead of the probability 1 requirement, that is, of how likely it is that one believes more than the things one is fully certain of.
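In symbols, with a threshold t such that 1/2 < t < 1 (a standard statement of the thesis, given here only for orientation):

    \[ \mathrm{Bel}(A) \iff P(A) \ge t. \]

In a fair lottery with more than 1/(1−t) tickets, each proposition "ticket i loses" then meets the threshold while the conjunction of all such propositions has probability 0, so the thesis conflicts with closing belief under conjunction; setting t = 1 restores closure at the price of the skepticism mentioned above.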
In this short survey article, I discuss Bell’s theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
The main thesis of this paper is that, whereas an intention simpliciter is a commitment to a plan of action, a conditional intention is a commitment to a contingency plan, a commitment about what to do upon (learning of) a certain contingency relevant to one’s interests obtaining. In unconditional intending, our commitment to acting is not contingent on finding out that some condition obtains. In conditional intending, we intend to undertake an action on some condition, impinging on our interests, which is as yet unsettled for us, but about which we can find out without undue cost.
An expected utility model of individual choice is formulated which allows the decision maker to specify his available actions in the form of controls (partial contingency plans) and to simultaneously choose goals and controls in end-mean pairs. It is shown that the Savage expected utility model, the Marschak–Radner team model, the Bayesian statistical decision model, and the standard optimal control model can be viewed as special cases of this goal-control expected utility model.
This paper makes two essential claims about the nature of shame and shame punishment. I argue that, if we properly understand the nature of shame, it is sometimes justifiable to shame others in the context of a pluralistic multicultural society. I begin by assessing the accounts of shame provided by Cheshire Calhoun (2004) and Julien Deonna, Raffaele Rodogno, & Fabrice Teroni (2012). I argue that both views have problems. I defend a theory of shame and embarrassment that connects both emotions to “whole-self” properties. Shame and embarrassment, I claim, are products of the same underlying emotion. I distinguish between moralized and nonmoralized shame in order to show when, and how, moral and non-moral shame may be justly deployed. Shame is appropriate, I argue, if and only if it targets malleable moral or non-moral normative imperfections of a person’s ‘whole-self.’ Shame is unjustifiable when it targets durable aspects of a person’s “whole-self.” I conclude by distinguishing shame punishments from guilt punishments and show that my account can explain why it is wrong to shame individuals on account of their race, sex, gender, or body while permitting us to sometimes levy shame and shame punishment against others, even those otherwise immune to moral reasons.
Conditional excluded middle (CEM) is the following principle of counterfactual logic: either, if it were the case that φ, it would be the case that ψ, or, if it were the case that φ, it would be the case that not-ψ. I will first show that CEM entails the identity of indiscernibles, the falsity of physicalism, and the failure of the modal to supervene on the categorical and of the vague to supervene on the precise. I will then argue that we should accept these startling conclusions, since CEM is valid.
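In the usual counterfactual notation (not the paper's own symbolism), CEM is the schema

    \[ (\varphi \mathrel{\Box\!\!\rightarrow} \psi) \vee (\varphi \mathrel{\Box\!\!\rightarrow} \neg\psi). \]

In Stalnaker-style semantics it corresponds to there always being a unique closest φ-world, whereas Lewis-style semantics, which permits ties among closest worlds, invalidates it.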
Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values. The main technical problem studied in this paper is the following: given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements for a prior probability and show that probabilities satisfying all our criteria exist. We also give explicit constructions and several general characterizations of probabilities that satisfy some or all of the criteria and various (counter)examples. We also derive necessary and sufficient conditions for extending beliefs about finitely many sentences to suitable probabilities over all sentences, and in particular least dogmatic or least biased ones. We conclude with a brief outlook on how the developed theory might be used and approximated in autonomous reasoning agents. Our theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
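As a quick illustration of why criterion (vi) is demanding (a standard observation, not an example from the paper): if a prior treats the instances φ(t_1), φ(t_2), ... of a universal hypothesis as independent, each with probability p < 1, then

    \[ P\big(\forall x\, \varphi(x)\big) \;\le\; \lim_{n \to \infty} P\big(\varphi(t_1) \wedge \dots \wedge \varphi(t_n)\big) \;=\; \lim_{n \to \infty} p^{\,n} \;=\; 0, \]

and conditioning on evidence with positive probability cannot raise a zero prior, so such a prior can never confirm the universal sentence. Priors meeting (vi) must therefore correlate the instances.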
Most contractualist ethical theories have a subjunctivist structure. This means that they attempt to make sense of right and wrong in terms of a set of principles which would be accepted in some idealized, non-actual circumstances. This makes these views vulnerable to the so-called conditional fallacy objection: the moral principles that are appropriate for the idealized circumstances fail to give a correct account of what is right and wrong in ordinary situations. This chapter uses two versions of contractualism to illustrate this problem: Nicholas Southwood’s and a standard contractualist theory inspired by T.M. Scanlon’s contractualism. It then develops a version of Scanlon’s view that can avoid the problem. This solution is based on the idea that we also need to compare the different inculcation elements of moral codes within the contractualist framework. This idea also provides a new solution to the problem of the level of social acceptance at which principles should be compared.