The notion of comparative probability defined in Bayesian subjectivist theory stems from an intuitive idea that, for a given pair of events, one event may be considered “more probable” than the other. Yet it is conceivable that there are cases where it is indeterminate as to which event is more probable, due to, e.g., lack of robust statistical information. We take it that these cases involve indeterminate comparative probabilities. This paper provides a Savage-style decision-theoretic foundation for indeterminate comparative probabilities.
Comparativism is the view that comparative beliefs (e.g., believing p to be more likely than q) are more fundamental than partial beliefs (e.g., believing p to some degree x), with the latter explicable as theoretical constructs designed to facilitate reasoning about patterns within systems of comparative beliefs that exist under special conditions. In this paper, I first outline several varieties of comparativism, including two `Ramseyan' varieties which generalise the standard `probabilistic' approaches. I then provide a general critique that applies to any and all comparativist views. Ultimately, there are too many things that we ought to be able to say about partial beliefs that comparativism renders unintelligible. Moreover, there are alternative ways to account for the measurement of belief that need not face the same expressive limitations.
This paper proposes a new theory of rational choice, Expected Comparative Utility (ECU) Theory. It is first argued that for any decision option, a, and any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility of a in G – that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, it is then argued, roughly speaking, that an agent should rank her decision options (in terms of how choiceworthy they are) according to their expected comparative utility. For any decision option, a, the expected comparative utility of a is the probability-weighted average of the comparative utilities of a across the different states of the world. It is lastly demonstrated that in a number of decision cases, ECU Theory delivers different verdicts from those of standard decision theory.
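One natural way to put these two definitions into symbols (a sketch; the notation U, CU, ECU and the finite sum over states are ours, not necessarily the paper's):
\[
\mathrm{CU}(a, G) \;=\; U(a, G) \;-\; \max_{b \neq a} U(b, G),
\qquad
\mathrm{ECU}(a) \;=\; \sum_{G} P(G)\,\mathrm{CU}(a, G).
\]
On this reading, CU(a, G) is positive exactly when a is uniquely best in G and non-positive otherwise, and options are ranked by ECU rather than by ordinary expected utility.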
This note discusses three issues that Allen and Pardo believe to be especially problematic for a probabilistic interpretation of standards of proof: (1) the subjectivity of probability assignments; (2) the conjunction paradox; and (3) the non-comparative nature of probabilistic standards. I offer a reading of probabilistic standards that avoids these criticisms.
An influential suggestion about the relationship between Bayesianism and inference to the best explanation holds that IBE functions as a heuristic to approximate Bayesian reasoning. While this view promises to unify Bayesianism and IBE in a very attractive manner, important elements of the view have not yet been spelled out in detail. I present and argue for a heuristic conception of IBE on which IBE serves primarily to locate the most probable available explanatory hypothesis to serve as a working hypothesis in an agent’s further investigations. Along the way, I criticize what I consider to be an overly ambitious conception of the heuristic role of IBE, according to which IBE serves as a guide to absolute probability values. My own conception, by contrast, requires only that IBE can function as a guide to the comparative probability values of available hypotheses. This is shown to be a much more realistic role for IBE given the nature and limitations of the explanatory considerations with which IBE operates.
The article is a plea for ethicists to regard probability as one of their most important concerns. It outlines a series of topics of central importance in ethical theory in which probability is implicated, often in a surprisingly deep way, and lists a number of open problems. Topics covered include: interpretations of probability in ethical contexts; the evaluative and normative significance of risk or uncertainty; uses and abuses of expected utility theory; veils of ignorance; Harsanyi’s aggregation theorem; population size problems; equality; fairness; giving priority to the worse off; continuity; incommensurability; nonexpected utility theory; evaluative measurement; aggregation; causal and evidential decision theory; act consequentialism; rule consequentialism; and deontology.
Although brain size and the concept of intelligence have been extensively used in comparative neuroscience to study cognition and its evolution, such coarse-grained traits may not be informative enough about important aspects of neurocognitive systems. By taking into account the different evolutionary trajectories and the selection pressures on neurophysiology across species, Logan and colleagues suggest that the cognitive abilities of an organism should be investigated by considering the fine-grained and species-specific phenotypic traits that characterize it. In such a way, we would avoid adopting human-oriented, coarse-grained traits, typical of the standard approach in cognitive neuroscience. We argue that this standard approach can fail in some cases but work in others, discussing two major topics in contemporary neuroscience as examples: general intelligence and brain asymmetries.
In finite probability theory, events are subsets S⊆U of the outcome set. Subsets can be represented by one-dimensional column vectors. By extending the representation of events to two-dimensional matrices, we can introduce "superposition events." Probabilities are introduced for classical events, superposition events, and their mixtures by using density matrices. Then probabilities for experiments or `measurements' of all these events can be determined in a manner exactly as in quantum mechanics (QM) using density matrices. Moreover, the transformation of the density matrices induced by the experiments or `measurements' is the Lüders mixture operation as in QM. Finally, by moving the machinery into the n-dimensional vector space over ℤ₂, different basis sets become different outcome sets. That `non-commutative' extension of finite probability theory yields the pedagogical model of quantum mechanics over ℤ₂ that can model many characteristic non-classical results of QM.
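To make the density-matrix treatment of classical events concrete, here is a minimal numerical sketch (our own illustration, not code from the paper; the outcome set, the event S, and the uniform state are assumptions made for the example). Probabilities come out as tr(P_S ρ) and the post-measurement state is the Lüders mixture.

import numpy as np

n = 4                                   # outcome set U = {0, 1, 2, 3}
rho = np.diag(np.full(n, 1.0 / n))      # uniform classical mixture as a density matrix

def projector(event):
    # diagonal projector onto a classical event S ⊆ U
    P = np.zeros((n, n))
    for i in event:
        P[i, i] = 1.0
    return P

P_S = projector({0, 1})                 # the event "outcome is 0 or 1"
P_notS = np.eye(n) - P_S

prob_S = np.trace(P_S @ rho)            # Pr(S) = tr(P_S rho) = 0.5
luders = P_S @ rho @ P_S + P_notS @ rho @ P_notS   # Lüders mixture after measuring S

print(prob_S)                           # 0.5
print(np.allclose(luders, rho))         # True: a diagonal (classical) state is unchanged

A "superposition event" would simply be a non-diagonal matrix, and the same two formulas apply to it unchanged.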
This paper motivates and develops a novel semantic framework for deontic modals. The framework is designed to shed light on two things: the relationship between deontic modals and substantive theories of practical rationality, and the interaction of deontic modals with conditionals, epistemic modals, and probability operators. I argue that, in order to model inferential connections between deontic modals and probability operators, we need more structure than is provided by classical intensional theories. In particular, we need probabilistic structure that interacts directly with the compositional semantics of deontic modals. However, I reject theories that provide this probabilistic structure by claiming that the semantics of deontic modals is linked to the Bayesian notion of expectation. I offer a probabilistic premise semantics that explains all the data that create trouble for the rival theories.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as into other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
This paper demarcates a theoretically interesting class of "evaluational adjectives." This class includes predicates expressing various kinds of normative and epistemic evaluation, such as predicates of personal taste, aesthetic adjectives, moral adjectives, and epistemic adjectives, among others. Evaluational adjectives are distinguished, empirically, in exhibiting phenomena such as discourse-oriented use, felicitous embedding under the attitude verb `find', and sorites-susceptibility in the comparative form. A unified degree-based semantics is developed: What distinguishes evaluational adjectives, semantically, is that they denote context-dependent measure functions ("evaluational perspectives")—context-dependent mappings to degrees of taste, beauty, probability, etc., depending on the adjective. This perspective-sensitivity characterizing the class of evaluational adjectives cannot be assimilated to vagueness, sensitivity to an experiencer argument, or multidimensionality; and it cannot be demarcated in terms of pretheoretic notions of subjectivity, common in the literature. I propose that certain diagnostics for "subjective" expressions be analyzed instead in terms of a precisely specified kind of discourse-oriented use of context-sensitive language. I close by applying the account to `find x PRED' ascriptions.
DOI: 10.1080/00031305.2018.1564697. When the editors of Basic and Applied Social Psychology effectively banned the use of null hypothesis significance testing (NHST) from articles published in their journal, it set off a firestorm of discussions both supporting the decision and defending the utility of NHST in scientific research. At the heart of NHST is the p-value, which is the probability of obtaining an effect equal to or more extreme than the one observed in the sample data, given the null hypothesis and other model assumptions. Although this is conceptually different from the probability of the null hypothesis being true, given the sample, p-values nonetheless can provide evidential information toward making an inference about a parameter. Applying a 10,000-case simulation described in this article, the authors found that p-values’ inferential signals to either reject or not reject a null hypothesis about the mean (α = 0.05) were consistent for almost 70% of the cases with the parameter’s true location for the sampled-from population. Success increases if a hybrid decision criterion, minimum effect size plus p-value (MESP), is used. Here, rejecting the null also requires the difference of the observed statistic from the exact null to be meaningfully large or practically significant, in the researcher’s judgment and experience. The simulation compares performances of several methods: from p-value and/or effect size-based, to confidence-interval based, under various conditions of true location of the mean, test power, and comparative sizes of the meaningful distance and population variability. For any inference procedure that outputs a binary indicator, like flagging whether a p-value is significant, the output of one single experiment is not sufficient evidence for a definitive conclusion. Yet, if a tool like MESP generates a relatively reliable signal and is used knowledgeably as part of a research process, it can provide useful information.
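As a rough indication of what such a simulation looks like, here is a minimal sketch (our own reconstruction, not the authors' code; the population, sample size, shift, meaningful distance, and the 50/50 split between true and false nulls are all assumptions). It counts how often the p-value signal, and the MESP signal, agree with the true location of the mean.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n_cases=10_000, n=30, mu0=0.0, sigma=1.0,
             shift=0.3, meaningful=0.2, alpha=0.05):
    hits_p = hits_mesp = 0
    for _ in range(n_cases):
        null_true = rng.random() < 0.5           # half the cases: the null is true
        mu = mu0 if null_true else mu0 + shift
        x = rng.normal(mu, sigma, n)
        p = stats.ttest_1samp(x, mu0).pvalue
        reject_p = p < alpha
        reject_mesp = reject_p and abs(x.mean() - mu0) >= meaningful
        hits_p += (reject_p != null_true)        # signal agrees with the truth
        hits_mesp += (reject_mesp != null_true)
    return hits_p / n_cases, hits_mesp / n_cases

print(simulate())   # (proportion correct for p-value alone, proportion correct for MESP)

How the two proportions compare will of course depend on the assumed effect size, meaningful distance, and test power, which is exactly the space of conditions the article's simulation explores.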
We provide a 'verisimilitudinarian' analysis of the well-known Linda paradox or conjunction fallacy, i.e., the fact that most people judge the conjunctive statement "Linda is a bank teller and is active in the feminist movement" (B & F) as more probable than the isolated statement "Linda is a bank teller" (B), contrary to an uncontroversial principle of probability theory. The basic idea is that experimental participants may judge B & F a better hypothesis about Linda as compared to B because they evaluate B & F as more verisimilar than B. In fact, the hypothesis "feminist bank teller", while less likely to be true than "bank teller", may well be a better approximation to the truth about Linda.
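The 'uncontroversial principle' at stake is just the monotonicity of probability under conjunction (a one-line reminder, in our notation):
\[
B \wedge F \models B \quad\Longrightarrow\quad P(B \wedge F) \;\le\; P(B),
\]
so no coherent probability assignment can rank "feminist bank teller" above "bank teller", however much more verisimilar the former may be.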
This book explores a question central to philosophy: namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it – roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
The ancient Greeks already used to give ethnic names to their different scales, and observations on differences in the music of the various nations always raised the interest of musicians and philosophers. Yet, it was only in the late nineteenth century that “comparative musicology” became an institutional science. An important role in this process was played by Carl Stumpf, a former pupil of Brentano’s who pioneered this research in Berlin. Stumpf founded the Phonogrammarchiv to collect recordings of folk and extra-European music and a dedicated journal, the Sammelbände für vergleichende Musikwissenschaft. Gifted in the field of science no less than in that of musicology, Stumpf developed an empirically-oriented approach to phenomenology, deeply divergent from Husserl’s and highly influential over the Berlin school of Gestalt psychology. A self-declared “outsider” among armchair philosophers, Stumpf experimentally investigated the perception of sounds and the origins of musical consonance. Developing the physiological studies of Ernst Weber on the sense of touch, Stumpf discovered that two sensations of tone, given at the same time, tend to mix to a certain degree. Musical consonance – he claimed – lies in this level of “tonal fusion”, not in the allegedly “natural” series of the harmonic partials of a vibrating chord, as suggested by the naturalists of all times from Pythagoras to Stumpf’s great contemporary Hermann Helmholtz. Accordingly, no musical system can claim preponderance over the others: Stumpf’s research in comparative musicology served to corroborate his theses on “tonal fusion” and the psychological foundations of consonance. Although Stumpf later revised and finally abandoned this theory, its permanent value lies in its opposition to dominant naturalistic approaches. The commitment to comparative musicology at the Berlin School is then no concession to a positivistic fashion for exoticism. The fundamentally Eurocentric stance of naturalistic theories of music is also fiercely contested by Stumpf’s pupil Erich Hornbostel, who suggests that music ought to be considered as culture, rather than as nature, and focuses attention on the eventual melting together of human cultures. The Berlin school flourished until the Nazis forced most of its exponents to emigrate and, for tragically obvious reasons, heavily discouraged research on these topics.
Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness of this measure by employing it to develop an answer to Popper’s Paradox of Ideal Evidence.
This paper defends David Hume's "Of Miracles" from John Earman's (2000) Bayesian attack by showing that Earman misrepresents Hume's argument against believing in miracles and misunderstands Hume's epistemology of probable belief. It argues, moreover, that Hume's account of evidence is fundamentally non-mathematical and thus cannot be properly represented in a Bayesian framework. Hume's account of probability is shown to be consistent with a long and laudable tradition of evidential reasoning going back to ancient Roman law.
A probability distribution is regular if no possible event is assigned probability zero. While some hold that probabilities should always be regular, three counter-arguments have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but we see here that their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
In probability discounting (or probability weighting), one multiplies the value of an outcome by one's subjective probability that the outcome will obtain in decision-making. The broader import of defending probability discounting is to help justify cost-benefit analyses in contexts such as climate change. This chapter defends probability discounting under risk both negatively, against arguments by Simon Caney (2008, 2009), and with a new positive argument. First, in responding to Caney, I argue that small costs and benefits need to be evaluated, and that viewing practices at the social level is too coarse-grained. Second, I argue for probability discounting, using a distinction between causal responsibility and moral responsibility. Moral responsibility can be cashed out in terms of blameworthiness and praiseworthiness, while causal responsibility obtains in full for any effect which is part of a causal chain linked to one's act. With this distinction in hand, unlike causal responsibility, moral responsibility can be seen as coming in degrees. My argument is that, given that we can limit our deliberation and consideration to that for which we are morally responsible, and that our moral responsibility for outcomes is limited by our subjective probabilities, our subjective probabilities can ground probability discounting.
The essentially comparative conception of value entails that the value of a state of affairs does not depend solely upon features intrinsic to the state of affairs, but also upon extrinsic features, such as the set of feasible alternatives. It has been argued that this conception of value gives us reason to abandon the transitivity of the better than relation. This paper shows that the support for intransitivity derived from this conception of value is very limited. On its most plausible interpretations, it merely provides a necessary, but not sufficient, condition for intransitivity. It is further argued that the essentially comparative conception of value appears to support a disjunctive conclusion: there is incommensurability of value or betterness is not transitive. Of these two alternatives, incommensurability is preferable, because it is far less threatening to our other axiological commitments.
Comparativism is the view that comparative confidences (e.g., being more confident that P than that Q) are more fundamental than degrees of belief (e.g., believing that P with some strength x). In this paper, I outline the basis for a new, non-probabilistic version of comparativism inspired by a suggestion made by Frank Ramsey in `Probability and Partial Belief'. I show how, and to what extent, `Ramseyan comparativism' might be used to weaken the (unrealistically strong) probabilistic coherence conditions that comparativism traditionally relies on.
Comparative philosophy between two disparate cultural-philosophic traditions, such as Western and Chinese philosophy, has become a fashionable trend in philosophy in the late twentieth and early twenty-first centuries. Having learned from the past, contemporary comparative philosophers cautiously safeguard their comparative studies against two potential pitfalls, namely cultural universalism and cultural relativism. The Orientalism that assumed the superiority of the Occidental has become a memory of the past. The historical pendulum has apparently swung to the other extreme. The more recent "reverse Orientalism" has started to reclaim the superiority of the Oriental. We have even been told that the twenty-first...
This paper is a response to Tyler Wunder’s ‘The modality of theism and probabilistic natural theology: a tension in Alvin Plantinga's philosophy’ (this journal). In his article, Wunder argues that if the proponent of the Evolutionary Argument Against Naturalism (EAAN) holds theism to be non-contingent and frames the argument in terms of objective probability, then the EAAN is either unsound or theism is necessarily false. I argue that a modest revision of the EAAN renders Wunder’s objection irrelevant, and that this revision actually widens the scope of the argument.
The major competing statistical paradigms share a remarkable but unremarked common thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
In 1947, the U.S. Secretary of State, George C. Marshall, announced that the USA would provide development aid to help the recovery and reconstruction of the economies of Europe, a plan widely known as the ‘Marshall Plan’. In Italy, this plan generated a resurgence of modern industrialization and remodeled Italian industry on American models of production. As a result of these transnational transfers, the systemic approach known as Fordism largely succeeded and allowed some Italian firms such as Fiat to flourish. During this period, Detroit and Turin, homes to the most powerful automobile corporations of the twentieth century, became intertwined in a web of common features such as industrial concentration, mass flows of immigration, uneven urban sprawl, radical iconography and inner-city decay, which characterized Fordism in both cities. In the crucial decades of the postwar expansion of the automobile industries, both cities were hubs of labor battles and social movements. However, after the radical decline of their automobile industries, both former auto cities experienced the radical shift toward post-Fordist urbanization and the production of political urbanism. This research responds to the recent interest in a comparative (re)turn in urban studies by suggesting the conceptual and theoretical baseline for the proposed comparative framework in post-Fordist cities. In other words, it develops a “theory” of the challenges of comparative urbanism in post-Fordist cities.
A definition of causation as probability-raising is threatened by two kinds of counterexample: first, when a cause lowers the probability of its effect; and second, when the probability of an effect is raised by a non-cause. In this paper, I present an account that deals successfully with problem cases of both these kinds. In doing so, I also explore some novel implications of incorporating into the metaphysical investigation considerations of causal psychology.
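For reference, the probability-raising definition at issue is standardly formulated as follows (our gloss, not a quotation from the paper):
\[
C \text{ causes } E \quad \text{iff} \quad P(E \mid C) > P(E \mid \neg C).
\]
The first kind of counterexample targets the left-to-right direction (genuine causes that lower the probability of their effects); the second targets the right-to-left direction (probability-raisers that are not causes).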
The concept of “harm” is ubiquitous in moral theorising, and yet remains poorly defined. Bradley suggests that the counterfactual comparative account of harm is the most plausible account currently available, but also argues that it is fatally flawed, since it falters on the omission and pre-emption problems. Hanna attempts to defend the counterfactual comparative account of harm against both problems. In this paper, I argue that Hanna’s defence fails. I also show how his defence highlights the fact that both the omission and the pre-emption problems have the same root cause – the inability of the counterfactual comparative account of harm to allow for our implicit considerations regarding well-being when assessing harm. While its purported neutrality with regard to substantive theories of well-being is one of the reasons that this account is considered to be the most plausible on offer, I will argue that this neutrality is illusory.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
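For readers who want the four measures spelled out, the definitions standardly attached to these labels in the confirmation-theory literature (the abstract itself does not state them) are
\[
\mathrm{PD}(H,E) = P(H \mid E) - P(H), \qquad
\mathrm{PR}(H,E) = \frac{P(H \mid E)}{P(H)},
\]
\[
\mathrm{LD}(H,E) = P(E \mid H) - P(E \mid \neg H), \qquad
\mathrm{LR}(H,E) = \frac{P(E \mid H)}{P(E \mid \neg H)}.
\]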
We generalize the Kolmogorov axioms for probability calculus to obtain conditions defining, for any given logic, a class of probability functions relative to that logic, coinciding with the standard probability functions in the special case of classical logic but allowing consideration of other classes of "essentially Kolmogorovian" probability functions relative to other logics. We take a broad view of the Bayesian approach as dictating inter alia that from the perspective of a given logic, rational degrees of belief are those representable by probability functions from the class appropriate to that logic. Classical Bayesianism, which fixes the logic as classical logic, is only one version of this general approach. Another, which we call Intuitionistic Bayesianism, selects intuitionistic logic as the preferred logic and the associated class of probability functions as the right class of candidate representations of epistemic states (rational allocations of degrees of belief). Various objections to classical Bayesianism are, we argue, best met by passing to intuitionistic Bayesianism—in which the probability functions are taken relative to intuitionistic logic—rather than by adopting a radically non-Kolmogorovian, for example, nonadditive, conception of (or substitute for) probability functions, in spite of the popularity of the latter response among those who have raised these objections. The interest of intuitionistic Bayesianism is further enhanced by the availability of a Dutch Book argument justifying the selection of intuitionistic probability functions as guides to rational betting behavior when due consideration is paid to the fact that bets are settled only when/if the outcome bet on becomes known.
How were reliable predictions made before Pascal and Fermat's discovery of the mathematics of probability in 1654? What methods in law, science, commerce, philosophy, and logic helped us to get at the truth in cases where certainty was not attainable? The book examines how judges, witch inquisitors, and juries evaluated evidence; how scientists weighed reasons for and against scientific theories; and how merchants counted shipwrecks to determine insurance rates. Also included are the problem of induction before Hume, design arguments for the existence of God, and theories on how to evaluate scientific and historical hypotheses. It is explained how Pascal and Fermat's work on chance arose out of legal thought on aleatory contracts. The book interprets pre-Pascalian unquantified probability in a generally objective Bayesian or logical probabilist sense.
Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility of performing Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.
Leibniz’s account of probability has come into better focus over the past decades. However, less attention has been paid to a certain domain of application of that account, that is, the application of it to the moral or ethical domain—the sphere of action, choice and practice. This is significant, as Leibniz had some things to say about applying probability theory to the moral domain, and thought the matter quite relevant. Leibniz’s work in this area is conducted at a high level of abstraction. It establishes a proof of concept, rather than concrete guidelines for how to apply calculations to specific cases. Still, this highly abstract material does allow us to begin to construct a framework for thinking about Leibniz’s approach to the ethical side of probability.
NOTE: This paper is a reworking of some aspects of an earlier paper – ‘What else justification could be’ – and also an early draft of chapter 2 of Between Probability and Certainty. I'm leaving it online as it has a couple of citations and there is some material here which didn't make it into the book (and which I may yet try to develop elsewhere). My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: If we want to be fallibilists about justification then we shouldn’t demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty then what could it possibly require if not epistemic likelihood? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.
Dutch Book arguments have been presented for static belief systems and for belief change by conditionalization. An argument is given here that a rule for belief change which under certain conditions violates probability kinematics will leave the agent open to a Dutch Book.
In the following we will investigate whether von Mises’ frequency interpretation of probability can be modified to make it philosophically acceptable. We will reject certain elements of von Mises’ theory, but retain others. In the interpretation we propose we do not use von Mises’ often criticized ‘infinite collectives’, but we retain two essential claims of his interpretation, stating that probability can only be defined for events that can be repeated in similar conditions, and that exhibit frequency stabilization. The central idea of the present article is that the mentioned ‘conditions’ should be well-defined and ‘partitioned’. More precisely, we will divide probabilistic systems into object, initializing, and probing subsystems, and show that such partitioning allows one to solve problems. Moreover, we will argue that a key idea of the Copenhagen interpretation of quantum mechanics (the determinant role of the observing system) can be seen as deriving from an analytic definition of probability as frequency. Thus a secondary aim of the article is to illustrate the virtues of analytic definition of concepts, consisting in making explicit what is implicit.
Bayesian confirmation theory is rife with confirmation measures. Zalabardo focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate, but each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other three measures. Glass and McCartney, hereafter “G&M,” accept the conclusion of Zalabardo’s argument along with each of the premises in it. They nonetheless try to improve on Zalabardo’s argument by replacing his third adequacy condition with a weaker condition. They do this because of a worry to the effect that Zalabardo’s third adequacy condition runs counter to the idea behind his first adequacy condition. G&M have in mind confirmation in the sense of increase in probability: the degree to which E confirms H is a matter of the degree to which E increases H’s probability. I call this sense of confirmation “IP.” I set out four ways of precisifying IP. I call them “IP1,” “IP2,” “IP3,” and “IP4.” Each of them is based on the assumption that the degree to which E increases H’s probability is a matter of the distance between P(H | E) and a certain other probability involving H. I then evaluate G&M’s argument in light of them.
There are many scientific and everyday cases where each of Pr(H1 | E) and Pr(H2 | H1) is high and it seems that Pr(H2 | E) is high. But high probability is not transitive, and so it might be in such cases that each of Pr(H1 | E) and Pr(H2 | H1) is high and in fact Pr(H2 | E) is not high. There is no issue in the special case where the following condition, which I call “C1”, holds: H1 entails H2. This condition is sufficient for transitivity in high probability. But many of the scientific and everyday cases referred to above are cases where it is not the case that H1 entails H2. I consider whether there are additional conditions sufficient for transitivity in high probability. I consider three candidate conditions. I call them “C2”, “C3”, and “C2&3”. I argue that C2&3, but neither C2 nor C3, is sufficient for transitivity in high probability. I then set out some further results and relate the discussion to the Bayesian requirement of coherence.
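A simple worked case (our own illustration, not taken from the paper) shows how transitivity can fail when no entailment holds. Take 100 equiprobable outcomes and let
\[
E = \{1,\dots,10\}, \qquad H_1 = \{1,\dots,9\} \cup \{11,\dots,100\}, \qquad H_2 = \{11,\dots,100\}.
\]
Then
\[
\Pr(H_1 \mid E) = \tfrac{9}{10}, \qquad \Pr(H_2 \mid H_1) = \tfrac{90}{99} \approx 0.91, \qquad \Pr(H_2 \mid E) = 0,
\]
and, as the discussion leads one to expect, H_1 does not entail H_2 (outcome 1 lies in H_1 but not in H_2).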
This paper argues that the technical notion of conditional probability, as given by the ratio analysis, is unsuitable for dealing with our pretheoretical and intuitive understanding of both conditionality and probability. The relevant type of conditionality is found in some well-defined group of conditional statements. As an alternative, therefore, we briefly offer grounds for what we would call an ontological reading of both conditionality and conditional probability in general: an account on which conditionals include an irreducible dispositional connection between the antecedent and consequent conditions and on which the conditional has to be treated as an indivisible whole rather than compositionally. It is not offered as a fully developed theory of conditionality, but it can be used, we claim, to explain why calculations according to the RATIO scheme do not coincide with our intuitive notion of conditional probability. What it shows us is that for an understanding of the whole range of conditionals we will need what John Heil (2003), in response to Quine (1953), calls an ontological point of view.
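For concreteness, the RATIO scheme referred to here is the standard ratio definition of conditional probability:
\[
P(B \mid A) \;=\; \frac{P(A \wedge B)}{P(A)}, \qquad P(A) > 0.
\]
The paper's claim is that this technical quantity does not always coincide with our intuitive judgment of how probable B is on the condition that A.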
Karl Popper discovered in 1938 that the unconditional probability of a conditional of the form ‘If A, then B’ normally exceeds the conditional probability of B given A, provided that ‘If A, then B’ is taken to mean the same as ‘Not (A and not B)’. So it was clear (but presumably only to him at that time) that the conditional probability of B given A cannot be reduced to the unconditional probability of the material conditional ‘If A, then B’. I describe how this insight was developed in Popper’s writings and I add to this historical study a logical one, in which I compare laws of excess in Kolmogorov probability theory with laws of excess in Popper probability theory.
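The excess Popper noticed can be checked in a few lines (a standard derivation, not taken from the paper; write $a = P(A) > 0$ and $x = P(A \wedge B)$):
\[
P(\neg(A \wedge \neg B)) - P(B \mid A)
= \bigl(1 - (a - x)\bigr) - \frac{x}{a}
= \frac{a - a^{2} + a x - x}{a}
= \frac{(1-a)(a - x)}{a} \;\ge\; 0,
\]
with equality only when $P(A) = 1$ or $P(A \wedge \neg B) = 0$. So the probability of the material conditional can only overstate, never understate, the conditional probability of B given A.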
The comparative utility argument holds that the descendants of African slaves in America are not owed any compensation because they have not been harmed by slavery. Rather, slavery in America was beneficial to the descendants of slaves because they are now able to live in a country that is considerably richer today than any of the African countries from which slaves were taken. In this paper, I show that the comparative utility argument is a red herring with no bearing whatsoever on the question of slave reparations because it conflates two separate wrongs: slavery and forced immigration. The fact that the descendants of slaves now live in America is a consequence of the latter, but not the former. As such, it has no bearing on the legitimacy of reparations for slavery.
This dissertation is a contribution to formal and computational philosophy. In the first part, we show that by exploiting the parallels between large, yet finite lotteries on the one hand and countably infinite lotteries on the other, we gain insights into the foundations of probability theory as well as into epistemology. Case 1: Infinite lotteries. We discuss how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. The solution boils down to the introduction of infinitesimal probability values, which can be achieved using non-standard analysis. Our solution can be generalized to uncountable sample spaces, giving rise to a Non-Archimedean Probability (NAP) theory. Case 2: Large but finite lotteries. We propose application of the language of relative analysis (a type of non-standard analysis) to formulate a new model for rational belief, called Stratified Belief. This contextualist model seems well-suited to deal with a concept of beliefs based on probabilities ‘sufficiently close to unity’. The second part presents a case study in social epistemology. We model a group of agents who update their opinions by averaging the opinions of other agents. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating. To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. The probability of ending up in an inconsistent belief state turns out to be always smaller than 2%.
With the growing focus on prevention in medicine, studies of how to describe risk have become increasingly important. Recently, some researchers have argued against giving patients “comparative risk information,” such as data about whether their baseline risk of developing a particular disease is above or below average. The concern is that giving patients this information will interfere with their consideration of more relevant data, such as the specific chance of getting the disease (the “personal risk”), the risk reduction the treatment provides, and any possible side effects. I explore this view and the theories of rationality that ground it, and I argue instead that comparative risk information can play a positive role in decision-making. The criticism of disclosing this sort of information to patients, I conclude, rests on a mistakenly narrow account of the goals of prevention and the nature of rational choice in medicine.
This article presents a comparative theory of subjective argument strength simple enough for application. Using the axioms and corollaries of the theory, anyone with an elementary knowledge of logic and probability theory can produce an at least minimally rational ranking of any set of arguments according to their subjective strength, provided that the arguments in question are descriptive ones in standard form. The basic idea is that the strength of argument A as seen by person x is a function of three factors: x's degree of belief in the premisses of A; x's degree of belief in the conclusion of A under the assumption that all premisses of A are true; and x's degree of belief in the conclusion of A under the assumption that not all premisses of A are true.
Epistemic closure under known implication is the principle that knowledge of "p" and knowledge of "p implies q", together, imply knowledge of "q". This principle is intuitive, yet several putative counterexamples have been formulated against it. This paper addresses the question, why is epistemic closure both intuitive and prone to counterexamples? In particular, the paper examines whether probability theory can offer an answer to this question based on four strategies. The first probability-based strategy rests on the accumulation of risks. The problem with this strategy is that risk accumulation cannot accommodate certain counterexamples to epistemic closure. The second strategy is based on the idea of evidential support, that is, a piece of evidence supports a proposition whenever it increases the probability of the proposition. This strategy makes progress and can accommodate certain putative counterexamples to closure. However, this strategy also gives rise to a number of counterintuitive results. Finally, there are two broadly probabilistic strategies, one based on the idea of resilient probability and the other on the idea of assumptions that are taken for granted. These strategies are promising but are prone to some of the shortcomings of the second strategy. All in all, I conclude that each strategy fails. Probability theory, then, is unlikely to offer the account we need.
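The risk-accumulation idea behind the first strategy can be made precise with a standard bound (our illustration, not the paper's): if each premise carries an error risk of at most ε, the risks can add up in the conclusion,
\[
P(p) \ge 1 - \varepsilon,\;\; P(p \rightarrow q) \ge 1 - \varepsilon
\;\Longrightarrow\;
P(q) \;\ge\; P\bigl(p \wedge (p \rightarrow q)\bigr) \;\ge\; 1 - 2\varepsilon,
\]
and nothing stronger than the 1 − 2ε bound is guaranteed in general, so a conclusion drawn from several individually well-supported premises can fall below whatever probability threshold knowledge is taken to require.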
This thesis focuses on expressively rich languages that can formalise talk about probability. These languages have sentences that say something about probabilities of probabilities, but also sentences that say something about the probability of themselves. For example: (π): “The probability of the sentence labelled π is not greater than 1/2.” Such sentences lead to philosophical and technical challenges, but can be useful. For example, they bear a close connection to situations where one's confidence in something can affect whether it is the case or not. The motivating interpretation of probability as an agent's degrees of belief will be focused on throughout the thesis. This thesis aims to answer two questions relevant to such frameworks, which correspond to the two parts of the thesis: “How can one develop a formal semantics for this framework?” and “What rational constraints are there on an agent once such expressive frameworks are considered?”.
Contrary to Bell’s theorem, it is demonstrated that the quantum correlation can be approximated with the use of classical probability theory. Hence, one may not conclude from experiment that all local hidden variable theories are ruled out by a violation of the inequality.
The paper attempts to set a guideline for the contemporary common morality debate. The author points out what he sees as two common problems that occur in the field of comparative cultural studies related to the common morality debate. The first problem is that the advocates and opponents of common morality, consciously or unconsciously, define the moral terms in question in a way that their respective meanings would naturally lead to the outcomes each party desires. The second problem is that the examples each party chooses as empirical evidence may not be as simple and clear-cut as the researchers think they are, mainly because the situational contexts in which the examples are located differ vastly between the two cultures. To prevent these mistakes, the author emphasizes that we should pay attention to a subtle distinction between "thick" and "thin", construed at the levels of "theoretical status" and of "material content". With these conceptual distinctions in mind, the author shows how different cultures (i.e., Western individualist society and East Asian neo-Confucian society) see moral principles like autonomy and beneficence in different lexical orders.
This paper shows how classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or "toy" model of quantum mechanics over sets (QM/sets). There are two parts. The notion of an "event" is reinterpreted from being an epistemological state of indefiniteness to being an objective state of indefiniteness. And the mathematical framework of finite probability theory is recast as the quantum probability calculus for QM/sets. The point is not to clarify finite probability theory but to elucidate quantum mechanics itself by seeing some of its quantum features in a classical setting.