This paper generalises the classical Condorcet jury theorem from majority voting over two options to plurality voting over multiple options. The paper further discusses the debate between epistemic and procedural democracy and situates its formal results in that debate. The paper finally compares a number of different social choice procedures for many-option choices in terms of their epistemic merits. An appendix explores the implications of some of the present mathematical results for the question of how probable majority cycles (as in Condorcet's paradox) are in large electorates.
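The classical two-option theorem that the paper generalises rests on a simple binomial calculation; a minimal sketch (function name and parameters are illustrative, and the paper's plurality-voting generalisation is not captured here):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, picks the right one of two
    options (n odd, so ties are impossible)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With individual competence only slightly above chance, group
# competence grows with the size of the electorate:
print(majority_correct(1, 0.6))    # 0.6
print(majority_correct(101, 0.6))  # approaches 1
```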
What is the probability that this universe is repeated exactly the same, with you in it again? Is God invented by human imagination, or is it the result of human intuition? The intuition that the same laws and mechanisms (evolution, the winning probability of stability) that have created something like the human being, capable of self-awareness and of controlling its surroundings, could create a being capable of controlling all that exists? Will the characteristics of the next universes be random, or will they tend toward something? All these questions, which in different shapes (but with the same essence) have been asked by human beings from the beginning of time, will be developed in this paper.
I discuss Richard Swinburne’s account of religious experience in his probabilistic case for theism. I argue, pace Swinburne, that even if cosmological considerations render theism not too improbable, religious experience does not render it more probable than not.
This book explores a question central to philosophy--namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it--roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
I examine what the mathematical theory of random structures can teach us about the probability of Plenitude, a thesis closely related to David Lewis's modal realism. Given some natural assumptions, Plenitude is reasonably probable a priori, but in principle it can be (and plausibly it has been) empirically disconfirmed—not by any general qualitative evidence, but rather by our de re evidence.
This paper articulates a way to ground a relatively high prior probability for grand explanatory theories apart from an appeal to simplicity. I explore the possibility of enumerating the space of plausible grand theories of the universe by using the explanatory properties of possible views to limit the number of plausible theories. I motivate this alternative grounding by showing that Swinburne’s appeal to simplicity is problematic along several dimensions. I then argue that there are three plausible grand views—theism, atheism, and axiarchism—which satisfy explanatory requirements for plausibility. Other possible views lack the explanatory virtue of these three theories. Consequently, this explanatory grounding provides a way of securing a nontrivial prior probability for theism, atheism, and axiarchism. An important upshot of my approach is that a modest amount of empirical evidence can bear significantly on the posterior probability of grand theories of the universe.
This paper develops an information sensitive theory of the semantics and probability of conditionals and statements involving epistemic modals. The theory validates a number of principles linking probability and modality, including the principle that the probability of a conditional 'If A, then C' equals the probability of C, updated with A. The theory avoids so-called triviality results, which are standardly taken to show that principles of this sort cannot be validated. To achieve this, we deny that rational agents update their credences via Conditionalization. We offer a new rule of update, Hyperconditionalization, which agrees with Conditionalization whenever nonmodal statements are at stake, but differs for modal and conditional sentences.
Karl Popper discovered in 1938 that the unconditional probability of a conditional of the form ‘If A, then B’ normally exceeds the conditional probability of B given A, provided that ‘If A, then B’ is taken to mean the same as ‘Not (A and not B)’. So it was clear (but presumably only to him at that time) that the conditional probability of B given A cannot be reduced to the unconditional probability of the material conditional ‘If A, then B’. I describe how this insight was developed in Popper’s writings and I add to this historical study a logical one, in which I compare laws of excess in Kolmogorov probability theory with laws of excess in Popper probability theory.
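The excess Popper noticed is easy to check numerically; a small sketch over an illustrative joint distribution (the numbers are assumptions, not Popper's):

```python
# Joint distribution over the four atomic cases for A and B.
p_A_B, p_A_notB, p_notA_B, p_notA_notB = 0.2, 0.2, 0.3, 0.3

# Unconditional probability of the material conditional,
# read as 'Not (A and not B)':
p_material = 1 - p_A_notB

# Conditional probability of B given A:
p_B_given_A = p_A_B / (p_A_B + p_A_notB)

# The material conditional's probability (0.8) exceeds the
# conditional probability (0.5), as the law of excess predicts:
print(p_material, p_B_given_A)
```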
This dissertation is a contribution to formal and computational philosophy. In the first part, we show that by exploiting the parallels between large, yet finite lotteries on the one hand and countably infinite lotteries on the other, we gain insights into the foundations of probability theory as well as into epistemology. Case 1: Infinite lotteries. We discuss how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. The solution boils down to the introduction of infinitesimal probability values, which can be achieved using non-standard analysis. Our solution can be generalized to uncountable sample spaces, giving rise to a Non-Archimedean Probability (NAP) theory. Case 2: Large but finite lotteries. We propose application of the language of relative analysis (a type of non-standard analysis) to formulate a new model for rational belief, called Stratified Belief. This contextualist model seems well-suited to deal with a concept of beliefs based on probabilities ‘sufficiently close to unity’. The second part presents a case study in social epistemology. We model a group of agents who update their opinions by averaging the opinions of other agents. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating. To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. The probability of ending up in an inconsistent belief state turns out to be always smaller than 2%.
In his recent book Warranted Christian Belief, Alvin Plantinga argues that the defender of naturalistic evolution is faced with a defeater for his position: as products of naturalistic evolution, we have no way of knowing if our cognitive faculties are in fact reliably aimed at the truth. This defeater is successfully avoided by the theist in that, given theism, we can be reasonably secure that our cognitive faculties are indeed reliable. I argue that Plantinga’s argument is ultimately based on a faulty comparison: he is comparing naturalistic evolution generally to one particular model of theism. In light of this analysis, the two models either stand or fall together with respect to the defeater that Plantinga offers.
This article argues for an epistemology of music, stating that dealing with music can be considered a process of knowledge acquisition. What really matters is not the representation of an ontological musical reality, but the generation of music knowledge as a tool for adaptation to the sonic world. Three major positions are brought together: the epistemological claims of Jean Piaget, the biological methodology of Jakob von Uexküll, and the constructivist conceptions of Ernst von Glasersfeld, each stressing the role of the music user rather than the music. Dealing with music, in this view, is not a matter of representation, but a process of semiotization of the sonorous environment as the outcome of interactions with the sound. Hence the role of enactive cognition and perceptual-motor interaction with the sonic environment. A central issue is how listeners as subjects experience their own phenomenal world or Umwelt, and how they can make sense of their sonic environment. Umwelt research, therefore, is highly relevant for music education in stressing the role of the listener and his/her listening strategies.
Evolutionary theory (ET) is teeming with probabilities. Probabilities exist at all levels: the level of mutation, the level of microevolution, and the level of macroevolution. This uncontroversial claim raises a number of contentious issues. For example, is the evolutionary process (as opposed to the theory) indeterministic, or is it deterministic? Philosophers of biology have taken different sides on this issue. Millstein (1997) has argued that we are not currently able to answer this question, and that even scientific realists ought to remain agnostic concerning the determinism or indeterminism of evolutionary processes. If this argument is correct, it suggests that, whatever we take probabilities in ET to be, they must be consistent with either determinism or indeterminism. This raises some interesting philosophical questions: How should we understand the probabilities used in ET? In other words, what is meant by saying that a certain evolutionary change is more or less probable? Which interpretation of probability is the most appropriate for ET? I argue that the probabilities used in ET are objective in a realist sense, if not in an indeterministic sense. Furthermore, there are a number of interpretations of probability that are objective and would be consistent with ET under determinism or indeterminism. However, I argue that evolutionary probabilities are best understood as propensities of population-level kinds.
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers some of the best-known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
The present Yearbook (which is the fourth in the series) is subtitled Trends & Cycles. Already ancient historians (see, e.g., the second chapter of Book VI of Polybius' Histories) described rather well the cyclical component of historical dynamics, whereas new interesting analyses of such dynamics also appeared in the Medieval and Early Modern periods (see, e.g., Ibn Khaldūn 1958 [1377], or Machiavelli 1996 [1531]). This is not surprising, as cyclical dynamics were dominant in agrarian social systems. With modernization, the trend dynamics became much more pronounced, and these are the trends to which students of modern societies pay more attention. Note that the term trend – as regards its contents and application – is tightly connected with formal mathematical analysis. Trends may be described by various equations – linear, exponential, power-law, etc. On the other hand, cliodynamic research has demonstrated that cyclical historical dynamics can also be modeled mathematically in a rather effective way (see, e.g., Usher 1989; Chu and Lee 1994; Turchin 2003, 2005a, 2005b; Turchin and Korotayev 2006; Turchin and Nefedov 2009; Nefedov 2004; Korotayev and Komarova 2004; Korotayev, Malkov, and Khaltourina 2006; Korotayev and Khaltourina 2006; Korotayev 2007; Grinin 2007), whereas the trend and cycle components of historical dynamics turn out to be of equal importance.
A common objection to probabilistic theories of causation is that there are prima facie causes that lower the probability of their effects. Among the many replies to this objection, little attention has been given to Mellor's (1995) indirect strategy to deny that probability-lowering factors are bona fide causes. According to Mellor, such factors do not satisfy the evidential, explanatory, and instrumental connotations of causation. The paper argues that the evidential connotation only entails an epistemically relativized form of causal attribution, not causation itself, and that there are clear cases of explanation and instrumental reasoning that must appeal to negatively relevant factors. In the end, it suggests a more liberal interpretation of causation that restores its connotations.
In probability discounting (or probability weighting), one multiplies the value of an outcome by one's subjective probability that the outcome will obtain in decision-making. The broader import of defending probability discounting is to help justify cost-benefit analyses in contexts such as climate change. This chapter defends probability discounting under risk both negatively, against arguments by Simon Caney (2008, 2009), and with a new positive argument. First, in responding to Caney, I argue that small costs and benefits need to be evaluated, and that viewing practices at the social level is too coarse-grained. Second, I argue for probability discounting, using a distinction between causal responsibility and moral responsibility. Moral responsibility can be cashed out in terms of blameworthiness and praiseworthiness, while causal responsibility obtains in full for any effect which is part of a causal chain linked to one's act. With this distinction in hand, unlike causal responsibility, moral responsibility can be seen as coming in degrees. My argument is that, given that we can limit our deliberation and consideration to what we are morally responsible for, and that our moral responsibility for outcomes is limited by our subjective probabilities, our subjective probabilities can ground probability discounting.
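The operation the chapter defends can be stated in one line; a trivial sketch (names and numbers are illustrative):

```python
def discounted_value(outcome_value, subjective_probability):
    """Probability-discounted value: the outcome's value weighted
    by one's subjective probability that it will obtain."""
    return outcome_value * subjective_probability

# A harm of -100 units to which one assigns a 1% chance counts
# as -1 in deliberation, not the full -100:
print(discounted_value(-100, 0.01))  # -1.0
```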
The article is a plea for ethicists to regard probability as one of their most important concerns. It outlines a series of topics of central importance in ethical theory in which probability is implicated, often in a surprisingly deep way, and lists a number of open problems. Topics covered include: interpretations of probability in ethical contexts; the evaluative and normative significance of risk or uncertainty; uses and abuses of expected utility theory; veils of ignorance; Harsanyi’s aggregation theorem; population size problems; equality; fairness; giving priority to the worse off; continuity; incommensurability; nonexpected utility theory; evaluative measurement; aggregation; causal and evidential decision theory; act consequentialism; rule consequentialism; and deontology.
If we add as an extra premise that if the agent does know H, then it is possible for her to know E → H, we get the conclusion that the agent does not really know H. But even without that closure premise, or something like it, the conclusion seems quite dramatic. One possible response to the argument, floated by both Descartes and Hume, is to accept the conclusion and embrace scepticism. We cannot know anything that goes beyond our evidence, so we do not know very much at all. This is a remarkably sceptical conclusion, so we should resist it if at all possible. A more modern response, associated with externalists like John McDowell and Timothy Williamson, is to accept the conclusion but deny it is as sceptical as it first appears. The Humean argument, even if it works, only shows that our evidence and our knowledge are more closely linked than we might have thought. Perhaps that’s true because we have a lot of evidence, not because we have very little knowledge. There’s something right about this response, I think. We have more evidence than Descartes or Hume thought we had. But I think we still need the idea of ampliative knowledge. It stretches the concept of evidence to breaking point to suggest that all of our knowledge, including knowledge about the future, is part of our evidence. So the conclusion really is unacceptable. Or, at least, I think we should try to see what an epistemology that rejects the conclusion looks like.
Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.
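A standard illustration of the underlying phenomenon is van Fraassen's cube factory (offered here as an assumption; the paper's own examples may differ): equal point-valued credence over side length and equal credence over volume assign the same event different probabilities.

```python
# A factory produces cubes with side length between 0 and 2.
side_max = 2.0
vol_max = side_max ** 3  # volumes then range from 0 to 8

# Event: the cube's volume is at most 4, i.e. side at most 4**(1/3).
p_uniform_over_side = 4 ** (1 / 3) / side_max  # ~ 0.794
p_uniform_over_volume = 4 / vol_max            # 0.5

# Treating each partition equally yields incompatible answers
# for one and the same event:
print(p_uniform_over_side, p_uniform_over_volume)
```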
Recently there have been several attempts in formal epistemology to develop an adequate probabilistic measure of coherence. There is much to recommend probabilistic measures of coherence. They are quantitative and render formally precise a notion—coherence—notorious for its elusiveness. Further, some of them do very well, intuitively, on a variety of test cases. Siebel, however, argues that there can be no adequate probabilistic measure of coherence. Take some set of propositions A, some probabilistic measure of coherence, and a probability distribution such that all the probabilities on which A’s degree of coherence depends (according to the measure in question) are defined. Then, the argument goes, the degree to which A is coherent depends solely on the details of the distribution in question and not at all on the explanatory relations, if any, standing between the propositions in A. This is problematic, the argument continues, because, first, explanation matters for coherence, and, second, explanation cannot be adequately captured solely in terms of probability. We argue that Siebel’s argument falls short.
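As one concrete instance of such a measure (chosen for illustration; not necessarily among those Siebel targets), Shogenji's ratio measure counts a set as coherent when its joint probability exceeds what independence would predict:

```python
from functools import reduce
from operator import mul

def shogenji_coherence(p_joint, marginals):
    """Shogenji's ratio measure: the joint probability of the set
    divided by the product of the members' marginal probabilities.
    Values above 1 indicate coherence, below 1 incoherence."""
    return p_joint / reduce(mul, marginals, 1.0)

# Two positively relevant propositions cohere (measure > 1):
print(shogenji_coherence(0.3, [0.5, 0.4]))  # ~ 1.5
```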
A probability distribution is regular if no possible event is assigned probability zero. While some hold that probabilities should always be regular, three counter-arguments have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but we see here that their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
Some recent work in philosophy of religion addresses what can be called the “axiological question,” i.e., regardless of whether God exists, would it be good or bad if God exists? Would the existence of God make the world a better or a worse place? Call the view that the existence of God would make the world a better place “Pro-Theism.” We argue that Pro-Theism is not implausible, and moreover, many Theists, at least, (often implicitly) think that it is true. That is, many Theists think that various good outcomes would arise if Theism is true. We then discuss work in cognitive science concerning human cognitive bias, before discussing two noteworthy attempts to show that at least some religious beliefs arise because of cognitive bias: Hume’s, and Draper’s and Nichols’s. We then argue that, as a result of certain cognitive biases that result when good outcomes might be at stake, Pro-Theism causes many Theists to inflate the epistemic probability that God exists, and as a result, Theists should lower the probability they assign to God’s existence. Finally, based on our arguments, we develop a novel objection to Pascal’s wager.
Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a sequence of outcomes, and the latter as a property of the process generating those outcomes. A discussion follows of the nature and limits of the relationship between the two notions, with largely negative verdicts on the prospects for any reduction of one to the other, although the existence of an apparently random sequence of outcomes is good evidence for the involvement of a genuinely chancy process.
Some important correlations between medium-term economic cycles (7–11 years) known as Juglar cycles and long (40–60 years) Kondratieff cycles are presented in this paper. The research into the history of this issue shows that this aspect is insufficiently studied. Meanwhile, in our opinion, it can significantly clarify both the reasons for the alternation of upswing and downswing phases in K-waves and the reasons for the relative stability of the length of these waves. It can also provide certain means for forecasting. The authors show that adjacent 2–4 medium cycles form a system whose important characteristic is the dynamics of the economic trend. The latter can be upswing (active) or downswing (depressive). The mechanisms of formation of such medium-term trends and changing tendencies are explained. The presence of such clusters of medium cycles (with a general duration of 20–30 years) determines to a large degree the long-wave dynamics and the characteristics of its timing. Thus, it is not that medium-term J-cycles depend on the character of the K-wave phase, as Kondratieff supposed; rather, the character of a cluster of J-cycles significantly determines the character of K-wave phases.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
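The measures under discussion have standard definitions in the confirmation literature; a minimal sketch (variable names are mine):

```python
def pd(p_h_given_e, p_h):
    """Probability difference: P(H|E) - P(H)."""
    return p_h_given_e - p_h

def pr(p_h_given_e, p_h):
    """Probability ratio: P(H|E) / P(H)."""
    return p_h_given_e / p_h

def lr(p_e_given_h, p_e_given_not_h):
    """Likelihood ratio: P(E|H) / P(E|not-H)."""
    return p_e_given_h / p_e_given_not_h

# Evidence that raises H's probability confirms H on all three
# measures: PD > 0, PR > 1, LR > 1.
print(pd(0.8, 0.5), pr(0.8, 0.5), lr(0.9, 0.3))
```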
In a new theory of conflict escalation, Randall Collins (2012) engages critical issues of violent conflict and presents a compellingly plausible theoretical description based on his extensive empirical research. He also sets a new challenge for sociology: explaining the time dynamics of social interaction. However, despite heavy reliance on the quantitative concept of positive feedback loops in his theory, Collins presents no mathematical specification of the dynamic relationships among his variables. This article seeks to fill that gap by offering a computational model that can parsimoniously account for many features of Collins’s theory. My model uses perceptual control theory (PCT) to create an agent-based computational model of the time dynamics of conflict. With greater conceptual clarity and more wide-ranging generalizability, my alternative model opens the door to further advances in theory development by revealing dynamic aspects of conflict escalation not found in Collins’s model.
We call something a paradox if it strikes us as peculiar in a certain way, if it strikes us as something that is not simply nonsense, and yet it poses some difficulty in seeing how it could make sense. When we examine paradoxes more closely, we find that for some the peculiarity is relieved and for others it intensifies. Some are peculiar because they jar with how we expect things to go, but the jarring is to do with imprecision and misunderstandings in our thought, failures to appreciate the breadth of possibility consistent with our beliefs. Other paradoxes, however, pose deep problems. Closer examination does not explain them away. Instead, they challenge the coherence of certain conceptual resources and hence challenge the significance of beliefs which deploy those resources. I shall call the former kind weak paradoxes and the latter, strong paradoxes. Whether a particular paradox is weak or strong is sometimes a matter of controversy—sometimes it has been realised that what was thought strong is in fact weak, and vice versa—but the distinction between the two kinds is generally thought to be worth drawing. In this chapter, I shall cover both weak and strong probabilistic paradoxes.
This paper motivates and develops a novel semantic framework for deontic modals. The framework is designed to shed light on two things: the relationship between deontic modals and substantive theories of practical rationality and the interaction of deontic modals with conditionals, epistemic modals and probability operators. I argue that, in order to model inferential connections between deontic modals and probability operators, we need more structure than is provided by classical intensional theories. In particular, we need probabilistic structure that interacts directly with the compositional semantics of deontic modals. However, I reject theories that provide this probabilistic structure by claiming that the semantics of deontic modals is linked to the Bayesian notion of expectation. I offer a probabilistic premise semantics that explains all the data that create trouble for the rival theories.
A definition of causation as probability-raising is threatened by two kinds of counterexample: first, when a cause lowers the probability of its effect; and second, when the probability of an effect is raised by a non-cause. In this paper, I present an account that deals successfully with problem cases of both these kinds. In doing so, I also explore some novel implications of incorporating into the metaphysical investigation considerations of causal psychology.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as in other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
This Open Access book addresses the age-old problem of infinite regresses in epistemology. How can we ever come to know something if knowing requires having good reasons, and reasons can only be good if they are backed by good reasons in turn? The problem has puzzled philosophers ever since antiquity, giving rise to what is often called Agrippa's Trilemma. The current volume approaches the old problem in a provocative and thoroughly contemporary way. Taking seriously the idea that good reasons are typically probabilistic in character, it develops and defends a new solution that challenges venerable philosophical intuitions and explains why they were mistakenly held. Key to the new solution is the phenomenon of fading foundations, according to which distant reasons are less important than those that are nearby. The phenomenon takes the sting out of Agrippa's Trilemma; moreover, since the theory that describes it is general and abstract, it is readily applicable outside epistemology, notably to debates on infinite regresses in metaphysics.
A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we argue that the temptation should be resisted. Applying lessons from this analysis, we demonstrate (using methods similar to those of Zurek's envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In doing so, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.
We generalize the Kolmogorov axioms for probability calculus to obtain conditions defining, for any given logic, a class of probability functions relative to that logic, coinciding with the standard probability functions in the special case of classical logic but allowing consideration of other classes of "essentially Kolmogorovian" probability functions relative to other logics. We take a broad view of the Bayesian approach as dictating inter alia that from the perspective of a given logic, rational degrees of belief are those representable by probability functions from the class appropriate to that logic. Classical Bayesianism, which fixes the logic as classical logic, is only one version of this general approach. Another, which we call Intuitionistic Bayesianism, selects intuitionistic logic as the preferred logic and the associated class of probability functions as the right class of candidate representations of epistemic states (rational allocations of degrees of belief). Various objections to classical Bayesianism are, we argue, best met by passing to intuitionistic Bayesianism—in which the probability functions are taken relative to intuitionistic logic—rather than by adopting a radically non-Kolmogorovian, for example, nonadditive, conception of (or substitute for) probability functions, in spite of the popularity of the latter response among those who have raised these objections. The interest of intuitionistic Bayesianism is further enhanced by the availability of a Dutch Book argument justifying the selection of intuitionistic probability functions as guides to rational betting behavior when due consideration is paid to the fact that bets are settled only when/if the outcome bet on becomes known.
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the “Problem of Justified Inconsistent Beliefs”.
Laurence BonJour has recently proposed a novel and interesting approach to the problem of induction. He grants that it is contingent, and so not a priori, that our patterns of inductive inference are reliable. Nevertheless, he claims, it is necessary and a priori that those patterns are highly likely to be reliable, and that is enough to ground an a priori justification of induction. This paper examines an important defect in BonJour's proposal. Once we make sense of the claim that inductive inference is "necessarily highly likely" to be reliable, we find that it is not knowable a priori after all.
Peter Achinstein has argued at length and on many occasions that the view according to which evidential support is defined in terms of probability-raising faces serious counterexamples and, hence, should be abandoned. Proponents of the positive probabilistic relevance view have remained unconvinced. The debate seems to be in a deadlock. This paper is an attempt to move the debate forward and revisit some of the central claims within this debate. My conclusion here will be that while Achinstein may be right that his counterexamples undermine probabilistic relevance views of what it is for e to be evidence that h, there is still room for a defence of a related probabilistic view about an increase in being supported, according to which, if P(h | e) > P(h), then h is more supported given e than it is without e. My argument relies crucially on an insight from recent work on the linguistics of gradable adjectives.
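The probabilistic relevance condition at issue here, that e supports h just in case P(h | e) > P(h), can be sketched with a toy computation. The joint distribution below is invented purely for illustration and is not drawn from the paper.

```python
# Toy illustration of probabilistic relevance: e supports h when P(h | e) > P(h).
# The joint probabilities below are made up for illustration only.

# Joint distribution over (h, e): maps (h is true, e is true) -> P(h, e)
joint = {
    (True, True): 0.30,   # h and e
    (True, False): 0.20,  # h without e
    (False, True): 0.10,  # e without h
    (False, False): 0.40, # neither
}

p_h = sum(p for (h, _), p in joint.items() if h)  # P(h) ~ 0.5
p_e = sum(p for (_, e), p in joint.items() if e)  # P(e) ~ 0.4
p_h_given_e = joint[(True, True)] / p_e           # P(h | e) ~ 0.75

# Since P(h | e) > P(h), on the relevance view e is evidence for h.
print(p_h, p_h_given_e, p_h_given_e > p_h)
```

On these (hypothetical) numbers, conditioning on e raises the probability of h from about 0.5 to about 0.75, so e counts as evidence for h on the relevance view.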
Many epistemologists hold that an agent can come to justifiably believe that p is true by seeing that it appears that p is true, without having any antecedent reason to believe that visual impressions are generally reliable. Certain reliabilists think this, at least if the agent’s vision is generally reliable. And it is a central tenet of dogmatism (as described by Pryor (2000) and Pryor (2004)) that this is possible. Against these positions it has been argued (e.g. by Cohen (2005) and White (2006)) that this violates some principles from probabilistic learning theory. To see the problem, let’s note what the dogmatist thinks we can learn by paying attention to how things appear. (The reliabilist says the same things, but we’ll focus on the dogmatist.) Suppose an agent receives an appearance that p, and comes to believe that p. Letting Ap be the proposition that it appears to the agent that p, and → be the material implication, we can say that the agent learns that p, and hence is in a position to infer Ap → p, once they receive the evidence Ap. This is surprising, because we can prove the following.
Several scholars, including Martin Hengel, R. Alan Culpepper, and Richard Bauckham, have argued that Papias had knowledge of the Gospel of John on the grounds that Papias’s prologue lists six of Jesus’s disciples in the same order that they are named in the Gospel of John: Andrew, Peter, Philip, Thomas, James, and John. In “A Note on Papias’s Knowledge of the Fourth Gospel” (JBL 129 [2010]: 793–794), Jake H. O’Connell presents a statistical analysis of this argument, according to which the probability of this correspondence occurring by chance is lower than 1%. O’Connell concludes that it is more than 99% probable that this correspondence is the result of Papias copying John, rather than chance. I show that O’Connell’s analysis contains multiple mistakes, both substantive and mathematical: it ignores relevant evidence, overstates the correspondence between John and Papias, wrongly assumes that if Papias did not know John he ordered the disciples randomly, and conflates the probability of A given B with the probability of B given A. In discussing these errors, I aim to inform both Johannine scholarship and the use of probabilistic methods in historical reasoning.
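The last error mentioned, conflating the probability of A given B with the probability of B given A, can be made vivid with a small Bayes'-rule computation. The numbers below are invented solely for illustration and are not the figures from either O’Connell’s note or its critique.

```python
# Invented numbers: even if P(match | copying) is high and P(match | chance) is low,
# P(copying | match) also depends on the prior P(copying) -- Bayes' rule.

p_copying = 0.1               # assumed prior probability of the copying hypothesis
p_match_given_copying = 0.99  # assumed P(observed order | copying)
p_match_given_chance = 0.01   # assumed P(observed order | chance)

# Total probability of observing the matching order.
p_match = (p_match_given_copying * p_copying
           + p_match_given_chance * (1 - p_copying))

# Bayes' rule: P(copying | match).
p_copying_given_match = p_match_given_copying * p_copying / p_match

# P(copying | match) differs from P(match | copying): ~0.917, not 0.99.
print(round(p_copying_given_match, 3))
```

The point of the sketch is only that inferring "99% probable that Papias copied John" directly from "less than 1% probable by chance" skips the prior: the two conditional probabilities come apart whenever the prior is not neutral.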
How were reliable predictions made before Pascal and Fermat's discovery of the mathematics of probability in 1654? What methods in law, science, commerce, philosophy, and logic helped us to get at the truth in cases where certainty was not attainable? The book examines how judges, witch inquisitors, and juries evaluated evidence; how scientists weighed reasons for and against scientific theories; and how merchants counted shipwrecks to determine insurance rates. Also included are the problem of induction before Hume, design arguments for the existence of God, and theories on how to evaluate scientific and historical hypotheses. It is explained how Pascal and Fermat's work on chance arose out of legal thought on aleatory contracts. The book interprets pre-Pascalian unquantified probability in a generally objective Bayesian or logical probabilist sense.
This paper is a response to Tyler Wunder’s ‘The modality of theism and probabilistic natural theology: a tension in Alvin Plantinga's philosophy’ (this journal). In his article, Wunder argues that if the proponent of the Evolutionary Argument Against Naturalism (EAAN) holds theism to be non-contingent and frames the argument in terms of objective probability, then the EAAN is either unsound or theism is necessarily false. I argue that a modest revision of the EAAN renders Wunder’s objection irrelevant, and that this revision actually widens the scope of the argument.
This thesis focuses on expressively rich languages that can formalise talk about probability. These languages have sentences that say something about probabilities of probabilities, but also sentences that say something about the probability of themselves. For example: (π): “The probability of the sentence labelled π is not greater than 1/2.” Such sentences lead to philosophical and technical challenges; but can be useful. For example, they bear a close connection to situations where one's confidence in something can affect whether it is the case or not. The motivating interpretation of probability as an agent's degrees of belief will be focused on throughout the thesis. This thesis aims to answer two questions relevant to such frameworks, which correspond to the two parts of the thesis: “How can one develop a formal semantics for this framework?” and “What rational constraints are there on an agent once such expressive frameworks are considered?”.
This paper explores the interaction of well-motivated (if controversial) principles governing the probability of conditionals, with accounts of what it is for a sentence to be indefinite. The conclusion can be played in a variety of ways. It could be regarded as a new reason to be suspicious of the intuitive data about the probability of conditionals; or, holding fixed the data, it could be used to give traction on the philosophical analysis of a contentious notion—indefiniteness. The paper outlines the various options, and shows that ‘rejectionist’ theories of indefiniteness are incompatible with the results. Rejectionist theories include popular accounts such as supervaluationism, non-classical truth-value gap theories, and accounts of indeterminacy that centre on rejecting the law of excluded middle. An appendix compares the results obtained here with the ‘impossibility’ results descending from Lewis (1976).
Many epistemologists have responded to the lottery paradox by proposing formal rules according to which high probability defeasibly warrants acceptance. Douven and Williamson present an ingenious argument purporting to show that such rules invariably trivialise, in that they reduce to the claim that a probability of 1 warrants acceptance. Douven and Williamson’s argument does, however, rest upon significant assumptions – amongst them a relatively strong structural assumption to the effect that the underlying probability space is both finite and uniform. In this paper, I will show that something very like Douven and Williamson’s argument can in fact survive with much weaker structural assumptions – and, in particular, can apply to infinite probability spaces.
NOTE: This paper is a reworking of some aspects of an earlier paper – ‘What else justification could be’ and also an early draft of chapter 2 of Between Probability and Certainty. I'm leaving it online as it has a couple of citations and there is some material here which didn't make it into the book (and which I may yet try to develop elsewhere). My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: If we want to be fallibilists about justification then we shouldn’t demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty then what could it possibly require if not epistemic likelihood? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.