Many have argued that a rational agent's attitude towards a proposition may be better represented by a probability range than by a single number. I show that in such cases an agent will have unstable betting behaviour, and so will behave in an unpredictable way. I use this point to argue against a range of responses to the ‘two bets’ argument for sharp probabilities.
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.
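The kind of incoherence at issue can be made concrete with the cube-factory puzzle (a standard illustration of this family of problems, not necessarily the example used in the paper): indifference over two different partitions of the same space assigns different point values to one and the same event. A minimal sketch:

```python
# Cube-factory illustration (a standard example, assumed here for
# illustration): a factory makes cubes with side length in (0, 1].
# Indifference over side length and indifference over volume assign
# different probabilities to the very same event.

# Event: side <= 1/2, which is equivalent to volume <= 1/8.
p_by_side = 0.5 / 1.0            # uniform over side length in (0, 1]
p_by_volume = (0.5 ** 3) / 1.0   # uniform over volume in (0, 1]

print(p_by_side)    # 0.5
print(p_by_volume)  # 0.125
# One event, two "equal treatment" credences: treating both partitions
# with equal point-valued credence is incoherent.
```

The paper's point is that moving to equal interval-valued credences only relocates this clash once intervals with extreme endpoints are ruled out.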
In his entry on "Quantum Logic and Probability Theory" in the Stanford Encyclopedia of Philosophy, Alexander Wilce (2012) writes that "it is uncontroversial (though remarkable) that the formal apparatus of quantum mechanics reduces neatly to a generalization of classical probability in which the role played by a Boolean algebra of events in the latter is taken over by the 'quantum logic' of projection operators on a Hilbert space." For a long time, Patrick Suppes has opposed this view (see, for example, the papers collected in Suppes and Zanotti (1996)). Instead of changing the logic and moving from a Boolean algebra to a non-Boolean algebra, one can also 'save the phenomena' by weakening the axioms of probability theory and working instead with upper and lower probabilities. However, it is fair to say that, despite Suppes' efforts, upper and lower probabilities are not particularly popular in physics or in the foundations of physics, at least so far. Instead, quantum logic is booming again, especially since quantum information and computation became hot topics. Interestingly, however, imprecise probabilities are becoming more and more popular in formal epistemology, as recent work by authors such as James Joyce (2010) and Roger White (2010) demonstrates.
Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness of this measure by employing it to develop an answer to Popper’s Paradox of Ideal Evidence.
Evidentialists say that a necessary condition of sound epistemic reasoning is that our beliefs reflect only our evidence. This thesis arguably conflicts with standard Bayesianism, due to the importance of prior probabilities in the latter. Some evidentialists have responded by modelling belief-states using imprecise probabilities (Joyce 2005). However, Roger White (2010) and Aron Vallinder (2018) argue that this Imprecise Bayesianism is incompatible with evidentialism due to “inertia”, where Imprecise Bayesian agents become stuck in a state of ambivalence towards hypotheses. Additionally, escapes from inertia apparently only create further conflicts with evidentialism. This dilemma gives a reason for evidentialist imprecise probabilists to look for alternatives without inertia. I shall argue that Henry E. Kyburg’s approach offers an evidentialist-friendly imprecise probability theory without inertia, and that its relevant anti-inertia features are independently justified. I also connect the traditional epistemological debates concerning the “ethics of belief” more systematically with formal epistemology than has hitherto been done.
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or “closeness to the truth” (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that the two amendments cannot both be satisfied simultaneously. To do so, we employ a (slightly generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption—called strict immodesty—should be rejected, and we prove a representation theorem that characterizes all “mildly” immodest measures of inaccuracy.
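The contrast with the precise case can be made concrete. For point-valued credences, the Brier score is strictly proper: an agent whose credence matches the chance uniquely minimizes her expected inaccuracy. A small sketch of that precise-case fact (my own illustration, not the generalized theorem discussed in the abstract):

```python
# Strict propriety of the Brier score for precise credences: the
# expected score under chance p is uniquely minimized by reporting
# credence q = p.

def expected_brier(q, p):
    """Expected Brier score of credence q when the chance of truth is p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.7
scores = {round(q, 2): expected_brier(q, p) for q in [i / 100 for i in range(101)]}
best_q = min(scores, key=scores.get)
print(best_q)  # 0.7 -- honest reporting is uniquely best
```

The Seidenfeld–Schervish–Kadane result cited above is precisely that no analogue of this property survives for interval-valued credences.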
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
A number of Bayesians claim that, if one has no evidence relevant to a proposition P, then one's credence in P should be spread over the interval [0, 1]. Against this, I argue: first, that it is inconsistent with plausible claims about comparative levels of confidence; second, that it precludes inductive learning in certain cases. Two motivations for the view are considered and rejected. A discussion of alternatives leads to the conjecture that there is an in-principle limitation on formal representations of belief: they cannot be both fully accurate and maximally specific.
Does the strength of a particular belief depend upon the significance we attach to it? Do we move from one context to another, remaining in the same doxastic state concerning p yet holding a stronger belief that p in one context than in the other? For that to be so, a doxastic state must have a certain sort of context-sensitive complexity. So the question is about the nature of belief states, as we understand them, or as we think a theory should model them. I explore the idea and how it relates to work on imprecise probabilities and second-order confidence.
John Maynard Keynes’s A Treatise on Probability is the seminal text for the logical interpretation of probability. According to his analysis, probabilities are evidential relations between a hypothesis and some evidence, just like the relations of deductive logic. While some philosophers had suggested similar ideas prior to Keynes, it was not until his Treatise that the logical interpretation of probability was advocated in a clear, systematic and rigorous way. I trace Keynes’s influence in the philosophy of probability through a heterogeneous sample of thinkers who adopted his interpretation. This sample consists of Frederick C. Benenson, Roy Harrod, Donald C. Williams, Henry E. Kyburg and David Stove. The ideas of Keynes prove to be adaptable to their diverse theories of probability. My discussion indicates both the robustness of Keynes’s probability theory and the importance of its influence on the philosophers whom I describe. I also discuss the Problem of the Priors. I argue that none of those I discuss have obviously improved on Keynes’s theory with respect to this issue.
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
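A toy numerical case helps fix ideas (my own construction, in the spirit of the standard two-coin examples rather than taken from the paper): two tosses whose correlation is unknown. The unconditional probability that the second toss lands heads is sharp, yet conditioning on either outcome of the first toss dilates it to (nearly) the whole unit interval.

```python
# Dilation sketch over outcomes {HH, HT, TH, TT}. The credal set is
# P_a(HH) = P_a(TT) = a and P_a(HT) = P_a(TH) = 1/2 - a, for a in [0, 1/2].
# E = "second toss heads" = {HH, TH}; F = "first toss heads" = {HH, HT}.

def conditionals(a):
    p_hh, p_ht, p_th, p_tt = a, 0.5 - a, 0.5 - a, a
    p_e = p_hh + p_th                        # P(E) = 1/2 for every a
    p_e_given_f = p_hh / (p_hh + p_ht)       # P(E | F) = 2a
    p_e_given_not_f = p_th / (p_th + p_tt)   # P(E | not-F) = 1 - 2a
    return round(p_e, 9), p_e_given_f, p_e_given_not_f

grid = [i / 1000 * 0.5 for i in range(1, 1000)]  # interior of [0, 1/2]
uncond = {conditionals(a)[0] for a in grid}
given_f = [conditionals(a)[1] for a in grid]

print(uncond)                       # {0.5}: the prior estimate is sharp
print(min(given_f), max(given_f))   # spans nearly (0, 1): dilation
```

Whichever way the first toss lands, the sharp estimate [1/2, 1/2] is properly included in the conditional interval, which is the pattern the abstract defines.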
It is well known that there are, at least, two sorts of cases where one should not prefer a direct inference based on a narrower reference class, in particular: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (2) that minimal frequency information for a narrower (non-gerrymandered) reference class is sufficient to yield the defeat of a direct inference for a broader reference class. By the application of a method for inferring relatively informative expected frequencies, I argue that the latter claim does not result in an overly incredulous approach to direct inference. The method introduced here permits one to infer a relatively informative expected frequency for a reference class R', given frequency information for a superset of R' and/or frequency information for a sample drawn from R'.
The theory of lower previsions is designed around the principles of coherence and sure-loss avoidance, and thus steers clear of all the updating anomalies highlighted in Gong and Meng's "Judicious Judgment Meets Unsettling Updating: Dilation, Sure Loss, and Simpson's Paradox" except dilation. In fact, the traditional problem with the theory of imprecise probability is that coherent inference is too complicated rather than unsettling. Progress has been made in simplifying coherent inference by demoting sets of probabilities from fundamental building blocks to secondary representations that are derived or discarded as needed.
Moss (2018) argues that rational agents are best thought of not as having degrees of belief in various propositions but as having beliefs in probabilistic contents, or probabilistic beliefs. Probabilistic contents are sets of probability functions. Probabilistic belief states, in turn, are modeled by sets of probabilistic contents, or sets of sets of probability functions. We argue that this Mossean framework is of considerable interest quite independently of its role in Moss’ account of probabilistic knowledge or her semantics for epistemic modals and probability operators. It is an extremely general model of uncertainty. Indeed, it is at least as general and expressively powerful as every other current imprecise probability framework, including lower probabilities, lower previsions, sets of probabilities, sets of desirable gambles, and choice functions. In addition, we partially answer an important question that Moss leaves open, viz., why rational agents should have consistent probabilistic beliefs. We show that an important subclass of Mossean believers avoid Dutch bookability iff they have consistent probabilistic beliefs.
A prominent pillar of Bayesian philosophy is that, relative to just a few constraints, priors “wash out” in the limit. Bayesians often appeal to such asymptotic results as a defense against charges of excessive subjectivity. But, as Seidenfeld and coauthors observe, what happens in the short run is often of greater interest than what happens in the limit. They use this point as one motivation for investigating the counterintuitive short run phenomenon of dilation since, it is alleged, “dilation contrasts with the asymptotic merging of posterior probabilities reported by Savage (1954) and by Blackwell and Dubins (1962)” (Herron et al., 1994). A partition dilates an event if, relative to every cell of the partition, uncertainty concerning that event increases. The measure of uncertainty relevant for dilation, however, is not the same measure that is relevant in the context of results concerning whether priors wash out or “opinions merge.” Here, we explicitly investigate the short run behavior of the metric relevant to merging of opinions. As with dilation, it is possible for uncertainty (as gauged by this metric) to increase relative to every cell of a partition. We call this phenomenon distention. It turns out that dilation and distention are orthogonal phenomena.
We call something a paradox if it strikes us as peculiar in a certain way, if it strikes us as something that is not simply nonsense, and yet it poses some difficulty in seeing how it could make sense. When we examine paradoxes more closely, we find that for some the peculiarity is relieved and for others it intensifies. Some are peculiar because they jar with how we expect things to go, but the jarring is to do with imprecision and misunderstandings in our thought, failures to appreciate the breadth of possibility consistent with our beliefs. Other paradoxes, however, pose deep problems. Closer examination does not explain them away. Instead, they challenge the coherence of certain conceptual resources and hence challenge the significance of beliefs which deploy those resources. I shall call the former kind weak paradoxes and the latter, strong paradoxes. Whether a particular paradox is weak or strong is sometimes a matter of controversy—sometimes it has been realised that what was thought strong is in fact weak, and vice versa—but the distinction between the two kinds is generally thought to be worth drawing. In this chapter, I shall cover both weak and strong probabilistic paradoxes.
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
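The flavour of merging results can be simulated in the simplest setting of ordinary conditionalization (a much weaker setting than the Jeffrey-conditioning one treated here, and purely my own illustration): two quite different Beta priors over a coin's bias, fed the same toss stream, give predictive probabilities that draw together.

```python
import random

# Merging-of-opinions sketch: two agents with different Beta(a, b)
# priors over a coin's bias conditionalize on the same toss stream.
# Their predictive probabilities for "next toss heads" converge.

random.seed(0)
true_bias = 0.6
agents = {"optimist": [8.0, 2.0], "pessimist": [2.0, 8.0]}

def predictive(a, b):
    return a / (a + b)  # posterior mean of a Beta(a, b) distribution

gaps = []
for n in range(2000):
    if n % 500 == 0:
        preds = [predictive(*ab) for ab in agents.values()]
        gaps.append(abs(preds[0] - preds[1]))
    heads = random.random() < true_bias
    for ab in agents.values():
        ab[0 if heads else 1] += 1  # Bayes update for Bernoulli data

print(gaps)  # gap shrinks: 0.6, then ~0.012, ~0.006, ~0.004
```

Since both agents see the same data, the disagreement here is exactly 6/(10 + n) regardless of how the tosses land, which is the short-run face of the asymptotic consensus the abstract describes.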
In attempting to form rational personal probabilities by direct inference, it is usually assumed that one should prefer frequency information concerning more specific reference classes. While the preceding assumption is intuitively plausible, little energy has been expended in explaining why it should be accepted. In the present article, I address this omission by showing that, among the principled policies that may be used in setting one’s personal probabilities, the policy of making direct inferences with a preference for frequency information for more specific reference classes yields personal probabilities whose accuracy is optimal, according to all proper scoring rules, in situations where all of the relevant frequency information is point-valued. Assuming that frequency information for narrower reference classes is preferred, when the relevant frequency statements are point-valued, a dilemma arises when choosing whether to make a direct inference based upon relatively precise-valued frequency information for a broad reference class, R, or upon relatively imprecise-valued frequency information for a more specific reference class, R*. I address such cases, by showing that it is often possible to make a precise-valued frequency judgment regarding R* based on precise-valued frequency information for R, using standard principles of direct inference. Having made such a frequency judgment, the dilemma of choosing between R and R* is removed, and one may proceed by using the precise-valued frequency estimate for the more specific reference class as a premise for direct inference.
Epistemic states of uncertainty play important roles in ethical and political theorizing. Theories that appeal to a “veil of ignorance,” for example, analyze fairness or impartiality in terms of certain states of ignorance. It is important, then, to scrutinize proposed conceptions of ignorance and explore promising alternatives in such contexts. Here, I study Lerner’s probabilistic egalitarian theorem in the setting of imprecise probabilities. Lerner’s theorem assumes that a social planner tasked with distributing income to individuals in a population is “completely ignorant” about which utility functions belong to which individuals. Lerner models this ignorance with a certain uniform probability distribution, and shows that, under certain further assumptions, income should be equally distributed. Much of the criticism of the relevance of Lerner’s result centers on the representation of ignorance involved. Imprecise probabilities provide a general framework for reasoning about various forms of uncertainty including, in particular, ignorance. To what extent can Lerner’s conclusion be maintained in this setting?
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
In some severely uncertain situations, exemplified by climate change and novel pandemics, policymakers lack a reasoned basis for assigning probabilities to the possible outcomes of the policies they must choose between. I outline and defend an uncertainty averse, egalitarian approach to policy evaluation in these contexts. The upshot is a theory of distributive justice which offers especially strong reasons to guard against individual and collective misfortune.
Pedersen and Wheeler (2014) and Pedersen and Wheeler (2015) offer a wide-ranging and in-depth exploration of the phenomenon of dilation. We find that these studies raise many interesting and important points. However, purportedly general characterizations of dilation are reported in them that, unfortunately, admit counterexamples. The purpose of this note is to show in some detail that these characterization results are false.
Why should we refrain from doing things that, taken collectively, are environmentally destructive, if our individual acts seem almost certain to make no difference? According to the expected consequences approach, we should refrain from doing these things because our individual acts have small risks of causing great harm, which outweigh the expected benefits of performing them. Several authors have argued convincingly that this provides a plausible account of our moral reasons to do things like vote for policies that will reduce our countries’ greenhouse gas emissions, adopt plant-based diets, and otherwise reduce our individual emissions. But this approach has recently been challenged by authors like Bernward Gesang and Julia Nefsky. Gesang contends that it may be genuinely impossible for our individual emissions to make a morally relevant difference. Nefsky argues more generally that the expected consequences approach cannot adequately explain our reasons not to do things if there is no precise fact of the matter about whether their outcomes are harmful. In the following chapter, author Howard Nye defends the expected consequences approach against these objections. Nye contends that Gesang has shown at most that our emissions could have metaphysically indeterministic effects that lack precise objective chances. He argues, moreover, that the expected consequences approach can draw upon existing extensions to cases of indeterminism and imprecise probabilities to deliver the result that we have the same moral reasons to reduce our emissions in Gesang’s scenario as in deterministic scenarios. Nye also shows how the expected consequences approach can draw upon these extensions to handle Nefsky’s concern about the absence of precise facts concerning whether the outcomes of certain acts are harmful.
The author concludes that the expected consequences approach provides a fully adequate account of our moral reasons to take both political and personal action to reduce our ecological footprints.
If there are fundamental laws of nature, can they fail to be exact? In this paper, I consider the possibility that some fundamental laws are vague. I call this phenomenon 'fundamental nomic vagueness.' I characterize fundamental nomic vagueness as the existence of borderline lawful worlds and the presence of several other accompanying features. Under certain assumptions, such vagueness prevents the fundamental physical theory from being completely expressible in the mathematical language. Moreover, I suggest that such vagueness can be regarded as 'vagueness in the world.' For a case study, we turn to the Past Hypothesis, a postulate that (partially) explains the direction of time in our world. We have reasons to take it seriously as a candidate fundamental law of nature. Yet it is vague: it admits borderline (nomologically) possible worlds. An exact version would lead to an untraceable arbitrariness absent in any other fundamental laws. However, the dilemma between fundamental nomic vagueness and untraceable arbitrariness is dissolved in a new quantum theory of time's arrow.
One recent topic of debate in Bayesian epistemology has been the question of whether imprecise credences can be rational. I argue that one account of imprecise credences, the orthodox treatment as defended by James M. Joyce, is untenable. Despite Joyce’s claims to the contrary, a puzzle introduced by Roger White shows that the orthodox account, when paired with Bas C. van Fraassen’s Reflection Principle, can lead to inconsistent beliefs. Proponents of imprecise credences, then, must either provide a compelling reason to reject Reflection or admit that the rational credences in White’s case are precise.
It is often suggested that when opinions differ among individuals in a group, the opinions should be aggregated to form a compromise. This paper compares two approaches to aggregating opinions, linear pooling and what I call opinion agglomeration. In evaluating both strategies, I propose a pragmatic criterion, No Regrets, entailing that an aggregation strategy should prevent groups from buying and selling bets on events at prices regretted by their members. I show that only opinion agglomeration is able to satisfy the demand. I then give normative and empirical arguments that support the pragmatic criterion for opinion aggregation and that ultimately favor opinion agglomeration.
Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.
According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the evidence calls for. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In the first essay I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid without compromising the initial evidentialist motivation. The second variation replaces descriptive claims with moral claims as the objects of credence. I consider three standard arguments for probabilism with respect to descriptive uncertainty—representation theorem arguments, Dutch book arguments, and accuracy arguments—in order to examine whether such arguments can also be used to establish probabilism with respect to moral uncertainty. In the second essay, I argue that by and large they can, with some caveats. First, I don’t examine whether these arguments can be given sound non-cognitivist readings, and any conclusions therefore only hold conditional on cognitivism. Second, decision-theoretic representation theorems are found to be less convincing in the moral case, because there they implausibly commit us to thinking that intertheoretic comparisons of value are always possible. Third and finally, certain considerations may lead one to think that imprecise probabilism provides a more plausible model of moral epistemology.
The third variation considers whether, in addition to conditionalization, agents may also change their minds by becoming aware of propositions they had not previously entertained, and therefore not previously assigned any probability. More specifically, I argue that if we wish to make room for reflective equilibrium in a probabilistic moral epistemology, we must allow for awareness growth. In the third essay, I sketch the outline of such a Bayesian account of reflective equilibrium. Given that this account gives a central place to awareness growth, and that the rationality constraints on belief change by awareness growth are much weaker than those on belief change by conditionalization, it follows that the rationality constraints on the credences of agents who are seeking reflective equilibrium are correspondingly weaker. (shrink)
This article explores the main similarities and differences between Derek Parfit’s notion of imprecise comparability and a related notion I have proposed of parity. I argue that the main difference between imprecise comparability and parity can be understood by reference to ‘the standard view’. The standard view claims that (1) differences between cardinally ranked items can always be measured by a scale of units of the relevant value, and (2) all rankings proceed in terms of the trichotomy of ‘better than’, ‘worse than’, and ‘equally good’. Imprecise comparability, which can be understood in terms of the more familiar notions of cardinality and incommensurability, rejects only the first claim while parity rejects both claims of the standard view. I then argue that insofar as those attracted to imprecise comparability assume that all rankings are trichotomous, as Parfit appears to, the view should be rejected. This is because imprecise equality is not a form of equality but is a sui generis ‘fourth’ basic way in which items can be ranked. We should, I argue, understand imprecise equality as parity, and imprecise comparability as entailing ‘tetrachotomy’ – that if two items are comparable, one must be better than, worse than, equal to, or on a par with the other. Thus those attracted to the idea that cardinality can be imprecise should abandon trichotomy and accept parity and tetrachotomy instead. Finally, I illustrate the difference between Parfit’s trichotomous notion of imprecise comparability and parity by examining how each notion might be employed in different solutions to the problem posed by the Repugnant Conclusion in population ethics. I suggest that parity provides the arguably more ecumenical solution to the problem.
Traditional Bayesianism requires that an agent’s degrees of belief be represented by a real-valued, probabilistic credence function. However, in many cases it seems that our evidence is not rich enough to warrant such precision. In light of this, some have proposed that we instead represent an agent’s degrees of belief as a set of credence functions. This way, we can respect the evidence by requiring that the set, often called the agent’s credal state, includes all credence functions that are in some sense compatible with the evidence. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In this article I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid it without compromising the initial evidentialist motivation. 1 Introduction; 2 Precision and Its Problems; 3 Imprecise Bayesianism and Respecting Ambiguous Evidence; 4 Local Belief Inertia; 5 From Local to Global Belief Inertia; 6 Responding to Global Belief Inertia; 7 Conclusion.
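The inertia phenomenon is easy to exhibit numerically (a toy model of the standard worry, assumed here for illustration rather than taken from the article): if the credal state contains arbitrarily opinionated priors, conditionalizing every member on the same evidence leaves the predictive interval almost as wide as before.

```python
# Belief inertia sketch: the credal state contains Beta(a, b) priors
# over a coin's bias, with shape parameters ranging over many orders of
# magnitude. Each member is updated on 100 tosses (50 heads); the
# interval of predictive probabilities barely narrows.

shapes = [1e-6, 1e-3, 1.0, 1e3, 1e6]
priors = [(a, b) for a in shapes for b in shapes]

def predictive(a, b, heads, tails):
    # Beta-Bernoulli posterior mean for "next toss heads"
    return (a + heads) / (a + b + heads + tails)

before = [predictive(a, b, 0, 0) for a, b in priors]
after = [predictive(a, b, 50, 50) for a, b in priors]

print(min(before), max(before))  # spans nearly (0, 1)
print(min(after), max(after))    # still spans nearly (0, 1)
```

The very permissiveness that lets the credal state respect ambiguous evidence is what keeps extreme members extreme after updating, which is the evidentialist's dilemma the essay generalizes.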
This paper develops an information-sensitive theory of the semantics and probability of conditionals and statements involving epistemic modals. The theory validates a number of principles linking probability and modality, including the principle that the probability of a conditional 'If A, then C' equals the probability of C, updated with A. The theory avoids so-called triviality results, which are standardly taken to show that principles of this sort cannot be validated. To achieve this, we deny that rational agents update their credences via conditionalization. We offer a new rule of update, Hyperconditionalization, which agrees with Conditionalization whenever nonmodal statements are at stake, but differs for modal and conditional sentences.
Many have claimed that epistemic rationality sometimes requires us to have imprecise credal states (i.e. credal states representable only by sets of credence functions) rather than precise ones (i.e. credal states representable by single credence functions). Some writers have recently argued that this claim conflicts with accuracy-centered epistemology, i.e., the project of justifying epistemic norms by appealing solely to the overall accuracy of the doxastic states they recommend. But these arguments are far from decisive. In this essay, we prove some new results, which show that there is little hope for reconciling the rationality of credal imprecision with accuracy-centered epistemology.
This book explores a question central to philosophy: what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. This book rejects that view and replaces it with another: in order for one to have justification for believing a proposition, one's evidence must normically support it; roughly, one's evidence must make the falsity of that proposition abnormal, in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
This Open Access book addresses the age-old problem of infinite regresses in epistemology. How can we ever come to know something if knowing requires having good reasons, and reasons can only be good if they are backed by good reasons in turn? The problem has puzzled philosophers ever since antiquity, giving rise to what is often called Agrippa's Trilemma. The current volume approaches the old problem in a provocative and thoroughly contemporary way. Taking seriously the idea that good reasons are typically probabilistic in character, it develops and defends a new solution that challenges venerable philosophical intuitions and explains why they were mistakenly held. Key to the new solution is the phenomenon of fading foundations, according to which distant reasons are less important than those that are nearby. The phenomenon takes the sting out of Agrippa's Trilemma; moreover, since the theory that describes it is general and abstract, it is readily applicable outside epistemology, notably to debates on infinite regresses in metaphysics.
According to the Lockean thesis, a proposition is believed just in case it is highly probable. While this thesis enjoys strong intuitive support, it is known to conflict with seemingly plausible logical constraints on our beliefs. One way out of this conflict is to make probability 1 a requirement for belief, but most have rejected this option for entailing what they see as an untenable skepticism. Recently, two new solutions to the conflict have been proposed that are alleged to be non-skeptical. We compare these proposals with each other and with the Lockean thesis, in particular with regard to the question of how much we gain by adopting any one of them instead of the probability 1 requirement, that is, of how likely it is that one believes more than the things one is fully certain of.
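The conflict with "seemingly plausible logical constraints" is, at bottom, the lottery paradox, which can be stated in a few lines of code. This is a generic illustration under my own assumptions (a fair 100-ticket lottery and a Lockean threshold of 0.95), not the paper's own formalism.

```python
# Lottery paradox sketch: Lockean belief plus closure under conjunction.
# The lottery size and threshold are my own illustrative choices.
from fractions import Fraction

n_tickets = 100
threshold = Fraction(95, 100)

# For each ticket i, P("ticket i loses") = 99/100 > threshold,
# so the Lockean agent believes of each ticket that it loses.
p_ticket_loses = Fraction(n_tickets - 1, n_tickets)
believes_each_ticket_loses = p_ticket_loses > threshold

# But some ticket must win, so P("every ticket loses") = 0: if belief is
# closed under conjunction, the agent believes a proposition that fails
# the Lockean test by the widest possible margin.
p_all_lose = Fraction(0)

print(believes_each_ticket_loses)   # the agent believes each conjunct
print(p_all_lose < threshold)       # yet the conjunction fails the test
```

Raising the threshold only postpones the problem (use more tickets); only a threshold of probability 1 escapes it, which is the skeptical option the abstract mentions.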
This chapter will review selected aspects of the terrain of discussions about probabilities in statistical mechanics (with no pretensions to exhaustiveness, though the major issues will be touched upon), and will argue for a number of claims. None of the claims to be defended is entirely original, but all deserve emphasis. The first, and least controversial, is that probabilistic notions are needed to make sense of statistical mechanics. The reason for this is the same reason that convinced Maxwell, Gibbs, and Boltzmann that probabilities would be needed, namely, that the second law of thermodynamics, which in its original formulation says that certain processes are impossible, must, on the kinetic theory, be replaced by a weaker formulation according to which what the original version deems impossible is merely improbable. Second is that we ought not take the standard measures invoked in equilibrium statistical mechanics as giving, in any sense, the correct probabilities about microstates of the system. We can settle for a much weaker claim: that the probabilities for outcomes of experiments yielded by the standard distributions are effectively the same as those yielded by any distribution that we should take as representing probabilities over microstates. Lastly (and most controversially): in asking about the status of probabilities in statistical mechanics, the familiar dichotomy between epistemic probabilities (credences, or degrees of belief) and ontic (physical) probabilities is insufficient; the concept of probability that is best suited to the needs of statistical mechanics is one that combines epistemic and physical considerations.
Enjoying great popularity in decision theory, epistemology, and philosophy of science, Bayesianism as understood here is fundamentally concerned with epistemically ideal rationality. It assumes a tight connection between evidential probability and ideally rational credence, and usually interprets evidential probability in terms of such credence. Timothy Williamson challenges Bayesianism by arguing that evidential probabilities cannot be adequately interpreted as the credences of an ideal agent. From this and his assumption that evidential probabilities cannot be interpreted as the actual credences of human agents either, he concludes that no interpretation of evidential probabilities in terms of credence is adequate. I argue to the contrary. My overarching aim is to show on behalf of Bayesians how one can still interpret evidential probabilities in terms of ideally rational credence and how one can maintain a tight connection between evidential probabilities and ideally rational credence even if the former cannot be interpreted in terms of the latter. By achieving this aim I illuminate the limits and prospects of Bayesianism.
We propose a new account of indicative conditionals, giving acceptability and logical closure conditions for them. We start from Adams’ Thesis: the claim that the acceptability of a simple indicative equals the corresponding conditional probability. The Thesis is widely endorsed, but arguably false and refuted by empirical research. To fix it, we submit, we need a relevance constraint: we accept a simple conditional 'If φ, then ψ' to the extent that (i) the conditional probability p(ψ|φ) is high, provided that (ii) φ is relevant for ψ. How (i) should work is well understood. It is (ii) that holds the key to improving our understanding of conditionals. Our account has (i) a probabilistic component, using Popper functions; (ii) a relevance component, given via an algebraic structure of topics or subject matters. We present a probabilistic logic for simple indicatives, and argue that its (in)validities are both theoretically desirable and in line with empirical results on how people reason with conditionals.
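The two-part acceptability condition can be given a rough numerical illustration. Two hedges are needed: the sketch below proxies "relevance" by mere probabilistic relevance (p(ψ|φ) ≠ p(ψ)), whereas the account itself uses a topic-based algebraic structure; and the finite equiprobable-worlds model is my own simplification, not the paper's Popper-function framework.

```python
# Toy model of the acceptability condition: high conditional probability
# PLUS relevance. "Relevance" is proxied here by probabilistic relevance,
# which is my own stand-in for the paper's topic-based account.

def cond_prob(worlds, psi, phi):
    """P(psi | phi) over a finite set of equiprobable worlds."""
    phi_worlds = [w for w in worlds if phi(w)]
    if not phi_worlds:
        return None
    return sum(1 for w in phi_worlds if psi(w)) / len(phi_worlds)

def acceptable(worlds, psi, phi, high=0.9):
    """Accept 'If phi, then psi' iff P(psi|phi) is high AND phi is relevant to psi."""
    p_cond = cond_prob(worlds, psi, phi)
    p_uncond = cond_prob(worlds, psi, lambda w: True)
    relevant = p_cond is not None and p_cond != p_uncond
    return relevant and p_cond >= high

die = range(1, 7)
# 'If the die shows an even number, it shows at most 6': probability 1,
# but the antecedent is irrelevant, so the conditional is not accepted.
print(acceptable(die, lambda w: w <= 6, lambda w: w % 2 == 0))
# 'If the die shows an even number, it shows at least 2': probability 1
# and genuinely relevant, so it is accepted.
print(acceptable(die, lambda w: w >= 2, lambda w: w % 2 == 0))
```

The first example is exactly the kind of high-probability but irrelevant conditional that motivates adding clause (ii) to Adams' Thesis.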
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object’s weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks which probabilities are not determined in this way: these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the “problem of the priors”.
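The weight analogy can be illustrated with a toy instance of probabilistic determination: once certain probabilities are fixed as basic, the chain rule fixes others. This generic example is mine, offered only to make the structural question concrete; it is not the paper's specific account of which probabilities are basic.

```python
# Toy illustration of determination: treat P(rain) and P(wet | rain) as
# "basic" (hypothetical values of my choosing), and derive a non-basic
# probability from them via the chain rule.
from fractions import Fraction

p_rain = Fraction(3, 10)             # basic: P(rain)
p_wet_given_rain = Fraction(9, 10)   # basic: P(wet | rain)

# Determined, not basic: P(rain and wet) = P(rain) * P(wet | rain).
p_rain_and_wet = p_rain * p_wet_given_rain
print(p_rain_and_wet)   # 27/100
```

The structural question, in these terms, asks which quantities play the role of `p_rain` and `p_wet_given_rain` (the fixed inputs) and which play the role of `p_rain_and_wet` (the derived outputs).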
Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers some of the best-known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
According to Critical-Level Views in population axiology, an extra life improves a population only if that life’s welfare exceeds some fixed ‘critical level’. An extra life at the critical level leaves the new population exactly as good as the original. According to Critical-Range Views, an extra life improves a population only if that life’s welfare exceeds some fixed ‘critical range’. An extra life within the critical range leaves the new population incommensurable with the original.

In this paper, I sharpen some old objections to these views and offer some new ones. Critical-Level Views cannot avoid certain Repugnant and Sadistic Conclusions. Critical-Range Views imply that lives featuring no good or bad components whatsoever can nevertheless swallow up and neutralise goodness or badness. Both classes of view entail that certain small changes in welfare correspond to worryingly large differences in contributive value.

I then offer a view that retains much of the appeal of Critical-Level and Critical-Range Views while avoiding the above pitfalls. On the Imprecise Exchange Rates View, the quantity of some good required to outweigh a given unit of some bad is imprecise. This imprecision is the source of incommensurability between lives and populations.
Stalnaker's Thesis about indicative conditionals is, roughly, that the probability one ought to assign to an indicative conditional equals the probability that one ought to assign to its consequent conditional on its antecedent. The thesis seems right. If you draw a card from a standard 52-card deck, how confident are you that the card is a diamond if it's a red card? To answer this, you calculate the proportion of red cards that are diamonds; that is, you calculate the probability of drawing a diamond conditional on drawing a red card. Skyrms' Thesis about counterfactual conditionals is, roughly, that the probability one ought to assign to a counterfactual equals one's rational expectation of the chance, at a relevant past time, of its consequent conditional on its antecedent. This thesis also seems right. If you decide not to enter a 100-ticket lottery, how confident are you that you would have won had you bought a ticket? To answer this, you calculate the prior chance (that is, the chance just before your decision not to buy a ticket) of winning conditional on entering the lottery. The central project of this article is to develop a new uniform theory of conditionals that allows us to derive a version of Skyrms' Thesis from a version of Stalnaker's Thesis, together with a chance-deference norm relating rational credence to beliefs about objective chance.
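Both worked examples in the abstract can be computed directly. The sketch below simply restates the abstract's own arithmetic; no further assumptions are involved.

```python
# The abstract's two worked examples, computed exactly.
from fractions import Fraction

# Stalnaker's Thesis: credence in 'if red, then diamond' = P(diamond | red).
deck = [(suit, rank)
        for suit in ("hearts", "diamonds", "clubs", "spades")
        for rank in range(1, 14)]
red = [card for card in deck if card[0] in ("hearts", "diamonds")]
p_diamond_given_red = Fraction(sum(1 for card in red if card[0] == "diamonds"),
                               len(red))
print(p_diamond_given_red)   # 1/2

# Skyrms' Thesis: credence in 'had I bought a ticket, I would have won'
# = prior chance of winning conditional on entering the 100-ticket lottery.
p_win_given_enter = Fraction(1, 100)
print(p_win_given_enter)     # 1/100
```

The conditional-probability calculation (13 diamonds among 26 red cards) is exactly the "proportion of red cards that are diamonds" the abstract describes.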
Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a sequence of outcomes and the latter as a property of the process generating those outcomes. A discussion follows of the nature and limits of the relationship between the two notions, with largely negative verdicts on the prospects for any reduction of one to the other, although the existence of an apparently random sequence of outcomes is good evidence for the involvement of a genuinely chancy process.