Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers some of the most well-known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
In his classic book "The Foundations of Statistics," Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are, however, required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it, but sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts. The second result also employs a novel way of deriving utilities in Savage-style systems, without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of "idealized agent" that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
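For reference, the upshot of a Savage-style representation theorem can be stated compactly. The notation below is generic rather than the paper's own, and the integral reduces to a finite sum for finite gambles.

```latex
% Savage-style representation: a preference relation over acts
% f, g : S -> C satisfying the postulates is represented by a finitely
% additive probability P on events and a utility u on consequences:
f \succeq g
  \iff
  \int_S u\big(f(s)\big)\, dP(s) \;\ge\; \int_S u\big(g(s)\big)\, dP(s)
```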
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
The notion of comparative probability defined in Bayesian subjectivist theory stems from an intuitive idea that, for a given pair of events, one event may be considered “more probable” than the other. Yet it is conceivable that there are cases where it is indeterminate which event is more probable, due to, e.g., lack of robust statistical information. We take these cases to involve indeterminate comparative probabilities. This paper provides a Savage-style decision-theoretic foundation for indeterminate comparative probabilities.
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
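For concreteness, the property said to be missing from the representation is the familiar product rule. This is the textbook statement, not the article's own axiomatization.

```latex
% Stochastic independence of events A and B under a probability measure P:
P(A \cap B) = P(A)\,P(B)
% Savage's postulates guarantee that some subjective P represents the
% agent's preferences, but nothing in them dictates which events P treats
% as independent -- the gap the article's new axioms are meant to fill.
```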
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
According to a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey and, above all, Harsanyi, under the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
Taking the philosophical standpoint, this article compares the mathematical theory of individual decision-making with the folk psychology conception of action, desire and belief. It narrows down its topic by carrying out the comparison vis-à-vis Savage's system and its technical concept of subjective probability, which is referred back to the basic model of betting, as in Ramsey. The argument is organized around three philosophical theses: (i) decision theory is nothing but folk psychology stated in formal language (Lewis), (ii) the former substantially improves on the latter, but is unable to overcome its typical limitations, especially its failure to separate desire and belief empirically (Davidson), (iii) the former substantially improves on the latter, and through these innovations, overcomes some of the limitations. The aim of the article is to establish (iii) not only against the all too simple thesis (i), but also against the subtle thesis (ii).
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance. The paper complies with the standard economic methodology of basing probability and utility representations on preference axioms, but for the sake of completeness, also considers a construal of objective uncertainty based on the assumption of an exogenously given probability measure. JEL classification: D70; D81.
In probability discounting (or probability weighting), one multiplies the value of an outcome by one's subjective probability that the outcome will obtain, and lets the product guide decision-making. The broader import of defending probability discounting is to help justify cost-benefit analyses in contexts such as climate change. This chapter defends probability discounting under risk both negatively, against arguments by Simon Caney (2008, 2009), and positively, with a new argument. First, in responding to Caney, I argue that small costs and benefits need to be evaluated, and that viewing practices at the social level is too coarse-grained. Second, I argue for probability discounting, using a distinction between causal responsibility and moral responsibility. Moral responsibility can be cashed out in terms of blameworthiness and praiseworthiness, while causal responsibility obtains in full for any effect which is part of a causal chain linked to one's act. With this distinction in hand, moral responsibility, unlike causal responsibility, can be seen as coming in degrees. My argument is that, given that we can limit our deliberation and consideration to that for which we are morally responsible, and that our moral responsibility for outcomes is limited by our subjective probabilities, our subjective probabilities can ground probability discounting.
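A minimal worked instance of the discounting operation just described; the numbers are invented purely for illustration.

```latex
% Probability-discounted weight of an outcome with value v and
% subjective probability p:   w = p \cdot v
% E.g., a harm valued at -1000 with subjective probability 0.001:
w = 0.001 \times (-1000) = -1
% Without discounting, the full -1000 would enter deliberation.
```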
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
I argue that existing objectivist accounts of subjective reasons face systematic problems with cases involving probability and possibility. I then offer a diagnosis of why objectivists face these problems, and recommend that objectivists seek to provide indirect analyses of subjective reasons.
How can different individuals' probability functions on a given sigma-algebra of events be aggregated into a collective probability function? Classic approaches to this problem often require 'event-wise independence': the collective probability for each event should depend only on the individuals' probabilities for that event. In practice, however, some events may be 'basic' and others 'derivative', so that it makes sense first to aggregate the probabilities for the former and then to let these constrain the probabilities for the latter. We formalize this idea by introducing a 'premise-based' approach to probabilistic opinion pooling, and show that, under a variety of assumptions, it leads to linear or neutral opinion pooling on the 'premises'. This paper is the second of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
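A toy sketch of the two-step, premise-based procedure described above, in Python. The events, weights, and numbers are invented for illustration, and linear pooling is used at the premise stage.

```python
# Premise-based opinion pooling, illustrative only (not the paper's formalism).

def linear_pool(opinions, weights):
    """Weighted linear average of probability assignments to one event."""
    return sum(w * p for w, p in zip(weights, opinions))

# 'Premise' events: cells of a partition {rain, snow, clear}.
# Each inner dict is one individual's probability function on the premises.
individuals = [
    {"rain": 0.5, "snow": 0.2, "clear": 0.3},
    {"rain": 0.3, "snow": 0.3, "clear": 0.4},
]
weights = [0.6, 0.4]  # expert weights, summing to 1

# Step 1: aggregate the premises event-wise (linear pooling).
pooled = {e: linear_pool([ind[e] for ind in individuals], weights)
          for e in ["rain", "snow", "clear"]}

# Step 2: let the pooled premise probabilities constrain a 'derivative'
# event, e.g. precipitation = rain-or-snow, via additivity over the partition.
pooled["precipitation"] = pooled["rain"] + pooled["snow"]

print(pooled)  # rain 0.42, snow 0.24, clear 0.34, precipitation 0.66
```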
In a recent article, Gordon Belot uses the so-called undermining phenomenon to try to raise a new difficulty for reductive accounts of objective probability, such as Humean Best System accounts. In this paper I will give a critical discussion of Belot’s paper and argue that, in fact, there is no new difficulty here for chance reductionists to address.
In “Can it be rational to have faith?”, it was argued that to have faith in some proposition consists, roughly speaking, in stopping one’s search for evidence and committing to act on that proposition without further evidence. That paper also outlined when and why stopping the search for evidence and acting is rationally required. Because the framework of that paper was that of formal decision theory, it primarily considered the relationship between faith and degrees of belief, rather than between faith and belief full stop. This paper explores the relationship between rational faith and justified belief, by considering four prominent proposals about the relationship between belief and degrees of belief, and by examining what follows about faith and belief according to each of these proposals. It is argued that we cannot reach consensus concerning the relationship between faith and belief at present because of the more general epistemological lack of consensus over how belief relates to rationality: in particular, over how belief relates to the degrees of belief it is rational to have given one’s evidence.
Can an agent deliberating about an action A hold a meaningful credence that she will do A? 'No', say some authors, for 'Deliberation Crowds Out Prediction' (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce's rejection of DCOP rests on terminological choices about terms such as 'intention', 'prediction', and 'belief'. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce's Evidential Autonomy Thesis (EAT) is effectively DCOP, in different terminological clothing. Both principles rest on the so-called 'transparency' of first-person present-tensed reflection on one's own mental states.
What is the relationship between degrees of belief and binary beliefs? Can the latter be expressed as a function of the former—a so-called “belief-binarization rule”—without running into difficulties such as the lottery paradox? We show that this problem can be usefully analyzed from the perspective of judgment-aggregation theory. Although some formal similarities between belief binarization and judgment aggregation have been noted before, the connection between the two problems has not yet been studied in full generality. In this paper, we seek to fill this gap. The paper is organized around a baseline impossibility theorem, which we use to map out the space of possible solutions to the belief-binarization problem. Our theorem shows that, except in limiting cases, there exists no belief-binarization rule satisfying four initially plausible desiderata. Surprisingly, this result is a direct corollary of the judgment-aggregation variant of Arrow’s classic impossibility theorem in social choice theory.
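The lottery paradox mentioned above is easy to exhibit against the simplest candidate binarization rule, a fixed credence threshold. This sketch is illustrative and not the paper's formal framework.

```python
# Why a simple threshold belief-binarization rule runs into the lottery
# paradox (toy illustration; numbers are invented).

n = 100                      # fair lottery with exactly one winner
threshold = 0.95             # believe p iff credence(p) >= threshold

cred_ticket_i_loses = 1 - 1 / n          # 0.99 for each ticket i
cred_some_ticket_wins = 1.0

believes_each_loses = cred_ticket_i_loses >= threshold   # True
believes_some_wins = cred_some_ticket_wins >= threshold  # True

# The binarized belief set contains "ticket i loses" for every i and
# also "some ticket wins": a jointly inconsistent set, even though
# every underlying credence is individually high.
print(believes_each_loses, believes_some_wins)  # True True
```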
One recent topic of debate in Bayesian epistemology has been the question of whether imprecise credences can be rational. I argue that one account of imprecise credences, the orthodox treatment as defended by James M. Joyce, is untenable. Despite Joyce’s claims to the contrary, a puzzle introduced by Roger White shows that the orthodox account, when paired with Bas C. van Fraassen’s Reflection Principle, can lead to inconsistent beliefs. Proponents of imprecise credences, then, must either provide a compelling reason to reject Reflection or admit that the rational credences in White’s case are precise.
One guide to an argument's significance is the number and variety of refutations it attracts. By this measure, the Dutch book argument has considerable importance. Of course this measure alone is not a sure guide to locating arguments deserving of our attention—if a decisive refutation has really been given, we are better off pursuing other topics. But the presence of many and varied counterarguments at least suggests that either the refutations are controversial, or that their target admits of more than one interpretation, or both. The main point of this paper is to focus on a way of understanding the Dutch Book argument (DBA) that avoids many of the well-known criticisms, and to consider how it fares against an important criticism that still remains: the objection that the DBA presupposes value-independence of bets.
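For readers who want the arithmetic behind the DBA in front of them, here is the standard toy case in Python. The credences and stakes are invented, and the example deliberately ignores the value-independence worry the paper goes on to address.

```python
# A standard toy Dutch book: an agent whose credences in A and not-A sum
# to more than 1 will buy a pair of bets that guarantees a net loss.

cred_A, cred_notA = 0.6, 0.6           # incoherent: 0.6 + 0.6 > 1
stake = 1.0                             # each bet pays `stake` if it wins

# The agent regards cred * stake as a fair price for each bet.
price_paid = (cred_A + cred_notA) * stake

for A_is_true in (True, False):
    payoff = stake                      # exactly one of the two bets wins
    net = payoff - price_paid
    print(f"A={A_is_true}: net = {net:+.2f}")   # -0.20 in both cases
```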
This paper examines the preference-based approach to the identification of beliefs. It focuses on the main problem to which this approach is exposed, namely that of state-dependent utility. First, the problem is illustrated in full detail. Four types of state-dependent utility issues are distinguished. Second, a comprehensive strategy for identifying beliefs under state-dependent utility is presented and discussed. For the problem to be solved following this strategy, however, preferences need to extend beyond choices. We claim that this is a necessary feature of any complete solution to the problem of state-dependent utility. We also argue that this is the main conceptual lesson to draw from it. We show that this lesson is of interest to both economists and philosophers.
We offer a new argument for the claim that there can be non-degenerate objective chance (“true randomness”) in a deterministic world. Using a formal model of the relationship between different levels of description of a system, we show how objective chance at a higher level can coexist with its absence at a lower level. Unlike previous arguments for the level-specificity of chance, our argument shows, in a precise sense, that higher-level chance does not collapse into epistemic probability, despite higher-level properties supervening on lower-level ones. We show that the distinction between objective chance and epistemic probability can be drawn, and operationalized, at every level of description. There is, therefore, not a single distinction between objective and epistemic probability, but a family of such distinctions.
Probability can be used to measure degree of belief in two ways: objectively and subjectively. The objective measure is a measure of the rational degree of belief in a proposition given a set of evidential propositions. The subjective measure is the measure of a particular subject’s dispositions to decide between options. In both measures, certainty is a degree of belief of 1. I will show, however, that there can be cases where one belief is stronger than another yet both beliefs are plausibly measurable as objectively and subjectively certain. In ordinary language, we can say that while both beliefs are certain, one belief is more certain than the other. I will then propose a second, non-probabilistic dimension of measurement, which tracks this variation in certainty in such cases where the probability is 1. A general principle of rationality is that one’s subjective degree of belief should match the rational degree of belief given the evidence available. In this paper I hope to show that it is also a rational principle that the maximum stake size at which one should remain certain should match the rational weight of certainty given the evidence available. Neither objective nor subjective measures of certainty conform to the axioms of probability, but are instead measured in utility. This has the consequence that, although it is often rational to be certain to some degree, there is no such thing as absolute certainty.
In this dissertation, I construct scientifically and practically adequate moral analogs of cognitive heuristics and biases. Cognitive heuristics are reasoning “shortcuts” that are efficient but flawed. Such flaws yield systematic judgment errors—i.e., cognitive biases. For example, the availability heuristic infers an event’s probability by seeing how easy it is to recall similar events. Since dramatic events, such as airplane crashes, are disproportionately easy to recall, this heuristic explains systematic overestimations of their probability (availability bias). The research program on cognitive heuristics and biases (e.g., Daniel Kahneman’s work) has been scientifically successful and has yielded useful error-prevention techniques—i.e., cognitive debiasing. I attempt to apply this framework to moral reasoning to yield moral heuristics and biases. For instance, a moral bias of unjustified differences in the treatment of particular animal species might be partially explained by a moral heuristic that dubiously infers animals’ moral status from their aesthetic features. While the basis for identifying judgments as cognitive errors is often unassailable (e.g., violating laws of logic), identifying moral errors seemingly requires appealing to moral truth, which, I argue, is problematic within science. Such appeals can be avoided by repackaging moral theories as mere “standards-of-interest” (à la non-normative metrics of purportedly right-making features/properties). However, standards-of-interest do not provide authority, which is needed for effective debiasing. Nevertheless, since each person deems their own subjective morality authoritative, subjective morality (qua standard-of-interest and not moral subjectivism) satisfies both scientific and practical concerns. As such, (idealized) subjective morality grounds a moral analog of cognitive biases—namely, subjective moral biases (e.g., committed anti-racists unconsciously discriminating). I also argue that "cognitive heuristic" is defined by its contrast with rationality. Consequently, heuristics explain biases, which are also so defined. However, such contrasting with rationality is causally irrelevant to cognition. This frustrates the presumed usefulness of the kind heuristic in causal explanation. In the moral case, I therefore jettison the role of causal explanation and tailor categories solely for contrastive explanation. Accordingly, "moral heuristic" is replaced with "subjective moral fallacy," which is defined by its contrast with subjective morality and explains subjective moral biases. The resultant subjective moral biases and fallacies framework can undergird future empirical research.
In this paper, we provide a Bayesian analysis of the well-known surprise exam paradox. Central to our analysis is a probabilistic account of what it means for the student to accept the teacher's announcement that he will receive a surprise exam. According to this account, the student can be said to have accepted the teacher's announcement provided he adopts a subjective probability distribution relative to which he expects to receive the exam on a day on which he expects not to receive it. We show that as long as expectation is not equated with subjective certainty there will be contexts in which it is possible for the student to accept the teacher's announcement, in this sense. In addition, we show how a Bayesian modeling of the scenario can yield plausible explanations of the following three intuitive claims: (1) the teacher's announcement becomes easier to accept the more days there are in class; (2) a strict interpretation of the teacher's announcement does not provide the student with any categorical information as to the date of the exam; and (3) the teacher's announcement contains less information about the date of the exam the more days there are in class. To conclude, we show how the surprise exam paradox can be seen as one among the larger class of paradoxes of doxastic fallibilism, foremost among which is the paradox of the preface.
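One simple way to make the "expects not to receive it" idea concrete, assuming a uniform prior over exam days. This is my own illustration, not the authors' model.

```python
# Conditional 'hazard rates' for an exam under a uniform prior over n days.

def conditional_exam_probs(n):
    """P(exam is on day d | no exam on days before d), uniform prior."""
    return [1 / (n - d) for d in range(n)]   # d = 0 .. n-1

for n in (5, 10):
    probs = conditional_exam_probs(n)
    surprise_days = sum(1 for p in probs if p < 0.5)
    print(n, [round(p, 2) for p in probs], surprise_days)

# With n = 5 the conditional probabilities are 0.2, 0.25, 0.33, 0.5, 1.0:
# on the first three days the student expects NOT to get the exam, if
# 'expects' means 'assigns probability above 1/2' rather than certainty.
# As n grows, the proportion of such surprise days grows, in line with
# intuitive claim (1) above.
```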
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We further argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater’s (2007, 2008) model of conditional inference—contrary to the authors’ theoretical position—has to refer also to a frequency-based probability function.
Sometimes epistemologists theorize about belief, a tripartite attitude on which one can believe, withhold belief, or disbelieve a proposition. In other cases, epistemologists theorize about credence, a fine-grained attitude that represents one’s subjective probability or confidence level toward a proposition. How do these two attitudes relate to each other? This article explores the relationship between belief and credence in two categories: descriptive and normative. It then explains the broader significance of the belief-credence connection and concludes with general lessons from the debate thus far.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory, in particular Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some technical properties in relation to other axioms of the system.
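For reference, the two additivity conditions at issue; this is the standard textbook contrast, not Savage's own notation.

```latex
% For pairwise disjoint events A_1, A_2, \dots :
% Finite additivity (all that Savage's axioms deliver):
P(A_1 \cup \dots \cup A_n) = \sum_{i=1}^{n} P(A_i)
% Countable additivity (the stronger condition the paper introduces):
P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
```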
IBE ('Inference to the best explanation' or abduction) is a popular and highly plausible theory of how we should judge the evidence for claims of past events based on present evidence. It has been notably developed and supported recently by Meyer following Lipton. I believe this theory is essentially correct. This paper supports IBE from a probability perspective, and argues that the retrodictive probabilities involved in such inferences should be analysed in terms of predictive probabilities and a priori probability ratios of initial events. The key point is to separate these two features. Disagreements over evidence can be traced to disagreements over either the a priori probability ratios or predictive conditional ratios. In many cases, in real science, judgements of the former are necessarily subjective. The principles of iterated evidence are also discussed. The Sceptic's position is criticised as ignoring iteration of evidence, and characteristically failing to adjust a priori probability ratios in response to empirical evidence.
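The separation the abstract describes can be anchored in the ratio form of Bayes' theorem, which factors a retrodictive comparison into a predictive likelihood ratio and a prior ratio; the notation here is generic.

```latex
% Retrodictive (posterior) ratio = predictive ratio x a priori ratio:
\frac{P(H_1 \mid E)}{P(H_2 \mid E)}
  = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}
% Disputes about the evidence E can thus be traced to disputes about
% either factor on the right-hand side, taken separately.
```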
One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of an individual is a belief about where or when that temporal part is located). We analyze the Sleeping Beauty problem, the duplication version of the Sleeping Beauty problem, and various related problems.
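The definition of inaccuracy above is the Brier score, and the minimization it appeals to is easy to exhibit numerically. This generic sketch is not the paper's treatment of the Sleeping Beauty cases themselves.

```python
# Expected Brier inaccuracy of a credence, and its minimizer.

def expected_inaccuracy(credence, prob_true):
    """Expected squared error of holding `credence` in a proposition
    that is true with probability `prob_true`."""
    return prob_true * (1 - credence) ** 2 + (1 - prob_true) * credence ** 2

# Scan candidate credences for a proposition true with probability 1/3:
best = min((c / 100 for c in range(101)),
           key=lambda c: expected_inaccuracy(c, 1 / 3))
print(best)  # 0.33 -- expected inaccuracy is minimized at the probability itself
```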
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a Representation Theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
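For reference, here is risk-weighted expected utility in one standard formulation, following the general shape of Buchak's proposal; consult the paper for the official statement.

```latex
% Order the outcomes so that u(x_1) \le \dots \le u(x_n), with p_i the
% subjective probability of x_i and r a risk function with r(0)=0, r(1)=1:
\mathrm{REU} = u(x_1) + \sum_{j=2}^{n} r\!\Big(\sum_{i=j}^{n} p_i\Big)\,
               \big(u(x_j) - u(x_{j-1})\big)
% With r(p) = p this reduces to ordinary expected utility, so EU
% maximization is the special case the abstract mentions.
```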
De Finetti would claim that we can make sense of a draw in which each positive integer has equal probability of winning. This requires a uniform probability distribution over the natural numbers, violating countable additivity. Countable additivity thus appears not to be a fundamental constraint on subjective probability. It does, however, seem mandated by Dutch Book arguments similar to those that support the other axioms of the probability calculus as compulsory for subjective interpretations. These two lines of reasoning can be reconciled through a slight generalization of the Dutch Book framework. Countable additivity may indeed be abandoned for de Finetti's lottery, but this poses no serious threat to its adoption in most applications of subjective probability. Outline: 1. Introduction; 2. The de Finetti lottery; 3. Two objections to equiprobability (3.1 The ‘No random mechanism’ argument; 3.2 The Dutch Book argument); 4. Equiprobability and relative betting quotients; 5. The re-labelling paradox (5.1 The paradox; 5.2 Resolution: from symmetry to relative probability); 6. Beyond the de Finetti lottery.
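The violation of countable additivity by the de Finetti lottery is a two-line argument, reproduced here in generic notation.

```latex
% A uniform assignment over the positive integers must give every ticket
% the same probability; any positive value would make finite sums exceed 1, so
P(\{n\}) = 0 \quad \text{for every } n,
% yet the tickets exhaust the space, so countable additivity would demand
1 = P(\mathbb{N}) = \sum_{n=1}^{\infty} P(\{n\}) = 0,
% a contradiction. Merely finite additivity is untouched by the lottery.
```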
The Spohnian paradigm of ranking functions is in many respects like an order-of-magnitude reverse of subjective probability theory. Unlike probabilities, however, ranking functions are only indirectly defined on a field of propositions A over W, via a pointwise ranking function on the underlying set of possibilities W. This research note shows under which conditions ranking functions on a field of propositions A over W, and rankings on a language L, are induced by pointwise ranking functions on W and on the set of models for L, ModL, respectively.
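The inducing construction at issue can be stated in one line; the symbols below are generic rather than the note's own.

```latex
% A pointwise ranking function \kappa : W \to \mathbb{N} \cup \{\infty\}
% with \min_{w \in W} \kappa(w) = 0 induces a ranking function on
% propositions A \subseteq W by taking minima:
\varrho(A) = \min_{w \in A} \kappa(w), \qquad \varrho(\emptyset) = \infty
% This mirrors, in order-of-magnitude form, the probabilistic clause
% P(A) = \sum_{w \in A} P(\{w\}).
```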
Crupi et al. propose a generalization of Bayesian confirmation theory that they claim to deal adequately with confirmation by uncertain evidence. Consider a series of points of time t_0, …, t_i, …, t_n such that the agent's subjective probability for an atomic proposition E changes from Pr_0 at t_0 to … to Pr_i at t_i to … to Pr_n at t_n. It is understood that the agent's subjective probabilities change for E and no logically stronger proposition, and that the agent updates her subjective probabilities by Jeffrey conditionalization. For this specific scenario the authors propose to take the difference between Pr_0 and Pr_i as the degree to which E confirms H for the agent at time t_i, C_{0,i}. This proposal is claimed to be adequate, because…
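Since the scenario assumes updating by Jeffrey conditionalization, a minimal sketch of that rule may help; the function and numbers below are mine, not the authors'.

```python
# Jeffrey conditionalization: when experience shifts the probability of E
# to q without making E certain, every hypothesis H is updated by
#   Pr_new(H) = Pr(H | E) * q + Pr(H | not-E) * (1 - q).

def jeffrey_update(p_H_given_E, p_H_given_notE, q):
    """New probability of H after E's probability shifts to q."""
    return p_H_given_E * q + p_H_given_notE * (1 - q)

# Example: Pr(H|E) = 0.8, Pr(H|not-E) = 0.2, E's probability shifts to 0.9:
print(jeffrey_update(0.8, 0.2, 0.9))  # 0.74
```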
You have a crystal ball. Unfortunately, it’s defective. Rather than predicting the future, it gives you the chances of future events. Is it then of any use? It certainly seems so. You may not know for sure whether the stock market will crash next week; but if you know for sure that it has an 80% chance of crashing, then you should be 80% confident that it will, and you should plan accordingly. More generally, given that the chance of a proposition A is x%, your conditional credence in A should be x%. This is a chance-credence principle: a principle relating chance (objective probability) with credence (subjective probability, degree of belief). Let’s call it the Minimal Principle (MP).
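Formally, the Minimal Principle as glossed above (the notation is mine):

```latex
% Minimal Principle (MP): conditional on the chance of A being x,
% one's credence in A should be x.
Cr\big(A \mid \mathrm{Ch}(A) = x\big) = x
```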
The article is a plea for ethicists to regard probability as one of their most important concerns. It outlines a series of topics of central importance in ethical theory in which probability is implicated, often in a surprisingly deep way, and lists a number of open problems. Topics covered include: interpretations of probability in ethical contexts; the evaluative and normative significance of risk or uncertainty; uses and abuses of expected utility theory; veils of ignorance; Harsanyi’s aggregation theorem; population size problems; equality; fairness; giving priority to the worse off; continuity; incommensurability; nonexpected utility theory; evaluative measurement; aggregation; causal and evidential decision theory; act consequentialism; rule consequentialism; and deontology.
The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.
Few of Kant’s distinctions have generated as much puzzlement and criticism as the one he draws in the Prolegomena between judgments of experience, which he describes as objectively and universally valid, and judgments of perception, which he says are merely subjectively valid. Yet the distinction between objective and subjective validity is central to Kant’s account of experience and plays a key role in his Transcendental Deduction of the categories. In this paper, I reject a standard interpretation of the distinction, according to which judgments of perception are merely subjectively valid because they are made without sufficient investigation. In its place, I argue that for Kant, judgments of perception are merely subjectively valid because they merely report sequences of perceptions had by a subject without claiming that what is represented by the perceptions is connected in the objects the perceptions are of. Whereas the interpretation I criticize undercuts Kant’s strategy in the Deduction, I argue, my interpretation illuminates it.
This article explores theoretical conditions necessary for “quantum immortality” (QI) as well as its possible practical implications. It is demonstrated that QI is a particular case of “multiverse immortality” (MI), which is based on two main assumptions: the very large size of the Universe (not necessarily because of quantum effects), and a copy-friendly theory of personal identity. It is shown that a popular objection about the lowering of the world-share (measure) of an observer in the case of QI does not work, as the world-share decline could be compensated by the merging of timelines for simpler minds, and also some types of personal preferences are not dependent on such changes. Despite large uncertainty about MI’s validity, it still has appreciable practical consequences in some important domains, like suicide and aging. The article demonstrates that MI could be used to significantly increase the expected subjective probability of success of risky life-extension technologies, like cryonics, but makes euthanasia impractical, because of the risks of eternal suffering. Euthanasia should be replaced with cryothanasia, i.e., cryopreservation after voluntary death. Another possible application of MI is as a last chance to survive a global catastrophe. MI could be considered a plan D for reaching immortality, where plan A consists of survival until the creation of beneficial AI via fighting aging, plan B is cryonics, and plan C is digital immortality.
In this thought-provoking book, Richard Healey proposes a new interpretation of quantum theory inspired by pragmatist philosophy. Healey puts forward the interpretation as an alternative to realist quantum theories on the one hand, such as Bohmian mechanics, spontaneous collapse theories, and many-worlds interpretations, which are different proposals for describing what the quantum world is like and what the basic laws of physics are, and non-realist interpretations on the other hand, such as quantum Bayesianism, which proposes to understand quantum theory as describing agents’ subjective epistemic states. The central idea of Healey’s proposal is to understand quantum theory as providing not a description of the physical world but a set of authoritative and objectively correct prescriptions about how agents should act. The book provides a detailed development and defense of that idea, and it contains interesting discussions about a wide range of philosophical issues such as representation, probability, explanation, causation, objectivity, meaning, and fundamentality.
This paper motivates and develops a novel semantic framework for deontic modals. The framework is designed to shed light on two things: the relationship between deontic modals and substantive theories of practical rationality, and the interaction of deontic modals with conditionals, epistemic modals and probability operators. I argue that, in order to model inferential connections between deontic modals and probability operators, we need more structure than is provided by classical intensional theories. In particular, we need probabilistic structure that interacts directly with the compositional semantics of deontic modals. However, I reject theories that provide this probabilistic structure by claiming that the semantics of deontic modals is linked to the Bayesian notion of expectation. I offer a probabilistic premise semantics that explains all the data that create trouble for the rival theories.
This book explores a question central to philosophy--namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it--roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as into other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
Twentieth-century philosophers introduced the distinction between “objective rightness” and “subjective rightness” to achieve two primary goals. The first goal is to reduce the paradoxical tension between our judgments of (i) what is best for an agent to do in light of the actual circumstances in which she acts and (ii) what is wisest for her to do in light of her mistaken or uncertain beliefs about her circumstances. The second goal is to provide moral guidance to an agent who may be uncertain about the circumstances in which she acts, and hence is unable to use her standard moral principle directly in deciding what to do. This paper distinguishes two important senses of “moral guidance”; proposes criteria of adequacy for accounts of subjective rightness; canvasses existing definitions of “subjective rightness”; finds them all deficient; and proposes a new and more successful account. It argues that each comprehensive moral theory must include multiple principles of subjective rightness to address the epistemic situations of the full range of moral decision-makers, and shows that accounts of subjective rightness formulated in terms of what it would be reasonable for the agent to believe cannot provide that guidance.
In finite probability theory, events are subsets S⊆U of the outcome set. Subsets can be represented by one-dimensional column vectors. By extending the representation of events to two-dimensional matrices, we can introduce "superposition events." Probabilities are introduced for classical events, superposition events, and their mixtures by using density matrices. Then probabilities for experiments or 'measurements' of all these events can be determined in a manner exactly like in quantum mechanics (QM) using density matrices. Moreover, the transformation of the density matrices induced by the experiments or 'measurements' is the Lüders mixture operation, as in QM. Finally, by moving the machinery into the n-dimensional vector space over ℤ₂, different basis sets become different outcome sets. That 'non-commutative' extension of finite probability theory yields the pedagogical model of quantum mechanics over ℤ₂ that can model many characteristic non-classical results of QM.
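A small numerical sketch of the classical-event case described above, using NumPy. The outcome set and numbers are invented, and the paper's distinctive superposition events and ℤ₂ machinery are not reproduced here.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])      # distribution over outcome set U = {0,1,2}
rho = np.diag(p)                    # density matrix of a classical state

S = [0, 1]                          # event S ⊆ U
P_S = np.diag([1.0 if i in S else 0.0 for i in range(3)])

prob_S = np.trace(P_S @ rho)        # Born-rule-style probability ≈ 0.8

# Lüders mixture: post-measurement state for the {S, not-S} experiment.
P_notS = np.eye(3) - P_S
rho_post = P_S @ rho @ P_S + P_notS @ rho @ P_notS

# For commuting (classical, diagonal) events the state is undisturbed;
# the non-classical behavior in the paper comes from superposition events.
print(prob_S, np.diag(rho_post))    # ≈ 0.8 [0.5 0.3 0.2]
```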
This paper demarcates a theoretically interesting class of "evaluational adjectives." This class includes predicates expressing various kinds of normative and epistemic evaluation, such as predicates of personal taste, aesthetic adjectives, moral adjectives, and epistemic adjectives, among others. Evaluational adjectives are distinguished, empirically, in exhibiting phenomena such as discourse-oriented use, felicitous embedding under the attitude verb `find', and sorites-susceptibility in the comparative form. A unified degree-based semantics is developed: What distinguishes evaluational adjectives, semantically, is that they denote context-dependent measure functions ("evaluational perspectives")—context-dependent mappings to degrees of taste, beauty, probability, etc., depending on the adjective. This perspective-sensitivity characterizing the class of evaluational adjectives cannot be assimilated to vagueness, sensitivity to an experiencer argument, or multidimensionality; and it cannot be demarcated in terms of pretheoretic notions of subjectivity, common in the literature. I propose that certain diagnostics for "subjective" expressions be analyzed instead in terms of a precisely specified kind of discourse-oriented use of context-sensitive language. I close by applying the account to `find x PRED' ascriptions.
We provide a 'verisimilitudinarian' analysis of the well-known Linda paradox or conjunction fallacy, i.e., the fact that most people judge the conjunctive statement "Linda is a bank teller and is active in the feminist movement" (B & F) to be more probable than the isolated statement "Linda is a bank teller" (B), contrary to an uncontroversial principle of probability theory. The basic idea is that experimental participants may judge B & F a better hypothesis about Linda as compared to B because they evaluate B & F as more verisimilar than B. In fact, the hypothesis "feminist bank teller," while less likely to be true than "bank teller," may well be a better approximation to the truth about Linda.
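The "uncontroversial principle" in question is the conjunction inequality, which follows immediately from the monotonicity of probability.

```latex
% Since B \wedge F entails B (as events, B \wedge F \subseteq B):
P(B \wedge F) \le P(B)
% So no coherent probability assignment can rank B \wedge F above B,
% which is why the Linda judgments count as a fallacy.
```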