Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers some of the most well-known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
This article deals with the nature of the objective-subjective dichotomy, first from a general historical point of view, and then with regard to the use of these terms over time to describe theories of probability. The different (metaphysical and epistemological) meanings of “objective” and “subjective” are analyzed, and then used to show that all probability theories can be divided into three broad classes.
In his classic book “The Foundations of Statistics”, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
Stochastic independence (SI) has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory, hence a property that any theory on the foundations of probability should be able to account for. Bayesian decision theory, which is one such theory, appears to be wanting in this respect. In Savage's classic treatment, postulates on preferences under uncertainty are shown to entail a subjective expected utility (SEU) representation, and this permits asserting only the existence and uniqueness of a subjective probability, regardless of its properties. What is missing is a preference postulate that would specifically connect with the SI property. The paper develops a version of Bayesian decision theory that fills this gap. In a framework of multiple sources of uncertainty, we introduce preference conditions that jointly entail the SEU representation and the property that the subjective probability in this representation treats the sources of uncertainty as being stochastically independent. We give two representation theorems of graded complexity to demonstrate the power of our preference conditions. Two sections of comments follow, one connecting the theorems with earlier results in Bayesian decision theory, and the other connecting them with the foundational discussion on SI in probability theory and the philosophy of probability. Appendices offer more technical material.
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite-horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
According to a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey and, above all, Harsanyi, under the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
Taking the philosophical standpoint, this article compares the mathematical theory of individual decision-making with the folk psychology conception of action, desire and belief. It narrows down its topic by carrying out the comparison with respect to Savage's system and its technical concept of subjective probability, which is referred back to Ramsey's basic model of betting. The argument is organized around three philosophical theses: (i) decision theory is nothing but folk psychology stated in formal language (Lewis), (ii) the former substantially improves on the latter, but is unable to overcome its typical limitations, especially its failure to separate desire and belief empirically (Davidson), (iii) the former substantially improves on the latter, and through these innovations, overcomes some of the limitations. The aim of the article is to establish (iii) not only against the all too simple thesis (i), but also against the subtle thesis (ii).
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as far as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance. The paper complies with the standard economic methodology of basing probability and utility representations on preference axioms, but for the sake of completeness, also considers a construal of objective uncertainty based on the assumption of an exogenously given probability measure. JEL classification: D70; D81.
In probability discounting (or probability weighting), a decision-maker multiplies the value of an outcome by her subjective probability that the outcome will obtain. The broader import of defending probability discounting is to help justify cost-benefit analyses in contexts such as climate change. This chapter defends probability discounting under risk both negatively, against arguments by Simon Caney (2008, 2009), and with a new positive argument. First, in responding to Caney, I argue that small costs and benefits need to be evaluated, and that viewing practices at the social level is too coarse-grained. Second, I argue for probability discounting, using a distinction between causal responsibility and moral responsibility. Moral responsibility can be cashed out in terms of blameworthiness and praiseworthiness, while causal responsibility obtains in full for any effect which is part of a causal chain linked to one's act. With this distinction in hand, moral responsibility, unlike causal responsibility, can be seen as coming in degrees. My argument is that, given that we can limit our deliberation and consideration to that for which we are morally responsible, and that our moral responsibility for outcomes is limited by our subjective probabilities, our subjective probabilities can ground probability discounting.
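The discounting rule just described admits a one-line formalization: the value of a prospect is the probability-weighted sum of its outcomes' values. The following is a minimal sketch; the policy, its payoffs, and the probabilities are hypothetical illustrations, not figures from the chapter.

```python
# Probability discounting: each outcome's value is multiplied ("discounted")
# by the agent's subjective probability that it will obtain.

def discounted_value(outcomes):
    """outcomes: list of (value, subjective_probability) pairs."""
    return sum(value * prob for value, prob in outcomes)

# A risky policy: large benefit with low probability, small cost with
# high probability (illustrative figures only).
policy = [(1000.0, 0.05), (-10.0, 0.95)]
print(discounted_value(policy))  # 1000*0.05 - 10*0.95 = 40.5
```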
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
How can different individuals' probability functions on a given sigma-algebra of events be aggregated into a collective probability function? Classic approaches to this problem often require 'event-wise independence': the collective probability for each event should depend only on the individuals' probabilities for that event. In practice, however, some events may be 'basic' and others 'derivative', so that it makes sense first to aggregate the probabilities for the former and then to let these constrain the probabilities for the latter. We formalize this idea by introducing a 'premise-based' approach to probabilistic opinion pooling, and show that, under a variety of assumptions, it leads to linear or neutral opinion pooling on the 'premises'. This paper is the second of two self-contained but technically related companion papers inspired by binary judgment-aggregation theory.
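Linear opinion pooling, one of the rules the theorems single out for the 'premises', can be sketched as follows; the credences and weights below are hypothetical.

```python
# Linear opinion pooling: the group probability for a fixed event is a
# weighted average of the individuals' probabilities for that event.

def linear_pool(probabilities, weights):
    """probabilities: one credence per individual for the same event."""
    assert abs(sum(weights) - 1.0) < 1e-9   # weights must sum to 1
    return sum(w * p for w, p in zip(weights, probabilities))

# Three individuals' credences in a basic ('premise') event:
group_credence = linear_pool([0.2, 0.5, 0.8], weights=[0.25, 0.5, 0.25])
print(group_credence)  # 0.5
```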
In “Can it be rational to have faith?”, it was argued that to have faith in some proposition consists, roughly speaking, in stopping one’s search for evidence and committing to act on that proposition without further evidence. That paper also outlined when and why stopping the search for evidence and acting is rationally required. Because the framework of that paper was that of formal decision theory, it primarily considered the relationship between faith and degrees of belief, rather than between faith and belief full stop. This paper explores the relationship between rational faith and justified belief, by considering four prominent proposals about the relationship between belief and degrees of belief, and by examining what follows about faith and belief according to each of these proposals. It is argued that we cannot reach consensus concerning the relationship between faith and belief at present because of the more general epistemological lack of consensus over how belief relates to rationality: in particular, over how belief relates to the degrees of belief it is rational to have given one’s evidence.
Can an agent deliberating about an action A hold a meaningful credence that she will do A? 'No', say some authors, for 'Deliberation Crowds Out Prediction' (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce's rejection of DCOP rests on terminological choices about terms such as 'intention', 'prediction', and 'belief'. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce's Evidential Autonomy Thesis (EAT) is effectively DCOP, in different terminological clothing. Both principles rest on the so-called 'transparency' of first-person present-tensed reflection on one's own mental states.
One guide to an argument's significance is the number and variety of refutations it attracts. By this measure, the Dutch Book argument has considerable importance. Of course this measure alone is not a sure guide to locating arguments deserving of our attention—if a decisive refutation has really been given, we are better off pursuing other topics. But the presence of many and varied counterarguments at least suggests that either the refutations are controversial, or that their target admits of more than one interpretation, or both. The main point of this paper is to focus on a way of understanding the Dutch Book argument (DBA) that avoids many of the well-known criticisms, and to consider how it fares against an important criticism that still remains: the objection that the DBA presupposes value-independence of bets.
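The DBA's core calculation can be illustrated with a small sketch. Assuming an agent whose credences in A and not-A sum to more than 1, and who regards a bet on a proposition as fair when its price equals her credence times the stake, buying both bets yields a loss however A turns out. The credences and stake below are hypothetical.

```python
# Dutch Book illustration: incoherent credences license a sure loss.

def sure_loss(cred_A, cred_notA, stake=1.0):
    """The agent buys a bet paying `stake` if A, priced at cred_A * stake,
    and likewise for not-A. Returns her net payoff in each state of the world:
    (payoff if A is true, payoff if A is false)."""
    price = (cred_A + cred_notA) * stake   # total paid for both bets
    payoff_if_A = stake - price            # only the A-bet pays off
    payoff_if_notA = stake - price         # only the not-A-bet pays off
    return payoff_if_A, payoff_if_notA

# Incoherent credences: P(A) + P(not-A) = 1.2 > 1
print(sure_loss(0.7, 0.5))  # each entry is about -0.2: a loss either way
```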
One recent topic of debate in Bayesian epistemology has been the question of whether imprecise credences can be rational. I argue that one account of imprecise credences, the orthodox treatment as defended by James M. Joyce, is untenable. Despite Joyce’s claims to the contrary, a puzzle introduced by Roger White shows that the orthodox account, when paired with Bas C. van Fraassen’s Reflection Principle, can lead to inconsistent beliefs. Proponents of imprecise credences, then, must either provide a compelling reason to reject Reflection or admit that the rational credences in White’s case are precise.
We offer a new argument for the claim that there can be non-degenerate objective chance (“true randomness”) in a deterministic world. Using a formal model of the relationship between different levels of description of a system, we show how objective chance at a higher level can coexist with its absence at a lower level. Unlike previous arguments for the level-specificity of chance, our argument shows, in a precise sense, that higher-level chance does not collapse into epistemic probability, despite higher-level properties supervening on lower-level ones. We show that the distinction between objective chance and epistemic probability can be drawn, and operationalized, at every level of description. There is, therefore, not a single distinction between objective and epistemic probability, but a family of such distinctions.
A group is often construed as a single agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated contribution “Groupthink”, Russell et al. (2015) apply the Bayesian paradigm to groups by requiring group credences to undergo a Bayesian revision whenever new information is learnt, i.e., whenever the individual credences undergo a Bayesian revision based on this information. Bayesians should often strengthen this requirement by extending it to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not corresponding to an event in the algebra on which credences are held). I propose a taxonomy of six kinds of 'group Bayesianism', which differ in the type of information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, and so on. Six corresponding theorems establish exactly how individual credences must (not) be aggregated such that the resulting group credences obey group Bayesianism of any given type, respectively. Aggregating individual credences through averaging is never permitted. One of the theorems – the one concerned with public representable information – is essentially Russell et al.'s central result (with minor corrections).
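The claim that averaging is never permitted can be illustrated for the simplest case, public representable information: conditionalizing the average of two credence functions on an event E generally differs from averaging the individually conditionalized functions. The joint credences below are hypothetical.

```python
# Averaging conflicts with group Bayesianism: updating the average
# differs from averaging the updates.

def conditional(p_H_and_E, p_E):
    """Bayesian conditionalization: P(H|E) = P(H and E) / P(E)."""
    return p_H_and_E / p_E

# Two individuals' joint credences in (H and E) and in E:
p1_HE, p1_E = 0.4, 0.5   # so P1(H|E) = 0.8
p2_HE, p2_E = 0.2, 0.8   # so P2(H|E) = 0.25

# Conditionalize the averaged credences on E:
posterior_of_average = conditional((p1_HE + p2_HE) / 2, (p1_E + p2_E) / 2)

# Average the individually conditionalized credences:
average_of_posteriors = (conditional(p1_HE, p1_E) + conditional(p2_HE, p2_E)) / 2

print(round(posterior_of_average, 4), average_of_posteriors)  # about 0.4615 vs 0.525
```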
In this paper, we provide a Bayesian analysis of the well-known surprise exam paradox. Central to our analysis is a probabilistic account of what it means for the student to accept the teacher's announcement that he will receive a surprise exam. According to this account, the student can be said to have accepted the teacher's announcement provided he adopts a subjective probability distribution relative to which he expects to receive the exam on a day on which he expects not to receive it. We show that as long as expectation is not equated with subjective certainty there will be contexts in which it is possible for the student to accept the teacher's announcement, in this sense. In addition, we show how a Bayesian modeling of the scenario can yield plausible explanations of the following three intuitive claims: (1) the teacher's announcement becomes easier to accept the more days there are in class; (2) a strict interpretation of the teacher's announcement does not provide the student with any categorical information as to the date of the exam; and (3) the teacher's announcement contains less information about the date of the exam the more days there are in class. To conclude, we show how the surprise exam paradox can be seen as one among the larger class of paradoxes of doxastic fallibilism, foremost among which is the paradox of the preface.
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We further argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater’s (2007, 2008) model of conditional inference—contrary to the authors’ theoretical position—has to refer also to a frequency-based probability function.
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory -- in particular, Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some technical properties in relation to other axioms of the system.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
IBE ('Inference to the best explanation' or abduction) is a popular and highly plausible theory of how we should judge the evidence for claims of past events based on present evidence. It has been notably developed and supported recently by Meyer following Lipton. I believe this theory is essentially correct. This paper supports IBE from a probability perspective, and argues that the retrodictive probabilities involved in such inferences should be analysed in terms of predictive probabilities and a priori probability ratios of initial events. The key point is to separate these two features. Disagreements over evidence can be traced to disagreements over either the a priori probability ratios or predictive conditional ratios. In many cases, in real science, judgements of the former are necessarily subjective. The principles of iterated evidence are also discussed. The Sceptic's position is criticised as ignoring iteration of evidence, and characteristically failing to adjust a priori probability ratios in response to empirical evidence.
One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of an individual is a belief about where or when that temporal part is located). We analyze the Sleeping Beauty problem, the duplication version of the Sleeping Beauty problem, and various related problems.
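The measure just described can be sketched directly. For a single proposition with probability p of being true, the expected inaccuracy of a credence c is p(1 − c)² + (1 − p)c², and a grid search confirms that it is minimized by setting c = p. The value p = 0.3 is an arbitrary illustration.

```python
# Expected inaccuracy (squared-error / Brier-style) and its minimizer.

def expected_inaccuracy(credence, p):
    """p * (1 - c)**2 if the proposition is true, (0 - c)**2 if false,
    weighted by the probability p that it is true."""
    return p * (1 - credence) ** 2 + (1 - p) * credence ** 2

p = 0.3
# Search a grid of candidate credences for the one with least expected inaccuracy:
best = min((c / 1000 for c in range(1001)), key=lambda c: expected_inaccuracy(c, p))
print(best)  # 0.3 -- expected inaccuracy is minimized by matching credence to p
```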
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a Representation Theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
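A hedged sketch of the rank-dependent form in which REU is commonly presented: order the outcomes from worst to best, and weight each increment of utility by the risk-transformed probability of getting at least that much. This is one reading of the theory, not the paper's own formalism, and the risk function r(p) = p² is an illustrative risk-avoidant choice; with r(p) = p the formula reduces to ordinary expected utility.

```python
# Rank-dependent sketch of risk-weighted expected utility (REU).

def reu(gamble, r):
    """gamble: list of (utility, probability) pairs, in any order.
    r: risk function from [0, 1] to [0, 1] with r(0) = 0 and r(1) = 1."""
    gamble = sorted(gamble)                       # worst outcome first
    utils = [u for u, _ in gamble]
    probs = [p for _, p in gamble]
    value = utils[0]                              # guaranteed minimum
    for j in range(1, len(gamble)):
        # weight each utility increment by r(prob of doing at least this well)
        value += r(sum(probs[j:])) * (utils[j] - utils[j - 1])
    return value

coin_flip = [(0.0, 0.5), (100.0, 0.5)]            # fair 50-50 gamble
print(reu(coin_flip, lambda p: p))        # 50.0 -- reduces to expected utility
print(reu(coin_flip, lambda p: p ** 2))   # 25.0 -- risk-avoidant valuation
```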
The Spohnian paradigm of ranking functions is in many respects like an order-of-magnitude reverse of subjective probability theory. Unlike probabilities, however, ranking functions are only indirectly—via a pointwise ranking function on the underlying set of possibilities W—defined on a field of propositions A over W. This research note shows under which conditions ranking functions on a field of propositions A over W and rankings on a language L are induced by pointwise ranking functions on W and the set of models for L, ModL, respectively.
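The indirect definition mentioned above can be sketched concretely: a pointwise ranking function on W induces a ranking function on propositions by taking the minimum rank of the worlds in a proposition, so that some world gets rank 0 and the rank of a disjunction is the minimum of the disjuncts' ranks. The three-world model is a hypothetical illustration.

```python
# A pointwise ranking function on worlds inducing ranks on propositions.

pointwise = {"w1": 0, "w2": 1, "w3": 2}   # ranks of worlds; some world has rank 0

def rank(proposition):
    """proposition: a non-empty set of worlds; its rank is the minimum
    rank of the worlds it contains."""
    return min(pointwise[w] for w in proposition)

A = {"w2", "w3"}
B = {"w1", "w3"}
print(rank(A), rank(B), rank(A | B))  # 1 0 0 -- rank(A or B) = min(rank(A), rank(B))
```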
De Finetti would claim that we can make sense of a draw in which each positive integer has equal probability of winning. This requires a uniform probability distribution over the natural numbers, violating countable additivity. Countable additivity thus appears not to be a fundamental constraint on subjective probability. It does, however, seem mandated by Dutch Book arguments similar to those that support the other axioms of the probability calculus as compulsory for subjective interpretations. These two lines of reasoning can be reconciled through a slight generalization of the Dutch Book framework. Countable additivity may indeed be abandoned for de Finetti's lottery, but this poses no serious threat to its adoption in most applications of subjective probability. 1 Introduction 2 The de Finetti lottery 3 Two objections to equiprobability 3.1 The ‘No random mechanism’ argument 3.2 The Dutch Book argument 4 Equiprobability and relative betting quotients 5 The re-labelling paradox 5.1 The paradox 5.2 Resolution: from symmetry to relative probability 6 Beyond the de Finetti lottery.
Crupi et al. propose a generalization of Bayesian confirmation theory that they claim adequately deals with confirmation by uncertain evidence. Consider a series of points of time t0, . . . , ti, . . . , tn such that the agent’s subjective probability for an atomic proposition E changes from Pr0 at t0 to . . . to Pri at ti to . . . to Prn at tn. It is understood that the agent’s subjective probabilities change for E and no logically stronger proposition, and that the agent updates her subjective probabilities by Jeffrey conditionalization. For this specific scenario the authors propose to take the difference between Pr0 and Pri as the degree to which E confirms H for the agent at time ti, C0,i. This proposal is claimed to be adequate, because.
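Jeffrey conditionalization, the update rule assumed in the scenario, can be sketched as follows. On one natural reading of the proposal, the degree of confirmation C0,i is then the resulting change in the probability of H between t0 and ti. All credences below are hypothetical, and the fixed conditional probabilities are an assumption of the sketch.

```python
# Jeffrey conditionalization: when the probability of E shifts (without E
# being learned for certain), the probability of H is recomputed while the
# conditional probabilities Pr(H|E) and Pr(H|~E) are held fixed.

def jeffrey_update(p_H_given_E, p_H_given_notE, new_p_E):
    """Pr_i(H) = Pr(H|E) * Pr_i(E) + Pr(H|~E) * Pr_i(~E)."""
    return p_H_given_E * new_p_E + p_H_given_notE * (1 - new_p_E)

p_H_given_E, p_H_given_notE = 0.9, 0.2
pr0_E, pri_E = 0.3, 0.8                      # E's probability rises from t0 to ti

pr0_H = jeffrey_update(p_H_given_E, p_H_given_notE, pr0_E)   # 0.41
pri_H = jeffrey_update(p_H_given_E, p_H_given_notE, pri_E)   # 0.76
confirmation = pri_H - pr0_H                 # C_{0,i} read as a difference measure
print(round(pr0_H, 2), round(pri_H, 2), round(confirmation, 2))
```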
You have a crystal ball. Unfortunately, it’s defective. Rather than predicting the future, it gives you the chances of future events. Is it then of any use? It certainly seems so. You may not know for sure whether the stock market will crash next week; but if you know for sure that it has an 80% chance of crashing, then you should be 80% confident that it will—and you should plan accordingly. More generally, given that the chance of a proposition A is x%, your conditional credence in A should be x%. This is a chance-credence principle: a principle relating chance (objective probability) with credence (subjective probability, degree of belief). Let’s call it the Minimal Principle (MP).
The article is a plea for ethicists to regard probability as one of their most important concerns. It outlines a series of topics of central importance in ethical theory in which probability is implicated, often in a surprisingly deep way, and lists a number of open problems. Topics covered include: interpretations of probability in ethical contexts; the evaluative and normative significance of risk or uncertainty; uses and abuses of expected utility theory; veils of ignorance; Harsanyi’s aggregation theorem; population size problems; equality; fairness; giving priority to the worse off; continuity; incommensurability; nonexpected utility theory; evaluative measurement; aggregation; causal and evidential decision theory; act consequentialism; rule consequentialism; and deontology.
A theory of what we should believe should include a theory of what we should believe when we are uncertain about what we should believe and/or uncertain about the factors that determine what we should believe. In this paper, I present a novel theory of what we should believe that gives normative externalists a way of responding to a suite of objections having to do with various kinds of error, ignorance, and uncertainty. This theory is inspired by recent work in ethical theory in which non-consequentialists 'consequentialize' their theories and then use the tools of decision-theory to give us an account of what we ought (in some sense) to do when we're uncertain about what we ought (in some primary sense) to do. On my proposal, because what we ought to do is acquire knowledge and avoid ignorance, we ought to believe iff the probability of coming to know is sufficiently high. This view has a number of important virtues. Among them, it gives us a unified story about how defeaters defeat (a theory developed with Julien Dutant), explains puzzling intuitions about the differences between lottery cases, preface cases, and cases of perceptual knowledge, and provides externalists (and internalists!) with a general framework for thinking about subjective normativity. It's also relatively brief.
The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.
This article explores theoretical conditions necessary for “quantum immortality” (QI) as well as its possible practical implications. It is demonstrated that QI is a particular case of “multiverse immortality” (MI), which is based on two main assumptions: the very large size of the Universe (not necessarily because of quantum effects), and a copy-friendly theory of personal identity. It is shown that a popular objection about the lowering of the world-share (measure) of an observer in the case of QI doesn’t work, as the world-share decline could be compensated by the merging of timelines for simpler minds, and some types of personal preferences do not depend on such changes. Despite large uncertainty about MI’s validity, it still has appreciable practical consequences for some important decisions, like suicide and aging. The article demonstrates that MI could be used to significantly increase the expected subjective probability of success of risky life-extension technologies, like cryonics, but makes euthanasia impractical, because of the risk of eternal suffering. Euthanasia should be replaced with cryothanasia, i.e. cryopreservation after voluntary death. Another possible application of MI is as a last chance to survive a global catastrophe. MI could be considered a plan D for reaching immortality, where plan A consists of surviving until the creation of beneficial AI by fighting aging, plan B is cryonics, and plan C is digital immortality.
This paper demarcates a theoretically interesting class of "evaluational adjectives." This class includes predicates expressing various kinds of normative and epistemic evaluation, such as predicates of personal taste, aesthetic adjectives, moral adjectives, and epistemic adjectives, among others. Evaluational adjectives are distinguished, empirically, in exhibiting phenomena such as discourse-oriented use, felicitous embedding under the attitude verb `find', and sorites-susceptibility in the comparative form. A unified degree-based semantics is developed: What distinguishes evaluational adjectives, semantically, is that they denote context-dependent measure functions ("evaluational perspectives")—context-dependent mappings to degrees of taste, beauty, probability, etc., depending on the adjective. This perspective-sensitivity characterizing the class of evaluational adjectives cannot be assimilated to vagueness, sensitivity to an experiencer argument, or multidimensionality; and it cannot be demarcated in terms of pretheoretic notions of subjectivity, common in the literature. I propose that certain diagnostics for "subjective" expressions be analyzed instead in terms of a precisely specified kind of discourse-oriented use of context-sensitive language. I close by applying the account to `find x PRED' ascriptions.
This book explores a question central to philosophy--namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it--roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
This paper motivates and develops a novel semantic framework for deontic modals. The framework is designed to shed light on two things: the relationship between deontic modals and substantive theories of practical rationality, and the interaction of deontic modals with conditionals, epistemic modals and probability operators. I argue that, in order to model inferential connections between deontic modals and probability operators, we need more structure than is provided by classical intensional theories. In particular, we need probabilistic structure that interacts directly with the compositional semantics of deontic modals. However, I reject theories that provide this probabilistic structure by claiming that the semantics of deontic modals is linked to the Bayesian notion of expectation. I offer a probabilistic premise semantics that explains all the data that create trouble for the rival theories.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as in other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
Twentieth century philosophers introduced the distinction between “objective rightness” and “subjective rightness” to achieve two primary goals. The first goal is to reduce the paradoxical tension between our judgments of (i) what is best for an agent to do in light of the actual circumstances in which she acts and (ii) what is wisest for her to do in light of her mistaken or uncertain beliefs about her circumstances. The second goal is to provide moral guidance to an agent who may be uncertain about the circumstances in which she acts, and hence is unable to use her standard moral principle directly in deciding what to do. This paper distinguishes two important senses of “moral guidance”; proposes criteria of adequacy for accounts of subjective rightness; canvasses existing definitions of “subjective rightness”; finds them all deficient; and proposes a new and more successful account. It argues that each comprehensive moral theory must include multiple principles of subjective rightness to address the epistemic situations of the full range of moral decision-makers, and shows that accounts of subjective rightness formulated in terms of what it would be reasonable for the agent to believe cannot provide that guidance.
While much has been written about the functional profile of intentions, and about their normative or rational status, comparatively little has been said about the subjective authority of intention. What is it about intending that explains the ‘hold’ that an intention has on an agent—a hold that is palpable from her first-person perspective? I argue that several prima facie appealing explanations are not promising. Instead, I maintain that the subjective authority of intention can be explained in terms of the inner structure of intention. In adopting an intention the agent comes to see herself as criticizable depending on whether she executes the intention. This allows us to explain in first-personal terms why the agent becomes disposed to act and deliberate in ways that are characteristic of intention. As intention-formation involves profound changes to reflexive evaluative attitudes, this is the ‘Self-Evaluation’ view of the subjective authority of intention.
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
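The long-run consensus described in this abstract can be illustrated with a toy simulation. This is my own sketch, not from the paper, and it uses ordinary Bayesian conditioning rather than Jeffrey conditioning: two agents hold sharply opposed Beta priors over a coin's bias, observe the same stream of flips, and the gap between their posterior means shrinks.

```python
import random

# Two agents with opposed Beta priors over a coin's bias; both condition
# on the same flip stream. Their posterior means approach each other,
# a toy instance of merging of opinions. All numbers here are invented.
random.seed(0)
true_bias = 0.7  # assumed bias generating the shared evidence

priors = [(1.0, 9.0), (9.0, 1.0)]  # hypothetical Beta(a, b) priors

def posterior_mean(a, b, heads, tails):
    """Mean of the Beta(a + heads, b + tails) posterior."""
    return (a + heads) / (a + b + heads + tails)

initial_gap = abs(posterior_mean(*priors[0], 0, 0)
                  - posterior_mean(*priors[1], 0, 0))

heads = tails = 0
for _ in range(5000):
    if random.random() < true_bias:
        heads += 1
    else:
        tails += 1

final_gap = abs(posterior_mean(*priors[0], heads, tails)
                - posterior_mean(*priors[1], heads, tails))
print(initial_gap, final_gap)  # the gap shrinks as evidence accumulates
```

Because the two priors carry the same total pseudo-count, the final gap is exactly 8/5010 here regardless of which flips occurred: the data swamp the priors.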
This chapter discusses the main types of so-called ’subjective measures of consciousness’ used in current-day science of consciousness. After explaining the key worry about such measures, namely the problem of an ever-present response bias, I discuss the question of whether subjective measures of consciousness are introspective. I show that there is no clear answer to this question, as proponents of subjective measures do not employ a worked-out notion of subjective access. In turn, this makes the problem of response bias less tractable than it might otherwise be.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, BC, 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness—measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh between these two conflicting aspects in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, CG, 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in § 87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press).
Section 6 addresses the question of justification that any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented with separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
We provide a 'verisimilitudinarian' analysis of the well-known Linda paradox or conjunction fallacy, i.e., the fact that most people judge the conjunctive statement "Linda is a bank teller and is active in the feminist movement" (B & F) as more probable than the isolated statement "Linda is a bank teller" (B), contrary to an uncontroversial principle of probability theory. The basic idea is that experimental participants may judge B & F a better hypothesis about Linda as compared to B because they evaluate B & F as more verisimilar than B. In fact, the hypothesis "feminist bank teller", while less likely to be true than "bank teller", may well be a better approximation to the truth about Linda.
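The "uncontroversial principle" at stake is the conjunction rule: for any events B and F, P(B & F) ≤ P(B). A minimal numerical check, using an invented joint distribution purely for illustration:

```python
# Toy joint distribution over Linda's job and activism. The numbers are
# invented; the conjunction rule holds for any choice of them.
joint = {
    ("teller", "feminist"): 0.05,
    ("teller", "not_feminist"): 0.02,
    ("not_teller", "feminist"): 0.60,
    ("not_teller", "not_feminist"): 0.33,
}

# Marginal P(B) sums over all activism outcomes; P(B & F) is one cell.
p_B = sum(p for (job, _), p in joint.items() if job == "teller")
p_B_and_F = joint[("teller", "feminist")]

# Conjunction rule: a conjunction can never beat its own conjunct.
assert p_B_and_F <= p_B
print(p_B_and_F, p_B)
```

The fallacy, on the verisimilitudinarian reading, is not that participants get this inequality wrong but that they are answering a different question, about closeness to the truth rather than probability.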
In this chapter we examine the tendency to view future-oriented mental time travel (FMTT) as a unitary faculty that, despite task-driven surface variation, ultimately reduces to a common phenomenological state. We review evidence that FMTT is neither unitary nor beholden to episodic memory: rather, it is varied both in its memorial underpinnings and experiential realization. We conclude that the phenomenological diversity characterizing FMTT depends not on the type of memory activated during task performance, but on the kind of subjective temporality associated with the memory in play.
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility of performing Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
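For concreteness, the four measures are commonly formulated as the probability difference and ratio and the likelihood difference and ratio. The sketch below uses these common formulations, which are my assumption rather than quotations from Zalabardo:

```python
def confirmation_measures(p_h, p_e_given_h, p_e_given_not_h):
    """Four common confirmation measures for hypothesis H and evidence E.

    Definitions follow one standard usage (assumed, not quoted):
      PD = P(H|E) - P(H)        PR = P(H|E) / P(H)
      LD = P(E|H) - P(E|~H)     LR = P(E|H) / P(E|~H)
    """
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h  # total probability
    p_h_given_e = p_h * p_e_given_h / p_e                  # Bayes' theorem
    return {
        "PD": p_h_given_e - p_h,
        "PR": p_h_given_e / p_h,
        "LD": p_e_given_h - p_e_given_not_h,
        "LR": p_e_given_h / p_e_given_not_h,
    }

# Illustrative inputs: E is four times likelier under H than under not-H.
m = confirmation_measures(p_h=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(m)
```

On these inputs all four measures agree that E confirms H (PD, LD > 0 and PR, LR > 1); they come apart only in how they rank different bodies of evidence, which is where the dispute over "distinct senses of confirmation" gets its grip.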
This paper defends David Hume's "Of Miracles" from John Earman's (2000) Bayesian attack by showing that Earman misrepresents Hume's argument against believing in miracles and misunderstands Hume's epistemology of probable belief. It argues, moreover, that Hume's account of evidence is fundamentally non-mathematical and thus cannot be properly represented in a Bayesian framework. Hume's account of probability is shown to be consistent with a long and laudable tradition of evidential reasoning going back to ancient Roman law.