In the Bayesian approach to quantum mechanics, probabilities—and thus quantum states—represent an agent’s degrees of belief, rather than corresponding to objective properties of physical systems. In this paper we investigate the concept of certainty in quantum mechanics. In particular, we show how the probability-1 predictions derived from pure quantum states highlight a fundamental difference between our Bayesian approach, on the one hand, and Copenhagen and similar interpretations on the other. We first review the main arguments for the general claim that probabilities always represent degrees of belief. We then argue that a quantum state prepared by some physical device always depends on an agent’s prior beliefs, implying that the probability-1 predictions derived from that state also depend on the agent’s prior beliefs. Quantum certainty is therefore always some agent’s certainty. Conversely, if facts about an experimental setup could imply agent-independent certainty for a measurement outcome, as in many Copenhagen-like interpretations, that outcome would effectively correspond to a preexisting system property. The idea that measurement outcomes occurring with certainty correspond to preexisting system properties is, however, in conflict with locality. We emphasize this by giving a version of an argument of Stairs [(1983). Quantum logic, realism, and value-definiteness. Philosophy of Science, 50, 578], which applies the Kochen–Specker theorem to an entangled bipartite system.
Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers some of the best-known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation right—and thus scientific reasoning in general. Contents: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or ‘anything goes’; The problem of prior probabilities.
This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
We propose a new account of indicative conditionals, giving acceptability and logical closure conditions for them. We start from Adams’ Thesis: the claim that the acceptability of a simple indicative equals the corresponding conditional probability. The Thesis is widely endorsed, but arguably false and refuted by empirical research. To fix it, we submit, we need a relevance constraint: we accept a simple conditional 'If φ, then ψ' to the extent that (i) the conditional probability p(ψ|φ) is high, provided that (ii) φ is relevant for ψ. How (i) should work is well-understood. It is (ii) that holds the key to improve our understanding of conditionals. Our account has (i) a probabilistic component, using Popper functions; (ii) a relevance component, given via an algebraic structure of topics or subject matters. We present a probabilistic logic for simple indicatives, and argue that its (in)validities are both theoretically desirable and in line with empirical results on how people reason with conditionals.
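Clause (i) of such an account, the conditional-probability component inherited from Adams' Thesis, can be sketched over a toy finite model. The four-world distribution below is invented purely for illustration; it is not from the paper.

```python
# Toy illustration of Adams' Thesis: the acceptability of "If phi, then psi"
# tracks the conditional probability p(psi | phi). The joint distribution
# over four (phi, psi) worlds is invented for illustration.

worlds = {
    # (phi, psi): probability
    (True, True): 0.45,
    (True, False): 0.05,
    (False, True): 0.20,
    (False, False): 0.30,
}

def conditional_probability(worlds, antecedent, consequent):
    """p(consequent | antecedent), computed from a finite joint distribution."""
    p_ant = sum(p for (a, c), p in worlds.items() if antecedent(a, c))
    p_both = sum(p for (a, c), p in worlds.items()
                 if antecedent(a, c) and consequent(a, c))
    return p_both / p_ant

p = conditional_probability(worlds,
                            antecedent=lambda a, c: a,
                            consequent=lambda a, c: c)
print(round(p, 2))  # 0.45 / 0.50 = 0.9
```

On Adams' Thesis alone, a high value here suffices for acceptability; the paper's point is that clause (ii), relevance of φ to ψ, must be added on top of this probabilistic test.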
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
In his classic book “The Foundations of Statistics” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems, without appealing to von Neumann–Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
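The shape of the representation theorem described above (acts as functions from states to consequences, ranked by expected utility relative to a subjective probability and a utility assignment) can be sketched minimally. All states, acts, probabilities, and utilities below are invented for illustration; note that "take umbrella" happens to be a constant act, the kind of act the constant act assumption concerns.

```python
# Minimal sketch of Savage-style acts and their expected-utility ranking.
# An act maps each state of the world to a consequence; the subjective
# probability and utility assignment are what the representation theorem
# extracts from a well-behaved preference relation.

states = ["rain", "shine"]
subjective_prob = {"rain": 0.3, "shine": 0.7}   # finitely additive, finite algebra
utility = {"wet": -2.0, "dry": 1.0, "dry_but_burdened": 0.5}

acts = {
    # "take umbrella" is a constant act: same consequence in every state.
    "take umbrella": {"rain": "dry_but_burdened", "shine": "dry_but_burdened"},
    "leave umbrella": {"rain": "wet", "shine": "dry"},
}

def expected_utility(act):
    return sum(subjective_prob[s] * utility[act[s]] for s in states)

# The represented preference: act f is weakly preferred to g iff EU(f) >= EU(g).
ranking = sorted(acts, key=lambda name: expected_utility(acts[name]), reverse=True)
print(ranking[0])  # EU(take) = 0.5 beats EU(leave) = 0.3*(-2.0) + 0.7*1.0 = 0.1
```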
In probability discounting (or probability weighting), one multiplies the value of an outcome by one's subjective probability that the outcome will obtain in decision-making. The broader import of defending probability discounting is to help justify cost-benefit analyses in contexts such as climate change. This chapter defends probability discounting under risk both negatively, against arguments by Simon Caney (2008, 2009), and positively, with a new argument. First, in responding to Caney, I argue that small costs and benefits need to be evaluated, and that viewing practices at the social level is too coarse-grained. Second, I argue for probability discounting using a distinction between causal responsibility and moral responsibility. Moral responsibility can be cashed out in terms of blameworthiness and praiseworthiness, while causal responsibility obtains in full for any effect which is part of a causal chain linked to one's act. With this distinction in hand, unlike causal responsibility, moral responsibility can be seen as coming in degrees. My argument is that, given that we can limit our deliberation and consideration to that which we are morally responsible for, and that our moral responsibility for outcomes is limited by our subjective probabilities, our subjective probabilities can ground probability discounting.
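The opening definition of probability discounting is a simple computation: weight each outcome's value by one's subjective probability that it obtains, then sum. The sketch below illustrates only that arithmetic; the hypothetical policy and its numbers are invented, not taken from the chapter.

```python
# Probability discounting as defined above: multiply each outcome's value
# by the agent's subjective probability that it obtains, then sum.

def discounted_value(prospects):
    """prospects: list of (subjective_probability, value) pairs."""
    return sum(p * v for p, v in prospects)

# A hypothetical policy: a likely modest benefit and an unlikely large harm.
policy = [(0.9, 10.0), (0.1, -50.0)]
print(discounted_value(policy))  # 0.9*10.0 + 0.1*(-50.0) = 4.0
```

On the chapter's view, the subjective probabilities used as weights here are defensible because they also bound the agent's moral responsibility for the outcomes.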
An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent that they grant that such a conception of probability is viable at all, revert back to pedigree epistemology. A thoroughgoing rejection of pedigree in the context of probabilistic epistemology, however, _does_ challenge prominent subjectivist responses to the problem of the priors.
Probability can be used to measure degree of belief in two ways: objectively and subjectively. The objective measure is a measure of the rational degree of belief in a proposition given a set of evidential propositions. The subjective measure is the measure of a particular subject’s dispositions to decide between options. On both measures, certainty is a degree of belief of 1. I will show, however, that there can be cases where one belief is stronger than another yet both beliefs are plausibly measurable as objectively and subjectively certain. In ordinary language, we can say that while both beliefs are certain, one belief is more certain than the other. I will then propose a second, non-probabilistic dimension of measurement, which tracks this variation in certainty in cases where the probability is 1. A general principle of rationality is that one’s subjective degree of belief should match the rational degree of belief given the evidence available. In this paper I hope to show that it is also a rational principle that the maximum stake size at which one should remain certain should match the rational weight of certainty given the evidence available. Neither objective nor subjective measures of certainty conform to the axioms of probability; instead they are measured in utility. This has the consequence that, although it is often rational to be certain to some degree, there is no such thing as absolute certainty.
One very popular framework in contemporary epistemology is Bayesian. The central epistemic state is subjective confidence, or credence. Traditional epistemic states like belief and knowledge tend to be sidelined, or even dispensed with entirely. Credences are often introduced as familiar mental states, merely in need of a special label for the purposes of epistemology. But whether they are implicitly recognized by the folk or posits of a sophisticated scientific psychology, they do not appear to fit well with perception, as is often noted. This paper investigates the tension between probabilistic cognition and non-probabilistic perception. The tension is real, and the solution—to adapt a phrase from Quine and Goodman—is to renounce credences altogether.
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
I argue that existing objectivist accounts of subjective reasons face systematic problems with cases involving probability and possibility. I then offer a diagnosis of why objectivists face these problems, and recommend that objectivists seek to provide indirect analyses of subjective reasons.
A complete probability guide to Hold'em Poker, covering all possible gaming situations. The author focuses on the practical side of the presentation and use of the probabilities involved in Hold'em, while taking into account the subjective side of the probability-based criteria of each player's strategy.
IBE ('Inference to the best explanation' or abduction) is a popular and highly plausible theory of how we should judge the evidence for claims of past events based on present evidence. It has been notably developed and supported recently by Meyer, following Lipton. I believe this theory is essentially correct. This paper supports IBE from a probability perspective, and argues that the retrodictive probabilities involved in such inferences should be analysed in terms of predictive probabilities and a priori probability ratios of initial events. The key point is to separate these two features. Disagreements over evidence can be traced to disagreements over either the a priori probability ratios or predictive conditional ratios. In many cases, in real science, judgements of the former are necessarily subjective. The principles of iterated evidence are also discussed. The Sceptic's position is criticised as ignoring iteration of evidence, and characteristically failing to adjust a priori probability ratios in response to empirical evidence.
The notion of comparative probability defined in Bayesian subjectivist theory stems from an intuitive idea that, for a given pair of events, one event may be considered “more probable” than the other. Yet it is conceivable that there are cases where it is indeterminate as to which event is more probable, due to, e.g., lack of robust statistical information. We take it that these cases involve indeterminate comparative probabilities. This paper provides a Savage-style decision-theoretic foundation for indeterminate comparative probabilities.
This note discusses three issues that Allen and Pardo believe to be especially problematic for a probabilistic interpretation of standards of proof: (1) the subjectivity of probability assignments; (2) the conjunction paradox; and (3) the non-comparative nature of probabilistic standards. I offer a reading of probabilistic standards that avoids these criticisms.
Bayesians often appeal to “merging of opinions” to rebut charges of excessive subjectivity. But what happens in the short run is often of greater interest than what happens in the limit. Seidenfeld and coauthors use this observation as motivation for investigating the counterintuitive short-run phenomenon of dilation, since, they allege, dilation is “the opposite” of asymptotic merging of opinions. The measure of uncertainty relevant for dilation, however, is not the one relevant for merging of opinions. We explicitly investigate the short-run behavior of the metric relevant for merging, and show that dilation is independent of the opposite of merging.
*This work is no longer under development* Two major themes in the literature on indicative conditionals are (1) that the content of indicative conditionals typically depends on what is known (e.g., Nolan; Weatherson; Gillies), and (2) that conditionals are intimately related to conditional probabilities (e.g., Stalnaker; McGee; Adams). In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true (Lewis; see Nolan for criticism). This aspect has recently been brought to the fore, and defended, by Tim Williamson, who uses it to characterize alethic necessity by exploiting such equivalences as: □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are epistemically impossible (i.e., incompatible with what is known) being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature. This allows one to characterize an epistemic modal by the equivalence □A ⇔ (¬A → A). For simplicity, in what follows we write □A as KA and think of it as expressing that subject S knows that A (a suggestion made to me in conversation by John Hawthorne; I do not know of it being explored in print). The connection to probability has received much attention. Stalnaker suggested, as a way of articulating the ‘Ramsey Test’, the following very general schema for indicative conditionals relative to some probability function P: P(A → B) = P(B|A). The plausibility of this characterization will depend on the exact sense of ‘epistemically possible’ in play—if it is compatibility with what a single subject knows, then KA can be read ‘the relevant subject knows that p’. If it is more delicately formulated, we might be able to read □ as the epistemic modal ‘must’.
Probability plays a crucial role in understanding the relationship between mathematics and physics. It is the point of departure of this brief reflection on the subject, as well as on the place of Poincaré’s thought in the scenario offered by some contemporary perspectives.
Contains a description of a generalized and constructive formal model of the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
Objective: In this essay, I will try to track some historical and modern stages of the discussion of the Gettier problem, and point out the interrelations of the questions that this problem raises for epistemologists with sceptical arguments and the so-called problem of relevance. Methods: historical analysis, induction, generalization, deduction, discourse, intuition. Results: Although the contextual theories of knowledge, the use of different definitions of knowledge, and the different ways of the uses of knowledge do not resolve all the issues that the sceptic can put forward, they can be productive in giving clarity to a concept of knowledge for us. On the other hand, our knowledge will always have an element of intuition and subjectivity, which, however, does not equate to epistemic luck and probability. Significance/novelty: the approach to the context in general, and not giving up being a Subject, may give us clarity about the sense of what it means to say “I know”.
In this dissertation, I construct scientifically and practically adequate moral analogs of cognitive heuristics and biases. Cognitive heuristics are reasoning “shortcuts” that are efficient but flawed. Such flaws yield systematic judgment errors—i.e., cognitive biases. For example, the availability heuristic infers an event’s probability by seeing how easy it is to recall similar events. Since dramatic events, such as airplane crashes, are disproportionately easy to recall, this heuristic explains systematic overestimations of their probability (availability bias). The research program on cognitive heuristics and biases (e.g., Daniel Kahneman’s work) has been scientifically successful and has yielded useful error-prevention techniques—i.e., cognitive debiasing. I attempt to apply this framework to moral reasoning to yield moral heuristics and biases. For instance, a moral bias of unjustified differences in the treatment of particular animal species might be partially explained by a moral heuristic that dubiously infers animals’ moral status from their aesthetic features. While the basis for identifying judgments as cognitive errors is often unassailable (e.g., per violating laws of logic), identifying moral errors seemingly requires appealing to moral truth, which, I argue, is problematic within science. Such appeals can be avoided by repackaging moral theories as mere “standards-of-interest” (a la non-normative metrics of purportedly right-making features/properties). However, standards-of-interest do not provide authority, which is needed for effective debiasing. Nevertheless, since each person deems their own subjective morality authoritative, subjective morality (qua standard-of-interest and not moral subjectivism) satisfies both scientific and practical concerns. As such, (idealized) subjective morality grounds a moral analog of cognitive biases—namely, subjective moral biases (e.g., committed anti-racists unconsciously discriminating).
I also argue that "cognitive heuristic" is defined by its contrast with rationality. Consequently, heuristics explain biases, which are also so defined. However, such contrasting with rationality is causally irrelevant to cognition. This frustrates the presumed usefulness of the kind, heuristic, in causal explanation. In the moral case, I therefore jettison the role of causal explanation and tailor categories solely for contrastive explanation: "moral heuristic" is replaced with "subjective moral fallacy," which is defined by its contrast with subjective morality and explains subjective moral biases. The resultant framework of subjective moral biases and fallacies can undergird future empirical research.
The typical habitat of overt nominative subjects is in finite clauses. But infinitival complements and infinitival adjuncts are also known to have overt nominative subjects, e.g. in Italian (Rizzi 1982), European Portuguese (Raposo 1987), and Spanish (Torrego 1998, Mensching 2000). The analyses make crucial reference to the movement of Aux or Infl to Comp, and to overt or covert infinitival inflection. This working paper is concerned with a novel set of data that appear to be of a different sort, in that they probably do not depend on either rich infinitival inflection or on movement to C.
DNA evidence is one of the most significant modern advances in the search for truth since cross-examination, but its format as a random match probability makes it difficult for people to assign it an appropriate probative value (Koehler, 2001). While frequentist theories propose that presentation of the match as a frequency rather than a probability facilitates more accurate assessment (e.g., Slovic et al., 2000), Exemplar-Cueing Theory predicts that the subjective weight assigned may be affected by the frequency or probability format, and by how easily examples of the event, i.e., ‘exemplars’, are generated from linguistic cues that frame the match in light of further evidence (Koehler & Macchi, 2004). This paper presents two juror research studies examining the difficulties that jurors have in assigning appropriate probative value to DNA evidence when contradictory evidence is presented. Study 1 showed that refuting evidence significantly reduced guilt judgments when exemplars were linguistically cued, even when the probability match and the refuting evidence had the same objective probative value. Moreover, qualitative reason-for-judgment responses revealed that interpreting refuting evidence is complex and not necessarily reductive; refutation was found indicative of innocence or guilt depending on whether exemplars had been cued or not. Study 2 showed that the introduction of judges’ directions to linguistically cue exemplars did not increase the impact of refuting evidence beyond its objective probative value, but fewer guilty verdicts were returned when jurors were instructed to consider all possible explanations of the evidence. The results are discussed in light of contradictory frequentist and exemplar-cueing theoretical positions, and their real-world consequences.
Is it not surprising that we look with so much pleasure and emotion at works of art that were made thousands of years ago? Works depicting people we do not know, people whose backgrounds are usually a mystery to us, who lived in a very different society and time and who, moreover, have been ‘frozen’ by the artist in a very deliberate pose. It was the Classical Greek philosopher Aristotle who observed in his Poetics that people could apparently be moved even by the imitation of a person or an act. And although we are usually well aware that it is a simulacrum, not a real situation, it nevertheless sometimes seems as if we ourselves are standing there on the stage or in the painting, so intense and emotional is our response, even though we are just spectators. Aristotle concludes from this that we have intellectual capacities which allow us to put ourselves in another’s place and consequently to react to simulated situations as though they are actually happening to us, here and now. In this process, he contends, observation, memory, imagination and emotions are crucial elements. In the past it was not customary to invoke human mental faculties to explain our response to works of art. The Ancient Greeks, after all, knew little about the human body or brain and usually referred to the extended world of the gods in their endeavours to comprehend the ‘inner world’ of human beings. In our time the situation is completely different—such an allusion to the brain no longer surprises us. Whether it is about the mystery of the consciousness, the question of free will or accounts of bizarre psychological aberrations or disorders, we have become accustomed to references to parts of the brain, to images of brain scans, to reports about neural networks and the like.
However, because there are so many factors that play a part in our appreciation of works of art we need a complex explanation for it, and it is not enough to look only at certain properties of the brain that are determined by evolution. Those properties are shared by every human being, and so are rather useless in explaining people’s different reactions to the same work of art. Evidently the brains of individuals differ so much that they make it possible for people to respond differently to one and the same artwork. This, of course, raises questions concerning the painted emotions that can be seen in this exhibition. Virtually everyone, after all, is fascinated by such paintings and usually recognizes the emotions they represent. The reactions to these painted emotions are also often similar. This is probably why artworks like this are generally highly valued, then and now, here and elsewhere: from the enigmatically smiling Egyptian Queen Nefertiti and the startled Rembrandt to a seemingly despairing African mask. Aristotle observed that in the theatre players imitate actions that are associated with emotions in a number of ways and that these emotions are shared in a particular fashion by the playwright, the actors and the audience. The audience may even be carried away by these emotions to such an extent that they are in a sense purged of them and can subsequently leave the theatre relieved. Are such emotional reactions perhaps related to the fact that emotions are universal and that brains respond similarly to them? Is this why we can so readily identify painted emotions? May we therefore also assume that the properties of the brain determined by evolution help us to explain these emotions? In answering these questions we shall discuss a number of insights into emotions in psychology and brain science and explore some theories about the possible function of emotions and their expression.
The idea of a common currency underlying our choice behaviour has played an important role in sciences of behaviour, from neurobiology to psychology and economics. However, while it has been mainly investigated in terms of values, with a common scale on which goods would be evaluated and compared, the question of a common scale for subjective probabilities and confidence in particular has received only little empirical investigation so far. The present study extends previous work addressing this question, by showing that confidence can be compared across visual and auditory decisions, with the same precision as for the comparison of two trials within the same task. We discuss the possibility that confidence could serve as a common currency when describing our choices to ourselves and to others.
Taking a philosophical standpoint, this article compares the mathematical theory of individual decision-making with the folk psychology conception of action, desire and belief. It narrows its topic by carrying out the comparison vis-à-vis Savage's system and its technical concept of subjective probability, which is referred to the basic model of betting as in Ramsey. The argument is organized around three philosophical theses: (i) decision theory is nothing but folk psychology stated in formal language (Lewis); (ii) the former substantially improves on the latter, but is unable to overcome its typical limitations, especially its failure to separate desire and belief empirically (Davidson); (iii) the former substantially improves on the latter and, through these innovations, overcomes some of the limitations. The aim of the article is to establish (iii), not only against the all too simple thesis (i), but also against the subtle thesis (ii).
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
Following a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey and, above all, Harsanyi, under the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
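The risk-weighted expected utility model the abstract refers to can be sketched numerically. A minimal illustration, assuming a Buchak-style rank-dependent weighting; the function names and the toy gamble are hypothetical, not from the paper:

```python
def reu(outcomes, r):
    """Risk-weighted expected utility (sketch of a Buchak-style model).
    outcomes: list of (probability, utility) pairs whose probabilities sum to 1.
    r: a risk function with r(0) = 0, r(1) = 1, nondecreasing."""
    ranked = sorted(outcomes, key=lambda pu: pu[1])  # worst utility first
    total = ranked[0][1]  # the agent is guaranteed at least the worst utility
    for i in range(1, len(ranked)):
        # probability of doing at least as well as the i-th ranked outcome
        p_at_least = sum(p for p, _ in ranked[i:])
        # each utility increment is weighted by the risk-transformed probability
        total += r(p_at_least) * (ranked[i][1] - ranked[i - 1][1])
    return total

gamble = [(0.5, 0.0), (0.5, 100.0)]  # hypothetical 50-50 gamble
print(reu(gamble, lambda p: p))       # with r(p) = p this is expected utility: 50.0
print(reu(gamble, lambda p: p ** 2))  # a risk-averse weighting discounts the upside: 25.0
```

With the identity risk function the model collapses to expected utility, which is why preferences over suitable gambles can reveal both the risk attitude and, in turn, the subjective probabilities.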
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance. The paper complies with the standard economic methodology of basing probability and utility representations on preference axioms, but for the sake of completeness, also considers a construal of objective uncertainty based on the assumption of an exogenously given probability measure. JEL classification: D70; D81.
Expected Utility in 3D. Jean Baccelli - forthcoming - In Reflections on the Foundations of Statistics: Essays in Honor of Teddy Seidenfeld.
Consider a subjective expected utility preference relation. It is usually held that the representations which this relation admits differ only in one respect, namely, the possible scales for the measurement of utility. In this paper, I discuss the fact that there are, metaphorically speaking, two additional dimensions along which infinitely many more admissible representations can be found. The first additional dimension is that of state-dependence. The second—and, in this context, much lesser-known—additional dimension is that of act-dependence. The simplest implication of their usually neglected existence is that the standard axiomatizations of subjective expected utility fail to provide the measurement of subjective probability with satisfactory behavioral foundations.
Sometimes epistemologists theorize about belief, a tripartite attitude on which one can believe, withhold belief, or disbelieve a proposition. In other cases, epistemologists theorize about credence, a fine-grained attitude that represents one’s subjective probability or confidence level toward a proposition. How do these two attitudes relate to each other? This article explores the relationship between belief and credence in two categories: descriptive and normative. It then explains the broader significance of the belief-credence connection and concludes with general lessons from the debate thus far.
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We further argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater’s (2007, 2008) model of conditional inference—contrary to the authors’ theoretical position—also has to refer to a frequency-based probability function.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states), and numeric utilities assigned to consequences. Savage's derivation, however, is based on a highly problematic well-known assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
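The representation the abstract describes, preference between acts tracked by probability-weighted utilities of their consequences, can be sketched in a few lines. A toy illustration only; the states, acts, and numbers below are hypothetical:

```python
def expected_utility(act, prob, util):
    """Expected utility of an act: sum over states of P(state) * u(act(state))."""
    return sum(prob[s] * util[act[s]] for s in prob)

# Hypothetical setup: two states of the world; acts map states to consequences.
prob = {"rain": 0.3, "shine": 0.7}                # subjective probabilities over states
util = {"wet": 0.0, "dry": 1.0, "sunburn": 0.6}   # numeric utilities of consequences
take_umbrella = {"rain": "dry", "shine": "dry"}
leave_umbrella = {"rain": "wet", "shine": "sunburn"}

# The represented preference: act f is weakly preferred to g iff EU(f) >= EU(g).
print(expected_utility(take_umbrella, prob, util))   # 1.0
print(expected_utility(leave_umbrella, prob, util))  # about 0.42
```

Note that `take_umbrella` is a "constant act" in Savage's sense, yielding the same consequence in every state; the paper's point is that such acts need not exist for every consequence, which is what its P1-P6-only derivation avoids assuming.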
This paper explains and defends the idea that metaphysical necessity is the strongest kind of objective necessity. Plausible closure conditions on the family of objective modalities are shown to entail that the logic of metaphysical necessity is S5. Evidence is provided that some objective modalities are studied in the natural sciences. In particular, the modal assumptions implicit in physical applications of dynamical systems theory are made explicit by using such systems to define models of a modal temporal logic. Those assumptions arguably include some necessitist principles. Too often, philosophers have discussed ‘metaphysical’ modality — possibility, contingency, necessity — in isolation. Yet metaphysical modality is just a special case of a broad range of modalities, which we may call ‘objective’ by contrast with epistemic and doxastic modalities, and indeed deontic and teleological ones (compare the distinction between objective probabilities and epistemic or subjective probabilities). Thus metaphysical possibility, physical possibility and immediate practical possibility are all types of objective possibility. We should study the metaphysics and epistemology of metaphysical modality as part of a broader study of the metaphysics and epistemology of the objective modalities, on pain of radical misunderstanding. Since objective modalities are in general open to, and receive, natural scientific investigation, we should not treat the metaphysics and epistemology of metaphysical modality in isolation from the metaphysics and epistemology of the natural sciences. In what follows, Section 1 gives a preliminary sketch of metaphysical modality and its place in the general category of objective modality. Section 2 reviews some familiar forms of scepticism about metaphysical modality in that light. Later sections explore a few of the many ways in which natural science deals with questions of objective modality, including questions of quantified modal logic.
In this paper, we provide a Bayesian analysis of the well-known surprise exam paradox. Central to our analysis is a probabilistic account of what it means for the student to accept the teacher's announcement that he will receive a surprise exam. According to this account, the student can be said to have accepted the teacher's announcement provided he adopts a subjective probability distribution relative to which he expects to receive the exam on a day on which he expects not to receive it. We show that as long as expectation is not equated with subjective certainty there will be contexts in which it is possible for the student to accept the teacher's announcement, in this sense. In addition, we show how a Bayesian modeling of the scenario can yield plausible explanations of the following three intuitive claims: (1) the teacher's announcement becomes easier to accept the more days there are in class; (2) a strict interpretation of the teacher's announcement does not provide the student with any categorical information as to the date of the exam; and (3) the teacher's announcement contains less information about the date of the exam the more days there are in class. To conclude, we show how the surprise exam paradox can be seen as one among the larger class of paradoxes of doxastic fallibilism, foremost among which is the paradox of the preface.
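The key move, reading "expectation" as high subjective probability rather than subjective certainty, can be illustrated with a uniform prior over exam days. A sketch under that assumption; the threshold value and five-day class are hypothetical choices, not the paper's:

```python
def conditional_exam_probs(n):
    """For a uniform prior over n class days, the student's probability that the
    exam is today, given that it has not occurred yet: 1 / (days remaining)."""
    return [1.0 / (n - d) for d in range(n)]  # d = number of days already passed

# If "expecting" the exam means assigning it probability above a threshold t < 1,
# then the exam is unexpected on every day where the conditional probability
# stays below t -- and more class days mean more such days, consistent with
# claim (1) that a longer class makes the announcement easier to accept.
probs = conditional_exam_probs(5)
print(probs)  # rises from 0.2 on day one to 1.0 on the final day
threshold = 0.9  # hypothetical expectation threshold
surprise_days = sum(p < threshold for p in probs)
print(surprise_days)  # 4 of the 5 days would count as a surprise
```

Only on the last day does the conditional probability reach 1, which is why equating expectation with certainty, and only that equation, revives the paradoxical backward induction.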
How can different individuals' probability functions on a given sigma-algebra of events be aggregated into a collective probability function? Classic approaches to this problem often require 'event-wise independence': the collective probability for each event should depend only on the individuals' probabilities for that event. In practice, however, some events may be 'basic' and others 'derivative', so that it makes sense first to aggregate the probabilities for the former and then to let these constrain the probabilities for the latter. We formalize this idea by introducing a 'premise-based' approach to probabilistic opinion pooling, and show that, under a variety of assumptions, it leads to linear or neutral opinion pooling on the 'premises'. This paper is the second of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
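Linear pooling, one of the rules the paper derives on the 'premises', is simple to state: the collective probability of an event is a weighted average of the individuals' probabilities for it. A minimal sketch, with hypothetical agents, events, and weights:

```python
def linear_pool(opinions, weights):
    """Linear opinion pooling: the collective probability for each event is the
    weighted average of the individuals' probabilities for that event."""
    events = opinions[0].keys()
    return {e: sum(w * p[e] for w, p in zip(weights, opinions)) for e in events}

# Hypothetical premise-based use: pool the 'basic' events first...
alice = {"A": 0.8, "B": 0.5}
bob = {"A": 0.4, "B": 0.7}
pooled = linear_pool([alice, bob], [0.5, 0.5])
print(pooled)  # both events come out close to 0.6
# ...then let the pooled premises constrain 'derivative' events, e.g. if A and B
# are treated as independent premises, P(A and B) = P(A) * P(B), about 0.36.
```

Event-wise independence holds within the premises here; the premise-based twist is that derivative events inherit their probabilities from the pooled premises rather than being pooled directly.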
This paper examines the preference-based approach to the identification of beliefs. It focuses on the main problem to which this approach is exposed, namely that of state-dependent utility. First, the problem is illustrated in full detail. Four types of state-dependent utility issues are distinguished. Second, a comprehensive strategy for identifying beliefs under state-dependent utility is presented and discussed. For the problem to be solved following this strategy, however, preferences need to extend beyond choices. We claim that this is a necessary feature of any complete solution to the problem of state-dependent utility. We also argue that this is the main conceptual lesson to draw from it. We show that this lesson is of interest to both economists and philosophers.
What is the relationship between degrees of belief and binary beliefs? Can the latter be expressed as a function of the former—a so-called “belief-binarization rule”—without running into difficulties such as the lottery paradox? We show that this problem can be usefully analyzed from the perspective of judgment-aggregation theory. Although some formal similarities between belief binarization and judgment aggregation have been noted before, the connection between the two problems has not yet been studied in full generality. In this paper, we seek to fill this gap. The paper is organized around a baseline impossibility theorem, which we use to map out the space of possible solutions to the belief-binarization problem. Our theorem shows that, except in limiting cases, there exists no belief-binarization rule satisfying four initially plausible desiderata. Surprisingly, this result is a direct corollary of the judgment-aggregation variant of Arrow’s classic impossibility theorem in social choice theory.
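The lottery paradox mentioned above already arises for the simplest belief-binarization rule, a credence threshold. A short sketch, with a hypothetical ten-ticket lottery and a hypothetical threshold of 0.9:

```python
def binarize(credences, threshold=0.9):
    """Threshold belief-binarization rule: believe exactly those propositions
    whose credence meets or exceeds the threshold."""
    return {p for p, c in credences.items() if c >= threshold}

# A fair 10-ticket lottery: credence 0.9 that any given ticket loses,
# credence 1.0 that some ticket wins.
n = 10
credences = {f"ticket {i} loses": 1 - 1 / n for i in range(1, n + 1)}
credences["some ticket wins"] = 1.0
beliefs = binarize(credences, threshold=0.9)
# The resulting belief set is jointly inconsistent: every individual ticket is
# believed to lose, yet some ticket is believed to win.
print(len(beliefs))  # 11: all ten 'loses' propositions plus 'some ticket wins'
```

This is exactly the kind of failure the paper's four desiderata rule out, and its impossibility theorem shows the difficulty is not specific to threshold rules.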
I argue that riskier killings of innocent people are, other things equal, objectively worse than less risky killings. I ground these views in considerations of disrespect and security. Killing someone more riskily shows greater disrespect for him by more grievously undervaluing his standing and interests, and more seriously undermines his security by exposing a disposition to harm him across all counterfactual scenarios in which the probability of killing an innocent person is that high or less. I argue that the salient probabilities are the agent’s sincere, sane, subjective probabilities, and that this thesis is relevant whether your risk-taking pertains to the probability of killing a person or to the probability that the person you kill is not liable to be killed. I then defend the view’s relevance to intentional killing; show how it differs from an account of blameworthiness; and explain its significance for all-things-considered justification and justification under uncertainty.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, BC, 1983, Theory comparison and relevant Evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness—measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh between these two conflicting aspects in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, CG, 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121.), differing from the one Carnap gave in § 87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press).
Section 6 addresses the question of justification any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values, because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of an individual is a belief about where or when that temporal part is located). We analyze the Sleeping Beauty problem, the duplication version of the Sleeping Beauty problem, and various related problems.
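The squared-difference measure of inaccuracy defined above is the Brier score, and minimizing expected inaccuracy by one's own lights recommends reporting one's actual probability. A numerical sketch; the probability value 0.7 and the grid search are hypothetical illustration, not from the paper:

```python
def inaccuracy(truth, credence):
    """Squared-error (Brier) inaccuracy for a single proposition."""
    return (truth - credence) ** 2

def expected_inaccuracy(p, credence):
    """Expected inaccuracy of a credence, computed with probability p of truth."""
    return p * inaccuracy(1, credence) + (1 - p) * inaccuracy(0, credence)

# The Brier score is a 'proper' scoring rule: expected inaccuracy is minimized
# by a credence equal to the probability itself. Grid search, assuming p = 0.7:
p = 0.7
best = min((c / 100 for c in range(101)), key=lambda c: expected_inaccuracy(p, c))
print(best)  # 0.7
```

Propriety is what makes expected-inaccuracy minimization usable as a method: the credences it recommends in puzzle cases like Sleeping Beauty are not artifacts of the scoring rule rewarding hedged reports.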
De Finetti would claim that we can make sense of a draw in which each positive integer has equal probability of winning. This requires a uniform probability distribution over the natural numbers, violating countable additivity. Countable additivity thus appears not to be a fundamental constraint on subjective probability. It does, however, seem mandated by Dutch Book arguments similar to those that support the other axioms of the probability calculus as compulsory for subjective interpretations. These two lines of reasoning can be reconciled through a slight generalization of the Dutch Book framework. Countable additivity may indeed be abandoned for de Finetti's lottery, but this poses no serious threat to its adoption in most applications of subjective probability. Contents: Introduction; The de Finetti lottery; Two objections to equiprobability (3.1 The ‘No random mechanism’ argument; 3.2 The Dutch Book argument); Equiprobability and relative betting quotients; The re-labelling paradox (5.1 The paradox; 5.2 Resolution: from symmetry to relative probability); Beyond the de Finetti lottery.
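The tension with countable additivity can be seen numerically: if every ticket in the de Finetti lottery receives the same probability, the total over all tickets cannot come out to 1. A minimal sketch (the function name is hypothetical):

```python
# If each positive integer gets the same probability eps, countable additivity
# would require the probabilities over all integers to sum to 1. But the
# partial sums are n * eps: they grow without bound if eps > 0, and stay at 0
# if eps = 0, so no uniform assignment over the integers is countably additive.
def partial_sum(eps, n):
    """Total probability assigned to the first n tickets."""
    return eps * n

print(partial_sum(0.0, 10**6))   # 0.0, and it never reaches 1
print(partial_sum(1e-9, 10**6))  # about 0.001 here, but it exceeds 1 once n > 10**9
```

Merely finitely additive assignments escape this: they can give each ticket probability 0 while still assigning probability 1 to the whole lottery, which is exactly the position the Dutch Book generalization in the paper is meant to accommodate.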