



The objects of credence are the entities to which credences are assigned for the purposes of a successful theory of credence. I use cases akin to Frege's puzzle to argue against referentialism about credence: the view that objects of credence are determined by the objects and properties at which one's credence is directed. I go on to develop a nonreferential account of the objects of credence in terms of sets of epistemically possible scenarios. 

The central propositional attitudes of belief, desire, and meaning are interdependent; it is therefore fruitless to analyse one or two of them in terms of the others. A method is outlined in this paper that yields a theory for interpreting speech, a measure of degree of belief, and a measure of desirability. The method combines in a novel way features of Bayesian decision theory and a Quinean approach to radical interpretation. 










Decision-theoretic representation theorems have been developed and appealed to in the service of two important philosophical projects: attempts to characterise credences in terms of preferences, and arguments for probabilism. Theorems developed within Savage's formal framework have played an especially prominent role here. I argue that the use of these ‘Savagean’ theorems creates significant difficulties for both projects, but particularly the latter. The origin of the problem relates directly to the question of whether we can have (...) 

Recently a number of authors have tried to avoid the failures of traditional Dutch book arguments by separating them from pragmatic concerns of avoiding a sure loss. In this paper I examine defenses of this kind by Howson and Urbach, Hellman, and Christensen. I construct rigorous explications of their arguments and show that they are not cogent. I advocate abandoning Dutch book arguments in favor of a representation theorem. 
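The pragmatic reading of a Dutch book that this abstract contrasts with its rivals can be illustrated with a minimal sketch (not drawn from any of the papers listed here, and with made-up numbers): an agent whose credences in a proposition and its negation sum to more than 1 regards as fair a pair of bets that together lose money no matter how the world turns out.

```python
# Minimal Dutch book sketch (illustrative only, not from the cited papers).
# An agent with credence c in an event treats c * stake as the fair price
# for a bet paying `stake` if the event occurs.

def bet_payoff(credence, stake, event_occurs):
    """Net payoff of buying a bet at its 'fair' price credence * stake."""
    price = credence * stake
    return (stake if event_occurs else 0.0) - price

# Incoherent credences: P(A) + P(not-A) = 0.7 + 0.5 = 1.2 > 1
cr_A, cr_notA, stake = 0.7, 0.5, 1.0

for a_occurs in (True, False):
    # One bet on A, one bet on not-A; exactly one of them pays off.
    total = (bet_payoff(cr_A, stake, a_occurs)
             + bet_payoff(cr_notA, stake, not a_occurs))
    print(a_occurs, round(total, 2))  # net payoff is -0.2 either way
```

The sure loss equals the excess of the credences over 1 times the stake, which is the arithmetic fact the classical (pragmatic) Dutch book argument exploits.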

The representation theorems of expected utility theory show that having certain types of preferences is both necessary and sufficient for being representable as having subjective probabilities. However, unless the expected utility framework is simply assumed, such preferences are also consistent with being representable as having degrees of belief that do not obey the laws of probability. This fact shows that being representable as having subjective probabilities is not necessarily the same as having subjective probabilities. Probabilism can be defended on the (...) 

Lara Buchak sets out a new account of rational decision-making in the face of risk. She argues that the orthodox view is too narrow, and suggests an alternative, more permissive theory: one that allows individuals to pay attention to the worst-case or best-case scenario, and vindicates the ordinary decision-maker. 

A natural way to think about epistemic possibility is as follows. When it is epistemically possible (for a subject) that p, there is an epistemically possible scenario (for that subject) in which p. The epistemic scenarios together constitute epistemic space. It is surprisingly difficult to make the intuitive picture precise. What sort of possibilities are we dealing with here? In particular, what is a scenario? And what is the relationship between scenarios and items of knowledge and belief? This chapter tries (...) 



Colin Howson has recently argued that accuracy arguments for probabilism fail because they assume a privileged ‘coding’ in which TRUE is assigned the value 1 and FALSE is assigned the value 0. I explain why this is wrong by first showing that Howson’s objections are based on a misconception about the way in which degrees of confidence are measured, and then reformulating the accuracy argument in a way that manifestly does not depend on the coding of truth-values. Along the way, (...) 

In past years, the traditional Bayesian theory of rational decision making, based on subjective calculations of expected utility, has faced powerful attack from philosophers such as David Lewis and Brian Skyrms, who advance an alternative causal decision theory. The test they present for the Bayesian is exemplified in the decision problem known as 'Newcomb's paradox' and in related decision problems and is held to support the prescriptions of the causal theory. As well as his conclusions, the concepts and methods of (...) 



Economic theory reduces the concept of rationality to internal consistency. As far as beliefs are concerned, rationality is equated with having a prior belief over a “Grand State Space”, describing all possible sources of uncertainties. We argue that this notion is too weak in some senses and too strong in others. It is too weak because it does not distinguish between rational and irrational beliefs. Relatedly, the Bayesian approach, when applied to the Grand State Space, is inherently incapable of describing (...) 

The main question of this paper is: how do we manage to know what our own degrees of belief are? Section 1 briefly reviews and criticizes the traditional functionalist view, a view notably associated with David Lewis and sometimes called the theory-theory. I use this criticism to motivate the approach I want to promote. Section 2, the bulk of the paper, examines and begins to develop the view that we have a special kind of introspective access to our degrees of (...) 

In Rabinowicz, I considered how value relations can best be analysed in terms of fitting pro-attitudes. In the formal model of that paper, fitting pro-attitudes are represented by the class of permissible preference orderings on a domain of items that are being compared. As it turns out, this approach opens up for a multiplicity of different types of value relationships, along with the standard relations of ‘better’, ‘worse’, ‘equally as good as’ and ‘incomparable in value’. Unfortunately, the approach is vulnerable (...) 

What knowledge would suffice to yield an interpretation of an arbitrary utterance of a language when such knowledge is based on evidence plausibly available to a non-speaker of that language? It is argued that it is enough to know a theory of truth for the language, that the theory satisfies Tarski's 'Convention T', and that it gives an optimal fit to data about sentences held true, under specified conditions, by native speakers. 




"[This book] proposes new foundations for the Bayesian principle of rational action, and goes on to develop a new logic of desirability and probability."—Frederic Schick, _Journal of Philosophy_. 







First published in 1982, Ellery Eells' original work on rational decision making had extensive implications for probability theorists, economists, statisticians and psychologists concerned with decision making and the employment of Bayesian principles. His analysis of the philosophical and psychological significance of Bayesian decision theories, causal decision theories and Newcomb's paradox continues to be influential in philosophy of science. His book is now revived for a new generation of readers and presented in a fresh twenty-first-century series livery, including a specially commissioned (...) 

Within traditional decision theory, common decision principles, e.g. the principle to maximize utility, generally invoke idealization; they govern ideal agents in ideal circumstances. In Realistic Decision Theory, Paul Weirich adds practicality to decision theory by formulating principles applying to nonideal agents in nonideal circumstances, such as real people coping with complex decisions. Bridging the gap between normative demands and psychological resources, Realistic Decision Theory is essential reading for theorists seeking precise normative decision principles that acknowledge the limits and (...) 

Contemporary decision theory places crucial emphasis on a family of mathematical results called representation theorems, which relate criteria for evaluating the available options to axioms pertaining to the decision-maker’s preferences. Various claims have been made concerning the reasons for the importance of these results. The goal of this article is to assess their semantic role: representation theorems are purported to provide definitions of the decision-theoretic concepts involved in the evaluation criteria. In particular, this claim shall be examined from the perspective (...) 

Naive versions of decision theory take probabilities and utilities as primitive and use expected value to give norms on rational decision. However, standard decision theory takes rational preference as primitive and uses it to construct probability and utility. This paper shows how to justify a version of the naive theory, by taking dominance as the most basic normatively required preference relation, and then extending it by various conditions under which agents should be indifferent between acts. The resulting theory can make (...) 
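The "naive" norm this abstract starts from can be sketched directly (a hypothetical illustration with invented numbers, not the paper's own construction): take probabilities and utilities as primitive and rank acts by probability-weighted utility.

```python
# Hypothetical sketch of the naive decision rule: probabilities and
# utilities are primitive, and acts are ranked by expected utility.

def expected_utility(probs, utilities):
    """Expected utility of an act: sum of probability-weighted utilities."""
    assert abs(sum(probs) - 1.0) < 1e-9  # credences over states must sum to 1
    return sum(p * u for p, u in zip(probs, utilities))

probs = [0.3, 0.7]       # credences over two states
risky = [10.0, 0.0]      # act 1: utility in each state
safe = [5.0, 5.0]        # act 2
# The naive norm recommends the act with the higher expected utility;
# here EU(risky) = 3.0 and EU(safe) = 5.0, so `safe` is chosen.
best = max([risky, safe], key=lambda act: expected_utility(probs, act))
```

The standard theory the abstract contrasts with this runs in the opposite direction: it starts from a preference ordering over such acts and recovers the probabilities and utilities via a representation theorem.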

Frank Ramsey's ‘Truth and Probability’ sketches a proposal for the empirical measurement of credences, along with a corresponding set of axioms for a representation theorem intended to characterize the preference conditions under which this measurement process is applicable. There are several features of Ramsey's formal system which make it attractive and worth developing. However, in specifying his measurement process and his axioms, Ramsey introduces the notion of an ethically neutral proposition, the assumed existence of which plays a key role throughout (...) 





Several axiom systems for preference among acts lead to a unique probability and a state-independent utility such that acts are ranked according to their expected utilities. These axioms have been used as a foundation for Bayesian decision theory and subjective probability calculus. In this article we note that the uniqueness of the probability is relative to the choice of what counts as a constant outcome. Although it is sometimes clear what should be considered constant, in many cases there are several possible (...) 

Foundations of Bayesianism is an authoritative collection of papers addressing the key challenges that face the Bayesian interpretation of probability today. Some of these papers seek to clarify the relationships between Bayesian, causal and logical reasoning. Others consider the application of Bayesianism to artificial intelligence, decision theory, statistics and the philosophy of science and mathematics. The volume includes important criticisms of Bayesian reasoning and also gives an insight into some of the points of disagreement amongst advocates of the Bayesian approach. (...) 