The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018) -- I call this the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
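As a minimal numerical sketch of how MSEU can come apart from Ramsey's Thesis (the numbers and utility function here are hypothetical illustrations, not drawn from the paper): RT prices a bet at credence times stake, while an MSEU agent prices it so that buying is at least as good, in expected utility, as abstaining. With a concave (risk-averse) utility the two prices differ.

```python
import math

def rt_price(credence, stake):
    """Ramsey's Thesis: the fair price is credence times the stake."""
    return credence * stake

def mseu_price(credence, stake, wealth, u, tol=1e-9):
    """Highest price at which buying the bet has expected utility at
    least as great as abstaining, found by bisection on the price."""
    def eu_buy(price):
        win = u(wealth - price + stake)   # bet pays off
        lose = u(wealth - price)          # bet does not pay off
        return credence * win + (1 - credence) * lose

    baseline = u(wealth)                  # expected utility of abstaining
    lo, hi = 0.0, stake                   # eu_buy is decreasing in price
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if eu_buy(mid) >= baseline:
            lo = mid
        else:
            hi = mid
    return lo

u = math.log                              # a concave, risk-averse utility
p_rt = rt_price(0.5, 100)                 # 50.0
p_mseu = mseu_price(0.5, 100, 1000, u)    # strictly below 50
```

With logarithmic utility and wealth 1000, the MSEU price works out to roughly 48.75, below the RT price of 50, illustrating the disagreement the objection turns on.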
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme - normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation.
From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing, major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EUT was resolved in a Duhemian way, but this was not without major inefficiencies. Embedding explicit rationality considerations in experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources on minor findings, and sped up the Duhemian process of groping towards a choice among competing theories.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states), and numeric utilities assigned to consequences. Savage's derivation, however, is based on a well-known, highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
Suppose that it is rational to choose or intend a course of action if and only if the course of action maximizes some sort of expectation of some sort of value. What sort of value should this definition appeal to? According to an influential neo-Humean view, the answer is “Utility”, where utility is defined as a measure of subjective preference. According to a rival neo-Aristotelian view, the answer is “Choiceworthiness”, where choiceworthiness is an irreducibly normative notion of a course of action that is good in a certain way. The neo-Humean view requires preferences to be measurable by means of a utility function. Various interpretations of what exactly a “preference” is are explored, to see if there is any interpretation that supports the claim that a rational agent’s “preferences” must satisfy the “axioms” that are necessary for them to be measurable in this way. It is argued that the only interpretation that supports the idea that the rational agent’s preferences must meet these axioms interprets “preferences” as a kind of value-judgment. But this turns out to be a version of the neo-Aristotelian view, rather than the neo-Humean view. Rational intentions maximize expected choiceworthiness, not expected utility.
In his classic book “The Foundations of Statistics”, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single ‘individual preorder’. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms: completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi’s utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a ‘local expected utility’ condition popular in non-expected utility theory, then the social preorder has a ‘local expected total utility’ representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
The paper summarizes expected utility theory, both in its original von Neumann-Morgenstern version and its later developments, and discusses the normative claims to rationality made by this theory.
Some early phase clinical studies of candidate HIV cure and remission interventions appear to have adverse medical risk–benefit ratios for participants. Why, then, do people participate? And is it ethically permissible to allow them to participate? Recent work in decision theory sheds light on both of these questions, by casting doubt on the idea that rational individuals prefer choices that maximise expected utility, and therefore by casting doubt on the idea that researchers have an ethical obligation not to enrol participants in studies with high risk–benefit ratios. This work supports the view that researchers should instead defer to the considered preferences of the participants themselves. This essay briefly explains this recent work, and then explores its application to these two questions in more detail.
An expected utility model of individual choice is formulated which allows the decision maker to specify his available actions in the form of controls (partial contingency plans) and to simultaneously choose goals and controls in end-mean pairs. It is shown that the Savage expected utility model, the Marschak-Radner team model, the Bayesian statistical decision model, and the standard optimal control model can be viewed as special cases of this goal-control expected utility model.
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a Representation Theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
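The rank-dependent weighting the abstract describes can be sketched computationally. The formula below is a reconstruction of the Buchak-style REU value, not quoted from the paper: outcomes are ordered from worst to best, and each increment of utility is weighted by a risk function applied to the probability of doing at least that well, so that with the identity risk function REU reduces to ordinary expected utility.

```python
def reu(outcomes, probs, u, r):
    """Risk-weighted expected utility, sketched in the Buchak style.
    u: utility function; r: increasing risk function with r(0)=0, r(1)=1.
    With r(p) = p this reduces to ordinary expected utility."""
    # order outcomes from worst to best by utility
    pairs = sorted(zip(outcomes, probs), key=lambda op: u(op[0]))
    utils = [u(x) for x, _ in pairs]
    ps = [p for _, p in pairs]

    total = utils[0]                     # the worst outcome is guaranteed
    for j in range(1, len(utils)):
        prob_at_least = sum(ps[j:])      # chance of getting x_j or better
        total += r(prob_at_least) * (utils[j] - utils[j - 1])
    return total

gamble = ([0, 100], [0.5, 0.5])
eu_val = reu(*gamble, u=lambda x: x, r=lambda p: p)        # 50.0
averse = reu(*gamble, u=lambda x: x, r=lambda p: p ** 2)   # 25.0
```

The convex risk function r(p) = p² discounts the chance of the better outcome, so the same gamble is valued at 25 rather than 50, even with a linear utility function: this is the sense in which the weight an outcome gets depends on its position in the gamble, not just its probability.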
In Richard Bradley's book, Decision Theory with a Human Face (2017), we have selected two themes for discussion. The first is the Bolker-Jeffrey (BJ) theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility (EU) theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley’s foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley’s conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.
It is widely held that the influence of risk on rational decisions is not entirely explained by the shape of an agent’s utility curve. Buchak (Erkenntnis, 2013, Risk and rationality, Oxford University Press, Oxford, in press) presents an axiomatic decision theory, risk-weighted expected utility theory (REU), in which decision weights are the agent’s subjective probabilities modified by his risk-function r. REU is briefly described, and the global applicability of r is discussed. Rabin’s (Econometrica 68:1281–1292, 2000) calibration theorem strongly suggests that plausible levels of risk aversion cannot be fully explained by concave utility functions; this provides motivation for REU and other theories. But applied to the synchronic preferences of an individual agent, Rabin’s result is not as problematic as it may first appear. Theories that treat outcomes as gains and losses (e.g. prospect theory and cumulative prospect theory) account for risk sensitivity in a way not available to REU. Reference points that mark the difference between gains and losses are subject to framing, many instances of which cannot be regarded as rational. However, rational decision theory may recognize the difference between gains and losses, without endorsing all ways of fixing the point of reference. In any event, REU is a very interesting theory.
A cursory glance at the list of Nobel Laureates for Economics is sufficient to confirm Stanovich’s description of the project to evaluate human rationality as seminal. Herbert Simon, Reinhard Selten, John Nash, Daniel Kahneman, and others, were awarded their prizes less for their work in economics, per se, than for their work on rationality, as such. Although philosophical works have for millennia attempted to describe, explicate and evaluate individual and collective aspects of rationality, new impetus was brought to this endeavor over the last century as mathematical logic along with the social and behavioral sciences emerged. Yet more recently, over the last several decades, propelled by the emergence of artificial intelligence, cognitive science, evolutionary psychology, neuropsychology, and related fields, even more sophisticated approaches to the study of rationality have emerged.
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
In this paper, I argue for a new normative theory of rational choice under risk, namely expected comparative utility (ECU) theory. I first show that for any choice option, a, and for any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility (CU) of a in G—that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, I then argue that for any agent, S, faced with any decision under risk, S should rank his or her decision options (in terms of how choiceworthy they are) according to their comparative expected comparative utility (CECU) and should choose whichever option carries the greatest CECU. For any option, a, a’s CECU is the difference between its ECU and that of whichever alternative to a carries the greatest ECU, where a’s ECU is a probability-weighted sum of a’s CUs across the various possible states of the world. I lastly demonstrate that in some ordinary decisions under risk, ECU theory delivers different verdicts from those of standard decision theory.
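The definitions in the abstract translate directly into a small computation. The sketch below follows those definitions (CU, ECU, CECU); the option names and utility numbers are hypothetical illustrations.

```python
def ecu_rankings(utilities, probs):
    """Expected comparative utility, following the abstract's definitions.
    utilities[a][g] is the utility of option a in state g;
    probs[g] is the probability of state g."""
    options = list(utilities)
    states = range(len(probs))

    def cu(a, g):
        # CU of a in g: a's utility minus the best alternative's utility in g
        best_alt = max(utilities[b][g] for b in options if b != a)
        return utilities[a][g] - best_alt

    # ECU: probability-weighted sum of CUs across states
    ecu = {a: sum(probs[g] * cu(a, g) for g in states) for a in options}
    # CECU: an option's ECU minus the greatest ECU among its alternatives
    cecu = {a: ecu[a] - max(ecu[b] for b in options if b != a)
            for a in options}
    return ecu, cecu

utilities = {"stay": [10, 10], "gamble": [0, 25]}
probs = [0.5, 0.5]
ecu, cecu = ecu_rankings(utilities, probs)
best = max(cecu, key=cecu.get)  # the option the theory recommends
```

In this two-option example the CUs of "stay" are 10 and -15 across the two states, giving it an ECU of -2.5 and a CECU of -5, so the theory recommends "gamble".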
Neoclassical economists use expected utility theory to explain, predict, and prescribe choices under risk, that is, choices where the decision-maker knows---or at least deems suitable to act as if she knew---the relevant probabilities. Expected utility theory has been subject to both empirical and conceptual criticism. This chapter reviews expected utility theory and the main criticism it has faced. It ends with a brief discussion of subjective expected utility theory, which is the theory neoclassical economists use to explain, predict, and prescribe choices under uncertainty, that is, choices where the decision-maker cannot act on the basis of objective probabilities but must instead consult her own subjective probabilities.
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
Gauthier's version of the Lockean proviso (in Morals by Agreement) is inappropriate as the foundation for moral rights he takes it to be. This is so for a number of reasons. It lacks any proportionality test, thus allowing arbitrarily severe harms to others to prevent trivial harms to oneself. It allows one to inflict any harm on another provided that if one did not do so, someone else would. And, by interpreting the notion of bettering or worsening one's position in terms of subjective expected utility, it allows immoral manipulation of others and imposes unwarranted restrictions based on preferences that should carry no moral weight.
The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
This chapter concerns the nature of our obligations as individuals when it comes to our emissions-producing activities and climate change. The first half of the chapter argues that the popular ‘expected utility’ approach to this question faces a problematic dilemma: either it gives skeptical verdicts, saying that there are no such obligations, or it yields implausibly strong verdicts. The second half of the chapter diagnoses the problem. It is argued that the dilemma arises from a very general feature of the view, and thus is shared by other views as well. The chapter then discusses what an account of our individual obligations needs to look like if it is to avoid the dilemma. Finally, the discussion is extended beyond climate change to other collective impact contexts.
This paper (first published under the same title in Journal of Mathematical Economics, 29, 1998, p. 331-361) is a sequel to "Consistent Bayesian Aggregation", Journal of Economic Theory, 66, 1995, p. 313-351, by the same author. Both papers examine mathematically whether the following assumptions are compatible: the individuals and the group both form their preferences according to Subjective Expected Utility (SEU) theory, and the preferences of the group satisfy the Pareto principle with respect to those of the individuals. While the 1995 paper explored these assumptions in the axiomatic context of Savage's (1954-1972) SEU theory, the present paper explores them in the context of Anscombe and Aumann's (1963) alternative SEU theory. We first show that the problematic assumptions become compatible when the Anscombe-Aumann utility functions are state-dependent and no subjective probabilities are elicited. Then we show that the problematic assumptions become incompatible when the Anscombe-Aumann utility functions are state-dependent, as before, but subjective probabilities are elicited using a relevant technical scheme. This last result reinstates the impossibilities proved by the 1995 paper, and thus shows them to be robust with respect to the choice of the SEU axiomatic framework. The technical scheme used for the elicitation of subjective probabilities is that of Karni, Schmeidler and Vind (1983).
Whereas many others have scrutinized the Allais paradox from a theoretical angle, we study the paradox from an historical perspective and link our findings to a suggestion as to how decision theory could make use of it today. We emphasize that Allais proposed the paradox as a normative argument, concerned with ‘the rational man’ and not the ‘real man’, to use his words. Moreover, and more subtly, we argue that Allais had an unusual sense of the normative, being concerned not so much with the rationality of choices as with the rationality of the agent as a person. These two claims are buttressed by a detailed investigation – the first of its kind – of the 1952 Paris conference on risk, which set the context for the invention of the paradox, and a detailed reconstruction – also the first of its kind – of Allais’s specific normative argument from his numerous but allusive writings. The paper contrasts these interpretations of what the paradox historically represented, with how it generally came to function within decision theory from the late 1970s onwards: that is, as an empirical refutation of the expected utility hypothesis, and more specifically of the condition of von Neumann–Morgenstern independence that underlies that hypothesis. While not denying that this use of the paradox was fruitful in many ways, we propose another use that turns out also to be compatible with an experimental perspective. Following Allais’s hints on ‘the experimental definition of rationality’, this new use consists in letting the experiment itself speak of the rationality or otherwise of the subjects. In the 1970s, a short sequence of papers inspired by Allais implemented original ways of eliciting the reasons guiding the subjects’ choices, and claimed to be able to draw relevant normative consequences from this information. We end by reviewing this forgotten experimental avenue not simply historically, but with a view to recommending it for possible use by decision theorists today.
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1991a, 1991b, 1985, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast the umbrella theory proffered by Pope that she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory -- in particular, Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some technical properties in relation to other axioms of the system.
Decision theory is concerned with how agents should act when the consequences of their actions are uncertain. The central principle of contemporary decision theory is that the rational choice is the choice that maximizes subjective expected utility. This entry explains what this means, and discusses the philosophical motivations and consequences of the theory. The entry will consider some of the main problems and paradoxes that decision theory faces, and some of the responses that can be given. Finally, the entry will briefly consider how decision theory applies to choices involving more than one agent.
Normative theories can be useful in developing descriptive theories, as when normative subjective expected utility theory is used to develop descriptive rational choice theory and behavioral game theory. “Ought” questions are also the essence of theories of moral reasoning, a domain of higher mental processing that could not survive without normative considerations.
Social decisions are often made under great uncertainty – in situations where political principles, and even standard subjective expected utility, do not apply smoothly. In the first section, we argue that the core of this problem lies in decision theory itself – it is about how to act when we do not have an adequate representation of the context of the action and of its possible consequences. Thus, we distinguish two criteria to complement decision theory under ignorance – Laplace’s principle of insufficient reason and Wald’s maximin criterion. After that, we apply this analysis to political philosophy, by contrasting Harsanyi’s and Rawls’s theories of justice, respectively based on Laplace’s principle of insufficient reason and Wald’s maximin rule – and we end up highlighting the virtues of Rawls’s principle on practical grounds (it is intuitively attractive because of its computational simplicity, so providing a salient point for convergence) – and connect this argument to our moral intuitions and social norms requiring prudence in the case of decisions made for the sake of others.
Assuming that votes are independent, the epistemically optimal procedure in a binary collective choice problem is known to be a weighted supermajority rule with weights given by personal log-likelihood-ratios. It is shown here that an analogous result holds in a much more general model. Firstly, the result follows from a more basic principle than expected-utility maximisation, namely from an axiom (Epistemic Monotonicity) which requires neither utilities nor prior probabilities of the ‘correctness’ of alternatives. Secondly, a person’s input need not be a vote for an alternative, it may be any type of input, for instance a subjective degree of belief or probability of the correctness of one of the alternatives. The case of a profile of subjective degrees of belief is particularly appealing, since here no parameters such as competence parameters need to be known.
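The classical special case that this abstract generalizes (independent voters with known competences, weights given by log-likelihood-ratios) can be sketched as follows. The function name and the toy numbers are illustrative assumptions, not taken from the paper:

```python
import math

def weighted_majority(votes, competences):
    """Decide a binary question by the log-odds weighted supermajority rule.

    votes: list of +1/-1 ballots for the two alternatives.
    competences: each voter's probability of voting correctly, in (0, 1).
    Each ballot is weighted by log(p / (1 - p)), the voter's log-likelihood-ratio.
    """
    score = sum(v * math.log(p / (1 - p)) for v, p in zip(votes, competences))
    if score > 0:
        return +1
    if score < 0:
        return -1
    return 0  # exact tie

# One highly competent dissenter (p = 0.9) outweighs two mediocre voters (p = 0.6):
# weighted_majority([+1, +1, -1], [0.6, 0.6, 0.9]) favours -1.
```

The point of the weighting is visible in the example: a numerical majority can lose to a minority whose members are individually far more reliable.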
In this paper, I argue for a new normative theory of rational choice under risk, namely expected comparative utility (ECU) theory. I first show that for any choice option, a, and for any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility (CU) of a in G—that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, I then argue that for any agent, S, faced with any decision under risk, S should rank his or her decision options (in terms of how choiceworthy they are) according to their comparative expected comparative utility (CECU) and should choose whichever option carries the greatest CECU. For any option, a, a's CECU is the difference between its ECU and that of whichever alternative to a carries the greatest ECU, where a's ECU is a probability-weighted sum of a's CUs across the various possible states of the world. I lastly demonstrate that in some ordinary decisions under risk, ECU theory delivers different verdicts from those of standard decision theory.
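The definitions in this abstract (CU, ECU, CECU) are explicit enough to compute directly. A minimal sketch, assuming a two-option, two-state toy decision of my own devising rather than any example from the paper:

```python
def cu(option, state, utilities):
    """Comparative utility of `option` in `state`: its utility minus the
    greatest utility any alternative achieves in that same state."""
    best_alt = max(u[state] for name, u in utilities.items() if name != option)
    return utilities[option][state] - best_alt

def ecu(option, probs, utilities):
    """Expected comparative utility: probability-weighted sum of CUs."""
    return sum(p * cu(option, s, utilities) for s, p in probs.items())

def cecu(option, probs, utilities):
    """CECU: the option's ECU minus the greatest ECU among its alternatives."""
    best_alt = max(ecu(o, probs, utilities) for o in utilities if o != option)
    return ecu(option, probs, utilities) - best_alt

# Toy decision: utilities[option][state], with two equiprobable states.
utilities = {"a": {"G1": 10, "G2": 0}, "b": {"G1": 0, "G2": 5}}
probs = {"G1": 0.5, "G2": 0.5}
# cu("a", "G1") = 10, cu("a", "G2") = -5, so ecu("a") = 2.5 and cecu("a") = 5.0.
```

In this toy case CECU and standard expected utility happen to agree on the ranking; the abstract's claim is that in other ordinary cases they come apart.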
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
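The bare notion of (first-order) stochastic dominance that this abstract builds on can be checked mechanically for discrete lotteries. This minimal sketch (function name and tolerance are my own; it does not reproduce the paper's background-uncertainty results) tests whether one lottery gives at least as good a chance of clearing every payoff threshold, and a strictly better chance of clearing some:

```python
def stochastically_dominates(a, b):
    """True iff lottery `a` first-order stochastically dominates lottery `b`:
    for every payoff level t, P(a >= t) >= P(b >= t), with strict inequality
    for at least one t. Lotteries are dicts mapping payoff -> probability."""
    thresholds = sorted(set(a) | set(b))
    tail = lambda lot, t: sum(p for x, p in lot.items() if x >= t)
    diffs = [tail(a, t) - tail(b, t) for t in thresholds]
    eps = 1e-12  # tolerance for floating-point probability sums
    return all(d >= -eps for d in diffs) and any(d > eps for d in diffs)

# {1 or 2, 50/50} dominates {0 or 2, 50/50}: same chance of 2, better floor.
# stochastically_dominates({1: 0.5, 2: 0.5}, {0: 0.5, 2: 0.5}) is True.
```

Note that dominance is a partial order: no lottery dominates itself, and many pairs are simply incomparable, which is why the abstract's appeal to background uncertainty does real work.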
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z.
This paper is about two requirements on wish reports whose interaction motivates a novel semantics for these ascriptions. The first requirement concerns the ambiguities that arise when determiner phrases, e.g. definite descriptions, interact with `wish'. More specifically, several theorists have recently argued that attitude ascriptions featuring counterfactual attitude verbs license interpretations on which the determiner phrase is interpreted relative to the subject's beliefs. The second requirement involves the fact that desire reports in general require decision-theoretic notions for their analysis. The current study is motivated by the fact that no existing account captures both of these aspects of wishing. I develop a semantics for wish reports that makes available belief-relative readings but also allows decision-theoretic notions to play a role in shaping the truth conditions of these ascriptions. The general idea is that we can analyze wishing in terms of a two-dimensional notion of expected utility.
Instability occurs when the very fact of choosing one particular possible option rather than another affects the expected values of those possible options. In decision theory: An act is stable iff given that it is actually performed, its expected utility is maximal. When there is no stable choice available, the resulting instability can seem to pose a dilemma of practical rationality. A structurally very similar kind of instability, which occurs in cases of anti-expertise, can likewise seem to create dilemmas of epistemic rationality. One possible line of response to such cases of instability, suggested by both Jeffrey (1983) and Sorensen (1987), is to insist that a rational agent can simply refuse to accept that such instability applies to herself in the first place. According to this line of thought it can be rational for a subject to discount even very strong empirical evidence that the anti-expertise condition obtains. I present a new variety of anti-expertise condition where no particular empirical stage-setting is required, since the subject can deduce a priori that an anti-expertise condition obtains. This kind of anti-expertise case is therefore not amenable to the line of response that Jeffrey and Sorensen recommend.
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility formula, are subject to a number of counter-examples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counter-examples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity. To give the strategy more structure, philosophers have developed “principles of individuation”; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion. We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. A more general lesson, simply put, is that there is no easy way out from the paradoxes of decision theory.
This paper argues that the types of intention can be modeled as modal operators. I delineate the intensional-semantic profiles of the types of intention, and provide a precise account of how the types of intention are unified in virtue of both their operations in a single, encompassing, epistemic modal space, and their role in practical reasoning. I endeavor to provide reasons adducing against the proposal that the types of intention are reducible to the mental states of belief and desire, where the former state is codified by subjective probability measures and the latter is codified by a utility function. I argue, instead, that each of the types of intention -- i.e., intention-in-action, intention-as-explanation, and intention-for-the-future -- has as its aim the value of an outcome of the agent's action, as derived by her partial beliefs and assignments of utility, and as codified by the value of expected utility in evidential decision theory.
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
This paper examines how the concepts of utility, impartiality, and universality worked together to form the foundation of Adam Smith's jurisprudence. It argues that the theory of utility consistent with contemporary rational choice theory is insufficient to account for Smith's use of utility. Smith's jurisprudence relies on the impartial spectator's sympathetic judgment over whether third parties are injured, and not on individuals' expected utility associated with their expected gains from rendering judgments over innocence or guilt.
I argue that prioritarianism cannot be assessed in abstraction from an account of the measure of utility. Rather, the soundness of this view crucially depends on what counts as a greater, lesser, or equal increase in a person’s utility. In particular, prioritarianism cannot accommodate a normatively compelling measure of utility that is captured by the axioms of John von Neumann and Oskar Morgenstern’s expected utility theory. Nor can it accommodate a plausible and elegant generalization of this theory that has been offered in response to challenges to von Neumann and Morgenstern. This is, I think, a theoretically interesting and unexpected source of difficulty for prioritarianism, which I explore in this article.
This paper examines distinctive discourse properties of preposed negative 'yes/no' questions (NPQs), such as 'Isn’t Jane coming too?'. Unlike with other 'yes/no' questions, using an NPQ '∼p?' invariably conveys a bias toward a particular answer, where the polarity of the bias is opposite of the polarity of the question: using the negative question '∼p?' invariably expresses that the speaker previously expected the positive answer p to be correct. A prominent approach—what I call the context-management approach, developed most extensively by Romero and Han (2004)—attempts to capture speaker expectation biases by treating NPQs fundamentally as epistemic questions about the proper discourse status of a proposition. I raise challenges for existing context-managing accounts to provide more adequate formalizations of the posited context-managing content, its implementation in the compositional semantics and discourse dynamics, and its role in generating the observed biases. New data regarding discourse differences between NPQs and associated epistemic modal questions are introduced. I argue that we can capture the roles of NPQs in expressing speakers’ states of mind and managing the discourse common ground without positing special context-managing operators or treating NPQs as questions directly about the context. I suggest that we treat the operator introduced with preposed negation as having an ordinary semantics of epistemic necessity, though lexically associated with a general kind of endorsing use observed with modal expressions. The expressive and context-managing roles of NPQs are explained in terms of a general kind of discourse-oriented use of context-sensitive language. The distinctive expectation biases and discourse properties observed with NPQs are derived from the proposed semantics and a general principle of Discourse Relevance.
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which Direct Difference Taking avoids.
Among recent objections to Pascal's Wager, two are especially compelling. The first is that decision theory, and specifically the requirement of maximizing expected utility, is incompatible with infinite utility values. The second is that even if infinite utility values are admitted, the argument of the Wager is invalid provided that we allow mixed strategies. Furthermore, Hájek has shown that reformulations of Pascal's Wager that address these criticisms inevitably lead to arguments that are philosophically unsatisfying and historically unfaithful. Both the objections and Hájek's philosophical worries disappear, however, if we represent our preferences using relative utilities rather than a one-place utility function. Relative utilities provide a conservative way to make sense of infinite value that preserves the familiar equation of rationality with the maximization of expected utility. They also provide a means of investigating a broader class of problems related to the Wager.
Some propositions are more epistemically important than others. Further, how important a proposition is is often a contingent matter—some propositions count more in some worlds than in others. Epistemic Utility Theory cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue there isn’t any good way out for EUT.
We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
The topic of this thesis is axiological uncertainty – the question of how you should evaluate your options if you are uncertain about which axiology is true. As an answer, I defend Expected Value Maximisation (EVM), the view that one option is better than another if and only if it has the greater expected value across axiologies. More precisely, I explore the axiomatic foundations of this view. I employ results from state-dependent utility theory, extend them in various ways and interpret them accordingly, and thus provide axiomatisations of EVM as a theory of axiological uncertainty.
Most prominent arguments favoring the widespread discretionary business practice of sending jobs overseas, known as ‘offshoring,’ attempt to justify the trend by appeal to utilitarian principles. It is argued that when business can be performed more cost-effectively offshore, doing so tends, over the long term, to achieve the greatest good for the greatest number. This claim is supported by evidence that exporting jobs actively promotes economic development overseas while simultaneously increasing the revenue of the exporting country. After showing that offshoring might indeed be justified on utilitarian grounds, I argue that according to Rawlsian social-contract theory, the practice is nevertheless irrational and unjust. For it unfairly expects the people of a given society to accept job-gain benefits to peoples of other societies as outweighing job-loss hardships it imposes on itself. Finally, I conclude that contrary to socialism, which relies much more on government control, capitalism constitutes a social contract that places a particularly strong moral obligation on corporations themselves to refrain from offshoring.
One guide to an argument's significance is the number and variety of refutations it attracts. By this measure, the Dutch book argument has considerable importance. Of course this measure alone is not a sure guide to locating arguments deserving of our attention—if a decisive refutation has really been given, we are better off pursuing other topics. But the presence of many and varied counterarguments at least suggests that either the refutations are controversial, or that their target admits of more than one interpretation, or both. The main point of this paper is to focus on a way of understanding the Dutch Book argument (DBA) that avoids many of the well-known criticisms, and to consider how it fares against an important criticism that still remains: the objection that the DBA presupposes value-independence of bets.
In this paper, we provide a Bayesian analysis of the well-known surprise exam paradox. Central to our analysis is a probabilistic account of what it means for the student to accept the teacher's announcement that he will receive a surprise exam. According to this account, the student can be said to have accepted the teacher's announcement provided he adopts a subjective probability distribution relative to which he expects to receive the exam on a day on which he expects not to receive it. We show that as long as expectation is not equated with subjective certainty there will be contexts in which it is possible for the student to accept the teacher's announcement, in this sense. In addition, we show how a Bayesian modeling of the scenario can yield plausible explanations of the following three intuitive claims: (1) the teacher's announcement becomes easier to accept the more days there are in class; (2) a strict interpretation of the teacher's announcement does not provide the student with any categorical information as to the date of the exam; and (3) the teacher's announcement contains less information about the date of the exam the more days there are in class. To conclude, we show how the surprise exam paradox can be seen as one among the larger class of paradoxes of doxastic fallibilism, foremost among which is the paradox of the preface.
According to epistemic utility theory, epistemic rationality is teleological: epistemic norms are instrumental norms that have the aim of acquiring accuracy. What’s definitive of these norms is that they can be expected to lead to the acquisition of accuracy when followed. While there’s much to be said in favor of this approach, it turns out that it faces a couple of worrisome extensional problems involving the future. The first problem involves credences about the future, and the second problem involves future credences. Examining prominent solutions to a different extensional problem for this approach reinforces the severity of the two problems involving the future. Reflecting on these problems reveals the source: the teleological assumption that epistemic rationality aims at acquiring accuracy.