In Richard Bradley’s book, Decision Theory with a Human Face, we have selected two themes for discussion. The first is the Bolker-Jeffrey (BJ) theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility (EU) theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley’s foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley’s conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.
The paper summarizes expected utility theory, both in its original von Neumann-Morgenstern version and its later developments, and discusses the normative claims to rationality made by this theory.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 1950s, but they would have lacked "bon sens" in not acknowledging their defeat in the 1980s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is combined with a secondary theme, normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation.
On my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing, major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EU theory was resolved in a Duhemian way, but not without major inefficiencies. To embody explicit rationality considerations into experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources on minor findings, and speeded up the Duhemian process of groping towards a choice among competing theories.
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1985, 1991a, 1991b, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory, and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast, the umbrella theory proffered by Pope, which she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single ‘individual preorder’. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms (completeness, continuity, and independence), at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi’s utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a ‘local expected utility’ condition popular in non-expected utility theory, then the social preorder has a ‘local expected total utility’ representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
In this paper, I argue for a new normative theory of rational choice under risk, namely expected comparative utility (ECU) theory. I first show that for any choice option, a, and for any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility (CU) of a in G—that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, I then argue that for any agent, S, faced with any decision under risk, S should rank his or her decision options (in terms of how choiceworthy they are) according to their comparative expected comparative utility (CECU) and should choose whichever option carries the greatest CECU. For any option, a, a’s CECU is the difference between its ECU and that of whichever alternative to a carries the greatest ECU, where a’s ECU is a probability-weighted sum of a’s CUs across the various possible states of the world. I lastly demonstrate that in some ordinary decisions under risk, ECU theory delivers different verdicts from those of standard decision theory.
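The definitions in this abstract lend themselves to a direct computation. Below is a minimal sketch of CU, ECU, and CECU as just described; the three options, two states, utilities, and probabilities are all invented for illustration and do not come from the paper:

```python
# Hypothetical example: utilities of three options across two equiprobable states.
options = {"a": {"G1": 10, "G2": 0},
           "b": {"G1": 8, "G2": 6},
           "c": {"G1": 0, "G2": 5}}
probs = {"G1": 0.5, "G2": 0.5}

def cu(opt, state):
    """Comparative utility of opt in a state: its utility minus that of the
    best alternative in that same state."""
    best_alt = max(u[state] for name, u in options.items() if name != opt)
    return options[opt][state] - best_alt

def ecu(opt):
    """Expected comparative utility: probability-weighted sum of CUs."""
    return sum(probs[s] * cu(opt, s) for s in probs)

def cecu(opt):
    """Comparative ECU: opt's ECU minus the best alternative's ECU."""
    best_alt = max(ecu(name) for name in options if name != opt)
    return ecu(opt) - best_alt

ranking = sorted(options, key=cecu, reverse=True)  # ['b', 'a', 'c']
```

In this toy case the CECU ranking happens to coincide with the standard expected utility ranking; the paper's claim is only that the two verdicts come apart in some ordinary decisions under risk.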
In this paper, I argue that standard decision theory, otherwise known as expected utility (EU) theory, is a false theory of instrumental rationality. In its place, I argue for a new theory of instrumental rationality, namely expected comparative utility (ECU) theory. The argument starts from the premise that we require a graded, quantitative measure of choiceworthiness in our theory of instrumental rationality. I then present several lines of reasoning in support of the new theory, two of which are as follows: First, compared to the standard criterion of instrumentally rational choice, viz., rational preference, the proposed criterion of instrumentally rational choice, viz., choiceworthiness, supplies a more plausible measure of the extent to which any given option is more choiceworthy than any other in any ranking of more than two choice options, both in decision situations involving certainty and decision situations involving risk. Second, I appeal to Ralph Wedgwood’s “Gandalf’s principle”, an eminently plausible decision-theoretic principle according to which the choiceworthiness of an option in a given state of the world should be measured only relative to the values of the other options in that state, and not to the values of the options in other states. I show, finally, that in some ordinary decisions under risk, ECU theory delivers different verdicts from those of EU theory.
Some early phase clinical studies of candidate HIV cure and remission interventions appear to have adverse medical risk–benefit ratios for participants. Why, then, do people participate? And is it ethically permissible to allow them to participate? Recent work in decision theory sheds light on both of these questions, by casting doubt on the idea that rational individuals prefer choices that maximise expected utility, and therefore by casting doubt on the idea that researchers have an ethical obligation not to enrol participants in studies with high risk–benefit ratios. This work supports the view that researchers should instead defer to the considered preferences of the participants themselves. This essay briefly explains this recent work, and then explores its application to these two questions in more detail.
In this article we explore an argumentative pattern that provides a normative justification for expected utility functions grounded on empirical evidence, showing how it worked in three different episodes of their development. The argument claims that we should prudentially maximize our expected utility since this is the criterion effectively applied by those who are considered wisest in making risky choices (be it gamblers or businessmen). Yet, to justify the adoption of this rule, it should be proven that this is empirically true: i.e., that a given function allows us to predict the choices of that particular class of agents. We show how expected utility functions were introduced and contested in accordance with this pattern in the 18th century, and how it recurred in the 1950s when Allais made his case against the neo-Bernoullians.
According to epistemic utility theory, epistemic rationality is teleological: epistemic norms are instrumental norms that have the aim of acquiring accuracy. What’s definitive of these norms is that they can be expected to lead to the acquisition of accuracy when followed. While there’s much to be said in favor of this approach, it turns out that it faces a couple of worrisome extensional problems involving the future. The first problem involves credences about the future, and the second problem involves future credences. Examining prominent solutions to a different extensional problem for this approach reinforces the severity of the two problems involving the future. Reflecting on these problems reveals the source: the teleological assumption that epistemic rationality aims at acquiring accuracy.
An expected utility model of individual choice is formulated which allows the decision maker to specify his available actions in the form of controls (partial contingency plans) and to simultaneously choose goals and controls in end-mean pairs. It is shown that the Savage expected utility model, the Marschak-Radner team model, the Bayesian statistical decision model, and the standard optimal control model can be viewed as special cases of this goal-control expected utility model.
Some propositions are more epistemically important than others. Further, how important a proposition is is often a contingent matter—some propositions count more in some worlds than in others. Epistemic Utility Theory (EUT) cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue there isn’t any good way out for EUT.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
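The notion of stochastic dominance this abstract appeals to can be stated concretely. A minimal sketch follows, with invented lotteries given as (probability, payoff) pairs; note that the emergence of dominance from background uncertainty argued for in the paper requires unbounded background distributions and is not reproduced in this finite example:

```python
def cdf(lottery, t):
    """P(payoff <= t) for a lottery given as (probability, payoff) pairs."""
    return sum(p for p, x in lottery if x <= t)

def dominates(l1, l2):
    """First-order stochastic dominance: l1's CDF is never above l2's,
    and is strictly below it somewhere."""
    ts = sorted({x for _, x in l1} | {x for _, x in l2})
    return (all(cdf(l1, t) <= cdf(l2, t) for t in ts)
            and any(cdf(l1, t) < cdf(l2, t) for t in ts))

gamble = [(0.5, 0.0), (0.5, 10.0)]   # expected value 5
sure_thing = [(1.0, 4.0)]            # expected value 4
# "In a vacuum" the higher-expectation gamble does not stochastically
# dominate the sure thing, since it is more likely to yield a low total:
assert not dominates(gamble, sure_thing)
```

This illustrates why dominance reasoning alone is silent in such cases absent background uncertainty: each option is more likely than the other to fall below some threshold.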
Suppose that you prefer A to B, B to C, and C to A. Your preferences violate Expected Utility Theory by being cyclic. Money-pump arguments offer a way to show that such violations are irrational. Suppose that you start with A. Then you should be willing to trade A for C and then C for B. But then, once you have B, you are offered a trade back to A for a small cost. Since you prefer A to B, you pay the small sum to trade from B to A. But now you have been turned into a money pump. You are back to the alternative you started with but with less money. This Element shows how each of the axioms of Expected Utility Theory can be defended by money-pump arguments of this kind. The Element also defends money-pump arguments from the standard objections to this kind of approach. This title is also available as Open Access on Cambridge Core.
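The money-pump sequence described in this abstract can be simulated directly. A minimal sketch with an invented fee; the cyclic preferences and the trade sequence are the ones given above:

```python
# Cyclic preferences: A is preferred to B, B to C, and C to A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts(holding, offered):
    """The agent trades whenever the offered option is strictly preferred."""
    return (offered, holding) in prefers

holding, money = "A", 0.0
# Trade A for C, then C for B, then pay a small fee to get A back.
for offered, fee in [("C", 0.0), ("B", 0.0), ("A", 1.0)]:
    if accepts(holding, offered):
        holding, money = offered, money - fee

# The agent ends where it started, minus the fee: a money pump.
assert holding == "A" and money == -1.0
```

Each individual trade looks favourable to the agent, yet the sequence as a whole leaves the agent strictly worse off, which is the sense in which cyclic preferences are exploitable.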
To appear in Lambert, E. and J. Schwenkler (eds.), Transformative Experience (OUP). L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul’s challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty, et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul’s behalf (Moss 2017) — we’ll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss’ novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss’ reply and argue that it fails. I argue first that it fails as a reply made on Paul’s behalf, since it forces us to abandon many of the features of Paul’s challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul’s intentions.
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
Although expected utility theory has proven a fruitful and elegant theory in the finite realm, attempts to generalize it to infinite values have resulted in many paradoxes. In this paper, we argue that the use of John Conway's surreal numbers can provide a firm mathematical foundation for transfinite decision theory. To that end, we prove a surreal representation theorem and show that our surreal decision theory respects dominance reasoning even in the case of infinite values. We then bring our theory to bear on one of the more venerable decision problems in the literature: Pascal's Wager. Analyzing the wager showcases our theory's virtues and advantages. We analyze two objections against the wager: Mixed Strategies and Many Gods. After formulating the two objections in the framework of surreal utilities and probabilities, our theory correctly predicts that (1) the pure Pascalian strategy beats all mixed strategies, and (2) what one should do in a Pascalian decision problem depends on what one's credence function is like. Our analysis therefore suggests that although Pascal's Wager is mathematically coherent, it does not deliver what it purports to: a rationally compelling argument that people should lead a religious life regardless of how confident they are in theism and its alternatives.
A de minimis risk is defined as a risk that is so small that it may be legitimately ignored when making a decision. While ignoring small risks is common in our day-to-day decision making, attempts to introduce the notion of a de minimis risk into the framework of decision theory have run up against a series of well-known difficulties. In this paper, I will develop an enriched decision theoretic framework that is capable of overcoming two major obstacles to the modelling of de minimis risk. The key move is to introduce, into decision theory, a non-probabilistic conception of risk known as normic risk.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states) and numeric utilities assigned to consequences. Savage's derivation, however, is based on a well-known, highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
I argue that prioritarianism cannot be assessed in abstraction from an account of the measure of utility. Rather, the soundness of this view crucially depends on what counts as a greater, lesser, or equal increase in a person’s utility. In particular, prioritarianism cannot accommodate a normatively compelling measure of utility that is captured by the axioms of John von Neumann and Oskar Morgenstern’s expected utility theory. Nor can it accommodate a plausible and elegant generalization of this theory that has been offered in response to challenges to von Neumann and Morgenstern. This is, I think, a theoretically interesting and unexpected source of difficulty for prioritarianism, which I explore in this article.
The topic of this thesis is axiological uncertainty – the question of how you should evaluate your options if you are uncertain about which axiology is true. As an answer, I defend Expected Value Maximisation (EVM), the view that one option is better than another if and only if it has the greater expected value across axiologies. More precisely, I explore the axiomatic foundations of this view. I employ results from state-dependent utility theory, extend them in various ways and interpret them accordingly, and thus provide axiomatisations of EVM as a theory of axiological uncertainty.
This article argues that Lara Buchak’s risk-weighted expected utility (REU) theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory’s claim to capturing a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
This paper examines how the concepts of utility, impartiality, and universality worked together to form the foundation of Adam Smith's jurisprudence. It argues that the theory of utility consistent with contemporary rational choice theory is insufficient to account for Smith's use of utility. Smith's jurisprudence relies on the impartial spectator's sympathetic judgment over whether third parties are injured, and not on individuals' expected utility associated with their expected gains from rendering judgments over innocence or guilt.
This paper argues that instrumental rationality is more permissive than expected utility theory. The most compelling instrumentalist argument in favour of separability, its core requirement, is that agents with non-separable preferences end up badly off by their own lights in some dynamic choice problems. I argue that once we focus on the question of whether agents’ attitudes to uncertain prospects help define their ends in their own right, or instead only assign instrumental value in virtue of the outcomes they may lead to, we see that the argument must fail. Either attitudes to prospects assign non-instrumental value in their own right, in which case we cannot establish the irrationality of the dynamic choice behaviour of agents with non-separable preferences. Or they don’t, in which case agents with non-separable preferences can avoid the problematic choice behaviour without adopting separable preferences.
A review of some major topics of debate in normative decision theory from circa 2007 to 2019. Topics discussed include the ongoing debate between causal and evidential decision theory, decision instability, risk-weighted expected utility theory, decision-making with incomplete preferences, and decision-making with imprecise credences.
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which Direct Difference Taking avoids.
In his classic book “The Foundations of Statistics” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under infinite countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
Among recent objections to Pascal's Wager, two are especially compelling. The first is that decision theory, and specifically the requirement of maximizing expected utility, is incompatible with infinite utility values. The second is that even if infinite utility values are admitted, the argument of the Wager is invalid provided that we allow mixed strategies. Furthermore, Hájek has shown that reformulations of Pascal's Wager that address these criticisms inevitably lead to arguments that are philosophically unsatisfying and historically unfaithful. Both the objections and Hájek's philosophical worries disappear, however, if we represent our preferences using relative utilities rather than a one-place utility function. Relative utilities provide a conservative way to make sense of infinite value that preserves the familiar equation of rationality with the maximization of expected utility. They also provide a means of investigating a broader class of problems related to the Wager.
The standard formulation of Newcomb's problem compares evidential and causal conceptions of expected utility, with those maximizing evidential expected utility tending to end up far richer. Thus, in a world in which agents face Newcomb problems, the evidential decision theorist might ask the causal decision theorist: "if you're so smart, why ain’cha rich?” Ultimately, however, the expected riches of evidential decision theorists in Newcomb problems do not vindicate their theory, because their success does not generalize. Consider a theory that allows the agents who employ it to end up rich in worlds containing Newcomb problems and continues to outperform in other cases. This type of theory, which I call a “success-first” decision theory, is motivated by the desire to draw a tighter connection between rationality and success, rather than to support any particular account of expected utility. The primary aim of this paper is to provide a comprehensive justification of success-first decision theories as accounts of rational decision. I locate this justification in an experimental approach to decision theory supported by the aims of methodological naturalism.
Most prominent arguments favoring the widespread discretionary business practice of sending jobs overseas, known as ‘offshoring,’ attempt to justify the trend by appeal to utilitarian principles. It is argued that when business can be performed more cost-effectively offshore, doing so tends, over the long term, to achieve the greatest good for the greatest number. This claim is supported by evidence that exporting jobs actively promotes economic development overseas while simultaneously increasing the revenue of the exporting country. After showing that offshoring might indeed be justified on utilitarian grounds, I argue that according to Rawlsian social-contract theory, the practice is nevertheless irrational and unjust. For it unfairly expects the people of a given society to accept job-gain benefits to peoples of other societies as outweighing job-loss hardships it imposes on itself. Finally, I conclude that contrary to socialism, which relies much more on government control, capitalism constitutes a social contract that places a particularly strong moral obligation on corporations themselves to refrain from offshoring.
I have claimed that risk-weighted expected utility maximizers are rational, and that their preferences cannot be captured by expected utility theory. Richard Pettigrew and Rachael Briggs have recently challenged these claims. Both authors argue that only EU-maximizers are rational. In addition, Pettigrew argues that the preferences of REU-maximizers can indeed be captured by EU theory, and Briggs argues that REU-maximizers lose a valuable tool for simplifying their decision problems. I hold that their arguments do not succeed and that my original claims still stand. However, their arguments do highlight some costs of REU theory.
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a Representation Theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
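The dependence of an outcome's weight on its "position in the gamble" can be made concrete with Buchak's rank-dependent formula. In the sketch below (the gamble and the risk functions are my own illustrative choices, not examples from the book), outcomes are ordered from worst to best, and each utility increment is weighted by the risk function r applied to the probability of doing at least that well:

```python
# A sketch of risk-weighted expected utility (REU) in Buchak's rank-dependent
# form: REU = u(worst) + sum over better outcomes of
#   r(P(getting at least that outcome)) * (utility increment).
# With r(p) = p this reduces to ordinary expected utility.

def reu(gamble, r):
    """gamble: list of (probability, utility) pairs; r: risk function on [0, 1]."""
    outs = sorted(gamble, key=lambda pu: pu[1])      # order outcomes worst to best
    total = outs[0][1]                               # the guaranteed minimum utility
    for i in range(1, len(outs)):
        p_at_least = sum(p for p, _ in outs[i:])     # prob. of doing at least this well
        total += r(p_at_least) * (outs[i][1] - outs[i - 1][1])
    return total

fifty_fifty = [(0.5, 0.0), (0.5, 100.0)]             # a coin flip between 0 and 100 utils
ev = reu(fifty_fifty, lambda p: p)                   # r(p) = p: plain EU, 50.0
cautious = reu(fifty_fifty, lambda p: p ** 2)        # convex r downweights the upside: 25.0
```

The risk-avoidant agent with r(p) = p² values the coin flip at 25 rather than 50, even though her utility function over the outcomes is untouched; that separation of risk attitude from utility is the point of the representation theorem described above.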
Decision theory is concerned with how agents should act when the consequences of their actions are uncertain. The central principle of contemporary decision theory is that the rational choice is the choice that maximizes subjective expected utility. This entry explains what this means, and discusses the philosophical motivations and consequences of the theory. The entry will consider some of the main problems and paradoxes that decision theory faces, and some of the responses that can be given. Finally, the entry will briefly consider how decision theory applies to choices involving more than one agent.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
According to Stephen Finlay, ‘A ought to X’ means that X-ing is more conducive to contextually salient ends than relevant alternatives. This in turn is analysed in terms of probability. I show why this theory of ‘ought’ is hard to square with a theory of a reason’s weight which could explain why ‘A ought to X’ logically entails that the balance of reasons favours that A X-es. I develop two theories of weight to illustrate my point. I first look at the prospects of a theory of weight based on expected utility theory. I then suggest a simpler theory. Although neither allows that ‘A ought to X’ logically entails that the balance of reasons favours that A X-es, this price may be accepted. For there remains a strong pragmatic relation between these claims.
This paper is about two requirements on wish reports whose interaction motivates a novel semantics for these ascriptions. The first requirement concerns the ambiguities that arise when determiner phrases, e.g. definite descriptions, interact with ‘wish’. More specifically, several theorists have recently argued that attitude ascriptions featuring counterfactual attitude verbs license interpretations on which the determiner phrase is interpreted relative to the subject's beliefs. The second requirement involves the fact that desire reports in general require decision-theoretic notions for their analysis. The current study is motivated by the fact that no existing account captures both of these aspects of wishing. I develop a semantics for wish reports that makes available belief-relative readings but also allows decision-theoretic notions to play a role in shaping the truth conditions of these ascriptions. The general idea is that we can analyze wishing in terms of a two-dimensional notion of expected utility.
Standard decision theory, or rational choice theory, is often interpreted to be a theory of instrumental rationality. This dissertation argues, however, that the core requirements of orthodox decision theory cannot be defended as general requirements of instrumental rationality. Instead, I argue that these requirements can only be instrumentally justified to agents who have a desire to have choice dispositions that are stable over time and across different choice contexts. Past attempts at making instrumentalist arguments for the core requirements of decision theory fail due to a pervasive assumption in decision theory, namely the assumption that the agent’s preferences over the objects of choice – be it outcomes or uncertain prospects – form the standard of instrumental rationality against which the agent’s actions are evaluated. I argue that we should instead take more basic desires to be the standard of instrumental rationality. But unless agents have a desire to have stable choice dispositions, according to this standard, instrumental rationality turns out to be more permissive than orthodox decision theory.
Expected Utility in 3D. Jean Baccelli - forthcoming - In Reflections on the Foundations of Statistics: Essays in Honor of Teddy Seidenfeld.
Consider a subjective expected utility preference relation. It is usually held that the representations which this relation admits differ only in one respect, namely, the possible scales for the measurement of utility. In this paper, I discuss the fact that there are, metaphorically speaking, two additional dimensions along which infinitely many more admissible representations can be found. The first additional dimension is that of state-dependence. The second—and, in this context, much lesser-known—additional dimension is that of act-dependence. The simplest implication of their usually neglected existence is that the standard axiomatizations of subjective expected utility fail to provide the measurement of subjective probability with satisfactory behavioral foundations.
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018)--I call this the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
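One way the two pricing rules can come apart is when utility is nonlinear in money. The toy computation below is my own illustration, not an example from the papers cited above: the wealth level, the square-root utility function, and the bisection search are all invented. RT prices the bet at credence times stake, while an MSEU agent with concave utility stops slightly short of that price.

```python
# A toy divergence between Ramsey's Thesis (RT) and MSEU pricing for a
# simple bet, assuming utility is concave in money: u(w) = sqrt(w).
from math import sqrt

wealth = 100.0   # starting wealth (illustrative)
stake = 36.0     # the bet pays `stake` if the event obtains, nothing otherwise
credence = 0.5   # the agent's credence in the event

# RT: the required price is credence times stake.
rt_price = credence * stake  # 18.0

def eu_buy(price):
    """Expected utility of buying the bet at `price`."""
    return credence * sqrt(wealth - price + stake) + (1 - credence) * sqrt(wealth - price)

# MSEU: the highest acceptable price leaves buying no worse in expected
# utility than abstaining. Since eu_buy is decreasing in price, find the
# break-even price by bisection.
lo, hi = 0.0, stake
for _ in range(60):
    mid = (lo + hi) / 2.0
    if eu_buy(mid) >= sqrt(wealth):
        lo = mid
    else:
        hi = mid
mseu_price = lo  # strictly below rt_price, because sqrt is concave
```

With linear utility the two prices coincide, which is one way of seeing why the objection only bites once realistic (risk-averse) utility functions are in play.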
Neoclassical economists use expected utility theory to explain, predict, and prescribe choices under risk, that is, choices where the decision-maker knows---or at least deems suitable to act as if she knew---the relevant probabilities. Expected utility theory has been subject to both empirical and conceptual criticism. This chapter reviews expected utility theory and the main criticism it has faced. It ends with a brief discussion of subjective expected utility theory, which is the theory neoclassical economists use to explain, predict, and prescribe choices under uncertainty, that is, choices where the decision-maker cannot act on the basis of objective probabilities but must instead consult her own subjective probabilities.
It is widely held that the influence of risk on rational decisions is not entirely explained by the shape of an agent’s utility curve. Buchak (Erkenntnis, 2013, Risk and rationality, Oxford University Press, Oxford, in press) presents an axiomatic decision theory, risk-weighted expected utility theory (REU), in which decision weights are the agent’s subjective probabilities modified by his risk-function r. REU is briefly described, and the global applicability of r is discussed. Rabin’s (Econometrica 68:1281–1292, 2000) calibration theorem strongly suggests that plausible levels of risk aversion cannot be fully explained by concave utility functions; this provides motivation for REU and other theories. But applied to the synchronic preferences of an individual agent, Rabin’s result is not as problematic as it may first appear. Theories that treat outcomes as gains and losses (e.g. prospect theory and cumulative prospect theory) account for risk sensitivity in a way not available to REU. Reference points that mark the difference between gains and losses are subject to framing, many instances of which cannot be regarded as rational. However, rational decision theory may recognize the difference between gains and losses, without endorsing all ways of fixing the point of reference. In any event, REU is a very interesting theory.
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility formula, are subject to a number of counter-examples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counter-examples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity. To give the strategy more structure, philosophers have developed “principles of individuation”; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion. We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. A more general lesson, simply put, is that there is no easy way out from the paradoxes of decision theory.
The value of knowledge can vary in that knowledge of important facts is more valuable than knowledge of trivialities. This variation in the value of knowledge is mirrored by a variation in evidential standards. Matters of greater importance require greater evidential support. But all knowledge, however trivial, needs to be evidentially certain. So on one hand we have a variable evidential standard that depends on the value of the knowledge, and on the other, we have the invariant standard of evidential certainty. This paradox in the concept of knowledge runs deep in the history of philosophy. We approach this paradox by proposing a bet settlement theory of knowledge. Degrees of belief can be measured by the expected value of a bet divided by stake size, with the highest degree of belief being probability 1, or certainty. Evidence sufficient to settle the bet makes the expectation equal to the stake size and therefore has evidential probability 1. This gives us the invariant evidential certainty standard for knowledge. The value of knowledge relative to a bet is given by the stake size. We propose that evidential probability can vary with stake size, so that evidential certainty at low stakes does not entail evidential certainty at high stakes. This solves the paradox by allowing that certainty is necessary for knowledge at any stakes, but that the evidential standards for knowledge vary according to what is at stake. We give a Stake Size Variation Principle that calculates evidential probability from the value of evidence and the stakes. Stake size variant degrees of belief are probabilistically coherent and explain a greater range of preferences than orthodox expected utility theory, namely the Ellsberg and Allais preferences. The resulting theory of knowledge gives an empirically adequate, rationally grounded, unified account of evidence, value and probability.
In this paper I offer an account of the normative dimension implicit in D. Bernoulli’s expected utility functions by means of an analysis of the juridical metaphors upon which the concept of mathematical expectation was moulded. Following a suggestion by the late E. Coumet, I show how this concept incorporated a certain standard of justice which was put in question by the St. Petersburg paradox. I contend that Bernoulli would have solved it by introducing an alternative normative criterion rather than a positive model of decision making processes.
This paper argues that the types of intention can be modeled both as modal operators and via a multi-hyperintensional semantics. I delineate the semantic profiles of the types of intention, and provide a precise account of how the types of intention are unified in virtue of both their operations in a single, encompassing, epistemic space, and their role in practical reasoning. I endeavor to provide reasons adducing against the proposal that the types of intention are reducible to the mental states of belief and desire, where the former state is codified by subjective probability measures and the latter is codified by a utility function. I argue, instead, that each of the types of intention -- i.e., intention-in-action, intention-as-explanation, and intention-for-the-future -- has as its aim the value of an outcome of the agent's action, as derived by her partial beliefs and assignments of utility, and as codified by the value of expected utility in evidential decision theory.
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
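For readers who have not met the gamble, the divergence behind the paradox is elementary: the St. Petersburg gamble pays 2^n if the first head lands on toss n, which has probability 0.5^n, so each term of the expectation contributes exactly 1 and the truncated expected value grows without bound. This is the standard textbook arithmetic, not a result from the paper above:

```python
# The St. Petersburg gamble: payoff 2**n with probability 0.5**n for the
# first head landing on toss n. Each term of the expected-value series is
# (0.5**n) * (2**n) = 1, so the partial sums diverge linearly.

def st_petersburg_partial_ev(n_terms):
    """Expected value of the gamble truncated after n_terms tosses."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))
```

Because every partial sum equals the number of terms, no finite value (hence no value any outcome could have) can be assigned to the full gamble by summation, which is the difficulty the authors' countable principles are designed to regiment.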