In Richard Bradley's book Decision Theory with a Human Face (2017), we have selected two themes for discussion. The first is the Bolker-Jeffrey (BJ) theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility (EU) theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley's foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley's conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.
The paper summarizes expected utility theory, both in its original von Neumann-Morgenstern version and its later developments, and discusses the normative claims to rationality made by this theory.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme - normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation.
From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing, major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EUT was resolved in a Duhemian way, but this was not without major inefficiencies. Embedding explicit rationality considerations in experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources to get only minor findings, and sped up the Duhemian process of groping towards a choice among competing theories.
This paper proposes a new theory of rational choice, Expected Comparative Utility (ECU) Theory. It is first argued that for any decision option, a, and any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility of a in G – that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, it is then argued, roughly speaking, that an agent should rank her decision options (in terms of how choiceworthy they are) according to their expected comparative utility. For any decision option, a, the expected comparative utility of a is the probability-weighted average of the comparative utilities of a across the different states of the world. It is lastly demonstrated that in a number of decision cases, ECU Theory delivers different verdicts from those of standard decision theory.
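The ranking rule defined in this abstract can be sketched in a few lines of code. The options, states, utilities, and probabilities below are illustrative assumptions, not taken from the paper; the sketch only shows how comparative utility and its expectation are computed.

```python
# Sketch of Expected Comparative Utility (ECU), per the definition above.
# Utilities and state probabilities are illustrative made-up numbers.

def comparative_utility(utilities, option, state):
    """Utility of `option` in `state`, minus the best alternative's utility there."""
    best_alternative = max(u[state] for o, u in utilities.items() if o != option)
    return utilities[option][state] - best_alternative

def expected_comparative_utility(utilities, probs, option):
    """Probability-weighted average of comparative utilities across states."""
    return sum(p * comparative_utility(utilities, option, s)
               for s, p in probs.items())

# Two options, two equiprobable states of the world.
utilities = {"a": {"G1": 10, "G2": 0}, "b": {"G1": 0, "G2": 4}}
probs = {"G1": 0.5, "G2": 0.5}

ecu_a = expected_comparative_utility(utilities, probs, "a")  # 0.5*10 + 0.5*(-4) = 3.0
ecu_b = expected_comparative_utility(utilities, probs, "b")  # 0.5*(-10) + 0.5*4 = -3.0
```

In this toy case ECU agrees with standard expected utility about the ranking; the paper's point is that in other cases the two come apart.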
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1985, 1991a, 1991b, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast, the umbrella theory proffered by Pope, which she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single 'individual preorder'. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms (completeness, continuity, and independence) at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi's utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a 'local expected utility' condition popular in non-expected utility theory, then the social preorder has a 'local expected total utility' representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
Some early phase clinical studies of candidate HIV cure and remission interventions appear to have adverse medical risk–benefit ratios for participants. Why, then, do people participate? And is it ethically permissible to allow them to participate? Recent work in decision theory sheds light on both of these questions, by casting doubt on the idea that rational individuals prefer choices that maximise expected utility, and therefore by casting doubt on the idea that researchers have an ethical obligation not to enrol participants in studies with high risk–benefit ratios. This work supports the view that researchers should instead defer to the considered preferences of the participants themselves. This essay briefly explains this recent work, and then explores its application to these two questions in more detail.
An expected utility model of individual choice is formulated which allows the decision maker to specify his available actions in the form of controls (partial contingency plans) and to simultaneously choose goals and controls in end-mean pairs. It is shown that the Savage expected utility model, the Marschak–Radner team model, the Bayesian statistical decision model, and the standard optimal control model can be viewed as special cases of this goal-control expected utility model.
To appear in Lambert, E. and J. Schwenkler (eds.), Transformative Experience (OUP). L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul's challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty, et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul's behalf (Moss 2017) — we'll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss's novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss's reply and argue that it fails. I argue first that it fails as a reply made on Paul's behalf, since it forces us to abandon many of the features of Paul's challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul's intentions.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage's postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
Although expected utility theory has proven a fruitful and elegant theory in the finite realm, attempts to generalize it to infinite values have resulted in many paradoxes. In this paper, we argue that John Conway's surreal numbers provide a firm mathematical foundation for transfinite decision theory. To that end, we prove a surreal representation theorem and show that our surreal decision theory respects dominance reasoning even in the case of infinite values. We then bring our theory to bear on one of the more venerable decision problems in the literature: Pascal's Wager. Analyzing the wager showcases our theory's virtues and advantages. To that end, we analyze two objections against the wager: Mixed Strategies and Many Gods. After formulating the two objections in the framework of surreal utilities and probabilities, our theory correctly predicts that (1) the pure Pascalian strategy beats all mixed strategies, and (2) what one should do in a Pascalian decision problem depends on what one's credence function is like. Our analysis therefore suggests that although Pascal's Wager is mathematically coherent, it does not deliver what it purports to: a rationally compelling argument that people should lead a religious life regardless of how confident they are in theism and its alternatives.
Some propositions are more epistemically important than others. Further, how important a proposition is is often a contingent matter—some propositions count more in some worlds than in others. Epistemic Utility Theory cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue there isn't any good way out for EUT.
This article argues that Lara Buchak's risk-weighted expected utility theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory's claim to capturing a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
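For readers unfamiliar with the underlying relation, first-order stochastic dominance between finite gambles can be checked mechanically. This is only a toy illustration of the dominance relation the abstract appeals to, with made-up gambles and utility levels; it does not reproduce the paper's results about background uncertainty.

```python
# Minimal first-order stochastic dominance check between finite gambles.
# A gamble is a list of (probability, utility) pairs; `levels` is the set
# of utility thresholds to compare at. All numbers are illustrative.

def p_better_than(gamble, threshold):
    """Probability that the gamble yields strictly more than `threshold` utility."""
    return sum(p for p, u in gamble if u > threshold)

def dominates(g1, g2, levels):
    """g1 stochastically dominates g2 iff it is at least as likely to exceed
    every threshold, and strictly more likely to exceed some threshold."""
    weakly = all(p_better_than(g1, t) >= p_better_than(g2, t) for t in levels)
    strictly = any(p_better_than(g1, t) > p_better_than(g2, t) for t in levels)
    return weakly and strictly

g1 = [(0.5, 1), (0.5, 3)]   # shifts every payoff of g2 up by one unit
g2 = [(0.5, 0), (0.5, 2)]
levels = [0, 1, 2, 3]

result = dominates(g1, g2, levels)   # True: g1 dominates g2
reverse = dominates(g2, g1, levels)  # False
```

The paper's observation, roughly, is that adding sufficient background noise to both sides of a comparison can turn a merely expectation-maximizing option into a stochastically dominant one in this sense.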
This paper argues that instrumental rationality is more permissive than expected utility theory. The most compelling instrumentalist argument in favour of separability, its core requirement, is that agents with non-separable preferences end up badly off by their own lights in some dynamic choice problems. I argue that once we focus on the question of whether agents' attitudes to uncertain prospects help define their ends in their own right, or instead only assign instrumental value in virtue of the outcomes they may lead to, we see that the argument must fail. Either attitudes to prospects assign non-instrumental value in their own right, in which case we cannot establish the irrationality of the dynamic choice behaviour of agents with non-separable preferences. Or they don't, in which case agents with non-separable preferences can avoid the problematic choice behaviour without adopting separable preferences.
According to Stephen Finlay, 'A ought to X' means that X-ing is more conducive to contextually salient ends than relevant alternatives. This in turn is analysed in terms of probability. I show why this theory of 'ought' is hard to square with a theory of a reason's weight which could explain why 'A ought to X' logically entails that the balance of reasons favours that A X-es. I develop two theories of weight to illustrate my point. I first look at the prospects of a theory of weight based on expected utility theory. I then suggest a simpler theory. Although neither allows that 'A ought to X' logically entails that the balance of reasons favours that A X-es, this price may be accepted. For there remains a strong pragmatic relation between these claims.
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a representation theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
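On the standard presentation of REU theory, each utility increment above the gamble's worst outcome is weighted by the risk function applied to the probability of getting at least that much. The sketch below follows that rank-dependent recipe; the gamble, utilities, and risk functions are illustrative assumptions, not examples from the paper.

```python
# Hedged sketch of risk-weighted expected utility (REU): outcomes are
# ordered from worst to best, and each utility increment over the previous
# outcome is weighted by r(probability of getting that outcome or better).

def reu(gamble, r):
    """gamble: list of (probability, utility) pairs; r: risk function on [0, 1]."""
    ordered = sorted(gamble, key=lambda pu: pu[1])   # worst outcome first
    total = ordered[0][1]                            # guaranteed minimum utility
    for i in range(1, len(ordered)):
        p_at_least = sum(p for p, _ in ordered[i:])  # prob. of i-th outcome or better
        total += r(p_at_least) * (ordered[i][1] - ordered[i - 1][1])
    return total

coin_flip = [(0.5, 0), (0.5, 100)]
risk_averse = lambda p: p ** 2   # underweights chancy gains
risk_neutral = lambda p: p       # identity risk function recovers EU

reu_averse = reu(coin_flip, risk_averse)    # 0 + 0.25 * 100 = 25.0
reu_neutral = reu(coin_flip, risk_neutral)  # 0 + 0.5 * 100 = 50.0, i.e. plain EU
```

With the identity risk function the computation collapses to ordinary expected utility, illustrating the abstract's claim that EU maximization is a special case of REU.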
I have claimed that risk-weighted expected utility maximizers are rational, and that their preferences cannot be captured by expected utility theory. Richard Pettigrew and Rachael Briggs have recently challenged these claims. Both authors argue that only EU-maximizers are rational. In addition, Pettigrew argues that the preferences of REU-maximizers can indeed be captured by EU theory, and Briggs argues that REU-maximizers lose a valuable tool for simplifying their decision problems. I hold that their arguments do not succeed and that my original claims still stand. However, their arguments do highlight some costs of REU theory.
This chapter of the Handbook of Utility Theory aims at covering the connections between utility theory and social ethics. The chapter first discusses the philosophical interpretations of utility functions, then explains how social choice theory uses them to represent interpersonal comparisons of welfare in either utilitarian or non-utilitarian representations of social preferences. The chapter also contains an extensive account of John Harsanyi's formal reconstruction of utilitarianism and its developments in the later literature, especially when society faces uncertainty rather than probabilistic risk.
Standard decision theory, or rational choice theory, is often interpreted to be a theory of instrumental rationality. This dissertation argues, however, that the core requirements of orthodox decision theory cannot be defended as general requirements of instrumental rationality. Instead, I argue that these requirements can only be instrumentally justified to agents who have a desire to have choice dispositions that are stable over time and across different choice contexts. Past attempts at making instrumentalist arguments for the core requirements of decision theory fail due to a pervasive assumption in decision theory, namely the assumption that the agent's preferences over the objects of choice – be it outcomes or uncertain prospects – form the standard of instrumental rationality against which the agent's actions are evaluated. I argue that we should instead take more basic desires to be the standard of instrumental rationality. But unless agents have a desire to have stable choice dispositions, according to this standard, instrumental rationality turns out to be more permissive than orthodox decision theory.
Can a group be a standard rational agent? This would require the group to hold aggregate preferences which maximise expected utility and change only by Bayesian updating. Group rationality is possible, but the only preference aggregation rules which support it (and are minimally Paretian and continuous) are the linear-geometric rules, which combine individual tastes linearly and individual beliefs geometrically.
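The geometric half of such a rule can be illustrated concretely: the pooled probability of each outcome is proportional to the weighted geometric mean of the individual probabilities, renormalized to sum to one. The weights and beliefs below are illustrative assumptions, and the linear aggregation of tastes is omitted.

```python
import math

# Sketch of geometric belief aggregation: aggregate probabilities are
# proportional to products of individual probabilities raised to fixed
# weights, then renormalized. Weights and beliefs are illustrative.

def geometric_pool(beliefs, weights):
    """beliefs: list of dicts from outcomes to probabilities; weights sum to 1."""
    outcomes = beliefs[0].keys()
    unnormalized = {
        o: math.prod(b[o] ** w for b, w in zip(beliefs, weights))
        for o in outcomes
    }
    z = sum(unnormalized.values())
    return {o: v / z for o, v in unnormalized.items()}

# Two equally weighted agents with mirror-image beliefs pool to indifference.
pooled = geometric_pool(
    [{"rain": 0.8, "sun": 0.2}, {"rain": 0.2, "sun": 0.8}],
    [0.5, 0.5],
)
```

A known attraction of geometric (rather than linear) pooling of beliefs is that it commutes with Bayesian updating, which is what the Bayesian-updating requirement in the abstract turns on.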
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018) — I call this the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
In the situation known as the "cable guy paradox" the expected utility principle and the "avoid certain frustration" principle (ACF) seem to give contradictory advice about what one should do. This article tries to resolve the paradox by presenting an example that weakens the grip of ACF: a modified version of the cable guy problem is introduced in which the choice dictated by ACF loses much of its intuitive appeal.
Andy Egan recently drew attention to a class of decision situations that provide a certain kind of informational feedback, which he claims constitute a counterexample to causal decision theory. Arntzenius and Wallace have sought to vindicate a form of CDT by describing a dynamic process of deliberation that culminates in a "mixed" decision. I show that, for many of the cases in question, this proposal depends on an incorrect way of calculating expected utilities, and argue that it is therefore unsuccessful. I then tentatively defend an alternative proposal by Joyce, which produces a similar process of dynamic deliberation but for a different reason.
The Lockean Thesis says that you must believe p iff you're sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
Many argue that absolutist moral theories -- those that prohibit particular kinds of actions or trade-offs under all circumstances -- cannot adequately account for the permissibility of risky actions. In this dissertation, I defend various versions of absolutism against this critique, using overlooked resources from formal decision theory. Against the prevailing view, I argue that almost all absolutist moral theories can give systematic and plausible verdicts about what to do in risky cases. In doing so, I show that critics have overlooked: (1) the fact that absolutist theories -- and moral theories more generally -- underdetermine their formal decision-theoretic representations; and (2) the fact that decision theories themselves can be generalised to better accommodate distinctively absolutist commitments. Overall, this dissertation demonstrates that we can navigate a risky world without compromising our moral commitments.
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
A layered approach to evaluating action alternatives in continuous time for decision making under the moral doctrine of Negative Utilitarianism is presented and briefly discussed from a philosophical perspective.
With the growing focus on prevention in medicine, studies of how to describe risk have become increasingly important. Recently, some researchers have argued against giving patients "comparative risk information," such as data about whether their baseline risk of developing a particular disease is above or below average. The concern is that giving patients this information will interfere with their consideration of more relevant data, such as the specific chance of getting the disease (the "personal risk"), the risk reduction the treatment provides, and any possible side effects. I explore this view and the theories of rationality that ground it, and I argue instead that comparative risk information can play a positive role in decision-making. The criticism of disclosing this sort of information to patients, I conclude, rests on a mistakenly narrow account of the goals of prevention and the nature of rational choice in medicine.
Using "brute reason" I will show why there can be only one valid interpretation of probability. The valid interpretation turns out to be a further refinement of Popper's propensity interpretation of probability. Via some famous probability puzzles and new thought experiments I will show how all other interpretations of probability fail, in particular the Bayesian interpretations, while these puzzles do not present any difficulties for the interpretation proposed here. In addition, the new interpretation casts doubt on some concepts often taken as basic and unproblematic, like rationality, utility and expectation. This in turn has implications for decision theory, economic theory and the philosophy of physics.
It is widely held that the influence of risk on rational decisions is not entirely explained by the shape of an agent's utility curve. Buchak (Erkenntnis, 2013; Risk and Rationality, Oxford University Press, in press) presents an axiomatic decision theory, risk-weighted expected utility theory (REU), in which decision weights are the agent's subjective probabilities modified by his risk-function r. REU is briefly described, and the global applicability of r is discussed. Rabin's (Econometrica 68:1281–1292, 2000) calibration theorem strongly suggests that plausible levels of risk aversion cannot be fully explained by concave utility functions; this provides motivation for REU and other theories. But applied to the synchronic preferences of an individual agent, Rabin's result is not as problematic as it may first appear. Theories that treat outcomes as gains and losses (e.g. prospect theory and cumulative prospect theory) account for risk sensitivity in a way not available to REU. Reference points that mark the difference between gains and losses are subject to framing, many instances of which cannot be regarded as rational. However, rational decision theory may recognize the difference between gains and losses, without endorsing all ways of fixing the point of reference. In any event, REU is a very interesting theory.
I argue that prioritarianism cannot be assessed in abstraction from an account of the measure of utility. Rather, the soundness of this view crucially depends on what counts as a greater, lesser, or equal increase in a person's utility. In particular, prioritarianism cannot accommodate a normatively compelling measure of utility that is captured by the axioms of John von Neumann and Oskar Morgenstern's expected utility theory. Nor can it accommodate a plausible and elegant generalization of this theory that has been offered in response to challenges to von Neumann and Morgenstern. This is, I think, a theoretically interesting and unexpected source of difficulty for prioritarianism, which I explore in this article.
In his classic book “The Foundations of Statistics,” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
Popper's well-known demarcation criterion has often been understood to distinguish statements of empirical science according to their logical form. Implicit in this interpretation of Popper's philosophy is the belief that when the universe of discourse of the empirical scientist is infinite, empirical universal sentences are falsifiable but not verifiable, whereas the converse holds for existential sentences. A remarkable elaboration of this belief is to be found in Watkins's early work on the statements he calls “all-and-some” (AS), such as: “For every metal there is a melting point.” All-and-some statements are both universally and existentially quantified, in that order. Watkins argued that AS statements should be regarded as both nonfalsifiable and nonverifiable, for they partake in the logical fate of both universal and existential statements. This claim is subject to the proviso that the bound variables are “uncircumscribed,” i.e., that the universe of discourse is infinite.
This paper examines how the concepts of utility, impartiality, and universality worked together to form the foundation of Adam Smith's jurisprudence. It argues that the theory of utility consistent with contemporary rational choice theory is insufficient to account for Smith's use of utility. Smith's jurisprudence relies on the impartial spectator's sympathetic judgment over whether third parties are injured, not on individuals' expected utility associated with their expected gains from rendering judgments over innocence or guilt.
Decision theory is concerned with how agents should act when the consequences of their actions are uncertain. The central principle of contemporary decision theory is that the rational choice is the choice that maximizes subjective expected utility. This entry explains what this means, and discusses the philosophical motivations and consequences of the theory. The entry will consider some of the main problems and paradoxes that decision theory faces, and some of the responses that can be given. Finally, the entry will briefly consider how decision theory applies to choices involving more than one agent.
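The maximization principle just stated can be made concrete with a small sketch (a hypothetical umbrella example of our own; the state names, outcomes, and numbers are purely illustrative):

```python
def expected_utility(act, probabilities, utility):
    """Subjective expected utility of an act: sum, over the possible states
    of the world, of the probability of the state times the utility of the
    outcome the act yields in that state."""
    return sum(p * utility(act[state]) for state, p in probabilities.items())

def best_act(acts, probabilities, utility):
    """The rational choice on the standard view: the act that maximizes
    subjective expected utility."""
    return max(acts, key=lambda name: expected_utility(acts[name], probabilities, utility))

# Hypothetical decision: carry an umbrella or not, under uncertainty about rain.
probabilities = {"rain": 0.3, "dry": 0.7}
acts = {
    "umbrella": {"rain": "dry but encumbered", "dry": "encumbered"},
    "no umbrella": {"rain": "soaked", "dry": "unencumbered"},
}
utility = {"soaked": 0, "encumbered": 6, "dry but encumbered": 7, "unencumbered": 10}.get

print(best_act(acts, probabilities, utility))  # "no umbrella" (EU 7.0 beats 6.3)
```

Here the umbrella's EU is 0.3 × 7 + 0.7 × 6 = 6.3, against 0.3 × 0 + 0.7 × 10 = 7.0 for going without, so the expected utility maximizer leaves the umbrella at home.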
The topic of this thesis is axiological uncertainty – the question of how you should evaluate your options if you are uncertain about which axiology is true. As an answer, I defend Expected Value Maximisation (EVM), the view that one option is better than another if and only if it has the greater expected value across axiologies. More precisely, I explore the axiomatic foundations of this view. I employ results from state-dependent utility theory, extend them in various ways and interpret them accordingly, and thus provide axiomatisations of EVM as a theory of axiological uncertainty.
One guide to an argument's significance is the number and variety of refutations it attracts. By this measure, the Dutch book argument has considerable importance. Of course this measure alone is not a sure guide to locating arguments deserving of our attention—if a decisive refutation has really been given, we are better off pursuing other topics. But the presence of many and varied counterarguments at least suggests that either the refutations are controversial, or that their target admits of more than one interpretation, or both. The main point of this paper is to focus on a way of understanding the Dutch Book argument (DBA) that avoids many of the well-known criticisms, and to consider how it fares against an important criticism that still remains: the objection that the DBA presupposes the value-independence of bets.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
The Preface Paradox, first introduced by David Makinson (1965), presents a plausible scenario where an agent is evidentially certain of each of a set of propositions without being evidentially certain of the conjunction of the set of propositions. Given reasonable assumptions about the nature of evidential certainty, this appears to be a straightforward contradiction. We solve the paradox by appeal to stake size sensitivity, which is the claim that evidential probability is sensitive to stake size. The argument is that because the informational content in the conjunction is greater than the sum of the informational content of the conjuncts, the stake size in the conjunction is higher than the sum of the stake sizes in the conjuncts. We present a theory of evidential probability that identifies knowledge with value and allows for coherent stake-sensitive beliefs. An agent’s beliefs are represented two-dimensionally as a bid-ask spread, which gives a bid price and an ask price for bets at each stake size. The bid-ask spread gets wider when there is less valuable evidence relative to the stake size, and narrower when there is more valuable evidence, according to a simple formula. The bid-ask spread can represent the uncertainty in the first-order probabilistic judgement. According to the theory it can be coherent to be evidentially certain at low stakes, but less than certain at high stakes, and therefore there is no contradiction in the Preface. The theory not only solves the paradox, but also gives a good model of decisions under risk that overcomes many of the problems associated with classic expected utility theory.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states) and numeric utilities assigned to consequences. Savage's derivation, however, is based on a highly problematic well-known assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption; we need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
Suppose that it is rational to choose or intend a course of action if and only if the course of action maximizes some sort of expectation of some sort of value. What sort of value should this definition appeal to? According to an influential neo-Humean view, the answer is “Utility”, where utility is defined as a measure of subjective preference. According to a rival neo-Aristotelian view, the answer is “Choiceworthiness”, where choiceworthiness is an irreducibly normative notion of a course of action that is good in a certain way. The neo-Humean view requires preferences to be measurable by means of a utility function. Various interpretations of what exactly a “preference” is are explored, to see if there is any interpretation that supports the claim that a rational agent’s “preferences” must satisfy the “axioms” that are necessary for them to be measurable in this way. It is argued that the only interpretation that supports the idea that the rational agent’s preferences must meet these axioms interprets “preferences” as a kind of value-judgment. But this turns out to be a version of the neo-Aristotelian view, rather than the neo-Humean view. Rational intentions maximize expected choiceworthiness, not expected utility.
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite-horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
In this chapter I examine how expected-value theory might inform responses to what I call the dual-use problem. I begin by defining that problem. I then outline a procedure, which invokes expected-value theory, for tackling it. I first illustrate the procedure with the aid of a simplified schematic example of a dual-use problem, and then describe how it might also guide responses to more complex real-world cases. I outline some attractive features of the procedure. Finally, I consider whether and how the procedure might be amended to accommodate various criticisms of it.
The value of knowledge can vary in that knowledge of important facts is more valuable than knowledge of trivialities. This variation in the value of knowledge is mirrored by a variation in evidential standards. Matters of greater importance require greater evidential support. But all knowledge, however trivial, needs to be evidentially certain. So on the one hand we have a variable evidential standard that depends on the value of the knowledge, and on the other, we have the invariant standard of evidential certainty. This paradox in the concept of knowledge runs deep in the history of philosophy. We approach this paradox by proposing a bet settlement theory of knowledge. Degrees of belief can be measured by the expected value of a bet divided by stake size, with the highest degree of belief being probability 1, or certainty. Evidence sufficient to settle the bet makes the expectation equal to the stake size and therefore has evidential probability 1. This gives us the invariant evidential certainty standard for knowledge. The value of knowledge relative to a bet is given by the stake size. We propose that evidential probability can vary with stake size, so that evidential certainty at low stakes does not entail evidential certainty at high stakes. This solves the paradox by allowing that certainty is necessary for knowledge at any stakes, but that the evidential standards for knowledge vary according to what is at stake. We give a Stake Size Variation Principle that calculates evidential probability from the value of evidence and the stakes. Stake-size-variant degrees of belief are probabilistically coherent and explain a greater range of preferences than orthodox expected utility theory, namely the Ellsberg and Allais preferences. The resulting theory of knowledge gives an empirically adequate, rationally grounded, unified account of evidence, value and probability.
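The bet-settlement measure of degrees of belief described in this abstract can be put in a single line (a toy illustration of our own; the function and argument names are assumptions, not the authors' notation):

```python
def degree_of_belief(expected_payoff, stake):
    """Degree of belief in a proposition p, measured as the expected value of
    a bet that pays out `stake` if p is true, divided by the stake size.
    Bet-settling evidence pushes the expectation up to the full stake,
    yielding probability 1 (certainty)."""
    return expected_payoff / stake

# A bet paying 100 if p, whose evidence-based expectation is 80:
print(degree_of_belief(80, 100))   # 0.8
# Evidence sufficient to settle the bet: expectation equals the stake.
print(degree_of_belief(100, 100))  # 1.0
```

On the stake-size-sensitive view sketched in the abstract, the same agent could assign probability 1 to p when the stake is small but a lower value when the stake is large; the formula relating evidential value and stakes is the authors' Stake Size Variation Principle, which the abstract does not spell out, so it is not reproduced here.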
The desirability of what actually occurs is often influenced by what could have been. Preferences based on such value dependencies between actual and counterfactual outcomes generate a class of problems for orthodox decision theory, the best-known perhaps being the so-called Allais Paradox. In this paper we solve these problems by extending Richard Jeffrey's decision theory to counterfactual prospects, using a multidimensional possible-world semantics for conditionals, and showing that preferences that are sensitive to counterfactual considerations can still be desirability maximising. We end the paper by investigating the conditions necessary and sufficient for a desirability function to be an expected utility. It turns out that the additional conditions imply highly implausible epistemic principles.
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “direct difference taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which direct difference taking avoids.
The standard formulation of Newcomb's problem compares evidential and causal conceptions of expected utility, with those maximizing evidential expected utility tending to end up far richer. Thus, in a world in which agents face Newcomb problems, the evidential decision theorist might ask the causal decision theorist: “If you’re so smart, why ain’cha rich?” Ultimately, however, the expected riches of evidential decision theorists in Newcomb problems do not vindicate their theory, because their success does not generalize. Consider a theory that allows the agents who employ it to end up rich in worlds containing Newcomb problems and continues to outperform in other cases. This type of theory, which I call a “success-first” decision theory, is motivated by the desire to draw a tighter connection between rationality and success, rather than to support any particular account of expected utility. The primary aim of this paper is to provide a comprehensive justification of success-first decision theories as accounts of rational decision. I locate this justification in an experimental approach to decision theory supported by the aims of methodological naturalism.