We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single 'individual preorder'. The individual preorder then uniquely determines a social preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms, completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi's utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a 'local expected utility' condition popular in non-expected utility theory, then the social preorder has a 'local expected total utility' representation. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
Some early phase clinical studies of candidate HIV cure and remission interventions appear to have adverse medical risk–benefit ratios for participants. Why, then, do people participate? And is it ethically permissible to allow them to participate? Recent work in decision theory sheds light on both of these questions, by casting doubt on the idea that rational individuals prefer choices that maximise expected utility, and therefore by casting doubt on the idea that researchers have an ethical obligation not to enrol participants in studies with high risk–benefit ratios. This work supports the view that researchers should instead defer to the considered preferences of the participants themselves. This essay briefly explains this recent work, and then explores its application to these two questions in more detail.
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018)--I call this the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
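The disagreement between RT and MSEU can be sketched numerically. The following is an illustrative toy example, not taken from the paper: all numbers and the concave utility function are hypothetical assumptions.

```python
# Sketch: how Ramsey's Thesis (RT) and Maximise Subjective Expected Utility
# (MSEU) can disagree about the price of a bet. Hypothetical parameters.

def rt_price(credence, stake):
    """RT: the fair price of a bet paying `stake` if the proposition is true."""
    return credence * stake

def eu_of_buying(credence, wealth, stake, price, u):
    """Expected utility of buying the bet at `price` from initial `wealth`."""
    return credence * u(wealth + stake - price) + (1 - credence) * u(wealth - price)

u = lambda x: x ** 0.5          # assumed concave utility: a risk-averse agent
credence, wealth, stake = 0.5, 100.0, 50.0

price = rt_price(credence, stake)                       # RT says: pay 25
eu_buy = eu_of_buying(credence, wealth, stake, price, u)
eu_decline = u(wealth)

# For this risk-averse agent, buying at the RT price lowers expected utility,
# so MSEU forbids a price that RT deems fair.
print(price, eu_buy < eu_decline)   # 25.0 True
```

With linear utility the two verdicts coincide; the concavity is what opens the gap the objection exploits.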
The paper summarizes expected utility theory, both in its original von Neumann-Morgenstern version and its later developments, and discusses the normative claims to rationality made by this theory.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme - normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation.
From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EUT was resolved in a Duhemian way, but this was not without major inefficiencies. To embody explicit rationality considerations into experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources to get only minor findings, and speeded up the Duhemian process of groping towards a choice among competing theories.
An expected utility model of individual choice is formulated which allows the decision maker to specify his available actions in the form of controls (partial contingency plans) and to simultaneously choose goals and controls in end-mean pairs. It is shown that the Savage expected utility model, the Marschak-Radner team model, the Bayesian statistical decision model, and the standard optimal control model can be viewed as special cases of this goal-control expected utility model.
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1991a, 1991b, 1985, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast the umbrella theory proffered by Pope that she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
This paper proposes a new theory of rational choice, Expected Comparative Utility (ECU) Theory. It is first argued that for any decision option, a, and any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility of a in G – that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, it is then argued, roughly speaking, that an agent should rank her decision options (in terms of how choiceworthy they are) according to their expected comparative utility. For any decision option, a, the expected comparative utility of a is the probability-weighted average of the comparative utilities of a across the different states of the world. It is lastly demonstrated that in a number of decision cases, ECU Theory delivers different verdicts from those of standard decision theory.
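The two-step computation the abstract describes can be sketched directly. This is a minimal illustration with hypothetical payoffs and state probabilities, not an example from the paper.

```python
# Sketch of ECU Theory's two steps: comparative utility within a state,
# then its probability-weighted average across states. Hypothetical numbers.

# utilities[option][state]: assumed utility of each option in each state
utilities = {"a": {"G1": 10, "G2": 0}, "b": {"G1": 4, "G2": 5}}
probs = {"G1": 0.5, "G2": 0.5}   # assumed state probabilities

def comparative_utility(option, state):
    """Utility of `option` in `state`, minus the best alternative's utility there."""
    best_alternative = max(u[state] for opt, u in utilities.items() if opt != option)
    return utilities[option][state] - best_alternative

def ecu(option):
    """Probability-weighted average of comparative utilities across states."""
    return sum(probs[s] * comparative_utility(option, s) for s in probs)

print(ecu("a"), ecu("b"))   # 0.5 -0.5
```

Here option a's large advantage in G1 outweighs its deficit in G2, so ECU ranks a above b.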
Suppose that it is rational to choose or intend a course of action if and only if the course of action maximizes some sort of expectation of some sort of value. What sort of value should this definition appeal to? According to an influential neo-Humean view, the answer is "Utility", where utility is defined as a measure of subjective preference. According to a rival neo-Aristotelian view, the answer is "Choiceworthiness", where choiceworthiness is an irreducibly normative notion of a course of action that is good in a certain way. The neo-Humean view requires preferences to be measurable by means of a utility function. Various interpretations of what exactly a "preference" is are explored, to see if there is any interpretation that supports the claim that a rational agent's "preferences" must satisfy the "axioms" that are necessary for them to be measurable in this way. It is argued that the only interpretation that supports the idea that the rational agent's preferences must meet these axioms interprets "preferences" as a kind of value-judgment. But this turns out to be a version of the neo-Aristotelian view, rather than the neo-Humean view. Rational intentions maximize expected choiceworthiness, not expected utility.
Although expected utility theory has proven a fruitful and elegant theory in the finite realm, attempts to generalize it to infinite values have resulted in many paradoxes. In this paper, we argue that the use of John Conway's surreal numbers can provide a firm mathematical foundation for transfinite decision theory. To that end, we prove a surreal representation theorem and show that our surreal decision theory respects dominance reasoning even in the case of infinite values. We then bring our theory to bear on one of the more venerable decision problems in the literature: Pascal's Wager. Analyzing the wager showcases our theory's virtues and advantages. To that end, we analyze two objections against the wager: Mixed Strategies and Many Gods. After formulating the two objections in the framework of surreal utilities and probabilities, our theory correctly predicts that (1) the pure Pascalian strategy beats all mixed strategies, and (2) what one should do in a Pascalian decision problem depends on what one's credence function is like. Our analysis therefore suggests that although Pascal's Wager is mathematically coherent, it does not deliver what it purports to, a rationally compelling argument that people should lead a religious life regardless of how confident they are in theism and its alternatives.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will generally not require agents to accept extreme gambles like Pascal's Mugging or the St. Petersburg game. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
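The first-order stochastic dominance test the abstract builds on is easy to state in code. The lotteries below are hypothetical toy examples, not the paper's cases: lottery A dominates B when A's cumulative distribution function never exceeds B's and lies strictly below it somewhere.

```python
# Sketch of a first-order stochastic dominance check between two finite
# lotteries, each given as {outcome_value: probability}. Toy examples only.

def cdf(lottery, x):
    """P(outcome <= x) for a finite lottery."""
    return sum(p for v, p in lottery.items() if v <= x)

def stochastically_dominates(a, b):
    """True iff lottery `a` first-order stochastically dominates `b`."""
    points = sorted(set(a) | set(b))
    diffs = [cdf(a, x) - cdf(b, x) for x in points]
    return all(d <= 0 for d in diffs) and any(d < 0 for d in diffs)

safe = {1: 1.0}                                    # 1 util for certain
gamble = {0: 0.5, 2: 0.5}                          # fair coin between 0 and 2
shifted = {v + 1: p for v, p in gamble.items()}    # same gamble, 1 util better

print(stochastically_dominates(gamble, safe))      # False: neither dominates
print(stochastically_dominates(shifted, safe))     # True: uniform improvement
```

The paper's point is that adding broad background uncertainty to every option can turn cases like the first comparison into cases like the second.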
This article argues that Lara Buchak's risk-weighted expected utility theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory's claim to capturing a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
Stochastic independence (SI) has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory, hence a property that any theory on the foundations of probability should be able to account for. Bayesian decision theory, which is one such theory, appears to be wanting in this respect. In Savage's classic treatment, postulates on preferences under uncertainty are shown to entail a subjective expected utility (SEU) representation, and this permits asserting only the existence and uniqueness of a subjective probability, regardless of its properties. What is missing is a preference postulate that would specifically connect with the SI property. The paper develops a version of Bayesian decision theory that fills this gap. In a framework of multiple sources of uncertainty, we introduce preference conditions that jointly entail the SEU representation and the property that the subjective probability in this representation treats the sources of uncertainty as being stochastically independent. We give two representation theorems of graded complexity to demonstrate the power of our preference conditions. Two sections of comments follow, one connecting the theorems with earlier results in Bayesian decision theory, and the other connecting them with the foundational discussion on SI in probability theory and the philosophy of probability. Appendices offer more technical material.
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
In the situation known as the "cable guy paradox" the expected utility principle and the "avoid certain frustration" principle (ACF) seem to give contradictory advice about what one should do. This article tries to resolve the paradox by presenting an example that weakens the grip of ACF: a modified version of the cable guy problem is introduced in which the choice dictated by ACF loses much of its intuitive appeal.
According to the priority view, or prioritarianism, it matters more to benefit people the worse off they are. But how exactly should the priority view be defined? This article argues for a highly general characterization which essentially involves risk, but makes no use of evaluative measurements or the expected utility axioms. A representation theorem is provided, and when further assumptions are added, common accounts of the priority view are recovered. A defense of the key idea behind the priority view, the priority principle, is provided. But it is argued that the priority view fails on both ethical and conceptual grounds.
This paper argues that instrumental rationality is more permissive than expected utility theory. The most compelling instrumentalist argument in favour of separability, its core requirement, is that agents with non-separable preferences end up badly off by their own lights in some dynamic choice problems. I argue that once we focus on the question of whether agents' attitudes to uncertain prospects help define their ends in their own right, or instead only assign instrumental value in virtue of the outcomes they may lead to, we see that the argument must fail. Either attitudes to prospects assign non-instrumental value in their own right, in which case we cannot establish the irrationality of the dynamic choice behaviour of agents with non-separable preferences. Or they don't, in which case agents with non-separable preferences can avoid the problematic choice behaviour without adopting separable preferences.
I have claimed that risk-weighted expected utility maximizers are rational, and that their preferences cannot be captured by expected utility theory. Richard Pettigrew and Rachael Briggs have recently challenged these claims. Both authors argue that only EU-maximizers are rational. In addition, Pettigrew argues that the preferences of REU-maximizers can indeed be captured by EU theory, and Briggs argues that REU-maximizers lose a valuable tool for simplifying their decision problems. I hold that their arguments do not succeed and that my original claims still stand. However, their arguments do highlight some costs of REU theory.
We present a minimal pragmatic restriction on the interpretation of the weights in the "Equal Weight View" (and, more generally, in the "Linear Pooling" view) regarding peer disagreement and show that the view cannot respect it. Based on this result we argue against the view. The restriction is the following one: if an agent, i, assigns an equal or higher weight to another agent, j, (i.e. if i takes j to be as epistemically competent as him or epistemically superior to him), he must be willing – in exchange for a positive and certain payment – to accept an offer to let a completely rational and sympathetic j choose for him whether to accept a bet with positive expected utility. If i assigns a lower weight to j than to himself, he must not be willing to pay any positive price for letting j choose for him. Respecting the constraint entails, we show, that the impact of disagreement on one's degree of belief is not independent of what the disagreement is discovered to be (i.e. not independent of j's degree of belief).
to appear in Lambert, E. and J. Schwenkler (eds.) Transformative Experience (OUP) -/- L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul's challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul's behalf (Moss 2017) — we'll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss' novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss' reply and argue that it fails. I argue first that it fails as a reply made on Paul's behalf, since it forces us to abandon many of the features of Paul's challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul's intentions.
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a Representation Theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
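The rank-dependent weighting the abstract describes can be sketched with Buchak's REU formula: start from the worst outcome's utility and add each successive utility increment weighted by r applied to the probability of doing at least that well. The risk function r(p) = p**2 below is an assumed example of a risk-avoidant attitude.

```python
# Sketch of risk-weighted expected utility (REU) for a finite gamble,
# with an assumed risk function. The gamble itself is a toy example.

def reu(lottery, r):
    """lottery: list of (utility, probability) pairs; r: risk function on [0,1]."""
    outcomes = sorted(lottery)                 # order outcomes worst-first
    total = outcomes[0][0]                     # guaranteed: the worst utility
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for u, p in outcomes[i:])   # prob. of doing this well
        total += r(p_at_least) * (outcomes[i][0] - outcomes[i - 1][0])
    return total

coin_flip = [(0.0, 0.5), (100.0, 0.5)]
print(reu(coin_flip, lambda p: p))        # r(p) = p recovers expected utility
print(reu(coin_flip, lambda p: p ** 2))   # risk-avoidant agent discounts the upside
```

With r(p) = p the formula collapses into ordinary expected utility, which is the sense in which EU maximization is a special case of REU.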
It is widely held that the influence of risk on rational decisions is not entirely explained by the shape of an agent's utility curve. Buchak (Erkenntnis, 2013, Risk and rationality, Oxford University Press, Oxford, in press) presents an axiomatic decision theory, risk-weighted expected utility theory (REU), in which decision weights are the agent's subjective probabilities modified by his risk-function r. REU is briefly described, and the global applicability of r is discussed. Rabin's (Econometrica 68:1281–1292, 2000) calibration theorem strongly suggests that plausible levels of risk aversion cannot be fully explained by concave utility functions; this provides motivation for REU and other theories. But applied to the synchronic preferences of an individual agent, Rabin's result is not as problematic as it may first appear. Theories that treat outcomes as gains and losses (e.g. prospect theory and cumulative prospect theory) account for risk sensitivity in a way not available to REU. Reference points that mark the difference between gains and losses are subject to framing, many instances of which cannot be regarded as rational. However, rational decision theory may recognize the difference between gains and losses, without endorsing all ways of fixing the point of reference. In any event, REU is a very interesting theory.
According to Stephen Finlay, 'A ought to X' means that X-ing is more conducive to contextually salient ends than relevant alternatives. This in turn is analysed in terms of probability. I show why this theory of 'ought' is hard to square with a theory of a reason's weight which could explain why 'A ought to X' logically entails that the balance of reasons favours that A X-es. I develop two theories of weight to illustrate my point. I first look at the prospects of a theory of weight based on expected utility theory. I then suggest a simpler theory. Although neither allows that 'A ought to X' logically entails that the balance of reasons favours that A X-es, this price may be accepted. For there remains a strong pragmatic relation between these claims.
I argue that prioritarianism cannot be assessed in abstraction from an account of the measure of utility. Rather, the soundness of this view crucially depends on what counts as a greater, lesser, or equal increase in a person's utility. In particular, prioritarianism cannot accommodate a normatively compelling measure of utility that is captured by the axioms of John von Neumann and Oskar Morgenstern's expected utility theory. Nor can it accommodate a plausible and elegant generalization of this theory that has been offered in response to challenges to von Neumann and Morgenstern. This is, I think, a theoretically interesting and unexpected source of difficulty for prioritarianism, which I explore in this article.
We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states), and numeric utilities assigned to consequences. Savage's derivation, however, is based on a highly problematic well-known assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
We generalize Harsanyi's social aggregation theorem. We allow the population to be infinite, and merely assume that individual and social preferences are given by strongly independent preorders on a convex set of arbitrary dimension. Thus we assume neither completeness nor any form of continuity. Under Pareto indifference, the conclusion of Harsanyi's theorem nevertheless holds almost entirely unchanged when utility values are taken to be vectors in a product of lexicographic function spaces. The addition of weak or strong Pareto has essentially the same implications in the general case as it does in Harsanyi's original setting.
Andy Egan recently drew attention to a class of decision situations that provide a certain kind of informational feedback, which he claims constitute a counterexample to causal decision theory. Arntzenius and Wallace have sought to vindicate a form of CDT by describing a dynamic process of deliberation that culminates in a "mixed" decision. I show that, for many of the cases in question, this proposal depends on an incorrect way of calculating expected utilities, and argue that it is therefore unsuccessful. I then tentatively defend an alternative proposal by Joyce, which produces a similar process of dynamic deliberation but for a different reason.
A natural view in distributive ethics is that everyone's interests matter, but the interests of the relatively worse off matter more than the interests of the relatively better off. I provide a new argument for this view. The argument takes as its starting point the proposal, due to Harsanyi and Rawls, that facts about distributive ethics are discerned from individual preferences in the "original position." I draw on recent work in decision theory, along with an intuitive principle about risk-taking, to derive the view.
A moderately risk averse person may turn down a 50/50 gamble that either results in her winning $200 or losing $100. Such behaviour seems rational if, for instance, the pain of losing $100 is felt more strongly than the joy of winning $200. The aim of this paper is to examine an influential argument that some have interpreted as showing that such moderate risk aversion is irrational. After presenting an axiomatic argument that I take to be the strongest case for the claim that moderate risk aversion is irrational, I show that it essentially depends on an assumption that those who think that risk aversion can be rational should be skeptical of. Hence, I conclude that risk aversion need not be irrational.
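The opening observation can be made concrete with expected utility itself: under a sufficiently concave utility function, declining the 50/50 win-$200/lose-$100 gamble maximizes expected utility even though the gamble's expected monetary value is +$50. The CARA utility function and its curvature parameter below are assumptions for illustration only.

```python
# Sketch: a concave (CARA) utility function under which declining the
# 50/50 win-$200/lose-$100 gamble is the expected-utility-maximizing choice.
import math

def u(x, a=0.01):
    """Constant-absolute-risk-aversion utility; `a` is an assumed parameter."""
    return -math.exp(-a * x)

wealth = 1000.0
eu_gamble = 0.5 * u(wealth + 200) + 0.5 * u(wealth - 100)
eu_decline = u(wealth)

# Expected monetary gain is +$50, yet expected utility favours declining:
print(eu_gamble < eu_decline)   # True
```

The argument the paper examines (in the spirit of Rabin's calibration result) targets precisely this style of explanation, since rejecting such gambles at every wealth level commits the curvature-based account to implausible large-stakes risk aversion.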
Purists think that changes in our practical interests can't affect what we know unless those changes are truth-relevant with respect to the propositions in question. Impurists disagree. They think changes in our practical interests can affect what we know even if those changes aren't truth-relevant with respect to the propositions in question. I argue that impurists are right, but for the wrong reasons, since they haven't appreciated the best argument for their own view. Together with "Minimalism and the Limits of Warranted Assertability Maneuvers," "The Pragmatic Encroachment Debate," and "Anti-Intellectualism" (below), this paper constitutes my attempt to refute the entire pragmatic encroachment debate. As I show in this paper, there is an argument for impurism sitting in plain sight that is considerably more plausible than any extant argument for pragmatism.
Standard decision theory, or rational choice theory, is often interpreted to be a theory of instrumental rationality. This dissertation argues, however, that the core requirements of orthodox decision theory cannot be defended as general requirements of instrumental rationality. Instead, I argue that these requirements can only be instrumentally justified to agents who have a desire to have choice dispositions that are stable over time and across different choice contexts. Past attempts at making instrumentalist arguments for the core requirements of decision theory fail due to a pervasive assumption in decision theory, namely the assumption that the agent's preferences over the objects of choice – be it outcomes or uncertain prospects – form the standard of instrumental rationality against which the agent's actions are evaluated. I argue that we should instead take more basic desires to be the standard of instrumental rationality. But unless agents have a desire to have stable choice dispositions, according to this standard, instrumental rationality turns out to be more permissive than orthodox decision theory.
We provide conditions under which an incomplete strongly independent preorder on a convex set X can be represented by a set of mixture preserving real-valued functions. We allow X to be infinite dimensional. The main continuity condition we focus on is mixture continuity. This is sufficient for such a representation provided X has countable dimension or satisfies a condition that we call Polarization.
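In symbols (our notation, not necessarily the paper's): a real-valued function on a convex set is mixture preserving when it commutes with convex combinations, and a set of such functions represents the preorder by unanimous comparison.

```latex
% Mixture preservation (standard definition, stated in our notation):
% u : X -> R is mixture preserving iff for all x, y in X and all a in [0,1],
\[
  u\bigl(\alpha x + (1-\alpha) y\bigr) \;=\; \alpha\, u(x) + (1-\alpha)\, u(y).
\]
% Multi-representation of a possibly incomplete preorder by a set U of
% such functions:
\[
  x \succsim y \quad\Longleftrightarrow\quad u(x) \ge u(y)\ \text{for every } u \in U.
\]
```

When the preorder is complete, U can be taken to be a single function; incompleteness is what forces the move to a set.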
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z.
How is the burden of proof to be distributed among individuals who are involved in resolving a particular issue? Under what conditions should the burden of proof be distributed unevenly? We distinguish attitudinal from dialectical burdens and argue that these questions should be answered differently, depending on which is in play. One has an attitudinal burden with respect to some proposition when one is required to possess sufficient evidence for it. One has a dialectical burden with respect to some proposition when one is required to provide supporting arguments for it as part of a deliberative process. We show that the attitudinal burden with respect to certain propositions is unevenly distributed in some deliberative contexts, but in all of these contexts, establishing the degree of support for the proposition is merely a means to some other deliberative end, such as action guidance, or persuasion. By contrast, uneven distributions of the dialectical burden regularly further the aims of deliberation, even in contexts where the quest for truth is the sole deliberative aim, rather than merely a means to some different deliberative end. We argue that our distinction between these two burdens resolves puzzles about unevenness that have been raised in the literature.
A layered approach to the evaluation of action alternatives with continuous time for decision making under the moral doctrine of Negative Utilitarianism is presented and briefly discussed from a philosophical perspective.
A strongly independent preorder on a possibly infinite dimensional convex set that satisfies two of the following conditions must satisfy the third: (i) the Archimedean continuity condition; (ii) mixture continuity; and (iii) comparability under the preorder is an equivalence relation. In addition, if the preorder is nontrivial (has nonempty asymmetric part) and satisfies two of the following conditions, it must satisfy the third: (i') a modest strengthening of the Archimedean condition; (ii') mixture continuity; and (iii') completeness. Applications to decision making under conditions of risk and uncertainty are provided.
A cursory glance at the list of Nobel Laureates for Economics is sufficient to confirm Stanovich’s description of the project to evaluate human rationality as seminal. Herbert Simon, Reinhard Selten, John Nash, Daniel Kahneman, and others, were awarded their prizes less for their work in economics, per se, than for their work on rationality, as such. Although philosophical works have for millennia attempted to describe, explicate and evaluate individual and collective aspects of rationality, new impetus was brought to this endeavor over the last century as mathematical logic along with the social and behavioral sciences emerged. Yet more recently, over the last several decades, propelled by the emergence of artificial intelligence, cognitive science, evolutionary psychology, neuropsychology, and related fields, even more sophisticated approaches to the study of rationality have emerged.
With the growing focus on prevention in medicine, studies of how to describe risk have become increasingly important. Recently, some researchers have argued against giving patients “comparative risk information,” such as data about whether their baseline risk of developing a particular disease is above or below average. The concern is that giving patients this information will interfere with their consideration of more relevant data, such as the specific chance of getting the disease (the “personal risk”), the risk reduction the treatment provides, and any possible side effects. I explore this view and the theories of rationality that ground it, and I argue instead that comparative risk information can play a positive role in decision-making. The criticism of disclosing this sort of information to patients, I conclude, rests on a mistakenly narrow account of the goals of prevention and the nature of rational choice in medicine.
Many argue that absolutist moral theories -- those that prohibit particular kinds of actions or trade-offs under all circumstances -- cannot adequately account for the permissibility of risky actions. In this dissertation, I defend various versions of absolutism against this critique, using overlooked resources from formal decision theory. Against the prevailing view, I argue that almost all absolutist moral theories can give systematic and plausible verdicts about what to do in risky cases. In doing so, I show that critics have overlooked: (1) the fact that absolutist theories -- and moral theories, more generally -- underdetermine their formal decision-theoretic representations; (2) that decision theories themselves can be generalised to better accommodate distinctively absolutist commitments. Overall, this dissertation demonstrates that we can navigate a risky world without compromising our moral commitments.
Using “brute reason” I will show why there can be only one valid interpretation of probability. The valid interpretation turns out to be a further refinement of Popper’s Propensity interpretation of probability. Via some famous probability puzzles and new thought experiments I will show how all other interpretations of probability fail, in particular the Bayesian interpretations, while these puzzles do not present any difficulties for the interpretation proposed here. In addition, the new interpretation casts doubt on some concepts often taken as basic and unproblematic, like rationality, utility and expectation. This in turn has implications for decision theory, economic theory and the philosophy of physics.
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
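As a toy illustration of how a credal threshold can fall out of an accuracy norm (ours, not the paper's own model): suppose believing p carries epistemic payoff r if p is true and cost w if p is false, with r, w > 0 hypothetical parameters. Maximising expected epistemic value then yields a Lockean threshold of w / (r + w).

```python
def lockean_threshold(r: float, w: float) -> float:
    """Credence threshold above which believing p maximises expected
    epistemic value, given payoff r for a true belief and cost w for a
    false one (both hypothetical parameters, r, w > 0)."""
    # Expected value of believing p at credence c is c*r - (1 - c)*w,
    # which is positive exactly when c > w / (r + w).
    return w / (r + w)

def should_believe(credence: float, r: float = 1.0, w: float = 1.0) -> bool:
    """Believe iff credence clears the accuracy-derived threshold."""
    return credence > lockean_threshold(r, w)
```

With r = w the threshold is 0.5; raising the cost of false belief (w) pushes the threshold up, which is one way a "sufficient confidence" standard can vary with context.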
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
This paper investigates how environmental structure, given the innate properties of a population, affects the degree to which this population can adapt to the environment. The model we explore involves simple agents in a 2-d world which can sense a local food distribution and, as specified by their genomes, move to a new location and ingest the food there. Adaptation in this model consists of improving the genomic sensorimotor mapping so as to maximally exploit the environmental resources. We vary environmental structure to see its specific effect on adaptive success. In our investigation, two properties of environmental structure, conditioned by the sensorimotor capacities of the agents, have emerged as significant factors in determining adaptive success: (1) the information content of the environment which quantifies the diversity of conditions sensed, and (2) the expected utility for optimal action. These correspond to the syntactic and pragmatic aspects of environmental information, respectively. We find that the ratio of expected utility to information content predicts adaptive success measured by population gain and information content alone predicts the fraction of ideal utility achieved. These quantitative methods and specific conclusions should aid in understanding the effects of environmental structure on evolutionary adaptation in a wide range of evolving systems, both artificial and natural.
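A minimal sketch of the two quantities the abstract names, under our own simplifying assumptions (a finite distribution over sensed conditions, with a hypothetical best-action payoff per condition; the paper's actual measures may differ in detail):

```python
import math

def information_content(p: list[float]) -> float:
    """Shannon entropy (bits) of the distribution over sensed conditions:
    a toy stand-in for the 'information content of the environment'."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_utility(p: list[float], u: list[float]) -> float:
    """Expected utility for optimal action: the best-available payoff u[i]
    in condition i, weighted by that condition's probability p[i]."""
    return sum(q * ui for q, ui in zip(p, u))

# Hypothetical environment: four equally likely sensed conditions.
p = [0.25, 0.25, 0.25, 0.25]
u = [1.0, 0.5, 0.5, 0.0]
ratio = expected_utility(p, u) / information_content(p)
```

On this reading, the paper's headline finding is that a ratio like `ratio` tracks population gain, while `information_content` alone tracks the fraction of ideal utility achieved.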
This paper (first published under the same title in Journal of Mathematical Economics, 29, 1998, p. 331-361) is a sequel to "Consistent Bayesian Aggregation", Journal of Economic Theory, 66, 1995, p. 313-351, by the same author. Both papers examine mathematically whether the following assumptions are compatible: the individuals and the group both form their preferences according to Subjective Expected Utility (SEU) theory, and the preferences of the group satisfy the Pareto principle with respect to those of the individuals. While the 1995 paper explored these assumptions in the axiomatic context of Savage's (1954-1972) SEU theory, the present paper explores them in the context of Anscombe and Aumann's (1963) alternative SEU theory. We first show that the problematic assumptions become compatible when the Anscombe-Aumann utility functions are state-dependent and no subjective probabilities are elicited. Then we show that the problematic assumptions become incompatible when the Anscombe-Aumann utility functions are state-dependent, like before, but subjective probabilities are elicited using a relevant technical scheme. This last result reinstates the impossibilities proved by the 1995 paper, and thus shows them to be robust with respect to the choice of the SEU axiomatic framework. The technical scheme used for the elicitation of subjective probabilities is that of Karni, Schmeidler and Vind (1983).
In his classic book “The Foundations of Statistics”, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under infinite countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
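The Savage setup described above is easy to make concrete for a finite toy case (our illustration only; Savage's theorem runs in the opposite direction, deriving the probability and utility from the preference relation). Acts map states to consequences, and the represented preference compares acts by expected utility:

```python
def expected_utility(act, prob, util):
    """Expected utility of an act (a map from states to consequences),
    given a probability over states and a utility on consequences."""
    return sum(prob[s] * util[act[s]] for s in prob)

# Hypothetical two-state example with made-up numbers.
prob = {"rain": 0.3, "shine": 0.7}
util = {"wet": 0.0, "dry": 1.0, "burdened": 0.8}
# "burdened" is a constant act in Savage's sense: same consequence in
# every state. "leave_it" yields a different consequence per state.
take_umbrella = {"rain": "burdened", "shine": "burdened"}
leave_it = {"rain": "wet", "shine": "dry"}
# The represented agent prefers the act with the higher expected utility.
prefers_umbrella = (expected_utility(take_umbrella, prob, util)
                    > expected_utility(leave_it, prob, util))
```

Here `take_umbrella` illustrates the constant act assumption the paper weakens: the full assumption demands one such act for every consequence, while the paper's second result needs only two non-equivalent constant acts.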
The Preface Paradox, first introduced by David Makinson (1965), presents a plausible scenario where an agent is evidentially certain of each of a set of propositions without being evidentially certain of the conjunction of the set of propositions. Given reasonable assumptions about the nature of evidential certainty, this appears to be a straightforward contradiction. We solve the paradox by appeal to stake size sensitivity, which is the claim that evidential probability is sensitive to stake size. The argument is that because the informational content in the conjunction is greater than the sum of the informational content of the conjuncts, the stake size in the conjunction is higher than the sum of the stake sizes in the conjuncts. We present a theory of evidential probability that identifies knowledge with value and allows for coherent stake sensitive beliefs. An agent’s beliefs are represented two dimensionally as a bid-ask spread, which gives a bid price and an ask price for bets at each stake size. The bid-ask spread gets wider when there is less valuable evidence relative to the stake size, and narrower when there is more valuable evidence according to a simple formula. The bid-ask spread can represent the uncertainty in the first order probabilistic judgement. According to the theory it can be coherent to be evidentially certain at low stakes, but less than certain at high stakes, and therefore there is no contradiction in the Preface. The theory not only solves the paradox, but also gives a good model of decisions under risk that overcomes many of the problems associated with classic expected utility theory.
This paper provides an account of what it is to have faith in a proposition p, in both religious and mundane contexts. It is argued that faith in p doesn’t require adopting a degree of belief that isn’t supported by one’s evidence but rather it requires terminating one’s search for further evidence and acting on the supposition that p. It is then shown, by responding to a formal result due to I.J. Good, that doing so can be rational in a number of circumstances. If expected utility theory is the correct account of practical rationality, then having faith can be both epistemically and practically rational if the costs associated with gathering further evidence or postponing the decision are high. If a more permissive framework is adopted, then having faith can be rational even when there are no costs associated with gathering further evidence.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.