Stochastic independence (SI) has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory, hence a property that any theory on the foundations of probability should be able to account for. Bayesian decision theory, which is one such theory, appears to be wanting in this respect. In Savage's classic treatment, postulates on preferences under uncertainty are shown to entail a subjective expected utility (SEU) representation, and this permits asserting only the existence and uniqueness of a subjective probability, regardless of its properties. What is missing is a preference postulate that would specifically connect with the SI property. The paper develops a version of Bayesian decision theory that fills this gap. In a framework of multiple sources of uncertainty, we introduce preference conditions that jointly entail the SEU representation and the property that the subjective probability in this representation treats the sources of uncertainty as being stochastically independent. We give two representation theorems of graded complexity to demonstrate the power of our preference conditions. Two sections of comments follow, one connecting the theorems with earlier results in Bayesian decision theory, and the other connecting them with the foundational discussion on SI in probability theory and the philosophy of probability. Appendices offer more technical material.
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
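The property these two abstracts turn on has a simple formal statement: a probability measure P treats events A and B as stochastically independent when P(A ∩ B) = P(A)·P(B), and two sources of uncertainty are independent when this factorization holds across all their joint outcomes. A minimal sketch in Python, using a made-up joint distribution over two binary sources:

```python
# A made-up joint distribution over two sources of uncertainty:
# a coin (H/T) and a die's parity (even/odd).
joint = {
    ("H", "even"): 0.25, ("H", "odd"): 0.25,
    ("T", "even"): 0.25, ("T", "odd"): 0.25,
}

def marginal(joint, index, value):
    """Marginal probability that component `index` of an outcome equals `value`."""
    return sum(p for outcome, p in joint.items() if outcome[index] == value)

def independent(joint, tol=1e-12):
    """Check P(a, b) == P(a) * P(b) for every joint outcome."""
    return all(
        abs(p - marginal(joint, 0, a) * marginal(joint, 1, b)) < tol
        for (a, b), p in joint.items()
    )

print(independent(joint))  # True: this joint distribution factorizes into its marginals
```

A perfectly correlated distribution (say, all mass on ("H", "even") and ("T", "odd")) fails the same check, which is the sense in which independence is an extra property beyond the definition of a probability measure.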
In his classic book “The Foundations of Statistics” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are incomplete. Second, it will be argued that when we attempt to incorporate the knowledge of such causal connections into Bayesian decision theory, a substantial technical problem arises for which there is no currently available solution that does not suffer from some damning objection or other. From a broader perspective, this then throws into question the use of decision theory as a model of human or machine planning.
This essay makes the case for, in the phrase of Angelika Kratzer, packing the fruits of the study of rational decision-making into our semantics for deontic modals—specifically, for parametrizing the truth-condition of a deontic modal to things like decision problems and decision theories. Then it knocks it down. While the fundamental relation of the semantic theory must relate deontic modals to things like decision problems and theories, this semantic relation cannot be intelligibly understood as representing the conditions under which a deontic modal is true. Rather it represents the conditions under which it is accepted by a semantically competent agent. This in turn motivates a reorientation of the whole of semantic theorizing, away from the truth-conditional paradigm, toward a form of Expressivism.
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory -- in particular, Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some technical properties in relation to other axioms of the system.
The problem of the man who met death in Damascus appeared in the infancy of the theory of rational choice known as causal decision theory. A straightforward, unadorned version of causal decision theory is presented here and applied, along with Brian Skyrms’ deliberation dynamics, to Death in Damascus and similar problems. Decision instability is a fascinating topic, but not a source of difficulty for causal decision theory. Andy Egan’s purported counterexample to causal decision theory, Murder Lesion, is considered; a simple response shows how Murder Lesion and similar examples fail to be counterexamples, and clarifies the use of the unadorned theory in problems of decision instability. I compare unadorned causal decision theory to previous treatments by Frank Arntzenius and by Jim Joyce, and recommend a well-founded heuristic that all three accounts can endorse. Whatever course deliberation takes, causal decision theory is consistently a good guide to rational action.
Decision theory has at its core a set of mathematical theorems that connect rational preferences to functions with certain structural properties. The components of these theorems, as well as their bearing on questions surrounding rationality, can be interpreted in a variety of ways. Philosophy’s current interest in decision theory represents a convergence of two very different lines of thought, one concerned with the question of how one ought to act, and the other concerned with the question of what action consists in and what it reveals about the actor’s mental states. As a result, the theory has come to have two different uses in philosophy, which we might call the normative use and the interpretive use. It also has a related use that is largely within the domain of psychology, the descriptive use. This essay examines the historical development of decision theory and its uses; the relationship between the norm of decision theory and the notion of rationality; and the interdependence of the uses of decision theory.
The essay presents a novel counterexample to Causal Decision Theory (CDT). Its interest is that it generates a case in which CDT violates the very principles that motivated it in the first place. The essay argues that the objection applies to all extant formulations of CDT and that the only way out for that theory is a modification of it that entails incompatibilism. The essay invites the reader to find this consequence of CDT a reason to reject it.
Orthodox decision theory gives no advice to agents who hold two goods to be incommensurate in value because such agents will have incomplete preferences. According to standard treatments, rationality requires complete preferences, so such agents are irrational. Experience shows, however, that incomplete preferences are ubiquitous in ordinary life. In this paper, we aim to do two things: (1) show that there is a good case for revising decision theory so as to allow it to apply non-vacuously to agents with incomplete preferences, and (2) identify one substantive criterion that any such non-standard decision theory must obey. Our criterion, Competitiveness, is a weaker version of a dominance principle. Despite its modesty, Competitiveness is incompatible with prospectism, a recently developed decision theory for agents with incomplete preferences. We spend the final part of the paper showing why Competitiveness should be retained, and prospectism rejected.
Standard decision theory, or rational choice theory, is often interpreted to be a theory of instrumental rationality. This dissertation argues, however, that the core requirements of orthodox decision theory cannot be defended as general requirements of instrumental rationality. Instead, I argue that these requirements can only be instrumentally justified to agents who have a desire to have choice dispositions that are stable over time and across different choice contexts. Past attempts at making instrumentalist arguments for the core requirements of decision theory fail due to a pervasive assumption in decision theory, namely the assumption that the agent’s preferences over the objects of choice – be it outcomes or uncertain prospects – form the standard of instrumental rationality against which the agent’s actions are evaluated. I argue that we should instead take more basic desires to be the standard of instrumental rationality. But unless agents have a desire to have stable choice dispositions, according to this standard, instrumental rationality turns out to be more permissive than orthodox decision theory.
The paper argues that on three out of eight possible hypotheses about the EPR experiment we can construct novel and realistic decision problems on which (a) Causal Decision Theory and Evidential Decision Theory conflict (b) Causal Decision Theory and the EPR statistics conflict. We infer that anyone who fully accepts any of these three hypotheses has strong reasons to reject Causal Decision Theory. Finally, we extend the original construction to show that anyone who gives any of the three hypotheses any non-zero credence has strong reasons to reject Causal Decision Theory. However, we concede that no version of the Many Worlds Interpretation (Vaidman, in Zalta, E.N. (ed.), Stanford Encyclopaedia of Philosophy 2014) gives rise to the conflicts that we point out.
Decision theory is concerned with how agents should act when the consequences of their actions are uncertain. The central principle of contemporary decision theory is that the rational choice is the choice that maximizes subjective expected utility. This entry explains what this means, and discusses the philosophical motivations and consequences of the theory. The entry will consider some of the main problems and paradoxes that decision theory faces, and some of the responses that can be given. Finally the entry will briefly consider how decision theory applies to choices involving more than one agent.
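The maximization principle this entry describes can be made concrete with a toy computation; the states, probabilities, and utilities below are made up for illustration:

```python
# Subjective expected utility: choose the act that maximizes
# sum over states of P(state) * U(outcome of act in state).
probs = {"rain": 0.3, "sun": 0.7}            # made-up subjective probabilities
utilities = {                                 # made-up utilities of outcomes
    "umbrella": {"rain": 5, "sun": 3},
    "no_umbrella": {"rain": 0, "sun": 6},
}

def expected_utility(act):
    return sum(probs[state] * utilities[act][state] for state in probs)

best = max(utilities, key=expected_utility)
print(best, expected_utility(best))  # no_umbrella 4.2
```

Here taking the umbrella has expected utility 0.3·5 + 0.7·3 = 3.6, while leaving it has 0.3·0 + 0.7·6 = 4.2, so the principle recommends the latter.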
In this paper, I examine the decision-theoretic status of risk attitudes. I start by providing evidence showing that the risk attitude concepts do not play a major role in the axiomatic analysis of the classic models of decision-making under risk. This can be interpreted as reflecting the neutrality of these models between the possible risk attitudes. My central claim, however, is that such neutrality needs to be qualified and the axiomatic relevance of risk attitudes needs to be re-evaluated accordingly. Specifically, I highlight the importance of the conditional variation and the strengthening of risk attitudes, and I explain why they establish the axiomatic significance of the risk attitude concepts. I also present several questions for future research regarding the strengthening of risk attitudes.
Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called “Reverse Matthew Effect” (“RME” for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). Then by RME it follows that the degree to which E confirms H1 is greater than the degree to which it confirms H2. But by all the standard confirmation measures in the literature, in contrast, it follows that the degree to which E confirms H1 is less than or equal to the degree to which it confirms H2. It might seem, then, that RME should be rejected as implausible. Festa (2012), however, argues that there are scientific contexts in which RME holds. If Festa’s argument is sound, it follows that there are scientific contexts in which none of the standard confirmation measures in the literature is adequate. Festa’s argument is thus interesting, important, and deserving of careful examination. I consider five distinct respects in which E can be related to H, use them to construct five distinct ways of understanding confirmation measures, which I call “Increase in Probability”, “Partial Dependence”, “Partial Entailment”, “Partial Discrimination”, and “Popper Corroboration”, and argue that each such way runs counter to RME. The result is that it is not at all clear that there is a place in Bayesian confirmation theory for RME.
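The contrast the abstract draws can be checked numerically for one standard measure, the difference measure d(H, E) = p(H|E) − p(H). Since p(H|E) = p(H)·p(E|H)/p(E) by Bayes' theorem, an equal likelihood ratio together with p(H1) < p(H2) gives H1 less confirmation by this measure, the opposite of what RME demands. A sketch with made-up numbers:

```python
# Made-up priors and a shared likelihood ratio p(E|H)/p(E) = 2,
# so H1 and H2 are "equally successful" in predicting E.
ratio = 2.0
p_h1, p_h2 = 0.2, 0.4            # p(H1) < p(H2)

def difference_measure(prior, ratio):
    """d(H, E) = p(H|E) - p(H), with p(H|E) = p(H) * p(E|H)/p(E)."""
    return prior * ratio - prior

d1 = difference_measure(p_h1, ratio)
d2 = difference_measure(p_h2, ratio)
print(d1, d2)  # d1 < d2: E confirms the initially less probable H1 *less*
```

RME would require d1 > d2 in this setup, so the difference measure runs counter to it, exactly as claimed for the standard measures generally.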
The primary aim of this paper is the presentation of a foundation for causal decision theory. This is worth doing because causal decision theory (CDT) is philosophically the most adequate rational decision theory now available. I will not defend that claim here by elaborate comparison of the theory with all its competitors, but by providing the foundation. This puts the theory on an equal footing with competitors for which foundations have already been given. It turns out that it will also produce a reply to the most serious objections made so far against CDT and against the particular version of CDT I will defend.
Andy Egan has presented a dilemma for decision theory. As is well known, Newcomb cases appear to undermine the case for evidential decision theory. However, Egan has come up with a new scenario which poses difficulties for causal decision theory. I offer a simple solution to this dilemma in terms of a modified EDT. I propose an epistemological test: take some feature which is relevant to your evaluation of the scenarios under consideration, evidentially correlated with the actions under consideration albeit causally independent of them. Hold this feature fixed as a hypothesis. The test shows that, in Newcomb cases, EDT would mislead the agent. Where the test shows EDT to be misleading, I propose to use fictive conditional credences in the EDT-formula under the constraint that they are set to equal values. I then discuss Huw Price’s defence of EDT as an alternative to my diagnosis. I argue that my solution also applies if one accepts the main premisses of Price’s argument. I close with applying my solution to Nozick’s original Newcomb problem.
Decision theory has had a long-standing history in the behavioural and social sciences as a tool for constructing good approximations of human behaviour. Yet as artificially intelligent systems (AIs) grow in intellectual capacity and eventually outpace humans, decision theory becomes ever more important as a model of AI behaviour. What sort of decision procedure might an AI employ? In this work, I propose that policy-based causal decision theory (PCDT), which places a primacy on the decision-relevance of predictors and simulations of agent behaviour, may be such a procedure. I compare this account to the recently-developed functional decision theory (FDT), which is motivated by similar concerns. I also address potentially counterintuitive features of PCDT, such as its refusal to condition on observations made at certain times.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
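First-order stochastic dominance, the relation this argument builds on, can be checked directly: gamble A dominates gamble B when A's cumulative distribution function never exceeds B's, i.e. A is at least as likely as B to clear every payoff threshold. A minimal sketch with made-up discrete gambles:

```python
def cdf(gamble, x):
    """P(payoff <= x) for a discrete gamble given as {payoff: probability}."""
    return sum(p for v, p in gamble.items() if v <= x)

def dominates(a, b):
    """First-order stochastic dominance: cdf_a(x) <= cdf_b(x) at every payoff."""
    support = sorted(set(a) | set(b))
    return all(cdf(a, x) <= cdf(b, x) for x in support)

safe = {1: 1.0}                 # made-up sure payoff
risky = {0: 0.5, 10: 0.5}       # higher expected value, but no dominance
better = {2: 0.5, 10: 0.5}      # at least as good as `safe` at every threshold

print(dominates(risky, safe))   # False: risky can pay 0, safe cannot
print(dominates(better, safe))  # True
```

The gap between `risky` and `better` is the paper's starting point: expectation-maximizing gambles like `risky` do not dominate "in a vacuum", and the claim is that sufficient background uncertainty can convert many such cases into genuine dominance.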
I present a solution to the epistemological or characterisation problem of induction. In part I, Bayesian Confirmation Theory (BCT) is discussed as a good contender for such a solution but with a fundamental explanatory gap (along with other well discussed problems); useful assigned probabilities like priors require substantive degrees of belief about the world. I assert that one does not have such substantive information about the world. Consequently, an explanation is needed for how one can be licensed to act as if one has substantive information about the world when one does not. I sketch the outlines of a solution in part I, showing how it differs from others, with full details to follow in subsequent parts. The solution is pragmatic in sentiment (though it differs in specifics from arguments due to, for example, William James); the conceptions we use to guide our actions are and should be at least partly determined by preferences. This is cashed out in a reformulation of decision theory motivated by a non-reductive formulation of hypotheses and logic. A distinction emerges between initial assumptions--which can be non-dogmatic--and effective assumptions that can simultaneously be substantive. An explanation is provided for the plausibility arguments used to explain assigned probabilities in BCT.

In subsequent parts, logic is constructed from principles independent of language and mind. In particular, propositions are defined to not have form. Probabilities are logical and uniquely determined by assumptions. The problems considered fatal to logical probabilities--Goodman's `grue' problem and the uniqueness-of-priors problem--are dissolved due to the particular formulation of logic used. Other problems such as the zero-prior problem are also solved.

A universal theory of (non-linguistic) meaning is developed. Problems with counterfactual conditionals are solved by developing concepts of abstractions and corresponding pictures that make up hypotheses.
Spaces of hypotheses and the version of Bayes' theorem that utilises them emerge from first principles.

Theoretical virtues for hypotheses emerge from the theory. Explanatory force is explicated. The significance of effective assumptions is partly determined by combinatoric factors relating to the structure of hypotheses. I conjecture that this is the origin of simplicity.
Pascal’s Wager does not exist in a Platonic world of possible gods, abstract probabilities and arbitrary payoffs. Real decision-makers, such as Pascal’s “man of the world” of 1660, face a range of religious options they take to be serious, with fixed probabilities grounded in their evidence, and with utilities that are fixed quantities in actual minds. The many ingenious objections to the Wager dreamed up by philosophers do not apply in such a real decision matrix. In the situation Pascal addresses, the Wager is a good bet. In the situation of a modern Western intellectual, the reasoning of the Wager is still powerful, though the range of options and the actions indicated are not the same as in Pascal’s day.
To appear in Lambert, E. and J. Schwenkler (eds.) Transformative Experience (OUP).

L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul’s challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty, et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul’s behalf (Moss 2017) — we’ll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss’ novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss’ reply and argue that it fails. I argue first that it fails as a reply made on Paul’s behalf, since it forces us to abandon many of the features of Paul’s challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul’s intentions.
Causal decision theory defines a rational action as the one that tends to cause the best outcomes. If we adopt counterfactual or probabilistic theories of causation, then we may face problems in overdetermination cases. Do such problems affect causal decision theory? The aim of this work is to show that the concept of causation that has been fundamental in all versions of causal decision theory is not the most intuitive one. Since overdetermination poses problems for a counterfactual theory of causation, one can think that a version of causal decision theory based on counterfactual dependence may also be vulnerable to such scenarios. However, only when an intuitive, unanalyzed notion of causation is presupposed as a ground for a more plausible version of causal decision theory does overdetermination turn problematic. The first interesting consequence of this is that there are more reasons to dismiss traditional theories of causation (and to accept others). The second is to confirm that traditional causal decision theory is not based on our intuitive concept of the causal relation.
Can an agent deliberating about an action A hold a meaningful credence that she will do A? 'No', say some authors, for 'Deliberation Crowds Out Prediction' (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce's rejection of DCOP rests on terminological choices about terms such as 'intention', 'prediction', and 'belief'. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce's Evidential Autonomy Thesis (EAT) is effectively DCOP, in different terminological clothing. Both principles rest on the so-called 'transparency' of first-person present-tensed reflection on one's own mental states.
Epistemic decision theory (EDT) employs the mathematical tools of rational choice theory to justify epistemic norms, including probabilism, conditionalization, and the Principal Principle, among others. Practitioners of EDT endorse two theses: (1) epistemic value is distinct from subjective preference, and (2) belief and epistemic value can be numerically quantified. We argue the first thesis, which we call epistemic puritanism, undermines the second.
This paper offers a fine analysis of different versions of the well-known sure-thing principle. We show that Savage's formal formulation of the principle, i.e., his second postulate (P2), is strictly stronger than originally intended.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
In recent years, some epistemologists have argued that practical factors can make the difference between knowledge and mere true belief. While proponents of this pragmatic thesis have proposed necessary and sufficient conditions for knowledge, it is striking that they have failed to address Gettier cases. As a result, the proposed analyses of knowledge are either lacking explanatory power or susceptible to counterexamples. Gettier cases are also worth reflecting on because they raise foundational questions for the pragmatist. Underlying these challenges is the fact that pragmatic epistemologies not only rely upon normative theories of rational choice but also require externalist standards to rule out epistemic luck. Unfortunately, we lack adequate externalist theories of rational choice. In response, I address these foundational challenges by offering the outlines of an externalist decision theory. While this task is an ambitious one that I cannot hope to complete, I survey the outlines of a decision-theoretic framework on which a richer pragmatic epistemology can be developed. My hope is that this framework opens up new avenues of exploration for the pragmatic epistemologist.
A book chapter (about 4,000 words, plus references) on decision theory in moral philosophy, with particular attention to uses of decision theory in specifying the contents of moral principles (e.g., expected-value forms of act and rule utilitarianism), uses of decision theory in arguing in support of moral principles (e.g., the hypothetical-choice arguments of Harsanyi and Rawls), and attempts to derive morality from rationality (e.g., the views of Gauthier and McClennen).
This paper explores the possibility that causal decision theory can be formulated in terms of probabilities of conditionals. It is argued that a generalized Stalnaker semantics in combination with an underlying branching time structure not only provides the basis for a plausible account of the semantics of indicative conditionals, but also that the resulting conditionals have properties that make them well-suited as a basis for formulating causal decision theory. Decision theory (at least if we omit the frills) is not an esoteric science, however unfamiliar it may seem to an outsider. Rather it is a systematic exposition of the consequences of certain well-chosen platitudes about belief, desire, preference and choice. It is the very core of our common-sense theory of persons, dissected out and elegantly systematized. (David Lewis, Synthese 23:331–344, 1974, p. 337). A small distortion in the analysis of the conditional may create spurious problems with the analysis of other concepts. So if the facts about usage favor one among a number of subtly different theories, it may be important to determine which one it is. (Robert Stalnaker, A Defense of Conditional Excluded Middle, pp. 87–104, 1980, p. 87).
This paper has as its topic two recent philosophical disputes. One of these disputes is internal to the project known as decision theory, and while by now familiar to many, may well seem to be of pressing concern only to specialists. It has been carried on over the last twenty years or so, but by now the two opposing camps are pretty well entrenched in their respective positions, and the situation appears to many observers (as well as to some of the parties involved) to have reached a sort of stalemate. The second of these two disputes is, on the other hand, very much alive. While it has been framed in decision theoretic terms, it is definitely not a dispute internal to that enterprise. It is, rather, a debate about the very coherence of the notion of objective value, and as such touches on issues of central importance to, for example, meta–ethics and moral psychology.
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1991a, 1991b, 1985, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast the umbrella theory proffered by Pope that she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
Using “brute reason” I will show why there can be only one valid interpretation of probability. The valid interpretation turns out to be a further refinement of Popper’s propensity interpretation of probability. Via some famous probability puzzles and new thought experiments I will show how all other interpretations of probability fail, in particular the Bayesian interpretations, while these puzzles do not present any difficulties for the interpretation proposed here. In addition, the new interpretation casts doubt on some concepts often taken as basic and unproblematic, like rationality, utility and expectation. This in turn has implications for decision theory, economic theory and the philosophy of physics.
It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal-mechanical explanation.
1 Introduction
2 What a Great Many Phenomena Bayesian Decision Theory Can Model
3 The Case of Information Integration
4 How Do Bayesian Models Unify?
5 Bayesian Unification: What Constraints Are There on Mechanistic Explanation?
5.1 Unification constrains mechanism discovery
5.2 Unification constrains the identification of relevant mechanistic factors
5.3 Unification constrains confirmation of competitive mechanistic models
6 Conclusion
Appendix
Book review of Paul Horwich, Probability and Evidence (Cambridge Philosophy Classics edition), Cambridge: Cambridge University Press, 2016, 147pp, £14.99 (paperback).
A group is often construed as a single agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated contribution “Groupthink”, Russell et al. (2015) apply the Bayesian paradigm to groups by requiring group credences to undergo a Bayesian revision whenever new information is learnt, i.e., whenever the individual credences undergo a Bayesian revision based on this information. Bayesians should often strengthen this requirement by extending it to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not corresponding to an event in the algebra on which credences are held). I propose a taxonomy of six kinds of `group Bayesianism', which differ in the type of information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, and so on. Six corresponding theorems establish exactly how individual credences must (not) be aggregated such that the resulting group credences obey group Bayesianism of any given type, respectively. Aggregating individual credences through averaging is never permitted. One of the theorems – the one concerned with public representable information – is essentially Russell et al.'s central result (with minor corrections).
The standard formulation of Newcomb's problem compares evidential and causal conceptions of expected utility, with those maximizing evidential expected utility tending to end up far richer. Thus, in a world in which agents face Newcomb problems, the evidential decision theorist might ask the causal decision theorist: "if you're so smart, why ain’cha rich?” Ultimately, however, the expected riches of evidential decision theorists in Newcomb problems do not vindicate their theory, because their success does not generalize. Consider a theory that allows the agents who employ it to end up rich in worlds containing Newcomb problems and continues to outperform in other cases. This type of theory, which I call a “success-first” decision theory, is motivated by the desire to draw a tighter connection between rationality and success, rather than to support any particular account of expected utility. The primary aim of this paper is to provide a comprehensive justification of success-first decision theories as accounts of rational decision. I locate this justification in an experimental approach to decision theory supported by the aims of methodological naturalism.
How do agents with limited cognitive capacities flourish in informationally impoverished or unexpected circumstances? Aristotle argued that human flourishing emerged from knowing about the world and our place within it. If he is right, then the virtuous processes that produce knowledge best explain flourishing. Influenced by Aristotle, virtue epistemology defends an analysis of knowledge where beliefs are evaluated for their truth and the intellectual virtues or competences relied on in their creation. However, human flourishing may emerge from how degrees of ignorance are managed in an uncertain world. Perhaps decision-making in the shadow of knowledge best explains human wellbeing, a Bayesian approach? In this dissertation I argue that a hybrid of virtue and Bayesian epistemologies explains human flourishing, what I term homeostatic epistemology. Homeostatic epistemology supposes that an agent has a rational credence p when p is the product of reliable processes aligned with the norms of probability theory; whereas an agent knows that p when a rational credence p is the product of reliable processes such that: 1) p meets some relevant threshold for belief, 2) p coheres with a satisficing set of relevant beliefs, and 3) the relevant set of beliefs is coordinated appropriately to meet the integrated aims of the agent. Homeostatic epistemology recognizes that justificatory relationships between beliefs are constantly changing to combat uncertainties and to take advantage of predictable circumstances. Contrary to holism, justification is built up and broken down across limited sets, like the anabolic and catabolic processes that maintain homeostasis in the cells, organs and systems of the body. It is the coordination of choristic sets of reliably produced beliefs that creates the greatest flourishing given the limitations inherent in the situated agent.