Clark Glymour, together with his students Peter Spirtes and Richard Scheines, did pioneering work on graphical causal models. One of the central advances provided by these models is the ability to simply represent the effects of interventions. In an elegant paper, Glymour and his student Christopher Meek applied these methods to problems in decision theory. One of the morals they drew was that causal decision theory should be understood in terms of interventions. I revisit their proposal, and extend (...) 

A simple counterfactual theory of causation fails because of problems with cases of preemption. This might lead us to expect that preemption will raise problems for counterfactual theories of other concepts that have a causal dimension. Indeed, examples are easy to find. But there is one case where we do not find this. Several versions of causal decision theory are formulated using counterfactuals. This might lead us to expect that these theories will yield the wrong recommendations in cases of preemption. (...) 

I present a game-theoretic way to understand the situation describing Newcomb’s Problem (NP) which helps to explain the intuition of both one-boxers and two-boxers. David Lewis has shown that the NP may be modelled as a Prisoner’s Dilemma game (PD) in which ‘cooperating’ corresponds to ‘taking one box’. Adopting relevant results from game theory, this means that one should take just one box if the NP is repeated an indefinite number of times, but both boxes if it is a one-shot (...) 

Newcomb’s Problem, Arif Ahmed (editor). Cambridge University Press, 2018, 233 pages. 

My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I first specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design (...) 

The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been neglected somewhat. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2–5) the epistemological basis of probabilistic arguments is discussed. With regard to the (...) 

The crucial premise of the standard argument for two-boxing in Newcomb's problem, a causal dominance principle, is false. We present some counterexamples. We then offer a metaethical explanation for why the counterexamples arise. Our explanation reveals a new and superior argument for two-boxing, one that eschews the causal dominance principle in favor of a principle linking rational choice to guidance and actual value maximization. 

Fundamental physics makes no clear use of causal notions; it uses laws that operate in relevant respects in both temporal directions and that relate whole systems across times. But by relating causation to evidence, we can explain how causation fits in to a physical picture of the world and explain its temporal asymmetry. This paper takes up a deliberative approach to causation, according to which causal relations correspond to the evidential relations we need when we decide on one thing in (...) 

The paper will show how one may rationalize one-boxing in Newcomb's problem and drinking the toxin in the Toxin puzzle within the confines of causal decision theory by ascending to so-called reflexive decision models which reflect how actions are caused by decision situations (beliefs, desires, and intentions) represented by ordinary unreflexive decision models. 

Decision-theoretic representation theorems have been developed and appealed to in the service of two important philosophical projects: in attempts to characterise credences in terms of preferences, and in arguments for probabilism. Theorems developed within the formal framework that Savage developed have played an especially prominent role here. I argue that the use of these ‘Savagean’ theorems creates significant difficulties for both projects, but particularly the latter. The origin of the problem directly relates to the question of whether we can have (...) 

A fully adequate solution to Newcomb’s Problem (Nozick 1969) should reveal the source of its extraordinary elusiveness and persistent intractability. Recently, a few accounts have independently sought to meet this criterion of adequacy by exposing the underlying source of the problem’s profound puzzlement. Thus, Sorensen (1987), Slezak (1998), Priest (2002) and Maitzen and Wilson (2003) share the ‘no box’ view according to which the very idea that there is a right choice is misconceived since the problem is ill-formed or incoherent (...) 

Both Representation Theorem Arguments and Dutch Book Arguments support taking probabilistic coherence as an epistemic norm. Both depend on connecting beliefs to preferences, which are not clearly within the epistemic domain. Moreover, these connections are standardly grounded in questionable definitional/metaphysical claims. The paper argues that these definitional/metaphysical claims are insupportable. It offers a way of reconceiving Representation Theorem arguments which avoids the untenable premises. It then develops a parallel approach to Dutch Book Arguments, and compares the results. In each case (...) 

Can some evidence confirm a conjunction of two hypotheses more than it confirms either of the hypotheses separately? We show that it can, moreover under conditions that are the same for nine different measures of confirmation. Further we demonstrate that it is even possible for the conjunction of two disconfirmed hypotheses to be confirmed by the same evidence. 
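The claimed possibility can be checked with a toy probability model (the numbers below are illustrative, not from the paper): a single piece of evidence lowers the probability of each hypothesis taken alone while raising the probability of their conjunction.

```python
# Toy check: evidence can confirm a conjunction while disconfirming each conjunct.
# Worlds are truth-value pairs (h1, h2); all numbers are illustrative.
prior = {(True, True): 0.10, (True, False): 0.40,
         (False, True): 0.40, (False, False): 0.10}
likelihood = {(True, True): 0.50, (True, False): 0.01,   # P(e | world)
              (False, True): 0.01, (False, False): 1.00}

p_e = sum(prior[w] * likelihood[w] for w in prior)
posterior = {w: prior[w] * likelihood[w] / p_e for w in prior}  # Bayes' theorem

def prob(dist, pred):
    """Probability of the event picked out by pred under distribution dist."""
    return sum(p for w, p in dist.items() if pred(w))

def h1(w): return w[0]
def h2(w): return w[1]
def conj(w): return w[0] and w[1]

print(prob(posterior, h1), prob(prior, h1))      # h1 disconfirmed: ~0.342 < 0.5
print(prob(posterior, h2), prob(prior, h2))      # h2 disconfirmed: ~0.342 < 0.5
print(prob(posterior, conj), prob(prior, conj))  # conjunction confirmed: ~0.316 > 0.1
```

Since confirmation here is just "posterior above prior", the same numbers witness the qualitative claim for any confirmation measure that agrees with that ordering.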

The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees (...) 

outlined. This account is partly inspired by the work of C.S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation (...) 

This paper argues that Ramsey's view of the calculus of subjective probabilities as, in effect, logical axioms is the correct view, with powerful heuristic value. This heuristic value is seen particularly in the analysis of the role of conditionalization in the Bayesian theory, where a semantic criterion of synchronic coherence is employed as the test of soundness, which the traditional formulation of conditionalization fails. On the other hand, there is a generally sound rule which supports conditionalization in appropriate contexts, though (...) 

The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are (...) 

The Paradox of the Ravens (a.k.a., The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts (...) 

In “A Subjectivist’s Guide to Objective Chance,” David Lewis says that he is “led to wonder whether anyone but a subjectivist is in a position to understand objective chance.” The present essay aims to motivate this same Lewisean attitude, and a similar degree of modest subjectivism, with respect to objective causation. The essay begins with Newcomb problems, which turn on an apparent tension between two principles of choice: roughly, a principle sensitive to the causal features of the relevant situation, and (...) 

The primary aim of this paper is the presentation of a foundation for causal decision theory. This is worth doing because causal decision theory (CDT) is philosophically the most adequate rational decision theory now available. I will not defend that claim here by elaborate comparison of the theory with all its competitors, but by providing the foundation. This puts the theory on an equal footing with competitors for which foundations have already been given. It turns out that it will also (...) 

Hempel first introduced the paradox of confirmation in (Hempel 1937). Since then, a very extensive literature on the paradox has evolved (Vranas 2004). Much of this literature can be seen as responding to Hempel’s subsequent discussions and analyses of the paradox in (Hempel 1945). Recently, it was noted that Hempel’s intuitive (and plausible) resolution of the paradox was inconsistent with his official theory of confirmation (Fitelson & Hawthorne 2006). In this article, we will try to explain how this inconsistency affects (...) 

It is a platitude among decision theorists that agents should choose their actions so as to maximize expected value. But exactly how to define expected value is contentious. Evidential decision theory (henceforth EDT), causal decision theory (henceforth CDT), and a theory proposed by Ralph Wedgwood that this essay will call benchmark theory (BT) all advise agents to maximize different types of expected value. Consequently, their verdicts sometimes conflict. In certain famous cases of conflict—medical Newcomb problems—CDT and BT seem to get (...) 
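The structure of the disagreement can be sketched numerically. The payoffs below are the standard Newcomb ones; the 0.9 predictor accuracy is an illustrative assumption. EDT weights outcomes by probabilities conditional on the act, while CDT weights them by act-independent (causal) probabilities of the box contents.

```python
# Illustrative Newcomb calculation: standard payoffs, assumed 0.9 predictor accuracy.
MILLION, THOUSAND = 1_000_000, 1_000
accuracy = 0.9  # P(prediction matches the agent's act)

# EDT: condition on the act. One-boxing is evidence the opaque box is full.
edt_one_box = accuracy * MILLION
edt_two_box = (1 - accuracy) * MILLION + THOUSAND

# CDT: the contents are causally fixed. For any probability p that the
# opaque box is full, two-boxing gains an extra THOUSAND.
def cdt(p_full):
    one_box = p_full * MILLION
    two_box = p_full * MILLION + THOUSAND
    return one_box, two_box

print(edt_one_box > edt_two_box)            # EDT recommends one-boxing
print(all(cdt(p)[1] > cdt(p)[0]             # CDT recommends two-boxing for every p
          for p in (0.0, 0.5, 1.0)))
```

The conflict is visible in the arithmetic: the EDT comparison depends on the accuracy figure, while the CDT comparison holds no matter what probability is assigned to the box being full.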

Richard Jeffrey long held that decision theory should be formulated without recourse to explicitly causal notions. Newcomb problems stand out as putative counterexamples to this ‘evidential’ decision theory. Jeffrey initially sought to defuse Newcomb problems via recourse to the doctrine of ratificationism, but later came to see this as problematic. We will see that Jeffrey’s worries about ratificationism were not compelling, but that valid ratificationist arguments implicitly presuppose causal decision theory. In later work, Jeffrey argued that Newcomb problems are not (...) 

“Absence of evidence isn’t evidence of absence” is a slogan that is popular among scientists and non-scientists alike. This article assesses its truth by using a probabilistic tool, the Law of Likelihood. Qualitative questions (“Is E evidence about H?”) and quantitative questions (“How much evidence does E provide about H?”) are both considered. The article discusses the example of fossil intermediates. If finding a fossil that is phenotypically intermediate between two extant species provides evidence that those species have (...) 
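Under the Law of Likelihood, E favors H over not-H just in case P(E|H) > P(E|not-H), with the likelihood ratio measuring strength. A minimal sketch, with made-up probabilities for the fossil case, shows how failing to find an intermediate fossil can itself be weak evidence of absence:

```python
# Law of Likelihood sketch; the search probabilities below are made up.
def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """E is evidence favoring H over not-H iff this ratio exceeds 1."""
    return p_e_given_h / p_e_given_not_h

# H: an intermediate species existed.  E: no fossil of it has been found.
p_find_if_exists = 0.05            # fossilization and discovery are rare
p_no_find_if_exists = 1 - p_find_if_exists
p_no_find_if_absent = 1.0          # if it never existed, nothing can be found

lr = likelihood_ratio(p_no_find_if_exists, p_no_find_if_absent)
print(lr)  # ~0.95: below 1, so the absence is (weak) evidence of absence
```

The quantitative point is that the ratio sits only slightly below 1: absence of evidence is evidence of absence here, but very weak evidence, which is presumably what makes the slogan tempting.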

Crupi et al. (2008) offer a confirmation-theoretic, Bayesian account of the conjunction fallacy—an error in reasoning that occurs when subjects judge that Pr(h1 & h2 | e) > Pr(h1 | e). They introduce three formal conditions that are satisfied by classical conjunction fallacy cases, and they show that these same conditions imply that h1 & h2 is confirmed by e to a greater extent than is h1 alone. Consequently, they suggest (...) 

The paper argues that on three out of eight possible hypotheses about the EPR experiment we can construct novel and realistic decision problems on which (a) Causal Decision Theory and Evidential Decision Theory conflict (b) Causal Decision Theory and the EPR statistics conflict. We infer that anyone who fully accepts any of these three hypotheses has strong reasons to reject Causal Decision Theory. Finally, we extend the original construction to show that anyone who gives any of the three hypotheses any (...) 

The 'Why ain'cha rich?' argument for oneboxing in Newcomb's problem allegedly vindicates evidential decision theory and undermines causal decision theory. But there is a good response to the argument on behalf of causal decision theory. I develop this response. Then I pose a new problem and use it to give a new 'Why ain'cha rich?' argument. Unlike the old argument, the new argument targets evidential decision theory. And unlike the old argument, the new argument is sound. 

This paper introduces a new equilibrium concept for normal form games called dependency equilibrium; it is defined, exemplified, and compared with Nash and correlated equilibria in Sections 2–4. Its philosophical motive is to rationalize cooperation in the one-shot prisoners' dilemma. A brief discussion of its meaningfulness in Section 5 concludes the paper. †To contact the author, please write to: Department of Philosophy, University of Konstanz, 78457 Konstanz, Germany; email: Wolfgang.Spohn@uni-konstanz.de. 
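The motivating calculation can be sketched with toy numbers (a standard prisoners' dilemma payoff table; the correlation figure is illustrative and not Spohn's formalism): when a player treats the other's act as probabilistically dependent on their own, cooperating can maximize conditional expected utility.

```python
# Toy prisoners' dilemma with probabilistically dependent acts.
# Payoffs form a standard PD; the 0.9 correlation figure is illustrative only.
payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
match_prob = 0.9  # P(the other player chooses the same act as I do)

def conditional_eu(my_act):
    """Expected payoff of my_act, with the other's act conditioned on mine."""
    other_diff = 'D' if my_act == 'C' else 'C'
    return (match_prob * payoff[(my_act, my_act)]
            + (1 - match_prob) * payoff[(my_act, other_diff)])

print(conditional_eu('C'))  # ~2.7
print(conditional_eu('D'))  # ~1.4: cooperation maximizes conditional EU
```

Under act-independent probabilities defection dominates, which is why a dependency-style equilibrium concept is needed to rationalize the cooperative outcome.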

"Five Questions on Formal Philosophy": Like the other authors in the volume, I was asked for my reflections on the character of philosophy by answering the following five questions : 1. Why were you initially drawn to formal methods? 2. What example from your work illustrates the role formal methods can play in philosophy? 3. What is the proper role of philosophy in relation to other disciplines? 4. What do you consider the most neglected topics and/or contributions in late 20th (...) 

Richard Jeffrey said that Newcomb’s Problem may be seen “as a rock on which... Bayesianism... must founder” and the problem has been almost universally conceived as reconciling the science-fictional features of the decision problem with a plausible causal analysis. Later, Jeffrey renounced his earlier position that accepted Newcomb problems as genuine decision problems, suggesting “Newcomb problems are like Escher’s famous staircase”. We may interpret this to mean that we know there can be no such thing, though we see no local (...) 

In this paper an empirical theory about the nature of intention is sketched. After stressing the necessity of reckoning with intentions in the philosophy of action, a strategy for deciding empirically between competing theories of intention is set out and applied to criticize various philosophical theories of intention, among others that of Bratman. The hypothesis that intentions are optimality beliefs is defended on the basis of empirical decision theory. Present empirical decision theory, however, does not provide an empirically satisfying elaboration of the (...) 

Two-boxers in Newcomb Problems face the question: Why aren't you rich? The essay argues that one-boxers have a false sense of advantage. They fail to align their credences during deliberation with the credences they will have when they act. This puts them in violation of the so-called Principle of Reflection, and it exposes them to a dynamic Dutch Book that will leach away any gains they achieve. 

Responding to Cappelen and Dever’s claim that there is no distinctive role for perspectivality in epistemology, I argue that facts about the outcomes of one’s own reasoning processes may have a different evidential significance than facts about the outcomes of others’. 

In the Newcomb problem, the standard arguments for taking either one box or both boxes adduce what seem to be relevant considerations, but they are not complete arguments, and attempts to complete the arguments rely upon incorrect principles of rational decision making. It is argued that by considering how the predictor is making his prediction, we can generate a more complete argument, and this in turn supports a form of causal decision theory. 

This paper aims to make three contributions to decision theory. First there is the hope that it will help to reestablish the legitimacy of the problem, pace various recent analyses provided by Maitzen and Wilson, Slezak and Priest. Second, after pointing out that analyses of the problem have generally relied upon evidence that is conditional on the taking of one particular option, this paper argues that certain assumptions implicit in those analyses are subtly flawed. As a third contribution, the piece (...) 

It is intuitively attractive to think that it makes a difference in Newcomb’s problem whether or not the predictor is infallible, in the sense of being certainly actually correct. This paper argues that that view is irrational and manifests a well-documented cognitive illusion. 

The best-known argument for Evidential Decision Theory (EDT) is the ‘Why ain’cha rich?’ challenge to rival Causal Decision Theory (CDT). The basis for this challenge is that in Newcomb-like situations, acts that conform to EDT may be known in advance to have the better return than acts that conform to CDT. Frank Arntzenius has recently proposed an ingenious counterargument, based on an example in which, he claims, it is predictable in advance that acts that conform to EDT will do (...) 

This paper raises a principled objection against the idea that Bayesian confirmation theory can be used to explain the conjunction fallacy. The paper demonstrates that confirmation-based explanations are limited in scope and can only be applied to cases of the fallacy of a certain restricted kind. In particular, confirmation-based explanations cannot account for the inverse conjunction fallacy, a more recently discovered form of the conjunction fallacy. Once the problem has been set out, the paper explores four different ways for the (...) 

Since rationality is a normative ideal, it is difficult to see how a theory of rationality might be subjected to empirical evaluation. This paper explores various aspects of this problem in relation to the work of L. J. Cohen, Amos Tversky and Daniel Kahneman, Ellery Eells, Isaac Levi, and Henry Kyburg. Special consideration is given to its significance for testing systems of inductive logic. 

Proponents of causal decision theories argue that classical Bayesian decision theory (BDT) gives the wrong advice in certain types of cases, of which the clearest and commonest are the medical Newcomb problems. I defend BDT, invoking a familiar principle of statistical inference to show that in such cases a free agent cannot take the contemplated action to be probabilistically relevant to its causes (so that BDT gives the right answer). I argue that my defence does better than those of Ellery (...) 

I argue that standard decision theories, namely causal decision theory and evidential decision theory, both are unsatisfactory. I devise a new decision theory, from which, under certain conditions, standard game theory can be derived. 

I defend evidential decision theory and the theory of deliberation-probability dynamics from a recent criticism advanced by Jordan Howard Sobel. I argue that his alleged counterexample to the theories, called the Popcorn Problem, is not a genuine counterexample. 