This article shows that a slight variation of the argument in Milne 1996 yields the log-likelihood ratio l rather than the log-ratio measure r as “the one true measure of confirmation.”
*Received December 2006; revised December 2007. †To contact the author, please write to: Formal Epistemology Research Group, Zukunftskolleg and Department of Philosophy, University of Konstanz, P.O. Box X906, 78457 Konstanz, Germany; e-mail: franz.huber@uni-konstanz.de.
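For readers unfamiliar with the two measures, their standard definitions from the confirmation literature can be sketched in a few lines. This is a minimal illustration, not code from the article, and the probability values below are purely illustrative.

```python
import math

def r(pr_h, pr_h_given_e):
    """Log-ratio measure: r(H, E) = log[ Pr(H|E) / Pr(H) ]."""
    return math.log(pr_h_given_e / pr_h)

def l(pr_e_given_h, pr_e_given_not_h):
    """Log-likelihood ratio measure: l(H, E) = log[ Pr(E|H) / Pr(E|not-H) ]."""
    return math.log(pr_e_given_h / pr_e_given_not_h)

# Evidence that raises the probability of H confirms H on both measures
# (both come out positive), but the two measures can rank hypothesis-evidence
# pairs differently, which is why the choice between them matters.
assert r(0.2, 0.5) > 0   # Pr(H) = 0.2 rises to Pr(H|E) = 0.5: confirmation
assert l(0.9, 0.3) > 0   # Pr(E|H) = 0.9 vs. Pr(E|not-H) = 0.3: confirmation
```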
This paper starts from the analysis of Hempel's conditions of adequacy for any relation of confirmation (Hempel, 1945) as presented in Huber (submitted). There I argue contra Carnap (1962, Section 87) that Hempel felt the need for two concepts of confirmation: one aiming at plausible theories and another aiming at informative theories. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative theories. The main part of the paper consists in working out the claim that one can have Hempel's cake and eat it too, in the sense that there is a logic of theory assessment that takes into account both of the two conflicting aspects of plausibility and informativeness. According to the semantics of this logic, α is an acceptable theory for evidence β if and only if α is both sufficiently plausible given β and sufficiently informative about β. This is spelt out in terms of ranking functions (Spohn, 1988) and shown to represent the syntactically specified notion of an assessment relation. The paper then compares these acceptability relations to explanatory and confirmatory consequence relations (Flach, 2000) as well as to nonmonotonic consequence relations (Kraus et al., 1990). It concludes by relating the plausibility-informativeness approach to Carnap's positive relevance account, thereby shedding new light on Carnap's analysis as well as solving another problem of confirmation theory.
The concept of sovereignty is a recurring and controversial theme in international law, and it has a long history in western philosophy. The traditionally favored concept of sovereignty proves problematic in the context of international law. International law’s own claims to sovereignty, which are premised on the traditional concept of sovereignty, undermine individual nations’ claims to sovereignty. These problems are attributable to deep-seated flaws in the traditional concept of sovereignty. A viable alternative concept of sovereignty can be derived from key concepts in Friedrich Nietzsche’s views on human reason and epistemology. The essay begins by considering the problem of sovereignty from the ancient philosophical perspective inherent in the fundamental assumptions and ideas of Plato’s political philosophy and epistemology. It then considers the contemporary problem of sovereignty in the context of international law by examining Louis Henkin’s formulation of and approach to it in his essay That S-Word: Sovereignty, and Globalization, and Human Rights, Etc. Finally, the essay articulates Nietzsche’s views on intellectual conscience, discusses their merits and advantages when used in dealing with the problem of sovereignty in the context of international law, and proposes a solution to this problem that draws on the philosophies of Nietzsche, Novalis, Kant and Plato. The essay illustrates the relevance and advantages of this solution by examining the issue of states’ reservations to international treaties and conventions.
A natural view in distributive ethics is that everyone's interests matter, but the interests of the relatively worse off matter more than the interests of the relatively better off. I provide a new argument for this view. The argument takes as its starting point the proposal, due to Harsanyi and Rawls, that facts about distributive ethics are discerned from individual preferences in the "original position." I draw on recent work in decision theory, along with an intuitive principle about risk-taking, to derive the view.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, B. C., 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness—measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh these two conflicting aspects in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, C. G., 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in § 87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press).
Section 6 addresses the question of justification that any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
Degrees of belief are familiar to all of us. Our confidence in the truth of some propositions is higher than our confidence in the truth of other propositions. We are pretty confident that our computers will boot when we push their power button, but we are much more confident that the sun will rise tomorrow. Degrees of belief formally represent the strength with which we believe the truth of various propositions. The higher an agent’s degree of belief for a particular proposition, the higher her confidence in the truth of that proposition. For instance, Sophia’s degree of belief that it will be sunny in Vienna tomorrow might be .52, whereas her degree of belief that the train will leave on time might be .23. The precise meaning of these statements depends, of course, on the underlying theory of degrees of belief. These theories offer a formal tool to measure degrees of belief, to investigate the relations between various degrees of belief in different propositions, and to normatively evaluate degrees of belief.
Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus deciding which experiments to carry out. Interestingly, Bayesian confirmation theorists rule out that confirmation be used for this purpose. We conclude that Bayesian confirmation is a means with no end. Contents: 1 Introduction; 2 Bayesian Confirmation Theory; 3 Bayesian Confirmation and Belief; 4 Confirmation and the Value of Experiments; 5 Conclusion.
We argue that a semantics for counterfactual conditionals in terms of comparative overall similarity faces a formal limitation due to Arrow’s impossibility theorem from social choice theory. According to Lewis’s account, the truth-conditions for counterfactual conditionals are given in terms of the comparative overall similarity between possible worlds, which is in turn determined by various aspects of similarity between possible worlds. We argue that a function from aspects of similarity to overall similarity should satisfy certain plausible constraints while Arrow’s impossibility theorem rules out that such a function satisfies all the constraints simultaneously. We argue that a way out of this impasse is to represent aspectual similarity in terms of ranking functions instead of representing it in a purely ordinal fashion. Further, we argue against the claim that the determination of overall similarity by aspects of similarity faces a difficulty in addition to the Arrovian limitation, namely the incommensurability of different aspects of similarity. The phenomena that have been cited as evidence for such incommensurability are best explained by ordinary vagueness.
Recent accounts of actual causation are stated in terms of extended causal models. These extended causal models contain two elements representing two seemingly distinct modalities. The first element is structural equations, which represent the laws or mechanisms of the model, just as in ordinary causal models. The second element is ranking functions, which represent normality or typicality. The aim of this paper is to show that these two modalities can be unified. I do so by formulating two constraints under which extended causal models with their two modalities can be subsumed under so-called counterfactual models, which contain just one modality. These two constraints will be formally precise versions of Lewis’s “system of weights or priorities” governing overall similarity between possible worlds.
Philosophers typically rely on intuitions when providing a semantics for counterfactual conditionals. However, intuitions regarding counterfactual conditionals are notoriously shaky. The aim of this paper is to provide a principled account of the semantics of counterfactual conditionals. This principled account is provided by what I dub the Royal Rule, a deterministic analogue of the Principal Principle relating chance and credence. The Royal Rule says that an ideal doxastic agent’s initial grade of disbelief in a proposition \(A\), given that the counterfactual distance in a given context to the closest \(A\)-worlds equals \(n\), and no further information that is not admissible in this context, should equal \(n\). Under the two assumptions that the presuppositions of a given context are admissible in this context, and that the theory of deterministic alethic or metaphysical modality is admissible in any context, it follows that the counterfactual distance distribution in a given context has the structure of a ranking function. The basic conditional logic V is shown to be sound and complete with respect to the resulting rank-theoretic semantics of counterfactuals.
Philosophically, one of the most important questions in the enterprise termed confirmation theory is this: Why should one stick to well confirmed theories rather than to any other theories? This paper discusses the answers to this question one gets from absolute and incremental Bayesian confirmation theory. According to absolute confirmation, one should accept ''absolutely well confirmed'' theories, because absolute confirmation takes one to true theories. An examination of two popular measures of incremental confirmation suggests the view that one should stick to incrementally well confirmed theories, because incremental confirmation takes one to (the most) informative (among all) true theories. However, incremental confirmation does not further this goal in general. I close by presenting a necessary and sufficient condition for revealing the confirmational structure in almost every world when presented separating data.
The paper provides an argument for the thesis that an agent’s degrees of disbelief should obey the ranking calculus. This Consistency Argument is based on the Consistency Theorem. The latter says that an agent’s belief set is and will always be consistent and deductively closed iff her degrees of entrenchment satisfy the ranking axioms and are updated according to the rank-theoretic update rules.
This paper presents a new analysis of C. G. Hempel’s conditions of adequacy for any relation of confirmation [Hempel C. G. (1945). Aspects of scientific explanation and other essays in the philosophy of science. New York: The Free Press, pp. 3–51.], differing from the one Carnap gave in §87 of his [1962. Logical foundations of probability (2nd ed.). Chicago: University of Chicago Press.]. Hempel, it is argued, felt the need for two concepts of confirmation: one aiming at true hypotheses and another aiming at informative hypotheses. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative hypotheses. I then show that one can have Hempel’s cake and eat it too. There is a logic that takes into account both of these two conflicting aspects. According to this logic, a sentence H is an acceptable hypothesis for evidence E if and only if H is both sufficiently plausible given E and sufficiently informative about E. Finally, the logic sheds new light on Carnap’s analysis.
The thesis of this paper is that we can justify induction deductively relative to one end, and deduction inductively relative to a different end. I will begin by presenting a contemporary variant of Hume’s argument for the thesis that we cannot justify the principle of induction. Then I will criticize the responses the resulting problem of induction has received by Carnap and Goodman, as well as praise Reichenbach’s approach. Some of these authors compare induction to deduction. Haack compares deduction to induction, and I will critically discuss her argument for the thesis that we cannot justify the principles of deduction next. In concluding I will defend the thesis that we can justify induction deductively relative to one end, and deduction inductively relative to a different end, and that we can do so in a non-circular way. Along the way I will show how we can understand deductive and inductive logic as normative theories, and I will briefly sketch an argument to the effect that there are only hypothetical, but no categorical imperatives.
Belief revision theory studies how an ideal doxastic agent should revise her beliefs when she receives new information. In part I, I have first presented the AGM theory of belief revision. Then I have focused on the problem of iterated belief revisions. In part II, I will first present ranking theory (Spohn 1988). Then I will show how it solves the problem of iterated belief revisions. I will conclude by sketching two areas of future research.
Belief revision theory studies how an ideal doxastic agent should revise her beliefs when she receives new information. In part I, I will first present the AGM theory of belief revision (Alchourrón, Gärdenfors & Makinson 1985). Then I will focus on the problem of iterated belief revisions.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation right, and thus scientific reasoning in general. Contents: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or ‘anything goes’; The problem of prior probabilities.
The Spohnian paradigm of ranking functions is in many respects like an order-of-magnitude reverse of subjective probability theory. Unlike probabilities, however, ranking functions are only indirectly defined on a field of propositions A over W, via a pointwise ranking function on the underlying set of possibilities W. This research note shows under which conditions ranking functions on a field of propositions A over W and rankings on a language L are induced by pointwise ranking functions on W and the set of models for L, ModL, respectively.
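The pointwise-induction construction the note studies can be sketched in a few lines. The four worlds and their ranks below are an invented toy example, not taken from the paper: a proposition's rank of disbelief is the minimum of the ranks of the worlds in it, with the empty proposition receiving infinite rank.

```python
# A pointwise ranking function assigns each possibility w in W a grade of
# disbelief (a natural number), with at least one world of rank 0.
W = {"w1", "w2", "w3", "w4"}
kappa_point = {"w1": 0, "w2": 1, "w3": 2, "w4": 2}

def kappa(proposition):
    """Ranking function on propositions induced pointwise:
    kappa(A) = min of kappa_point over the worlds in A; infinity for the
    empty proposition (the contradiction)."""
    if not proposition:
        return float("inf")
    return min(kappa_point[w] for w in proposition)

# The induced function satisfies the ranking axioms:
assert kappa(W) == 0                               # the tautology gets rank 0
assert kappa(set()) == float("inf")                # the contradiction gets rank infinity
A, B = {"w2", "w3"}, {"w3", "w4"}
assert kappa(A | B) == min(kappa(A), kappa(B))     # the law of disjunction
```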
The question I am addressing in this paper is the following: how is it possible to empirically test, or confirm, counterfactuals? After motivating this question in Section 1, I will look at two approaches to counterfactuals, and at how counterfactuals can be empirically tested, or confirmed, if at all, on these accounts in Section 2. I will then digress into the philosophy of probability in Section 3. The reason for this digression is that I want to use the way observable absolute and relative frequencies, two empirical notions, are used to empirically test, or confirm, hypotheses about objective chances, a metaphysical notion, as a role-model. Specifically, I want to use this probabilistic account of the testing of chance hypotheses as a role-model for the account of the testing of counterfactuals, another metaphysical notion, that I will present in Sections 4 to 8. I will conclude by comparing my proposal to one non-probabilistic and one probabilistic alternative in Section 9.
Crupi et al. propose a generalization of Bayesian confirmation theory that they claim to adequately deal with confirmation by uncertain evidence. Consider a series of points of time t0, ..., ti, ..., tn such that the agent’s subjective probability for an atomic proposition E changes from Pr0 at t0 to ... to Pri at ti to ... to Prn at tn. It is understood that the agent’s subjective probabilities change for E and no logically stronger proposition, and that the agent updates her subjective probabilities by Jeffrey conditionalization. For this specific scenario the authors propose to take the difference between Pr0 and Pri as the degree to which E confirms H for the agent at time ti, C0,i. This proposal is claimed to be adequate because...
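Jeffrey conditionalization, the update rule the scenario assumes, can be sketched as follows. The conditional probabilities and the shifted value of Pr(E) are illustrative numbers, not taken from the paper: the new probability of H is the mixture of Pr(H|E) and Pr(H|¬E), weighted by E's new probability.

```python
def jeffrey_update(pr_h_given_e, pr_h_given_not_e, new_pr_e):
    """Jeffrey conditionalization: after an uncertain learning experience
    shifts E's probability to new_pr_e, the new probability of H is
    Pr(H|E) * Pr_new(E) + Pr(H|not-E) * Pr_new(not-E)."""
    return pr_h_given_e * new_pr_e + pr_h_given_not_e * (1.0 - new_pr_e)

# Strict conditionalization is the special case where E becomes certain:
assert jeffrey_update(0.8, 0.3, 1.0) == 0.8
# A partial shift toward E moves Pr(H) only part of the way:
new_pr_h = jeffrey_update(0.8, 0.3, 0.6)   # 0.8*0.6 + 0.3*0.4
assert abs(new_pr_h - 0.6) < 1e-12
```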
Thomas Bonk has dedicated a book to analyzing the thesis of underdetermination of scientific theories, with a chapter exclusively devoted to the analysis of the relation between this idea and the indeterminacy of meaning. Both theses caused a revolution in the philosophical world in the sixties, generating a cascade of articles and doctoral theses. The agitation seems to have cooled down, but the point is still debated, and it may be experiencing a renewed resurgence.
Weisberg introduces a phenomenon he terms perceptual undermining. He argues that it poses a problem for Jeffrey conditionalization, and Bayesian epistemology in general. This is Weisberg’s paradox. Weisberg argues that perceptual undermining also poses a problem for ranking theory and for Dempster-Shafer theory. In this note I argue that perceptual undermining does not pose a problem for any of these theories: for true conditionalizers Weisberg’s paradox is a false alarm.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen 1983, 27).
Ranking functions have been introduced under the name of ordinal conditional functions in Spohn (1988; 1990). They are representations of epistemic states and their dynamics. The most comprehensive and up-to-date presentation is Spohn (manuscript).
The paper presents a new analysis of Hempel’s conditions of adequacy, differing from the one in Carnap. Hempel, so it is argued, felt the need for two concepts of confirmation: one aiming at true theories, and another aiming at informative theories. However, so the analysis continues, he also realized that these two concepts were conflicting, and so he gave up the concept of confirmation aiming at informative theories. It is then shown that one can have the cake and eat it: There is a logic of confirmation that accounts for both of these two conflicting aspects.
Logic is the study of the quality of arguments. An argument consists of a set of premises and a conclusion. The quality of an argument depends on at least two factors: the truth of the premises, and the strength with which the premises confirm the conclusion. The truth of the premises is a contingent factor that depends on the state of the world. The strength with which the premises confirm the conclusion is supposed to be independent of the state of the world. Logic is only concerned with this second, logical factor of the quality of arguments.
Searle proposes an argument in order to prove the existence of universals and thereby solve the problem of universals: From every meaningful general term P(x) follows a tautology ∀x[P(x) ∨ ¬P(x)], which entails the existence of the corresponding universal P. To be convincing, this argument for existence must be valid, it must presume true premises and it must be free of any informal fallacy. First, the validity of the argument for existence in its non-modal interpretation will be proven with the help of the formal deductive system F. Secondly, it will be shown that a self-contradictory tautology concept is employed, which renders the premises meaningless. Consequently, the inconsistency will be emended through redefinition and the argument's ensuing correctness will be demonstrated. Finally, it will be shown that the argument for existence presupposes the existence of universals in its premise and hence begs the question.
This paper discusses an almost sixty-year-old problem in the philosophy of science -- that of a logic of confirmation. We present a new analysis of Carl G. Hempel's conditions of adequacy (Hempel 1945), differing from the one Carnap gave in §87 of his Logical Foundations of Probability (1962). Hempel, it is argued, felt the need for two concepts of confirmation: one aiming at true theories and another aiming at informative theories. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative theories. We then show that one can have Hempel's cake and eat it, too: There is a (rank-theoretic and genuinely nonmonotonic) logic of confirmation -- or rather, theory assessment -- that takes into account both of these two conflicting aspects. According to this logic, a statement H is an acceptable theory for the data E if and only if H is both sufficiently plausible given E and sufficiently informative about E. Finally, the logic sheds new light on Carnap's analysis (and solves another problem of confirmation theory).
The article criticizes Javier Mosterín's thesis (defended in his book Vivan los Animales, Debate, 1998) that non-human animals should be included in the moral community as the result of a moral progress in human sensibility that will progressively lead to greater compassion for the suffering of others. After pointing out certain metaethical shortcomings of this proposal, and its unwelcome normative implication that any living being would ultimately have to be included in the moral community, the article sketches some reasons for granting animals better consideration without thereby ending up in that ecological radicalism.
What is faith? Lara Buchak has done as much as anyone recently to answer our question in a sensible and instructive fashion. As it turns out, her writings reveal two theories of faith, an early one and a later one (or, if you like, two versions of the same theory). In what follows, we aim to do three things. First, we will state and assess Buchak’s early theory, highlighting both its good-making and bad-making features. Second, we will do the same for her later theory, noting improvements on the early one. Third, we will mark various choice points in theorizing about faith, and we will argue for specific choices at those points, culminating in what we regard as a better, alternative theory of faith. Our critical aims, therefore, are ultimately constructive. By theorizing about faith with Lara Buchak, we aim to contribute to our common understanding of what faith is.
Faith is often regarded as having a fraught relationship with evidence. Lara Buchak even argues that it entails foregoing evidence, at least when this evidence would influence your decision to act on the proposition in which you have faith. I present a counterexample inspired by the book of Job, in which seeking evidence for the sake of deciding whether to worship God is not only compatible with faith, but is in fact an expression of great faith. One might still think that foregoing evidence may make faith more praiseworthy than otherwise. But I argue against this claim too, once more drawing on Job. A faith that expresses itself by a search for evidence can be more praiseworthy than a faith that sits passively in the face of epistemic adversity.
Lara Buchak argues for a version of rank-weighted utilitarianism that assigns greater weight to the interests of the worse off. She argues that our distributive principles should be derived from the preferences of rational individuals behind a veil of ignorance, who ought to be risk averse. I argue that Buchak’s appeal to the veil of ignorance leads to a particular way of extending rank-weighted utilitarianism to the evaluation of uncertain prospects. This method recommends choices that violate the unanimous preferences of rational individuals and choices that guarantee worse distributions. These results, I suggest, undermine Buchak’s argument for rank-weighted utilitarianism.
There are currently two robust traditions in philosophy dealing with doxastic attitudes: the tradition that is concerned primarily with all-or-nothing belief, and the tradition that is concerned primarily with degree of belief or credence. This paper concerns the relationship between belief and credence for a rational agent, and is directed at those who may have hoped that the notion of belief can either be reduced to credence or eliminated altogether when characterizing the norms governing ideally rational agents. It presents a puzzle which lends support to two theses. First, that there is no formal reduction of a rational agent’s beliefs to her credences, because belief and credence are each responsive to different features of a body of evidence. Second, that if our traditional understanding of our practices of holding each other responsible is correct, then belief has a distinctive role to play, even for ideally rational agents, that cannot be played by credence. The question of which avenues remain for the credence-only theorist is considered.
This paper provides an account of what it is to have faith in a proposition p, in both religious and mundane contexts. It is argued that faith in p doesn’t require adopting a degree of belief that isn’t supported by one’s evidence but rather it requires terminating one’s search for further evidence and acting on the supposition that p. It is then shown, by responding to a formal result due to I. J. Good, that doing so can be rational in a number of circumstances. If expected utility theory is the correct account of practical rationality, then having faith can be both epistemically and practically rational if the costs associated with gathering further evidence or postponing the decision are high. If a more permissive framework is adopted, then having faith can be rational even when there are no costs associated with gathering further evidence.
The ‘rollback argument,’ pioneered by Peter van Inwagen, purports to show that indeterminism in any form is incompatible with free will. The argument has two major premises: the first claims that certain facts about chances obtain in a certain kind of hypothetical situation, and the second that these facts entail that some actual act is not free. Since the publication of the rollback argument, the second claim has been vehemently debated, but everyone seems to have taken the first claim for granted. Nevertheless, the first claim is totally unjustified. Even if we accept the second claim, therefore, the argument gives us no reason to think that free will and indeterminism are incompatible. Furthermore, seeing where the rollback argument goes wrong illuminates how a certain kind of incompatibilist, the ‘chance-incompatibilist,’ ought to think about free will and chance, and points to a possibility for free will that has remained largely unexplored.
The veil of ignorance argument was used by John C. Harsanyi to defend Utilitarianism and by John Rawls to defend the absolute priority of the worst off. In a recent paper, Lara Buchak revives the veil of ignorance argument, and uses it to defend an intermediate position between Harsanyi's and Rawls' that she calls Relative Prioritarianism. None of these authors explores the implications of allowing that agents behind the veil are averse to ambiguity. Allowing for aversion to ambiguity, however, which is both the most commonly observed and a seemingly reasonable attitude to ambiguity, supports a version of Egalitarianism, whose logical form is quite different from the theories defended by the aforementioned authors. Moreover, it turns out that the veil of ignorance argument supports neither standard Utilitarianism nor Prioritarianism unless we assume that rational people are insensitive to ambiguity.
This article argues that Lara Buchak’s risk-weighted expected utility theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory’s claim to capturing a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
Decision theory has at its core a set of mathematical theorems that connect rational preferences to functions with certain structural properties. The components of these theorems, as well as their bearing on questions surrounding rationality, can be interpreted in a variety of ways. Philosophy’s current interest in decision theory represents a convergence of two very different lines of thought, one concerned with the question of how one ought to act, and the other concerned with the question of what action consists in and what it reveals about the actor’s mental states. As a result, the theory has come to have two different uses in philosophy, which we might call the normative use and the interpretive use. It also has a related use that is largely within the domain of psychology, the descriptive use. This essay examines the historical development of decision theory and its uses; the relationship between the norm of decision theory and the notion of rationality; and the interdependence of the uses of decision theory.