The most immediately appealing model for formal constraints on degrees of belief is provided by probability theory, which tells us, for instance, that the probability of P can never be greater than that of (P v Q). But while this model has much intuitive appeal, many have been concerned to provide arguments showing that ideally rational degrees of belief would conform to the calculus of probabilities. The arguments most frequently used to make this claim plausible are the so-called "Dutch Book" arguments.
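As a minimal numeric sketch of how such an argument runs (the credences below are illustrative, not drawn from any of the papers listed here), consider an agent whose credence in P exceeds her credence in (P v Q):

```python
# Sketch of a Dutch Book against an agent with c(P) > c(P v Q).
# Both trades below are fair by the agent's own lights.
c_P, c_PorQ = 0.7, 0.5   # illustrative, incoherent credences

def net_payoff(p_true, q_true):
    """Agent buys a $1 bet on P at price c_P and sells a $1 bet
    on (P v Q) at price c_PorQ; returns her total gain/loss."""
    buy_P = (1 if p_true else 0) - c_P
    sell_PorQ = c_PorQ - (1 if (p_true or q_true) else 0)
    return buy_P + sell_PorQ

outcomes = [(True, True), (True, False), (False, True), (False, False)]
losses = [net_payoff(p, q) for p, q in outcomes]
assert all(l < 0 for l in losses)   # she loses however the world turns out
```

However P and Q turn out, the agent loses at least 0.2: this guaranteed loss, available whenever the probability axioms are violated, is the engine of every Dutch Book argument below.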
The Reflection Principle can be defended with a Diachronic Dutch Book Argument (DBA), but it is also defeated by numerous compelling counter-examples. It seems then that Diachronic DBAs can lead us astray. Should we reject them en masse—including Lewis’s Diachronic DBA for Conditionalization? Rachael Briggs’s “suppositional test” is supposed to differentiate between Diachronic DBAs that we can safely ignore (including the DBA for Reflection) and Diachronic DBAs that we should find compelling (including the DBA for Conditionalization). I argue that Briggs’s suppositional test is wrong: it sets the bar for coherence too high and places certain cases of self-doubt on the wrong side of the divide. Given that the suppositional test is unsatisfactory, we are left without any justification for discriminating between Diachronic DBAs and ought to reject them all—including the DBA for Conditionalization.
Dutch Book arguments have been presented for static belief systems and for belief change by conditionalization. An argument is given here that a rule for belief change which under certain conditions violates probability kinematics will leave the agent open to a Dutch Book.
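The diachronic construction behind such arguments can be sketched numerically. The following is a Lewis-style strategy against an agent who announces a posterior other than the conditional probability; all figures are illustrative, not taken from the paper above:

```python
# Diachronic Dutch Book against a planned violation of conditionalization.
# Illustrative prior: P(E) = 0.5, P(H|E) = 0.8; the agent plans to adopt
# credence 0.6 in H upon learning E (instead of 0.8).
p_E, p_H_given_E, planned = 0.5, 0.8, 0.6

def net_payoff(E, H):
    total = 0.0
    # Bet 1 (now): conditional bet on H given E at price 0.8,
    # price refunded if E turns out false.
    total -= p_H_given_E
    if not E:
        total += p_H_given_E               # refund
    elif H:
        total += 1.0                        # bet wins
    # Bet 2 (now): a bet paying (0.8 - 0.6) if E, bought at its fair price.
    stake = p_H_given_E - planned
    total += -stake * p_E + (stake if E else 0.0)
    # Later: if E is learned, the agent values a $1 bet on H at 0.6,
    # so she willingly sells one to the bookie at that price.
    if E:
        total += planned - (1.0 if H else 0.0)
    return total

losses = [net_payoff(E, H) for E in (True, False) for H in (True, False)]
assert all(abs(l + 0.1) < 1e-9 for l in losses)   # sure loss in every state
```

Every bet is fair by the agent's own lights at the time it is placed, yet the package guarantees a loss of (0.8 − 0.6) × P(E) = 0.1 in every possible state.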
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018), which I call the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
“Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed by practicing experimentalists.
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, plus the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds due to a peculiar feature of her epistemic situation.
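The fair-odds calculation behind the opening question can be made concrete as follows. This is only the uncontroversial expected-value bookkeeping; whether Sleeping Beauty's credences should track such odds is exactly what the paper disputes:

```python
# Expected value, by the agent's lights, of a bet on E that wins
# `win_amount` and risks `stake`. "Even odds" means win_amount == stake.
def expected_value(credence, win_amount, stake):
    return credence * win_amount - (1 - credence) * stake

# At credence 1/2, a bet at even odds is exactly fair...
assert expected_value(0.5, 1.0, 1.0) == 0.0
# ...and any better-than-even odds make it strictly favourable.
assert expected_value(0.5, 1.5, 1.0) > 0.0
```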
We point out a yet unnoticed flaw in Dutch Book arguments that relates to a link between degrees of belief and betting quotients. We offer a set of precise conditions governing when a nonprobabilist is immune to the classical Dutch Book argument. We suggest that diachronic Dutch Book arguments are also affected.
A handful of well-known arguments (the 'diachronic Dutch book arguments') rely upon theorems establishing that, in certain circumstances, you are immune from sure monetary loss (you are not 'diachronically Dutch bookable') if and only if you adopt the strategy of conditionalizing (or Jeffrey conditionalizing) on whatever evidence you happen to receive. These theorems require non-trivial assumptions about which evidence you might acquire---in the case of conditionalization, the assumption is that, if you might learn that e, then it is not the case that you might learn something else that is consistent with e. These assumptions cannot be relaxed: when they are, not only will non-(Jeffrey) conditionalizers be immune from diachronic Dutch bookability, but (Jeffrey) conditionalizers will themselves be diachronically Dutch bookable. I argue: 1) that there are epistemic situations in which these assumptions are violated; 2) that this reveals a conflict between the premise that susceptibility to sure monetary loss is irrational, on the one hand, and the view that rational belief revision is a function of your prior beliefs and the acquired evidence alone, on the other; and 3) that this inconsistency demonstrates that diachronic Dutch book arguments for (Jeffrey) conditionalization are invalid.
One guide to an argument's significance is the number and variety of refutations it attracts. By this measure, the Dutch Book argument has considerable importance. Of course this measure alone is not a sure guide to locating arguments deserving of our attention—if a decisive refutation has really been given, we are better off pursuing other topics. But the presence of many and varied counterarguments at least suggests either that the refutations are controversial, or that their target admits of more than one interpretation, or both. The main point of this paper is to focus on a way of understanding the Dutch Book argument (DBA) that avoids many of the well-known criticisms, and to consider how it fares against an important criticism that still remains: the objection that the DBA presupposes value-independence of bets.
In this paper I present a new way of understanding Dutch Book Arguments: the idea is that an agent is shown to be incoherent iff he would accept as fair a set of bets that would result in a loss under any interpretation of the claims involved. This draws on a standard definition of logical inconsistency. On this new understanding, the Dutch Book Arguments for the probability axioms go through, but the Dutch Book Argument for Reflection fails. The question of whether we have a Dutch Book Argument for Conditionalization is left open.
Dutch Book Arguments (DBAs) have been invoked to support various requirements of rationality. Some are plausible: probabilism and conditionalization. Others are less so: credal transparency and reflection. Anna Mahtani has argued for a new understanding of DBAs which, she claims, allows us to keep the DBAs for probabilism (and perhaps conditionalization) and reject the DBAs for credal transparency and reflection. I argue that Mahtani’s new account fails as (a) it does not support highly plausible requirements of rational coherence and (b) it does not, even setting aside the first objection, succeed in undermining the DBAs for credal transparency or reflection.
Dutch book and accuracy arguments are used to justify certain rationality constraints on credence functions. Underlying these Dutch book and accuracy arguments are associated theorems, and I show that the interpretation of these theorems can vary along a range of dimensions. Given that the theorems can be interpreted in a variety of different ways, what is the status of the associated arguments? I consider three possibilities: we could aggregate the results of the differently interpreted theorems in some way, and motivate rationality constraints based on this aggregation; we could be permissive, and accept the conclusions of the Dutch book and accuracy arguments under all interpretations of the associated theorems; or we could select one uniquely correct interpretation of the Dutch book or accuracy theorem, and use that to justify certain rationality constraints. I show that each possibility faces problems, and conclude that Dutch book and accuracy theorems cannot be used to justify any principle of rationality.
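The two families of theorems at issue here can be illustrated side by side on a single incoherent credence function (the figures are illustrative and not drawn from the paper):

```python
# One incoherent credence function, hit by both kinds of theorem.
b = (0.6, 0.6)   # credences in (P, not-P): sums to 1.2, so incoherent
c = (0.5, 0.5)   # a coherent alternative

def brier(cred, world):
    """Brier inaccuracy score; world = 1 if P is true, 0 otherwise."""
    truths = (world, 1 - world)
    return sum((t - x) ** 2 for t, x in zip(truths, cred))

# Accuracy theorem: c is strictly more accurate than b in every world.
assert all(brier(c, w) < brier(b, w) for w in (0, 1))

# Dutch book theorem: selling the agent $1 bets on P and on not-P at her
# own prices collects 1.2 and pays out exactly 1, whatever happens.
sure_loss = sum(b) - 1.0
assert abs(sure_loss - 0.2) < 1e-9
```

The same incoherence thus registers both as accuracy domination and as a guaranteed betting loss, which is why the interpretive questions raised above apply to both results in parallel.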
In this paper, I present an argument for a rational norm involving a kind of credal attitude called a quantificational credence – the kind of attitude we can report by saying that Lucy thinks that each record in Schroeder’s collection is 5% likely to be scratched. I prove a result called a Dutch Book Theorem, which constitutes conditional support for the norm. Though Dutch Book Theorems exist for norms on ordinary and conditional credences, there is controversy about the epistemic significance of these results. So, my conclusion is that if Dutch Book Theorems do, in general, support norms on credal states, then we have support for the suggested norm on quantificational credences. Providing conditional support for this norm gives us a fuller picture of the normative landscape of credal states.
In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that can apply to them regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This happens to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
De Finetti would claim that we can make sense of a draw in which each positive integer has equal probability of winning. This requires a uniform probability distribution over the natural numbers, violating countable additivity. Countable additivity thus appears not to be a fundamental constraint on subjective probability. It does, however, seem mandated by Dutch Book arguments similar to those that support the other axioms of the probability calculus as compulsory for subjective interpretations. These two lines of reasoning can be reconciled through a slight generalization of the Dutch Book framework. Countable additivity may indeed be abandoned for de Finetti's lottery, but this poses no serious threat to its adoption in most applications of subjective probability. 1 Introduction 2 The de Finetti lottery 3 Two objections to equiprobability 3.1 The ‘No random mechanism’ argument 3.2 The Dutch Book argument 4 Equiprobability and relative betting quotients 5 The re-labelling paradox 5.1 The paradox 5.2 Resolution: from symmetry to relative probability 6 Beyond the de Finetti lottery.
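The countable Dutch Book that seems to mandate countable additivity can be sketched as follows. Since a uniform distribution over the naturals gives each ticket probability 0, the agent will sell, for nothing, a $1 bet on each ticket; the simulation below truncates the infinite book at N tickets:

```python
# Countable Dutch Book against the de Finetti lottery.
# Uniformity over the naturals forces P({n}) = 0 for every ticket n.
N = 10_000                 # truncation of the infinite book, for simulation
price_per_ticket = 0.0     # the agent's fair price for a $1 bet on ticket n

def agent_net(winning_ticket):
    """Agent's net result after selling a $1 bet on every ticket."""
    collected = price_per_ticket * N   # receipts from selling the bets: 0
    paid_out = 1.0                      # exactly one ticket wins
    return collected - paid_out

# Whichever ticket wins, the agent is down exactly $1.
assert all(agent_net(n) == -1.0 for n in range(100))
```

The loss is guaranteed only because infinitely many bets are accepted at once, which is precisely the feature the generalization mentioned above exploits to let countable additivity fail for the lottery without wrecking ordinary applications.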
Both Representation Theorem Arguments and Dutch Book Arguments support taking probabilistic coherence as an epistemic norm. Both depend on connecting beliefs to preferences, which are not clearly within the epistemic domain. Moreover, these connections are standardly grounded in questionable definitional/metaphysical claims. The paper argues that these definitional/metaphysical claims are insupportable. It offers a way of reconceiving Representation Theorem arguments which avoids the untenable premises. It then develops a parallel approach to Dutch Book Arguments, and compares the results. In each case preference defects serve as a diagnostic tool, indicating purely epistemic defects.
The Sleeping Beauty problem has attracted considerable attention in the literature as a paradigmatic example of how self-locating uncertainty creates problems for the Bayesian principles of Conditionalization and Reflection. Furthermore, it is also thought to raise serious issues for diachronic Dutch Book arguments. I show that, contrary to what is commonly accepted, it is possible to represent the Sleeping Beauty problem within a standard Bayesian framework. Once the problem is correctly represented, the ‘thirder’ solution satisfies standard rationality principles, vindicating why it is not vulnerable to diachronic Dutch Book arguments. Moreover, the diachronic Dutch Books against the ‘halfer’ solutions fail to undermine the standard arguments for Conditionalization. The main upshot that emerges from my discussion is that the disagreement between different solutions does not challenge the applicability of Bayesian reasoning to centered settings, nor the commitment to Conditionalization, but is instead an instance of the familiar problem of choosing the priors.
Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever you take yourself to learn something with certainty, it's true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla's new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla's version follows as a special case of that. Second, I want to show how to generalise Briggs and Pettigrew's Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs & Pettigrew 2018).
Jeff Paris proves a generalized Dutch Book theorem. If a belief state is not a generalized probability then one faces ‘sure loss’ books of bets. In Williams I showed that Joyce’s accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that both results are easy corollaries of the core result that Paris appeals to in proving his Dutch Book theorem. We see that every point of accuracy-domination defines a Dutch book, but we only have a partial converse.
The idea that beliefs may be stake-sensitive is explored. This is the idea that the strength with which a single, persistent belief is held may vary and depend upon what the believer takes to be at stake. The stakes in question are tied to the truth of the belief—not, as in Pascal’s wager and other cases, to the belief’s presence. Categorical beliefs and degrees of belief are considered; both kinds of account typically exclude the idea and treat belief as stake-invariant, though an exception is briefly described. The role of the assumption of stake-invariance in familiar accounts of degrees of belief is also discussed, and morals are drawn concerning finite and countable Dutch book arguments.
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
Peter Walley argues that a vague credal state need not be representable by a set of probability functions that could represent precise credal states, because he believes that the members of the representor set need not be countably additive. I argue that the states he defends are in a way incoherent.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
Recently, there has been some discussion of how Dutch Book arguments might be used to demonstrate the rational incoherence of certain hidden variable models of quantum theory. In this paper, we argue that the 'form of inconsistency' underlying this alleged irrationality is deeply and comprehensively related to the more familiar 'inconsistency' phenomenon of contextuality. Our main result is that the hierarchy of contextuality due to Abramsky and Brandenburger corresponds to a hierarchy of additivity/convexity-violations which yields formal Dutch Books of different strengths. We then use this result to provide a partial assessment of whether these formal Dutch Books can be interpreted normatively.
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Hacking (1967, pp. 311–325), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given time; an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence; and an account of when logical ignorance is irrational and when it isn’t. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
The best accuracy arguments for probabilism apply only to credence functions with finite domains, that is, credence functions that assign credence to at most finitely many propositions. This is a significant limitation. It reveals that the support for the accuracy-first program in epistemology is a lot weaker than it seems at first glance, and it means that accuracy arguments cannot yet accomplish everything that their competitors, the pragmatic (Dutch book) arguments, can. In this paper, I investigate the extent to which this limitation can be overcome. Building on the best arguments in finite domains, I present two accuracy arguments for probabilism that are perfectly general—they apply to credence functions with arbitrary domains. I then discuss how the arguments’ premises can be challenged. We will see that it is particularly difficult to characterize admissible accuracy measures in infinite domains.
This paper describes a cubic water tank equipped with a movable partition receiving various amounts of liquid used to represent joint probability distributions. This device is applied to the investigation of deductive inferences under uncertainty. The analogy is exploited to determine by qualitative reasoning the limits in probability of the conclusion of twenty basic deductive arguments (such as Modus Ponens, And-introduction, Contraposition, etc.) often used as benchmark problems by the various theoretical approaches to reasoning under uncertainty. The probability bounds imposed by the premises on the conclusion are derived on the basis of a few trivial principles such as "a part of the tank cannot contain more liquid than its capacity allows", or "if a part is empty, the other part contains all the liquid". This stems from the equivalence between the physical constraints imposed by the capacity of the tank and its subdivisions on the volumes of liquid, and the axioms and rules of probability. The device materializes de Finetti's coherence approach to probability. It also suggests a physical counterpart of Dutch book arguments to assess individuals' rationality in probability judgments in the sense that individuals whose degrees of belief in a conclusion are out of the bounds of coherence intervals would commit themselves to executing physically impossible tasks. (shrink)
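The kind of probability bound the tank makes visible can be checked directly. For Modus Ponens with premises P(A) = a and P(B|A) = c, the conclusion's probability is confined to the interval [a*c, a*c + 1 - a]; the sketch below (with illustrative premise values) sweeps over every joint distribution consistent with the premises:

```python
# Coherence bounds on the conclusion of Modus Ponens.
# Premises fix P(A & B) = a*c and P(A & not-B) = a*(1-c); the free
# parameter t in [0, 1] is the share of the remaining not-A mass
# (1 - a) assigned to worlds where B holds.
a, c = 0.9, 0.8   # illustrative: P(A) = 0.9, P(B|A) = 0.8

def p_B(t):
    return a * c + t * (1 - a)

values = [p_B(t / 100) for t in range(101)]
lower, upper = min(values), max(values)
assert abs(lower - a * c) < 1e-9            # lower bound: 0.72
assert abs(upper - (a * c + 1 - a)) < 1e-9  # upper bound: 0.82
```

A credence in B outside [0.72, 0.82] corresponds, in the tank analogy, to demanding a volume of liquid the partitioned compartments cannot physically accommodate.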
We generalize the Kolmogorov axioms for probability calculus to obtain conditions defining, for any given logic, a class of probability functions relative to that logic, coinciding with the standard probability functions in the special case of classical logic but allowing consideration of other classes of "essentially Kolmogorovian" probability functions relative to other logics. We take a broad view of the Bayesian approach as dictating inter alia that from the perspective of a given logic, rational degrees of belief are those representable by probability functions from the class appropriate to that logic. Classical Bayesianism, which fixes the logic as classical logic, is only one version of this general approach. Another, which we call Intuitionistic Bayesianism, selects intuitionistic logic as the preferred logic and the associated class of probability functions as the right class of candidate representations of epistemic states (rational allocations of degrees of belief). Various objections to classical Bayesianism are, we argue, best met by passing to intuitionistic Bayesianism—in which the probability functions are taken relative to intuitionistic logic—rather than by adopting a radically non-Kolmogorovian, for example, nonadditive, conception of (or substitute for) probability functions, in spite of the popularity of the latter response among those who have raised these objections. The interest of intuitionistic Bayesianism is further enhanced by the availability of a Dutch Book argument justifying the selection of intuitionistic probability functions as guides to rational betting behavior when due consideration is paid to the fact that bets are settled only when/if the outcome bet on becomes known.
This paper calls for a re-appraisal of McGee's analysis of the semantics, logic and probabilities of indicative conditionals presented in his 1989 paper Conditional probabilities and compounds of conditionals. The probabilistic measures introduced by McGee are given a new axiomatisation built on the principle that the antecedent of a conditional is probabilistically independent of the conditional, and a more transparent method of constructing such measures is provided. McGee's Dutch book argument is restructured to more clearly reveal that it introduces a novel contribution to the epistemology of semantic indeterminacy, and shows that its more controversial implications are unavoidable if we want to maintain the Ramsey Test along with the standard laws of probability. Importantly, it is shown that the counterexamples that have been levelled at McGee's analysis, generating a rather wide consensus that it yields 'unintuitive' or 'wrong' probabilities for compounds, fail to strike at their intended target; for to honour the intuitions of the counterexamples one must either give up the Ramsey Test or the standard laws of probability. It will be argued that we need to give up neither if we take the counterexamples as further evidence that the indicative conditional sometimes allows for a non-epistemic 'causal' interpretation alongside its usual epistemic interpretation.
The arguments for Bayesianism in the literature fall into three broad categories. There are Dutch Book arguments, both of the traditional pragmatic variety and the modern ‘depragmatised’ form. And there are arguments from the so-called ‘representation theorems’. The arguments have many similarities, for example they have a common conclusion, and they all derive epistemic constraints from considerations about coherent preferences, but they have enough differences to produce hostilities between their proponents. In a recent paper, Maher (1997) has argued that the pragmatised Dutch Book arguments are unsound and the depragmatised Dutch Book arguments question begging. He urges we instead use the representation theorem argument as in his (1993). In this paper I argue that Maher’s own argument is question-begging, though in a more subtle and interesting way than his Dutch Book wielding opponents.
According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the evidence calls for. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In the first essay I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid without compromising the initial evidentialist motivation. The second variation replaces descriptive claims with moral claims as the objects of credence. I consider three standard arguments for probabilism with respect to descriptive uncertainty—representation theorem arguments, Dutch book arguments, and accuracy arguments—in order to examine whether such arguments can also be used to establish probabilism with respect to moral uncertainty. In the second essay, I argue that by and large they can, with some caveats. First, I don’t examine whether these arguments can be given sound non-cognitivist readings, and any conclusions therefore only hold conditional on cognitivism. Second, decision-theoretic representation theorems are found to be less convincing in the moral case, because there they implausibly commit us to thinking that intertheoretic comparisons of value are always possible. Third and finally, certain considerations may lead one to think that imprecise probabilism provides a more plausible model of moral epistemology.
The third variation considers whether, in addition to conditionalization, agents may also change their minds by becoming aware of propositions they had not previously entertained, and therefore not previously assigned any probability. More specifically, I argue that if we wish to make room for reflective equilibrium in a probabilistic moral epistemology, we must allow for awareness growth. In the third essay, I sketch the outline of such a Bayesian account of reflective equilibrium. Given that this account gives a central place to awareness growth, and that the rationality constraints on belief change by awareness growth are much weaker than those on belief change by conditionalization, it follows that the rationality constraints on the credences of agents who are seeking reflective equilibrium are correspondingly weaker. (shrink)
English abstract: This paper discusses the delicate relationship between traditional epistemology and the increasingly influential probabilistic (or ‘Bayesian’) approach to epistemology. The paper introduces some of the key ideas of probabilistic epistemology, including credences or degrees of belief, Bayes’ theorem, conditionalization, and the Dutch Book argument. The tension between traditional and probabilistic epistemology is brought out by considering the lottery and preface paradoxes as they relate to rational (binary) belief and credence respectively. It is then argued that this tension can be alleviated by rejecting the requirement that rational (binary) beliefs must be consistent and closed under logical entailment. Instead, it is suggested that this logical requirement applies to a different type of binary propositional attitude, viz. acceptance.
In this article, I introduce the term “cognitivism” as a name for the thesis that degrees of belief are equivalent to full beliefs about truth-valued propositions. The thesis (of cognitivism) that degrees of belief are equivalent to full beliefs is equivocal, inasmuch as different sorts of equivalence may be postulated between degrees of belief and full beliefs. The simplest sort of equivalence (and the sort of equivalence that I discuss here) identifies having a given degree of belief with having a full belief with a specific content. This sort of view was proposed in [C. Howson and P. Urbach, Scientific reasoning: the Bayesian approach. Chicago: Open Court (1996)]. In addition to embracing a form of cognitivism about degrees of belief, Howson and Urbach argued for a brand of probabilism. I call a view, such as Howson and Urbach’s, which combines probabilism with cognitivism about degrees of belief “cognitivist probabilism”. In order to address some problems with Howson and Urbach’s view, I propose a view that incorporates several modifications of Howson and Urbach’s version of cognitivist probabilism. The view that I finally propose upholds cognitivism about degrees of belief, but deviates from the letter of probabilism, in allowing that a rational agent’s degrees of belief need not conform to the axioms of probability, in the case where the agent’s cognitive resources are limited.
Why is it good to be less, rather than more incoherent? Julia Staffel, in her excellent book "Unsettled Thoughts," answers this question by showing that if your credences are incoherent, then there is some way of nudging them toward coherence that is guaranteed to make them more accurate and reduce the extent to which they are Dutch-bookable. This seems to show that such a nudge toward coherence makes them better fit to play their key epistemic and practical roles: representing the world and guiding action. In this paper, I argue that Staffel's strategy needs a small tweak. While she identifies appropriate measures of epistemic value, she does not identify appropriate measures of practical value. Staffel measures practical value using Dutch-bookability scores. But credences have practical value in virtue of recommending actions that produce as much utility as possible. And while susceptibility to a Dutch book is a surefire sign that one's credences are needlessly bad at this task, one's degree of Dutch-bookability is not itself a good measure of how well they recommend practically valuable actions. Strictly proper scoring rules, I argue, are the right tools for measuring both epistemic and practical value. I show that we can rerun Staffel's strategy swapping in strictly proper scoring rules for Dutch-bookability measures. So long as one's epistemic scoring rule and practical scoring rule are "sufficiently similar," there is some way of nudging incoherent credences toward coherence that is guaranteed to yield more of both types of value.
Bayesian epistemologists propose norms of rationality based on the probability calculus. 'Probabilism' states that agents must hold credences that are consistent with the axioms of probability. 'Conditionalization' states that credences must be updated using Bayesian conditionalization. These norms are supported using 'maximization arguments' such as Dutch book and accuracy arguments. These arguments presuppose that rationality requires agents to maximize (practical or epistemic) value in every doxastic state, whose evaluation is done from a subjective point of view. Accuracy arguments also presuppose that agents are opinionated. The first assumptions are reasonable, but not mandatory for the notion of rationality. The assumption of opinionation is questionable. In this paper, I investigate whether these norms (or opinionation) are supported by a maximization argument without these assumptions. I have designed AI agents based on the Bayesian model and a nonmonotonic framework and tested how they perform in an epistemic version of the Wumpus World. The nonmonotonic agent, who is not opinionated and fails probabilism and conditionalization, outperforms the Bayesian in some conditions, which suggests a negative answer to the question.
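The Dutch book arguments invoked here can be made concrete with a small numerical sketch (the credences and betting prices below are illustrative assumptions, not from the paper): an agent whose credences violate finite additivity, and who prices each bet at its credence, can be sold a package of bets that loses money in every possible state.

```python
# Hypothetical Dutch book against credences violating finite additivity.
# A and B are mutually exclusive; cr(A or B) < cr(A) + cr(B), which the
# probability axioms forbid. All numbers are illustrative assumptions.

cr = {"A": 0.4, "B": 0.4, "A_or_B": 0.5}  # incoherent: 0.4 + 0.4 > 0.5

# The bookie sells the agent $1 bets on A and on B at the agent's prices,
# and buys from the agent a $1 bet on (A or B) at the agent's price.
stake_paid = cr["A"] + cr["B"]       # agent pays 0.8 up front
stake_received = cr["A_or_B"]        # agent receives 0.5 up front

for state in ["A", "B", "neither"]:
    winnings = 1 if state in ("A", "B") else 0   # agent's bets on A and B
    payout = 1 if state in ("A", "B") else 0     # agent's liability on (A or B)
    net = winnings - payout - stake_paid + stake_received
    print(state, round(net, 2))  # -0.3 in every state: a guaranteed loss
```

However the world turns out, the agent ends up 0.3 poorer, which is exactly the sure-loss verdict that Dutch book arguments use to indict incoherent credences.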
This book analyzes the uses of emotive language and redefinitions from pragmatic, dialectical, epistemic and rhetorical perspectives, investigating the relationship between emotions, persuasion and meaning, and focusing on the implicit dimension of the use of a word and its dialectical effects. It offers a method for evaluating the persuasive and manipulative uses of emotive language in ordinary and political discourse. Through the analysis of political speeches and legal arguments, the book offers a systematic study of emotive language in argumentation, rhetoric, communication, political science and public speaking.
This book shows how research in linguistic pragmatics, philosophy of language, and rhetoric can be connected through argumentation to analyze a recognizably common strategy used in political and everyday conversation, namely the distortion of another’s words in an argumentative exchange. Straw man argumentation refers to the modification of a position by misquoting, misreporting or wrenching the original speaker’s statements from their context in order to attack them more easily or more effectively. Through 63 examples taken from different contexts (including political and forensic discourses and dialogs) and 20 legal cases, the book analyzes the explicit and implicit types of straw man, shows how to assess the correctness of a quote or a report, and illustrates the arguments that can be used for supporting an interpretation and defending against a distortion. The tools of argumentation theory, a discipline aimed at investigating the uses of arguments by combining insights from pragmatics, logic, and communication, are applied to provide an original account of interpretation and reporting, and to describe and illustrate tactics and procedures that can be used and implemented for practical purposes. This book will appeal to scholars in the fields of political communication, communication in general, argumentation theory, rhetoric and pragmatics, as well as to people working in public speech, speech writing, and discourse analysis.
We are pleased to publish this WSIA edition of Trudy Govier’s seminal volume, Problems in Argument Analysis and Evaluation. Originally published in 1987 by Foris Publications, this was a pioneering work that played a major role in establishing argumentation theory as a discipline. Today, it is as relevant to the field as when it first appeared, with discussions of questions and issues that remain central to the study of argument. It has defined the main approaches to many of those issues and guided the ways in which we might respond to them. From this foundation, it sets the stage for further investigations and emerging research. This is a second edition of the book that is corrected and updated by the author, with new prefaces to each chapter.
This paper focuses on the view that rationality requires that our credences be regular. I go through different formulations of the requirement, and show that they face several problems. I then formulate a version of the requirement that solves most of, if not all, these problems. I conclude by showing that an argument thought to support the requirement as traditionally formulated actually does not; if anything, the argument, slightly modified, supports my version of the requirement. (Weng Hong Tang, "Regularity Reformulated", Volume 9, Issue 4. DOI: https://doi.org/10.1017/epi.2012.23.)
Statutory interpretation involves the reconstruction of the meaning of a legal statement when it cannot be considered as accepted or granted. This phenomenon needs to be considered not only from the legal and linguistic perspective, but also from the argumentative one - which focuses on the strategies for defending a controversial or doubtful viewpoint. This book draws upon linguistics, legal theory, computing, and dialectics to present an argumentation-based approach to statutory interpretation. By translating and summarizing the existing legal interpretative canons into eleven patterns of natural arguments - called argumentation schemes - the authors offer a system of argumentation strategies for developing, defending, assessing, and attacking an interpretation. Illustrated through major cases from both common and civil law, this methodology is summarized in diagrams and maps for application to computer sciences. These visuals help make the structures, strategies, and vulnerabilities of legal reasoning accessible to both legal professionals and laypeople.
1848 is a watershed in Dutch political and intellectual history. In the wake of liberalism, positivism and empiricism dominated Dutch philosophy. In this paper it is argued that Spinoza’s philosophy played an important part in developing a liberal Weltanschauung. Dutch Spinozism started with the theological dissertation of Johannes van Vloten, who from the 1860s onwards became the great pamphleteer of Spinozism. However, due to his break with Christianity he remained an exception in Dutch intellectual life. The Utrecht professor of philosophy Cornelis Willem Opzoomer and his friend the classical scholar D. Burger jr., for example, propagated a liberal Christianity purged of its mythical elements. Adopting Schleiermacher’s example, Opzoomer developed a morality inspired by Ethics V. In 1850 he turned to J. S. Mill and A. Comte. From that year onwards he justified the methodological unity of the natural and ‘moral’ sciences in Spinoza’s doctrine of the passions. According to Burger the Ethics contains an obsolete metaphysics, but due to its morality consistent with science the book deserves a large 19th-century readership. In 1858 he translated Spinoza’s main work into Dutch.
Aristotle divided arguments that persuade into the rhetorical (which happen to persuade), the dialectical (which are strong so ought to persuade to some degree) and the demonstrative (which must persuade if rightly understood). Dialectical arguments were long neglected, partly because Aristotle did not write a book about them. But in the sixteenth and seventeenth centuries, late scholastic authors such as Medina, Cano and Soto developed a sound theory of probable arguments, those that have logical and not merely psychological force but fall short of demonstration. Informed by late medieval treatments of the law of evidence and problems in moral theology and aleatory contracts, they considered the reasons that could render legal, moral, theological, commercial and historical arguments strong though not demonstrative. At the same time, demonstrative arguments became better understood as Galileo and other figures of the Scientific Revolution used mathematical proof in arguments in physics. Galileo moved both dialectical and demonstrative arguments into mathematical territory.
A book review of _Free Choice: A Self-referential Argument_ by J. M. Boyle, Jr., G. Grisez, and O. Tollefsen. The review concerns the pragmatic self-referential argument employed in the book, and points out that the argument is itself self-referentially inconsistent, but at the level of metalogical self-reference.
Suppose that you prefer A to B, B to C, and C to A. Your preferences violate Expected Utility Theory by being cyclic. Money-pump arguments offer a way to show that such violations are irrational. Suppose that you start with A. Then you should be willing to trade A for C and then C for B. But then, once you have B, you are offered a trade back to A for a small cost. Since you prefer A to B, you pay the small sum to trade from B to A. But now you have been turned into a money pump. You are back to the alternative you started with but with less money. This Element shows how each of the axioms of Expected Utility Theory can be defended by money-pump arguments of this kind. The Element also defends money-pump arguments from the standard objections to this kind of approach. This title is also available as Open Access on Cambridge Core.
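The trade sequence described in this abstract can be sketched as a short simulation (the starting wealth and the fee are illustrative assumptions): an agent with cyclic preferences accepts every trade it prefers and ends up holding exactly what it started with, minus the fee.

```python
# Hypothetical money-pump simulation for cyclic preferences A > B, B > C, C > A.
# Starting wealth and the trade-back fee are illustrative assumptions.

FEE = 1  # small cost charged on the final trade from B back to A

def prefers(x, y):
    """Cyclic preferences: A over B, B over C, C over A."""
    return (x, y) in {("A", "B"), ("B", "C"), ("C", "A")}

holding, money = "A", 100

# Trade A -> C (the agent prefers C to A), then C -> B (prefers B to C).
for offer in ["C", "B"]:
    if prefers(offer, holding):
        holding = offer

# Finally, the agent is offered A back for a fee; it prefers A to B, so it pays.
if prefers("A", holding):
    holding, money = "A", money - FEE

print(holding, money)  # back to A, but poorer: prints "A 99"
```

Each individual trade looks rational by the agent's own lights, yet the sequence returns it to its starting alternative with strictly less money, which is the sure-loss verdict the money-pump argument trades on.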
In this essay for a PPR book symposium on Theodore Sider's _Four-Dimensionalism_, I focus on two of Sider's arguments for four-dimensionalism: (i) his argument from vagueness, and (ii) his argument from time travel. Concerning (i), I first show that Sider's argument commits him to certain strange consequences that many four-dimensionalists may not endorse, and then I discuss an objection that involves appealing to 'brutal composition', the view that there is no informative answer to Peter van Inwagen's 'special composition question'. Concerning (ii), I argue that the three-dimensionalist can account for time travel scenarios in a way that is analogous to Sider's four-dimensionalist account of such scenarios.
In his excellent book *The Metaphysics of Sensory Experience* (2021), David Papineau argues against standard theories of sensory experience: the sense datum view, representationalism, naïve realism, and so on. The only view left standing is his own “qualitative view”. On Papineau’s physicalist version, all experiences are nothing but neural states, and the only features essentially involved in experience are intrinsic neural properties (29-30, 95-97). In my book *Perception* (2021), I developed an argument from spatial experience against this kind of view (also Pautz 2010, 2017). Here I elaborate on that argument in the light of Papineau’s discussion.
Argumentation schemes [1–3] are a relatively recent notion that continues an extremely ancient debate on one of the foundations of human reasoning, human comprehension, and obviously human argumentation, i.e., the topics. To understand the revolutionary nature of Walton’s work on this subject matter, it is necessary to place it in the debate that it continues and contributes to, namely a view of logic that is much broader than the formalistic perspective that has been adopted from the 20th century to the present day. With his book Argumentation schemes for presumptive reasoning, Walton attempted to start a dialogue between three different fields or views on human reasoning – one (argumentation theory) very recent, one (dialectics) very ancient and with a very long tradition, and one (formal logic) relatively recent, but dominating in philosophy. Argumentation schemes were proposed as dialectical instruments, in the sense that they represented arguments not only as formal relations, but also as pragmatic inferences, in the sense that they depend on what the interlocutors share and accept in a given dialogical circumstance and affect their dialogical relation. In this introduction, the notion of argumentation scheme will be analyzed in detail, showing its different dimensions and the defining features that make it an extremely useful instrument in Artificial Intelligence. This theoretical background will be followed by a literature review on the uses of the schemes in computing, aimed at identifying the most important areas and trends, the most promising proposals, and the directions of future research.