Thomas Kroedel argues that the lottery paradox can be solved by identifying epistemic justification with epistemic permissibility rather than epistemic obligation. According to his permissibility solution, we are permitted to believe of each lottery ticket that it will lose, but since permissions do not agglomerate, it does not follow that we are permitted to have all of these beliefs together, and therefore it also does not follow that we are permitted to believe that all tickets will lose. I present two objections to this solution. First, even if justification itself amounts to no more than epistemic permissibility, the lottery paradox recurs at the level of doxastic obligations unless one adopts an extremely permissive view about suspension of belief that is in tension with our practice of doxastic criticism. Second, even if there are no obligations to believe lottery propositions, the permissibility solution fails because epistemic permissions typically agglomerate, and the lottery case provides no exception to this rule.
Many epistemologists have responded to the lottery paradox by proposing formal rules according to which high probability defeasibly warrants acceptance. Douven and Williamson present an ingenious argument purporting to show that such rules invariably trivialise, in that they reduce to the claim that a probability of 1 warrants acceptance. Douven and Williamson’s argument does, however, rest upon significant assumptions – amongst them a relatively strong structural assumption to the effect that the underlying probability space is both finite and uniform. In this paper, I will show that something very like Douven and Williamson’s argument can in fact survive with much weaker structural assumptions – and, in particular, can apply to infinite probability spaces.
The lottery paradox involves a set of judgments that are individually easy, when we think intuitively, but ultimately hard to reconcile with each other, when we think reflectively. Empirical work on the natural representation of probability shows that a range of interestingly different intuitive and reflective processes are deployed when we think about possible outcomes in different contexts. Understanding the shifts in our natural ways of thinking can reduce the sense that the lottery paradox reveals something problematic about our concept of knowledge. However, examining these shifts also raises interesting questions about how we ought to be thinking about possible outcomes in the first place.
According to the permissibility solution to the lottery paradox, the paradox can be solved if we conceive of epistemic justification as a species of permissibility. Clayton Littlejohn has objected that the permissibility solution draws on a sufficient condition for permissible belief that has implausible consequences and that the solution conflicts with our lack of knowledge that a given lottery ticket will lose. The paper defends the permissibility solution against Littlejohn's objections.
This paper elaborates a new solution to the lottery paradox, according to which the paradox arises only when we lump together two distinct states of being confident that p under one general label of ‘belief that p’. The two-state conjecture is defended on the basis of some recent work on gradable adjectives. The conjecture is supported by independent considerations from the impossibility of constructing the lottery paradox both for risk-tolerating states such as being afraid, hoping or hypothesizing, and for risk-averse, certainty-like states. The new proposal is compared to views within the increasingly popular debate opposing dualists to reductionists with respect to the relation between belief and degrees of belief.
Vann McGee has presented a putative counterexample to modus ponens. I show that (a slightly modified version of) McGee’s election scenario has the same structure as a famous lottery scenario by Kyburg. More specifically, McGee’s election story can be taken to show that, if the Lockean Thesis holds, rational belief is not closed under classical logic, including classical-logic modus ponens. This conclusion defies the existing accounts of McGee’s puzzle.
A ‘lottery belief’ is a belief that a particular ticket has lost a large, fair lottery, based on nothing more than the odds against it winning. The lottery paradox brings out a tension between the idea that lottery beliefs are justified and the idea that one can always justifiably believe the deductive consequences of things that one justifiably believes – what is sometimes called the principle of closure. Many philosophers have treated the lottery paradox as an argument against the second idea – but I make a case here that it is the first idea that should be given up. As I shall show, there are a number of independent arguments for denying that lottery beliefs are justified.
The lottery and preface paradoxes pose puzzles in epistemology concerning how to think about the norms of reasonable or permissible belief. Contextualists in epistemology have focused on knowledge ascriptions, attempting to capture a set of judgments about knowledge ascriptions and denials in a variety of contexts (including those involving lottery beliefs and the principles of closure). This article surveys some contextualist approaches to handling issues raised by the lottery and preface, while also considering some of the difficulties encountered by those approaches.
According to the principle of Conjunction Closure, if one has justification for believing each of a set of propositions, one has justification for believing their conjunction. The lottery and preface paradoxes can both be seen as posing challenges for Closure, but leave open familiar strategies for preserving the principle. While this is all relatively well-trodden ground, a new Closure-challenging paradox has recently emerged, in two somewhat different forms, due to Marvin Backes (2019a) and Francesco Praolini (2019). This paradox synthesises elements of the lottery and the preface and is designed to close off the familiar Closure-preserving strategies. By appealing to a normic theory of justification, I will defend Closure in the face of this new paradox. Along the way I will draw more general conclusions about justification, normalcy and defeat, which bear upon what Backes (2019b) has dubbed the ‘easy defeat’ problem for the normic theory.
Harman’s lottery paradox, generalized by Vogel to a number of other cases, involves a curious pattern of intuitive knowledge ascriptions: certain propositions seem easier to know than various higher-probability propositions that are recognized to follow from them. For example, it seems easier to judge that someone knows his car is now on Avenue A, where he parked it an hour ago, than to judge that he knows that it is not the case that his car has been stolen and driven away in the last hour. Contextualists have taken this pattern of intuitions as evidence that ‘knows’ does not always denote the same relationship; subject-sensitive invariantists have taken this pattern of intuitions as evidence that non-traditional factors such as practical interests figure in knowledge; still others have argued that the Harman-Vogel pattern gives us a reason to abandon the principle that knowledge is closed under known entailment. This paper argues that there is a psychological explanation of the strange pattern of intuitions, grounded in the manner in which we shift between an automatic or heuristic mode of judgment and a controlled or systematic mode. Understanding the psychology behind the pattern of intuitions enables us to see that the pattern gives us no reason to abandon traditional intellectualist invariantism. The psychological account of the paradox also yields new resources for clarifying and defending the single premise closure principle for knowledge ascriptions.
In this paper, I present an outline of a paradox which is a variation on the lottery paradox and concerns whether we can ignore hesitant moral judgments.
We prove a representation theorem for preference relations over countably infinite lotteries that satisfy a generalized form of the Independence axiom, without assuming Continuity. The representing space consists of lexicographically ordered transfinite sequences of bounded real numbers. This result is generalized to preference orders on abstract superconvex spaces.
De Finetti would claim that we can make sense of a draw in which each positive integer has equal probability of winning. This requires a uniform probability distribution over the natural numbers, violating countable additivity. Countable additivity thus appears not to be a fundamental constraint on subjective probability. It does, however, seem mandated by Dutch Book arguments similar to those that support the other axioms of the probability calculus as compulsory for subjective interpretations. These two lines of reasoning can be reconciled through a slight generalization of the Dutch Book framework. Countable additivity may indeed be abandoned for de Finetti's lottery, but this poses no serious threat to its adoption in most applications of subjective probability. Outline: 1. Introduction; 2. The de Finetti lottery; 3. Two objections to equiprobability (3.1 The ‘No random mechanism’ argument; 3.2 The Dutch Book argument); 4. Equiprobability and relative betting quotients; 5. The re-labelling paradox (5.1 The paradox; 5.2 Resolution: from symmetry to relative probability); 6. Beyond the de Finetti lottery.
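The impossibility behind the de Finetti lottery can be checked with a few lines of arithmetic: a uniform, countably additive distribution over the naturals would need a per-ticket probability that is both positive (else the countable sum is 0, not 1) and zero (else finitely many tickets already exceed total probability 1). A minimal sketch of that dilemma, using exact rationals to avoid floating-point noise:

```python
from fractions import Fraction

def tickets_to_exceed_one(eps):
    """Smallest number of equiprobable tickets whose total probability
    exceeds 1, given a per-ticket probability eps > 0."""
    n, total = 0, Fraction(0)
    while total <= 1:
        n += 1
        total += eps
    return n

# Any positive per-ticket probability blows past total probability 1
# after finitely many tickets, so it cannot serve all of the naturals:
print(tickets_to_exceed_one(Fraction(1, 1000)))   # 1001

# A per-ticket probability of 0, by contrast, sums to 0 over any
# initial segment, never reaching the required total of 1:
print(sum(Fraction(0) for _ in range(10**5)))     # 0
```

The function and its name are mine, not the paper's; the sketch only illustrates why de Finetti's lottery forces either infinitesimal probabilities or the abandonment of countable additivity.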
I argue that we should solve the Lottery Paradox by denying that rational belief is closed under classical logic. To reach this conclusion, I build on my previous result that (a slight variant of) McGee’s election scenario is a lottery scenario (see Lissia 2019). Indeed, this result implies that the sensible ways to deal with McGee’s scenario are the same as the sensible ways to deal with the lottery scenario: we should either reject the Lockean Thesis or Belief Closure. After recalling my argument to this conclusion, I demonstrate that a McGee-like example (which is just, in fact, Carroll’s barbershop paradox) can be provided in which the Lockean Thesis plays no role: this proves that denying Belief Closure is the right way to deal with both McGee’s scenario and the Lottery Paradox. A straightforward consequence of my approach is that Carroll’s puzzle is solved, too.
This paper defends the heretical view that, at least in some cases, we ought to assign legal liability based on purely statistical evidence. The argument draws on prominent civil law litigation concerning pharmaceutical negligence and asbestos poisoning. The overall aim is to illustrate moral pitfalls that result from supposing that it is never appropriate to rely on bare statistics when settling a legal dispute.
In recent work, Thomas Kroedel has proposed a novel solution to the lottery paradox. As he sees it, we are permitted/justified in believing some lottery propositions, but we are not permitted/justified in believing them all. I criticize this proposal on two fronts. First, I think that if we had the right to add some lottery beliefs to our belief set, we would not have any decisive reason to stop adding more. Suggestions to the contrary run into the wrong kind of reason problem. Reflection on the preface paradox suggests as much. Second, while I agree with Kroedel that permissions do not agglomerate, I do not think that this fact can help us solve the lottery paradox. First, I do not think we have any good reason to think that we’re permitted to believe any lottery propositions. Second, I do not see any good reason to think that epistemic permissions do not agglomerate.
I show that the Lottery Paradox is just a version of the Sorites, and argue that this should modify our way of looking at the Paradox itself. In particular, I focus on what I call “the Cut-off Point Problem” and contend that this problem, well known by Sorites scholars, ought to play a key role in the debate on Kyburg’s puzzle. Very briefly, I show that, in the Lottery Paradox, the premises “ticket n°1 will lose”, “ticket n°2 will lose”… “ticket n°1000 will lose” are equivalent to soritical premises of the form “~(the winning ticket is in {…, (tn)}) ⊃ ~(the winning ticket is in {…, tn, (tn + 1)})” (where “⊃” is the material conditional, “~” is the negation symbol, “tn” and “tn + 1” are “ticket n°n” and “ticket n°n + 1” respectively, and “{}” identify the elements of the lottery tickets’ set). The brackets in “(tn)” and “(tn + 1)” are meant to point out that in the antecedent of the conditional we do not always have a “tn” (and, as a result, a “tn + 1” in the consequent): consider the conditional “~(the winning ticket is in {}) ⊃ ~(the winning ticket is in {t1})”. As a result, failing to believe, for some ticket, that it will lose comes down to introducing a cut-off point in a chain of soritical premises. In this paper I explore the consequences of the different ways of blocking the Lottery Paradox with respect to the Cut-off Point Problem. A heap variant of the Lottery Paradox is especially relevant for evaluating the different solutions. One important result is that the most popular way out of the puzzle, i.e., denying the Lockean Thesis, becomes less attractive. Moreover, I show that, along with the debate on whether rational belief is closed under classical logic, the debate on the validity of modus ponens should play an important role in discussions on the Lottery Paradox.
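The probabilistic profile of such soritical premises can be checked directly: in a fair 1000-ticket lottery, the material conditional at step k fails only when ticket k+1 wins, so each premise is 99.9% probable, yet no possible winner satisfies all of the premises together. A minimal sketch (the encoding of the premises is my own, following the schema in the abstract above):

```python
N = 1000  # fair lottery with tickets 1..N and exactly one winner

def premise_holds(k, winner):
    """Material conditional '~(winner in first k) > ~(winner in first k+1)';
    it is false exactly when the winner is ticket k+1."""
    return winner != k + 1

# Each individual premise is true for 999 of the 1000 equiprobable winners:
p_each = sum(premise_holds(0, w) for w in range(1, N + 1)) / N
print(p_each)  # 0.999

# But no winner makes every premise true: chaining the premises by modus
# ponens from the vacuous '~(winner in {})' would deny every ticket.
winners_ok = [w for w in range(1, N + 1)
              if all(premise_holds(k, w) for k in range(N))]
print(len(winners_ok))  # 0
```

Introducing a cut-off point, in these terms, means rejecting one of the `premise_holds` conditionals despite its 99.9% probability, which is exactly the problem the paper presses.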
Agents are often assumed to have degrees of belief (“credences”) and also binary beliefs (“beliefs simpliciter”). How are these related to each other? A much-discussed answer asserts that it is rational to believe a proposition if and only if one has a high enough degree of belief in it. But this answer runs into the “lottery paradox”: the set of believed propositions may violate the key rationality conditions of consistency and deductive closure. In earlier work, we showed that this problem generalizes: there exists no local function from degrees of belief to binary beliefs that satisfies some minimal conditions of rationality and non-triviality. “Locality” means that the binary belief in each proposition depends only on the degree of belief in that proposition, not on the degrees of belief in others. One might think that the impossibility can be avoided by dropping the assumption that binary beliefs are a function of degrees of belief. We prove that, even if we drop the “functionality” restriction, there still exists no local relation between degrees of belief and binary beliefs that satisfies some minimal conditions. Thus functionality is not the source of the impossibility; its source is the condition of locality. If there is any non-trivial relation between degrees of belief and binary beliefs at all, it must be a “holistic” one. We explore several concrete forms this “holistic” relation could take.
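The threshold ("Lockean") rule discussed above can be made concrete with a hypothetical 1000-ticket lottery: every proposition "ticket i loses" clears a 0.99 threshold, as does "some ticket wins", yet the licensed belief set is jointly inconsistent, and closing it under conjunction would require believing a probability-0 proposition. A minimal sketch of that arithmetic (the numbers are illustrative, not from the paper):

```python
N = 1000
THRESHOLD = 0.99

def believes(p):
    """Lockean rule: believe a proposition iff its probability clears
    the threshold."""
    return p >= THRESHOLD

p_ticket_i_loses = (N - 1) / N      # 0.999, the same for every ticket i
p_some_ticket_wins = 1.0            # the lottery is fair: exactly one winner
p_every_ticket_loses = 0.0          # jointly impossible with a winner

print(believes(p_ticket_i_loses))      # True, for each of the 1000 tickets
print(believes(p_some_ticket_wins))    # True
# Consistency and deductive closure would now demand belief in the
# conjunction 'every ticket loses' -- but no threshold below 1 licenses it:
print(believes(p_every_ticket_loses))  # False
```

This is the "local" rule in the authors' sense: whether each proposition is believed depends only on its own probability, which is precisely the feature their impossibility results target.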
What is the relationship between degrees of belief and binary beliefs? Can the latter be expressed as a function of the former—a so-called “belief-binarization rule”—without running into difficulties such as the lottery paradox? We show that this problem can be usefully analyzed from the perspective of judgment-aggregation theory. Although some formal similarities between belief binarization and judgment aggregation have been noted before, the connection between the two problems has not yet been studied in full generality. In this paper, we seek to fill this gap. The paper is organized around a baseline impossibility theorem, which we use to map out the space of possible solutions to the belief-binarization problem. Our theorem shows that, except in limiting cases, there exists no belief-binarization rule satisfying four initially plausible desiderata. Surprisingly, this result is a direct corollary of the judgment-aggregation variant of Arrow’s classic impossibility theorem in social choice theory.
Clayton Littlejohn claims that the permissibility solution to the lottery paradox requires an implausible principle in order to explain why epistemic permissions don't agglomerate. This paper argues that an uncontentious principle suffices to explain this. It also discusses another objection of Littlejohn's, according to which we’re not permitted to believe lottery propositions because we know that we’re not in a position to know them.
This essay discusses the difficulty of reconciling two paradigms about beliefs: the binary or categorical paradigm of yes/no beliefs and the probabilistic paradigm of degrees of belief. The possibility for someone to hold both types of belief simultaneously is challenged by the lottery paradox, and more recently by a general impossibility theorem. The nature, relevance, and implications of the tension are explained and assessed. A more technical elaboration can be found in Dietrich and List (2018, 2021).
As the ongoing literature on the paradoxes of the Lottery and the Preface reminds us, the nature of the relation between probability and rational acceptability remains far from settled. This article provides a novel perspective on the matter by exploiting a recently noted structural parallel with the problem of judgment aggregation. After offering a number of general desiderata on the relation between finite probability models and sets of accepted sentences in a Boolean sentential language, it is noted that a number of these constraints will be satisfied if and only if acceptable sentences are true under all valuations in a distinguished non-empty set W. Drawing inspiration from distance-based aggregation procedures, various scoring rule based membership conditions for W are discussed and a possible point of contact with ranking theory is considered. The paper closes with various suggestions for further research.
I explore how rational belief and rational credence relate to evidence. I begin by looking at three cases where rational belief and credence seem to respond differently to evidence: cases of naked statistical evidence, lotteries, and hedged assertions. I consider an explanation for these cases, namely, that one ought not form beliefs on the basis of statistical evidence alone, and raise worries for this view. Then, I suggest another view that explains how belief and credence relate to evidence. My view focuses on the possibilities that the evidence makes salient. I argue that this makes better sense of the difference between rational credence and rational belief than other accounts.
In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests standards of proof should not be interpreted numerically. These ‘proof paradoxes’ illustrate that purely statistical evidence can warrant high credence in a disputed fact without satisfying the relevant legal standard. In this essay I evaluate three influential attempts to explain why merely statistical evidence cannot satisfy legal standards.
This paper formulates some paradoxes of inductive knowledge. Two responses in particular are explored: According to the first sort of theory, one is able to know in advance that certain observations will not be made unless a law exists. According to the other, this sort of knowledge is not available until after the observations have been made. Certain natural assumptions, such as the idea that the observations are just as informative as each other, the idea that they are independent, and that they increase your knowledge monotonically (among others) are given precise formulations. Some surprising consequences of these assumptions are drawn, and their ramifications for the two theories examined. Finally, a simple model of inductive knowledge is offered, and independently derived from other principles concerning the interaction of knowledge and counterfactuals.
This essay is an accessible introduction to the proof paradox in legal epistemology. In 1902 the Supreme Judicial Court of Maine filed an influential legal verdict. The judge claimed that in order to find a defendant culpable, the plaintiff “must adduce evidence other than a majority of chances”. The judge thereby claimed that bare statistical evidence does not suffice for legal proof. In this essay I first motivate the claim that bare statistical evidence does not suffice for legal proof. I then introduce and motivate a knowledge-centred explanation of this fact. The knowledge-centred explanation rests on two premises. The first is that legal proof requires knowledge of culpability. The second is that one cannot attain knowledge that p from bare statistical evidence that p. To motivate the second premise, I suggest that beliefs based on bare statistical evidence fail to be safe—they could easily be wrong—and bare statistical evidence cannot eliminate relevant alternatives. I then cast doubt on the first premise; I argue that legal proof does not require knowledge. I thereby dispute the knowledge-centred explanation of the inadequacy of bare statistical evidence for legal proof. Instead of appealing to the nature of knowledge, I suggest we should seek a more direct explanation by appealing to those more foundational epistemic properties, such as safety or eliminating relevant alternatives.
This dissertation is a contribution to formal and computational philosophy. In the first part, we show that by exploiting the parallels between large, yet finite lotteries on the one hand and countably infinite lotteries on the other, we gain insights into the foundations of probability theory as well as into epistemology. Case 1: Infinite lotteries. We discuss how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. The solution boils down to the introduction of infinitesimal probability values, which can be achieved using non-standard analysis. Our solution can be generalized to uncountable sample spaces, giving rise to a Non-Archimedean Probability (NAP) theory. Case 2: Large but finite lotteries. We propose application of the language of relative analysis (a type of non-standard analysis) to formulate a new model for rational belief, called Stratified Belief. This contextualist model seems well-suited to deal with a concept of beliefs based on probabilities ‘sufficiently close to unity’. The second part presents a case study in social epistemology. We model a group of agents who update their opinions by averaging the opinions of other agents. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating. To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. The probability of ending up in an inconsistent belief state turns out to be always smaller than 2%.
Sometimes epistemologists theorize about belief, a tripartite attitude on which one can believe, withhold belief, or disbelieve a proposition. In other cases, epistemologists theorize about credence, a fine-grained attitude that represents one’s subjective probability or confidence level toward a proposition. How do these two attitudes relate to each other? This article explores the relationship between belief and credence in two categories: descriptive and normative. It then explains the broader significance of the belief-credence connection and concludes with general lessons from the debate thus far.
According to deontological approaches to justification, we can analyze justification in deontic terms. In this paper, I try to advance the discussion of deontological approaches by applying recent insights in the semantics of deontic modals. Specifically, I use the distinction between weak necessity modals and strong necessity modals to make progress on a question that has received surprisingly little discussion in the literature, namely: ‘What’s the best version of a deontological approach?’ The two most obvious hypotheses are the Permissive View, according to which justified expresses permission, and the Obligatory View, according to which justified expresses some species of obligation. I raise difficulties for both of these hypotheses. In light of these difficulties, I propose a new position, according to which justified expresses a property I call faultlessness, defined as the dual of weak necessity modals. According to this view, an agent is justified in phi-ing iff it’s not the case that she should [/ought] not phi. I argue that this ‘Faultlessness View’ gives us precisely what’s needed to avoid the problems facing the Permissive and Obligatory Views.
To understand something involves some sort of commitment to a set of propositions comprising an account of the understood phenomenon. Some take this commitment to be a species of belief; others, such as Elgin and I, take it to be a kind of cognitive policy. This paper takes a step back from debates about the nature of understanding and asks when this commitment involved in understanding is epistemically appropriate, or ‘acceptable’ in Elgin’s terminology. In particular, appealing to lessons from the lottery and preface paradoxes, it is argued that this type of commitment is sometimes acceptable even when it would be rational to assign arbitrarily low probabilities to the relevant propositions. This strongly suggests that the relevant type of commitment is sometimes acceptable in the absence of epistemic justification for belief, which in turn implies that understanding does not require justification in the traditional sense. The paper goes on to develop a new probabilistic model of acceptability, based on the idea that the maximally informative accounts of the understood phenomenon should be optimally probable. Interestingly, this probabilistic model ends up being similar in important ways to Elgin’s proposal to analyze the acceptability of such commitments in terms of ‘reflective equilibrium’.
An influential proposal is that knowledge involves safe belief. A belief is safe, in the relevant sense, just in case it is true in nearby metaphysically possible worlds. In this paper, I introduce a distinct but complementary notion of safety, understood in terms of epistemically possible worlds. The main aim, in doing so, is to add to the epistemologist’s tool-kit. To demonstrate the usefulness of the tool, I use it to advance and assess substantive proposals concerning knowledge and justification.
Statistical evidence—say, that 95% of your co-workers badmouth each other—can never render resenting your colleague appropriate, in the way that other evidence (say, the testimony of a reliable friend) can. The problem of statistical resentment is to explain why. We put the problem of statistical resentment in several wider contexts: the context of the problem of statistical evidence in legal theory; the epistemological context—with problems like the lottery paradox for knowledge, epistemic impurism and doxastic wrongdoing; and the context of a wider set of examples of responses and attitudes that seem not to be appropriately groundable in statistical evidence. Regrettably, we do not come up with a fully general, fully adequate, fully unified account of all the phenomena discussed. But we give reasons to believe that no such account is forthcoming, and we sketch a somewhat messier account that may be the best that can be had here.
One thousand fair causally isolated coins will be independently flipped tomorrow morning and you know this fact. I argue that the probability, conditional on your knowledge, that any coin will land tails is almost 1 if that coin in fact lands tails, and almost 0 if it in fact lands heads. I also show that the coin flips are not probabilistically independent given your knowledge. These results are uncomfortable for those, like Timothy Williamson, who take these probabilities to play a central role in their theorizing.
The idea that knowledge can be extended by inference from what is known seems highly plausible. Yet, as shown by familiar preface-paradox and lottery-type cases, the possibility of aggregating uncertainty casts doubt on its tenability. We show that these considerations go much further than previously recognized and significantly restrict the kinds of closure ordinary theories of knowledge can endorse. Meeting the challenge of uncertainty aggregation requires either the restriction of knowledge-extending inferences to single premises, or eliminating epistemic uncertainty in known premises. The first strategy, while effective, retains little of the original idea—conclusions even of modus ponens inferences from known premises are not always known. We then look at the second strategy, inspecting the most elaborate and promising attempt to secure the epistemic role of basic inferences, namely Timothy Williamson’s safety theory of knowledge. We argue that while it indeed has the merit of allowing basic inferences such as modus ponens to extend knowledge, Williamson’s theory faces formidable difficulties. These difficulties, moreover, arise from the very feature responsible for its virtue: the infallibilism of knowledge.
We consider a basic logic with two primitive uni-modal operators: one for certainty and the other for plausibility. The former is assumed to be a normal operator, while the latter is merely a classical operator. We then define belief, interpreted as “maximally plausible possibility”, in terms of these two notions: the agent believes φ if she cannot rule out φ (that is, she is not certain of ¬φ), she judges φ to be plausible and she does not judge ¬φ to be plausible. We consider four interaction properties between certainty and plausibility and study how these properties translate into properties of belief. We then prove that all the logics considered are minimal logics for the highlighted theorems. We also consider a number of possible interpretations of plausibility, identify the corresponding logics and show that some notions considered in the literature are special cases of our framework.
According to a captivating picture, epistemic justification is essentially a matter of epistemic or evidential likelihood. While certain problems for this view are well known, it is motivated by a very natural thought—if justification can fall short of epistemic certainty, then what else could it possibly be? In this paper I shall develop an alternative way of thinking about epistemic justification. On this conception, the difference between justification and likelihood turns out to be akin to the more widely recognised difference between ceteris paribus laws and brute statistical generalisations. I go on to discuss, in light of this suggestion, issues such as classical and lottery-driven scepticism as well as the lottery and preface paradoxes.
There is much to like about the idea that justification should be understood in terms of normality or normic support (Smith 2016, Goodman and Salow 2018). The view does a nice job explaining why we should think that lottery beliefs differ in justificatory status from mundane perceptual or testimonial beliefs. And it seems to do that in a way that is friendly to a broadly internalist approach to justification. In spite of its attractions, we think that the normic support view faces two serious challenges. The first is that it delivers the wrong result in preface cases. These cases suggest that the view is either too sceptical or too externalist. The second is that the view struggles with certain kinds of Moorean absurdities. It turns out that these problems can easily be avoided. If we think of normality as a condition on *knowledge*, we can characterise justification in terms of its connection to knowledge and thereby avoid the difficulties discussed here. The resulting view does an equally good job explaining why we should think that our perceptual and testimonial beliefs are justified when lottery beliefs cannot be. Thus, it seems that little could be lost and much could be gained by revising the proposal and adopting a view on which it is knowledge, not justification, that depends directly upon normality.
This paper defends a new norm of assertion: Assert that p only if you are in a position to know that p. We test the norm by judging its performance in explaining three phenomena that appear jointly inexplicable at first: Moorean paradoxes, lottery propositions, and selfless assertions. The norm succeeds by tethering unassertability to unknowability while untethering belief from assertion. The PtK‐norm foregrounds the public nature of assertion as a practice that can be other‐regarding, allowing asserters to act in the best interests of their audience when psychological pressures would otherwise prevent them from communicating the knowable truth.
Socrates' attitude towards falsehood is quite puzzling in the Republic. Although Socrates is clearly committed to truth, at several points he discusses the benefits of falsehood. This occurs most notably in Book 3 with the "noble lie" (414d-415c) and most disturbingly in Book 5 with the "rigged sexual lottery" (459d-460c). This raises the question: What kinds of falsehoods does Socrates think are beneficial, and what kinds of falsehoods does he think are harmful? And more broadly: What can this tell us about the relationship between ethics and epistemology? The key to answering these questions lies in an obscure and paradoxical passage in Book 2; at 382a-d Socrates distinguishes between "true falsehoods" and "impure lies." True falsehoods are always bad, but impure lies are sometimes beneficial. Despite Socrates' insistence that he is not saying anything deep, his distinction is far from straightforward. Nevertheless, in order to determine why some falsehoods are beneficial and why some are always harmful, we must understand what exactly true falsehoods are and how they differ from impure lies. In this paper, I argue that true falsehoods are a restricted class of false beliefs about ethics; they are false beliefs about how one should live and what one should pursue. I refer to these beliefs as "normative commitments." False normative commitments are always pernicious because they create and sustain psychological disharmony. Unlike true falsehoods, impure lies can be about anything. Nevertheless, they are only beneficial when they help produce and sustain true normative commitments. I argue that the upshot of this is that practical concerns have a kind of primacy over theoretical concerns.
This paper is about the alethic aspect of epistemic rationality. The most common approaches to this aspect are either normative (what ought/may a reasoner believe?) or evaluative (how rational is a reasoner?), where the evaluative approaches are usually comparative (one reasoner is assessed compared to another). These approaches often present problems with blindspots. For example, ought a reasoner to believe a currently true blindspot? Is she permitted to? Consequently, these approaches often fail in describing a situation of alethic maximality, where a reasoner fulfills all the alethic norms and could be used as a standard of rationality (as they are, in fact, used in some of these approaches). I propose a function α, which accepts a set of beliefs as input and returns a numeric alethic value. Then I use this function to define a notion of alethic maximality that is satisfiable by finite reasoners (reasoners with cognitive limitations) and does not present problems with blindspots. Function α may also be used in alethic norms and evaluation methods (comparative and non-comparative) that may be applied to finite reasoners and do not present problems with blindspots. A result of this investigation is that the project of providing purely alethic norms is defective. The use of function α also sheds light on important epistemological issues, such as the lottery and the preface paradoxes, and the principles of clutter avoidance and reflection.
A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: ‘Is it ever appropriate to base a legal verdict on statistical evidence alone?’ Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple sources of incriminating statistical evidence. As we conjoin together different types of statistical evidence, it becomes increasingly incredible to suppose that a positive verdict would be impermissible. I suggest that none of the dominant views in the literature can easily accommodate such cases, and close by offering a diagnosis of my own.
English abstract: This paper discusses the delicate relationship between traditional epistemology and the increasingly influential probabilistic (or ‘Bayesian’) approach to epistemology. The paper introduces some of the key ideas of probabilistic epistemology, including credences or degrees of belief, Bayes’ theorem, conditionalization, and the Dutch Book argument. The tension between traditional and probabilistic epistemology is brought out by considering the lottery and preface paradoxes as they relate to rational (binary) belief and credence respectively. It is then argued that this tension can be alleviated by rejecting the requirement that rational (binary) beliefs must be consistent and closed under logical entailment. Instead, it is suggested that this logical requirement applies to a different type of binary propositional attitude, viz. acceptance.
In some lottery situations, the probability that your ticket's a loser can get very close to 1. Suppose, for instance, that yours is one of 20 million tickets, only one of which is a winner. Still, it seems that (1) You don't know yours is a loser and (2) You're in no position to flat-out assert that your ticket is a loser. "It's probably a loser," "It's all but certain that it's a loser," or even, "It's quite certain that it's a loser" seem quite alright to say, but, it seems, you're in no position to declare simply, "It's a loser." (1) and (2) are closely related phenomena. In fact, I'll take it as a working hypothesis that the reason "It's a loser" is unassertable is that (a) You don't seem to know that your ticket's a loser, and (b) In flat-out asserting some proposition, you represent yourself as knowing it. This working hypothesis will enable me to address these two phenomena together, moving back and forth freely between them. I leave it to those who reject the hypothesis to sort out those considerations which properly apply to the issue of knowledge from those germane to that of assertability. Things are quite different when you report the results of last night's basketball game. Suppose your only source is your morning newspaper, which did not carry a story about the game, but simply listed the score, "Knicks 83, at Bulls 95," under "Yesterday's Results." Now, it doesn't happen very frequently, but, as we all should suspect, newspapers do misreport scores from time to time. On several occasions, my paper has transposed a result, attributing to each team the score of its opponent. In fact, that your paper's got the present result wrong seems quite a bit more probable than that you've won the lottery of the above paragraph. Still, when asked, "Did the Bulls win yesterday?", "Probably" and "In all likelihood" seem quite unnecessary. "Yes, they did," seems just fine.
This article discusses how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. Techniques and ideas from non-standard analysis are brought to bear on the problem.
I review recent empirical findings on knowledge attributions in lottery cases and report a new experiment that advances our understanding of the topic. The main novel finding is that people deny knowledge in lottery cases because of an underlying qualitative difference in how they process probabilistic information. “Outside” information is generic and pertains to a base rate within a population. “Inside” information is specific and pertains to a particular item’s propensity. When an agent receives information that 99% of all lottery tickets lose (outside information), people judge that she does not know that her ticket will lose. By contrast, when an agent receives information that her specific ticket is 99% likely to lose (inside information), people judge that she knows that her ticket will lose. Despite this difference in knowledge judgments, people rate the likelihood of her ticket losing exactly the same in both cases (i.e. 99%). The results shed light on other factors affecting knowledge judgments in lottery cases, including formulaic expression and participants’ own estimation of whether it is true that the ticket will lose. The results also undermine previous hypotheses offered for knowledge denial in lottery cases, including the hypotheses that people deny knowledge because they either deny justification or acknowledge a chance for error.
In the first chapter of his Knowledge and Lotteries, John Hawthorne argues that thinkers do not ordinarily know lottery propositions. His arguments depend on claims about the intimate connections between knowledge and assertion, epistemic possibility, practical reasoning, and theoretical reasoning. In this paper, we cast doubt on the proposed connections. We also put forward an alternative picture of belief and reasoning. In particular, we argue that assertion is governed by a Gricean constraint that makes no reference to knowledge, and that practical reasoning has more to do with rational degrees of belief than with states of knowledge.
Supererogatory acts—good deeds “beyond the call of duty”—are a part of moral common sense, but conceptually puzzling. I propose a unified solution to three of the most infamous puzzles: the classic Paradox of Supererogation (if it’s so good, why isn’t it just obligatory?), Horton’s All or Nothing Problem, and Kamm’s Intransitivity Paradox. I conclude that supererogation makes sense if, and only if, the grounds of rightness are multi-dimensional and comparative.
The paradox of pain refers to the idea that the folk concept of pain is paradoxical, treating pains as simultaneously mental states and bodily states. By taking a close look at our pain terms, this paper argues that there is no paradox of pain. The air of paradox dissolves once we recognize that pain terms are polysemous and that there are two separate but related concepts of pain rather than one.
Counterfactuals are somewhat tolerant. Had Socrates been at least six feet tall, he need not have been exactly six feet tall. He might have been a little taller—he might have been six one or six two. But while he might have been a little taller, there are limits to how tall he would have been. Had he been at least six feet tall, he would not have been more than a hundred feet tall, for example. Counterfactuals are not just tolerant, then, but bounded. This paper presents a surprising paradox: If counterfactuals are tolerant and bounded, then we can prove a flat contradiction using natural rules of inference. Something has to go then. But what?
This paper presents and motivates a new philosophical and logical approach to truth and semantic paradox. It begins from an inferentialist, and particularly bilateralist, theory of meaning---one which takes meaning to be constituted by assertibility and deniability conditions---and shows how the usual multiple-conclusion sequent calculus for classical logic can be given an inferentialist motivation, leaving classical model theory as of only derivative importance. The paper then uses this theory of meaning to present and motivate a logical system---ST---that conservatively extends classical logic with a fully transparent truth predicate. This system is shown to allow for classical reasoning over the full (truth-involving) vocabulary, but to be non-transitive. Some special cases where transitivity does hold are outlined. ST is also shown to give rise to a familiar sort of model for non-classical logics: Kripke fixed points on the Strong Kleene valuation scheme. Finally, to give a theory of paradoxical sentences, a distinction is drawn between two varieties of assertion and two varieties of denial. On one variety, paradoxical sentences cannot be either asserted or denied; on the other, they must be both asserted and denied. The target theory is compared favourably to more familiar related systems, and some objections are considered.