We care not only about how people treat us, but also about what they believe of us. If I believe that you’re a bad tipper given your race, I’ve wronged you. But what if you are a bad tipper? It is commonly argued that the way racist beliefs wrong is that the racist believer either misrepresents reality, organizes facts in a misleading way that distorts the truth, or engages in fallacious reasoning. In this paper, I present a case that challenges this orthodoxy: the case of the supposedly rational racist. We live in a world that has been, and continues to be, structured by racist attitudes and institutions. As a result, the evidence might be stacked in favour of racist beliefs. But if there are racist beliefs that reflect reality and are rationally justified, what could be wrong with them? Moreover, how do I wrong you by believing what I epistemically ought to believe given the evidence? To address this challenge, we must recognize that there are not only epistemic norms governing belief, but moral ones as well. This view, however, is at odds with the assumption that moral obligation requires a kind of voluntary control that we lack with regard to our beliefs. This background assumption motivates many philosophers to try to explain away the appearance that beliefs can wrong by locating the wrong elsewhere, e.g., in an agent’s actions. Further, even accounts that accept the thesis that racist beliefs can wrong restrict the class of beliefs that wrong to beliefs that are either false or the result of hot irrationality, e.g., the racist belief is a result of ill-will. In this paper I argue that although these accounts will capture many of the wrongs associated with racist beliefs, they will be only partial explanations because they cannot explain the wrong committed by the supposedly rational racist. The challenge posed by the supposedly rational racist concerns our epistemic practices in a non-ideal world. The world is an unjust place, and there may be many morally objectionable beliefs it justifies. To address this challenge, we must seriously consider the thesis that people wrong others in virtue of what they believe about them, and not just in virtue of what they do.
We care what people think of us. The thesis that beliefs wrong, although compelling, can sound ridiculous. The norms that properly govern belief are plausibly epistemic norms such as truth, accuracy, and evidence. Moral and prudential norms seem to play no role in settling the question of whether to believe p, and they are irrelevant to answering the question of what you should believe. This leaves us with the question: can we wrong one another by virtue of what we believe about each other? Can beliefs wrong? In this introduction, I present a brief summary of the articles that make up this special issue. The aim is to direct readers to open avenues for future research by highlighting questions and challenges that are far from being settled. These papers shouldn’t be taken as the last word on the subject. Rather, they mark the beginning of a serious exploration into a set of questions that concern the morality of belief, i.e., doxastic morality.
You shouldn’t have done it. But you did. Against your better judgment you scrolled to the end of an article concerning the state of race relations in America and you are now reading the comments. Amongst the slurs, the get-rich-quick schemes, and the threats of physical violence, there is one comment that catches your eye. Spencer argues that although it might be “unpopular” or “politically incorrect” to say this, the evidence supports believing that the black diner in his section will tip poorly. He insists that the facts don’t lie. The facts aren’t racist. In denying his claim and in believing otherwise, it is you who engages in wishful thinking. It is you who believes against the evidence. You, not Spencer, are epistemically irrational. My dissertation gives an account of the moral-epistemic norms governing belief that will help us answer Spencer and the challenge he poses. We live in a society that has been shaped by racist attitudes and institutions. Given the effects of structural racism, Spencer’s belief could have considerable evidential support. Spencer notes that it might make him unpopular, but he cares about the truth and he is willing to believe the unpopular thing. But Spencer’s belief seems racist. Spencer asks, however, how could his belief be racist if his beliefs reflect reality and are rationally justified? Moreover, how could he wrong anyone by believing what he epistemically ought to believe given the evidence? In answer, I argue that beliefs can wrong.
In the Book of Common Prayer’s Rite II version of the Eucharist, the congregation confesses, “we have sinned against you in thought, word, and deed”. According to this confession we wrong God not just by what we do and what we say, but also by what we think. The idea that we can wrong someone not just by what we do, but by what we think or what we believe, is a natural one. It is the kind of wrong we feel when those we love believe the worst about us. And it is one of the salient wrongs of racism and sexism. Yet it is puzzling to many philosophers how we could wrong one another by virtue of what we believe about each other. This paper defends the idea that we can morally wrong one another by what we believe about each other from two such puzzles. The first puzzle concerns whether we have the right sort of control over our beliefs for them to be subject to moral evaluation. And the second concerns whether moral wrongs would come into conflict with the distinctively epistemic standards that govern belief. Our answer to both puzzles is that the distinctively epistemic standards governing belief are not independent of moral considerations. This account of moral encroachment explains how epistemic norms governing belief are sensitive to the moral requirements governing belief.
An introduction to the ethics of belief and its application to current political debates, with the observation that people of all political persuasions hold beliefs that are not based on strong evidence.
It is a commonplace belief that many beliefs, e.g. religious convictions, are a purely private matter, and this is meant in some way to serve as a defense against certain forms of criticism. In this paper it is argued that this thesis is false, and that belief is really often a public matter. This argument, the publicity of belief argument, depends on one of the most compelling and central theses of Peircean pragmatism. This crucial thesis is that bona fide belief cannot be separated from action. It is then also suggested that we should accept a form of W. K. Clifford's evidentialism. When these theses are jointly accepted in conjunction with the basic principle of ethics that it is prima facie wrong to act in such a way that may subject others to serious but unnecessary and avoidable harm, it follows that many beliefs are morally wrong.
I use the case of religious belief to argue that communal institutions are crucial to successfully transmitting knowledge to a broad public. The transmission of maximally counterintuitive religious concepts can only be explained by reference to the communities that sustain and pass them on. The shared life and vision of such communities allows believers to trust their fellow adherents. Repeated religious practices provide reinforced exposure while the comprehensive and structured nature of religious worldviews helps to limit distortion. I argue that the phenomenon of theological incorrectness noted by many cognitive scientists of religion is not as worrisome as it may appear. Believers may be employing models that are good enough for practical knowledge, as much of the relevant sociological evidence suggests. Further, communities can help us both in acquiring our initial beliefs and in correcting our errors.
In a number of recent philosophical debates, it has become common to distinguish between two kinds of normative reasons, often called the right kind of reasons (henceforth: RKR) and the wrong kind of reasons (henceforth: WKR). The distinction was first introduced in discussions of the so-called buck-passing account of value, which aims to analyze value properties in terms of reasons for pro-attitudes and has been argued to face the wrong kind of reasons problem. But nowadays it also gets applied in other philosophical contexts and to reasons for other responses than pro-attitudes, for example in recent debates about evidentialism and pragmatism about reasons for belief. While there seems to be wide agreement that there is a general and uniform distinction that applies to reasons for different responses, there is little agreement about the scope, relevance and nature of this distinction. Our aim in this article is to shed some light on this issue by surveying the RKR/WKR distinction as it has been drawn with respect to different responses, and by examining how it can be understood as a uniform distinction across different contexts. We start by considering reasons for pro-attitudes and emotions in the context of the buck-passing account of value (§1). Subsequently we address the distinction that philosophers have drawn with respect to reasons for other attitudes, such as beliefs and intentions (§2), as well as with respect to reasons for action (§3). We discuss the similarities and differences between the ways in which philosophers have drawn the RKR/WKR distinction in these areas and offer different interpretations of the idea of a general, uniform distinction. The major upshot is that there is at least one interesting way of substantiating a general RKR/WKR distinction with respect to a broad range of attitudes as well as actions. We argue that this has important implications for the proper scope of buck-passing accounts and the status of the wrong kind of reasons problem (§4).
Don’t form beliefs on the basis of coin flips or random guesses. More generally, don’t take belief gambles: if a proposition is no more likely to be true than false given your total body of evidence, don’t go ahead and believe that proposition. Few would deny this seemingly innocuous piece of epistemic advice. But what, exactly, is wrong with taking belief gambles? Philosophers have debated versions of this question at least since the classic dispute between William Clifford and William James near the end of the nineteenth century. Here I reassess the normative standing of belief gambles from the perspective of epistemic decision theory. The main lesson of the paper is a negative one: it turns out that we need to make some surprisingly strong and hard-to-motivate assumptions to establish a general norm against belief gambles within a decision-theoretic framework. I take this to pose a dilemma for epistemic decision theory: it forces us to either make seemingly unmotivated assumptions to secure a norm against belief gambles, or concede that belief gambles can be rational after all.
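To make the decision-theoretic framing concrete, here is a minimal worked calculation, assuming the standard Jamesian epistemic values sometimes used in epistemic decision theory (the specific values are an illustrative assumption, not the paper's own formalism):

```latex
% Illustrative assumption: epistemic value 1 for believing a truth,
% 0 for believing a falsehood, and s for suspending judgment.
\[
  \mathrm{EV}(\text{believe } p) \;=\; P(p)\cdot 1 + \bigl(1 - P(p)\bigr)\cdot 0 \;=\; P(p),
  \qquad
  \mathrm{EV}(\text{suspend}) \;=\; s .
\]
% With P(p) = 1/2, believing p is a belief gamble, yet it maximizes expected
% epistemic value whenever s < 1/2. A general norm against belief gambles
% therefore needs further assumptions about the value of suspension (or about
% the scoring rule), which is the kind of assumption the paper examines.
```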
Mark Schroeder has recently offered a solution to the problem of distinguishing between the so-called "right" and "wrong" kinds of reasons for attitudes like belief and admiration. Schroeder tries out two different strategies for making his solution work: the alethic strategy and the background-facts strategy. In this paper I argue that neither of Schroeder's two strategies will do the trick. We are still left with the problem of distinguishing the right from the wrong kinds of reasons.
In this paper, I argue that morality might bear on belief in at least two conceptually distinct ways. The first is that morality might bear on belief by bearing on questions of justification. The claim that it does is the doctrine of moral encroachment. The second is that morality might bear on belief given the central role belief plays in mediating and thereby constituting our relationships with one another. The claim that it does is the doctrine of doxastic wronging. Though conceptually distinct, the two doctrines overlap in important ways. This paper provides clarification on the relationship between the two, providing reasons throughout that we should accept both.
In this article, I address what kinds of claims are of the right kind to ground conscientious refusals. Specifically, I investigate what conceptions of moral responsibility and moral wrongness can be permissibly presumed by conscientious objectors. I argue that we must permit health-care professionals (HCPs) to come to their own subjective conclusions about what they take to be morally wrong and what they take themselves to be morally responsible for. However, these subjective assessments of wrongness and responsibility must be constrained in several important ways: they cannot involve empirical falsehoods, objectionably discriminatory attitudes, or unreasonable normative beliefs. I argue that the sources of these constraints are the basic epistemic, relational, and normative competencies needed to function as a minimally decent health-care professional. Finally, I consider practical implications for my framework, and argue that it shows us that the objection raised by the plaintiffs in Zubik v. Burwell is of the wrong sort.
A growing number of philosophers are concerned with the epistemic status of culturally nurtured beliefs, beliefs found especially in domains of morals, politics, philosophy, and religion. Plausibly, the worry about the deep impact of cultural contingencies on beliefs in these domains of controversial views is a question about well-foundedness: Does it defeat well-foundedness if the agent is rationally convinced that she would take her own reasons for belief as insufficiently well-founded, or would take her own belief as biased, had she been nurtured in a different psychographic community? This chapter will examine the proper scope and force of this epistemic location problem. It sketches an account of well- and ill-founded nurtured belief based upon the many markers of doxastic strategies exhibiting low to high degrees of inductive risk: the moral and epistemic risk of ‘getting it wrong’ in an inductive context of inquiry.
It is plausible that there are epistemic reasons bearing on a distinctively epistemic standard of correctness for belief. It is also plausible that there are a range of practical reasons bearing on what to believe. These theses are often thought to be in tension with each other. Most significantly for our purposes, it is obscure how epistemic reasons and practical reasons might interact in the explanation of what one ought to believe. We draw an analogy with a similar distinction between types of reasons for actions in the context of activities. The analogy motivates a two-level account of the structure of normativity that explains the interaction of correctness-based and other reasons. This account relies upon a distinction between normative reasons and authoritatively normative reasons. Only the latter play the reasons role in explaining what state one ought to be in. All and only practical reasons are authoritative reasons. Hence, in one important sense, all reasons for belief are practical reasons. But this account also preserves the autonomy and importance of epistemic reasons. Given the importance of having true beliefs about the world, our epistemic standard typically plays a key role in many cases in explaining what we ought to believe. In addition to reconciling (versions of) evidentialism and pragmatism, this two-level account has implications for a range of important debates in normative theory, including the interaction of right and wrong reasons for actions and other attitudes, the significance of reasons in understanding normativity and authoritative normativity, the distinction between ‘formal’ and ‘substantive’ normativity, and whether there is a unified source of authoritative normativity.
Historical patterns of discrimination seem to present us with conflicts between what morality requires and what we epistemically ought to believe. I will argue that these cases lend support to the following nagging suspicion: that the epistemic standards governing belief are not independent of moral considerations. We can resolve these seeming conflicts by adopting a framework wherein standards of evidence for our beliefs to count as justified can shift according to the moral stakes. On this account, believing a paradigmatically racist belief reflects a failure not only to attend to the epistemic risk of being wrong, but also to attend to the distinctively moral risk of wronging others given what we believe.
Philosophers have recently come to focus on explaining the phenomenon of bad beliefs, beliefs that are apparently true and well-evidenced but nevertheless objectionable. Despite this recent focus, a consensus is already forming around a particular explanation of these beliefs’ badness called moral encroachment, according to which, roughly, the moral stakes engendered by bad beliefs make them particularly difficult to justify. This paper advances an alternative account not just of bad beliefs but of bad attitudes more generally, according to which bad beliefs’ badness originates not in a failure of sufficient evidence but in a failure to respond adequately to reasons. I motivate this alternative account through an analogy to recent discussions of moral worth centered on the well-known grocer case from Kant’s Groundwork, and by showing that this analogy permits the proposed account to generalize to bad attitudes beyond belief. The paper concludes by contrasting the implications of moral encroachment and of the proposed account for bad attitudes’ blameworthiness.
According to the view that there is moral encroachment in epistemology, whether a person has knowledge of p sometimes depends on moral considerations, including moral considerations that do not bear on the truth or likelihood of p. Defenders of moral encroachment face a central challenge: they must explain why the moral considerations they cite, unlike moral bribes for belief, are reasons of the right kind for belief (or withheld belief). This paper distinguishes between a moderate and a radical version of moral encroachment. It shows that, while defenders of moderate moral encroachment are well-placed to meet the central challenge, defenders of radical moral encroachment are not. The problem for radical moral encroachment is that it cannot, without taking on unacceptable costs, forge the right sort of connection between the moral badness of a belief and that belief’s chance of being false.
Many authors have argued that epistemic rationality sometimes comes into conflict with our relationships. Although Sarah Stroud and Simon Keller argue that friendships sometimes require bad epistemic agency, their proposals do not go far enough. I argue here for a more radical claim—romantic love sometimes requires we form beliefs that are false. Lovers stand in a special position with one another; they owe things to one another that they do not owe to others. Such demands hold for beliefs as well. Two facets of love ground what I call the false belief requirement, or the demand to form false beliefs when it is for the good of the beloved: the demand to love for the right reasons and the demand to refrain from doxastic wronging. Since truth is indispensable to epistemic rationality, the requirement to believe falsely, consequently, undermines truth norms. I demonstrate that, when the false belief requirement obtains, there is an irreconcilable conflict between love and truth norms of epistemic rationality: we must forsake one, at least at the time, for the other.
In a recent article, Krauss (2017) raises some fundamental questions concerning (i) what the desiderata of a definition of lying are, and (ii) how definitions of lying can account for partial beliefs. This paper aims to provide an adequate answer to both questions. Regarding (i), it shows that there can be a tension between two desiderata for a definition of lying: 'descriptive accuracy' (meeting intuitions about our ordinary concept of lying), and 'moral import' (meeting intuitions about what is wrong with lying), vindicating the primacy of the former desideratum. Regarding (ii), it shows that Krauss' proposed 'worse-off requirement' meets neither of these desiderata, whereas the 'comparative insincerity condition' (Marsili 2014) can meet both. The conclusion is that lies are assertions that the speaker takes to be more likely to be false than true, and their distinctive blameworthiness is a function of the extent to which they violate a sincerity norm.
When do we meet the standard of proof in a criminal trial? Some have argued that it is when the guilt of the defendant is sufficiently probable on the evidence. Some have argued that it is a matter of normic support. While the first view provides us with a nice account of how we ought to manage risk, the second explains why we shouldn’t convict on the basis of naked statistical evidence alone. Unfortunately, this second view doesn’t help us understand how we should manage risk (e.g., the risk of violating rights against wrongful conviction) and faces counterexamples of its own. I shall defend an alternative approach that builds on the strengths of these two accounts. On the approach defended here, it is objectively suitable to punish iff we know a defendant to be guilty. To determine what is consistent with procedural justice and to determine what we prospectively ought to do, we need to think about the risks we face of deviating from this objective ideal.
In this paper, I present and defend a novel account of doubt. In Part 1, I make some preliminary observations about the nature of doubt. In Part 2, I introduce a new puzzle about the relationship between three psychological states: doubt, belief, and confidence. I present this puzzle because my account of doubt emerges as a possible solution to it. Lastly, in Part 3, I elaborate on and defend my account of doubt. Roughly, one has doubt if and only if one believes one might be wrong; I argue that this is superior to the account that says that one has doubt if and only if one has less than the highest degree of confidence.
Political conspiracy theories—e.g., unsupported beliefs about the nefarious machinations of one’s cunning, powerful, and evil opponents—are adopted enthusiastically by a great many people of widely varying political orientations. In many cases, these theories posit that there exists a small group of individuals who have intentionally but secretly acted to cause economic problems, political strife, and even natural disasters. This group is often held to exist “in the shadows,” either because its membership is unknown, or because “the real nature” of its members’ allegiances, motives, and methods has been concealed from the public at large. Paradigmatic examples of these political conspiracy theories include anti-Semitic beliefs of the sort associated with The Protocols of the Elders of Zion, the “Red Scare” of the 1950s, claims about the “New World Order,” and many others. Why do these theories attract so many adherents? In this essay, I’ll attempt to shed some light on this issue by applying Kahneman and Tversky’s highly influential work on reasoning under uncertainty. I’ll proceed by first providing a brief introduction to Kahneman and Tversky’s work on reasoning under uncertainty, and the way in which this relates to standard economic and philosophic accounts of rational behavior, as well as philosophical ideas about the role of intuition. Next, I’ll move on to some specific interconnected aspects of this work that are relevant to understanding conspiracy theories, including errors involving probabilistic reasoning (“Prospect Theory”), those involving the inappropriate use of heuristics, and those related to the “framing” of certain outcomes as losses from a baseline. This essay will conclude by making two related points. First, some of the most important reasoning errors committed by adherents of conspiracy theories are errors that many of us regularly commit. Given that self-awareness of these errors provides only minimal protection from committing them, this suggests that many of us may be more vulnerable to conspiratorial reasoning than we may like to believe. Second, in the light of this danger, I will outline a few steps that might be taken to help inoculate ourselves against the appeal of these theories, and to help respond to the conspiratorial arguments of others.
This paper tries to answer the question why the epistemic value of so many social simulations is questionable. I consider the epistemic value of a social simulation as questionable if it contributes neither directly nor indirectly to the understanding of empirical reality. To examine this question, two classical social simulations are analyzed with respect to their possible epistemic justification: Schelling’s neighborhood segregation model and Axelrod’s reiterated Prisoner’s Dilemma simulations of the evolution of cooperation. It is argued that Schelling’s simulation is useful because it can be related to empirical reality, while Axelrod’s simulations and those of his followers cannot, and thus that their scientific value remains doubtful. I relate this finding to the background beliefs of modelers about the superiority of the modeling method as expressed in Joshua Epstein’s keynote address “Why model?”.
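For readers unfamiliar with the first of these models, here is a minimal sketch of a Schelling-style segregation simulation. The grid size, empty-cell fraction, tolerance threshold, and wrap-around neighborhoods are illustrative assumptions, not Schelling's original parameters, and the sketch is not the paper's own code.

```python
import random

SIZE = 20          # grid side length (assumed)
EMPTY = 0.1        # fraction of empty cells (assumed)
THRESHOLD = 0.3    # an agent is content if at least 30% of neighbours share its type (assumed)

def make_grid():
    # Populate a SIZE x SIZE grid with two agent types (1 and 2) and empty cells (None).
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        if r < EMPTY:
            cells.append(None)
        elif r < (1 + EMPTY) / 2:
            cells.append(1)
        else:
            cells.append(2)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(grid, x, y):
    # An agent is unhappy if too few of its occupied neighbours share its type.
    agent = grid[y][x]
    if agent is None:
        return False
    same = total = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = (x + dx) % SIZE, (y + dy) % SIZE  # wrap-around neighbourhood (assumed)
            neighbour = grid[ny][nx]
            if neighbour is not None:
                total += 1
                same += neighbour == agent
    return total > 0 and same / total < THRESHOLD

def step(grid):
    # Each unhappy agent moves to a randomly chosen empty cell.
    movers = [(x, y) for y in range(SIZE) for x in range(SIZE) if unhappy(grid, x, y)]
    empties = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] is None]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ey][ex] = grid[y][x]
        grid[y][x] = None
        empties.append((x, y))

if __name__ == "__main__":
    grid = make_grid()
    for _ in range(50):
        step(grid)
```

Even with a mild tolerance threshold, repeated runs of such a model tend to produce strongly segregated neighbourhoods, which is the qualitative pattern the paper argues can be related to empirical reality.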
A moral theory T is esoteric if and only if T is true but there are some individuals who, by the lights of T itself, ought not to embrace T, where to embrace T is to believe T and rely upon it in practical deliberation. Some philosophers hold that esotericism is a strong, perhaps even decisive, reason to reject a moral theory. However, proponents of this objection have often supposed its force is obvious and have said little to articulate it. I defend a version of this objection—namely, that, in light of the strongly first-personal epistemology of benefit and burden, esoteric theories fail to justify the allocation of benefits and burdens to which moral agents would be subject under such theories. Because of the holistic nature of moral-theory justification, this conclusion in turn implies that the entirety of a moral theory must be open to public scrutiny in order for the theory to be justified. I conclude by answering several objections to my account of the esotericism objection.
In order to explain delusional beliefs, one must first consider what factors should be included in a theory of delusion. Unlike a one-factor theory, a two-factor theory of delusion argues that not only anomalous experience (the first factor) but also an impairment of the belief-evaluation system (the second factor) is required. Recently, two-factor theorists have adopted various Bayesian approaches in order to give a more accurate description of delusion formation. By reviewing the progression from a one-factor theory to a two-factor theory, I argue that in light of the second factor’s requirements, different proposed impairments can be unified within a consistent belief-evaluation system. Under this interpretation of the second factor, I further argue that the role of a mechanism responsible for detecting bizarreness is wrongly neglected. I conclude that the second factor is a compound system which consists of differing functional parts, one of which functions to detect bizarreness in different stages of delusion; moreover, I hold that the impairment can be one or several of these functional parts.
Alvin Plantinga has argued that evolutionary naturalism (the idea that God does not tinker with evolution) undermines its own rationality. Natural selection is concerned with survival and reproduction, and false beliefs conjoined with complementary motivational drives could serve the same aims as true beliefs. Thus, argues Plantinga, if we believe we evolved naturally, we should not think our beliefs are, on average, likely to be true, including our beliefs in evolution and naturalism. I argue herein that our cognitive faculties are less reliable than we often take them to be, that it is theism which has difficulty explaining the nature of our cognition, that much of our knowledge is not passed through biological evolution but learned and transferred through culture, and that the unreliability of our cognition helps explain the usefulness of science.
It seems to many that moral opinions must make a difference to what we’re motivated to do, at least in suitable conditions. For others, it seems that it is possible to have genuine moral opinions that make no motivational difference. Both sides – internalists and externalists about moral motivation – can tell persuasive stories of actual and hypothetical cases. My proposal for a kind of reconciliation is to distinguish between two kinds of psychological states with moral content. There are both moral thoughts or opinions that intrinsically motivate, and moral thoughts or opinions that don’t. The thoughts that intrinsically motivate are moral intuitions – spontaneous and compelling non-doxastic appearances of right or wrong that both attract assent and incline us to act or react. I argue that there is good reason to think that these intuitions, but not moral judgments, are constituted by manifestations of moral sentiments. The moral thoughts that do not intrinsically motivate are moral beliefs, which are in themselves as inert as any ordinary beliefs. Thus, roughly, internalism is true about intuitions and externalism is true about beliefs or judgments.
The contemporary debate over responsibility for belief is divided over the issue of whether such responsibility requires doxastic control, and whether this control must be voluntary in nature. It has recently become popular to hold that responsibility for belief does not require voluntary doxastic control, or perhaps even any form of doxastic ‘control’ at all. However, Miriam McCormick has recently argued that doxastic responsibility does in fact require quasi-voluntary doxastic control: “guidance control,” a complex, compatibilist form of control. In this paper, I pursue a negative and a positive task. First, I argue that grounding doxastic responsibility in guidance control requires too much for agents to be the proper targets for attributions of doxastic responsibility. I will focus my criticisms on three cases in which McCormick's account gives the intuitively wrong verdict. Second, I develop a modified conception of McCormick's notion of “ownership of belief,” which I call Weak Doxastic Ownership. I employ this conception to argue that responsibility for belief is possible even in the absence of guidance control. In doing so, I argue that the notion of doxastic ownership can do important normative work in grounding responsibility for belief without being subsumed under or analyzed in terms of the notion of doxastic control.
According to Fumerton in his "How Does Perception Justify Belief?", it is misleading or wrong to say that perception is a source of justification for beliefs about the external world. Moreover, reliability does not have an essential role to play here either. I agree, and I explain why in section 1, using novel considerations about evil demon scenarios in which we are radically deceived. According to Fumerton, when it comes to how sensations or experiences supply justification, they do not do so on their own, and instead do so only in conjunction with support for background beliefs about how the sensations or experiences are best explained. Here I disagree. In section 2, I first clarify the question of whether sensations or experiences provide justification on their own. I then respond to Fumerton’s arguments that use considerations about concept-possession and about how to close possible gaps between experience and truth. In section 3, I develop my main concern about his positive view, where that concern also brings out some of the merits of the view that experiences do justify beliefs about the external world on their own.
Moral Internalism is the claim that it is a priori that moral beliefs are reasons for action. At least three conceptions of 'reason' may be disambiguated: psychological, epistemological, and purely ethical. The first two conceptions of Internalism are false on conceptual, and indeed empirical, grounds. On a purely ethical conception of 'reasons', the claim is true but is an Externalist claim. Positive arguments for Internalism — from phenomenology, connection and oddness — are found wanting. Three possible responses to the stock Externalist objections are uncovered and overturned. In so doing a close relation between Internalism and Behaviourism is revealed, and some stock anti-behaviouristic arguments are co-opted for Externalism. The likely dependence of Internalism on an Atomistic Associationism is uncovered and criticised. Internalism is seen as being ultimately a type of Ethical Determinism. Finally, a sketch of an Anti-Associative Externalism is given whereby the notion of self determination of action is put forward as an account of moral motivation fit to resist both the internalist and the belief-desire psychology premises of the stock non-cognitivist argument.
Kripke’s puzzle puts pressure on the intuitive idea that one can believe that Superman can fly without believing that Clark Kent can fly. If this idea is wrong then many theories of belief and belief ascription are built from faulty data. I argue that part of the proper analysis of Kripke’s puzzle refutes the closure principles that show up in many important arguments in epistemology, e.g., if S is rational and knows that P and that P entails Q, then if she considers these two beliefs and Q, then she is in a position to know that Q.
What are the truth conditions of want ascriptions? According to a highly influential and fruitful approach, championed by Heim (1992) and von Fintel (1999), the answer is intimately connected to the agent’s beliefs: ⌜S wants p⌝ is true iff within S’s belief set, S prefers the p worlds to the ~p worlds. This approach faces a well-known and as-yet unsolved problem, however: it makes the entirely wrong predictions with what we call '(counter)factual want ascriptions', wherein the agent either believes p or believes ~p—e.g., ‘I want it to rain tomorrow and that is exactly what is going to happen’ or ‘I want this weekend to last forever but of course it will end in a few hours’. We solve this problem. The truth conditions for want ascriptions are, we propose, connected to the agent’s conditional beliefs. We bring out this connection by pursuing a striking parallel between (counter)factual and non-(counter)factual want ascriptions on the one hand and counterfactual and indicative conditionals on the other.
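As a rough schematic of the belief-relative gloss just described (this rendering is ours, not the authors' proposal, which instead appeals to conditional beliefs; Bel and the preference relation are placeholder symbols):

```latex
% Schematic rendering of the Heim/von Fintel-style gloss: within S's belief set,
% the p-worlds are preferred by S to the not-p worlds.
\[
  \llbracket S \text{ wants } p \rrbracket^{w} = 1
  \quad\text{iff}\quad
  \{\, w' \in \mathrm{Bel}_{S,w} : p(w') \,\}
  \;\succ_{S}\;
  \{\, w' \in \mathrm{Bel}_{S,w} : \neg p(w') \,\}
\]
% The (counter)factual problem arises when one of the two sets is empty,
% i.e. when S already believes p or believes not-p.
```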
A curious and comparatively neglected element of death penalty jurisprudence in America is my target in this paper. That element concerns the circumstances under which severely mentally disabled persons, incarcerated on death row, may have their sentences carried out. Those circumstances are expressed in a part of the law which turns out to be indefensible. This legal doctrine—competence-for-execution (CFE)—holds that a condemned, death-row inmate may not be killed if, at the time of his scheduled execution, he lacks an awareness of his impending death or the reasons for it. I argue that the law of CFE should be abandoned, along with the notion that it is permissible to kill the deeply disturbed just so long as they meet some narrow test of readiness to die. By adopting CFE, the courts have been forced to give independent conceptual and moral significance to a standard for competence that simply cannot bear the weight placed upon it. To be executable, CFE requires that a condemned prisoner meet a standard demonstrating an awareness of certain facts about his death. Yet this standard both leads to confusing and counter-intuitive results and is unsupported either by the reasons advanced by the courts on its behalf or by any of the standard theoretical justifications of criminal punishment. If executing the profoundly psychotic or delusional is wrong, the law needs a better account of the wrong done when prisoners like Ford are killed. I suggest wherein that wrong might be located.
Mark Schroeder has recently proposed a new analysis of knowledge. I examine that analysis and show that it fails. More specifically, I show that it faces a problem all too familiar from the post-Gettier literature, namely, that it delivers the wrong verdict in fake barn cases.
Charles Taylor defines ‘hypergoods’ as the fundamental, architectonic goods that serve as the basis of our moral frameworks. He also believes that, in principle, we can use reason to reconcile the conflicts that hypergoods engender. This belief, however, relies upon a misidentification of hypergoods as goods rather than as works of art, an error which is itself a result of an overly adversarial conception of practical reason. For Taylor fails to distinguish enough between ethical conflicts and those relating to the religio-aesthetic domain. A proper identification of hypergoods as aesthetic, moreover, requires us to revise his accounts of ordinary life, of evil and of the controversy over university curricula.
Statistical evidence—say, that 95% of your co-workers badmouth each other—can never render resenting your colleague appropriate, in the way that other evidence (say, the testimony of a reliable friend) can. The problem of statistical resentment is to explain why. We put the problem of statistical resentment in several wider contexts: The context of the problem of statistical evidence in legal theory; the epistemological context—with problems like the lottery paradox for knowledge, epistemic impurism and doxastic wrongdoing; and the context of a wider set of examples of responses and attitudes that seem not to be appropriately groundable in statistical evidence. Regrettably, we do not come up with a fully general, fully adequate, fully unified account of all the phenomena discussed. But we give reasons to believe that no such account is forthcoming, and we sketch a somewhat messier account that may be the best that can be had here.
We develop an approach to the problem of de se belief usually expressed with the question, what does the shopper with the leaky sugar bag have to learn to know that s/he is the one making the mess. Where one might have thought that some special kind of “de se” belief explains the triggering of action, we maintain that this gets the order of explanation wrong. We sketch a very simple cognitive architecture that yields de se-like behavior on which the action-triggering functionality of the belief-state is what counts it as de se rather than some prior property of being “de se” explaining the triggering of action. This functionality shows that action-triggering change in belief-state also undergirds a correlative change in the objective involved in the triggered action. This model is far too simple to have any claim to showing how the de se works for humans, but it shows, by illustration, that nothing mysteriously “subjective” need be involved in this aspect of self-conception. While our exposition is very different from those of Perry and Recanati, all three of us are developing the same kind of view.
There is a well-documented Pre-Reflective Hostility against Machine Art (PRHMA), exemplified by the sentiments of fear and anxiety. How can it be explained? The present paper attempts to find the answer to this question by surveying a considerable amount of research on machine art. It is found that explanations of PRHMA based on the (alleged) fact that machine art lacks an element that is (allegedly) found in human art (for example, autonomy) do not work. Such explanations cannot account for the sentiments of fear and anxiety present in PRHMA, because the art receiver could simply turn to human art for finding the element she is looking for. By contrast, an explanation based on the idea that machine art is “symbolically” a threat to human survival can be successful, since the art receiver’s turning from machine art to human art does not eliminate the (alleged) “symbolic” threat machine art poses for human survival. If there is a pre-reflective belief or feeling that machine art is such a threat, then it is perfectly understandable why humans exhibit a pre-reflective hostility against machine art.
We offer a critical assessment of the “exclusion argument” against free will, which may be summarized by the slogan: “My brain made me do it, therefore I couldn't have been free”. While the exclusion argument has received much attention in debates about mental causation (“could my mental states ever cause my actions?”), it is seldom discussed in relation to free will. However, the argument informally underlies many neuroscientific discussions of free will, especially the claim that advances in neuroscience seriously challenge our belief in free will. We introduce two distinct versions of the argument, discuss several unsuccessful responses to it, and then present our preferred response. This involves showing that a key premise – the “exclusion principle” – is false under what we take to be the most natural account of causation in the context of agency: the difference-making account. We finally revisit the debate about neuroscience and free will.
Conspiracy theories are widely deemed to be superstitious. Yet history appears to be littered with conspiracies, successful and otherwise. (For this reason, "cock-up" theories cannot in general replace conspiracy theories, since in many cases the cock-ups are simply failed conspiracies.) Why then is it silly to suppose that historical events are sometimes due to conspiracy? The only argument available to this author is drawn from the work of the late Sir Karl Popper, who criticizes what he calls "the conspiracy theory of society" in The Open Society and elsewhere. His critique of the conspiracy theory is indeed sound, but it is a theory no sane person maintains. Moreover, its falsehood is compatible with the prevalence of conspiracies. Nor do his arguments create any presumption against conspiracy theories of this or that. Thus the belief that it is superstitious to posit conspiracies is itself a superstition. The article concludes with some speculations as to why this superstition is so widely believed.
W. K. Clifford famously argued that it is “wrong always, everywhere and for anyone, to believe anything upon insufficient evidence.” Though the spirit of this claim resonates with me, the letter does not. To wit, I am inclined to think that it is not morally wrong for, say, an elderly woman on her death bed to believe privately that she is going to heaven even if she does so on insufficient evidence—indeed, and lest there be any confusion, even if the woman herself deems the evidence for her so believing to be insufficient. After all, her believing so does not appear to endanger, harm, or violate the rights of anyone, nor does it make the world a worse place in a significant, if any, way. That Clifford might have put too fine a point on the matter, however, does not entail that there are no conditions under which it is wrong to believe something upon insufficient evidence. In this paper, I argue that, in cases where believing a proposition (read: believing a proposition to be true) will affect others, it is prima facie wrong to have propositional faith—for present purposes, to believe the proposition despite deeming the evidence for one’s believing to be insufficient—before one has attempted to believe the proposition by proportioning one’s belief to the evidence.
The vacuum energy density of a free scalar quantum field Φ in a Rindler distributional space-time with distributional Levi-Cività connection is considered. It has been widely believed that, except in very extreme situations, the influence of acceleration on quantum fields should amount to just small, sub-dominant contributions. Here we argue that this belief is wrong by showing that in a Rindler distributional background space-time with distributional Levi-Cività connection the vacuum energy of free quantum fields is forced, by that very same background space-time, to become dominant over any classical energy density component. This semiclassical gravity effect finds its roots in the singular behavior of quantum fields on Rindler distributional space-times with distributional Levi-Cività connection. In particular we obtain that the vacuum fluctuations ⟨Φ²⟩ exhibit singular behavior at the Rindler horizon, diverging in the limit of large acceleration (a → ∞). Therefore a sufficiently strongly accelerated observer burns up near the Rindler horizon. Thus Polchinski’s account doesn’t violate the Einstein equivalence principle.
Can I be wrong about my own beliefs? More precisely: Can I falsely believe that I believe that p? I argue that the answer is negative. This runs against what many philosophers and psychologists have traditionally thought and still think. I use a rather new kind of argument, one that is based on considerations about Moore's paradox. It shows that if one believes that one believes that p then one believes that p – even though one can believe that p without believing that one believes that p.
This special issue collects five new essays on various topics relevant to the ethics of belief. They shed fresh light on important questions, and bring new arguments to bear on familiar topics of concern to most epistemologists, and indeed, to anyone interested in normative requirements on beliefs either for their own sake or because of the way such requirements bear on other domains of inquiry.
It is true of many truths that I do not believe them. It is equally true, however, that I cannot rationally assert of any such truth both that it is true and that I do not believe it. To explain why this is so, I will distinguish absence of belief from disbelief and argue that an assertion of “p, but I do not believe that p” is paradoxical because it is indefensible, i.e. for reasons internal to it unable to convince. A closer examination of the irrationality involved will show that such is the sceptic’s predicament, trying to convince us to bracket knowledge claims we have good grounds to take ourselves to be entitled to. Even if the sceptic cannot be proven wrong, his challenge still demands an answer, if not a treatment. In this paper, I argue that the cure lies in epidemiology rather than epistemology: instead of attacking the sceptic head-long, I commend guerilla tactics, vaccinating our fellow non-sceptics against the sceptical virus. I will not argue that the sceptic is wrong, necessarily wrong or that he cannot be believed, but that he cannot convince. Scepticism requires a leap of faith: something we may justifiably refrain from even on the sceptic’s own standards.
I take pseudoscience to be a pretence at science. Pretences are innumerable, limited only by our imagination and credulity. As Stove points out, ‘numerology is actually quite as different from astrology as astrology is from astronomy’ (Stove 1991, 187). We are sure that ‘something has gone appallingly wrong’ (Stove 1991, 180) and yet ‘thoughts…can go wrong in a multiplicity of ways, none of which anyone yet understands’ (Stove 1991, 190). Often all we can do is give a careful description of a way of pretending, a motivation for pretence, a source of pretension. In this chapter I attempt the latter. We will be concerned with the relation of conviction to rational belief. I shall be suggesting that the question of whether an enquiry is a pretence at science can be, in part, a question over the role of conviction in rational belief, and that the answer is to be found in the philosophical problem of the role of values in rational belief.
Alice encounters at least three distinct problems in her struggles to understand and navigate Wonderland. The first arises when she attempts to predict what will happen in Wonderland based on what she has experienced outside of Wonderland. In many cases, this proves difficult -- she fails to predict that babies might turn into pigs, that a grin could survive without a cat or that playing cards could hold criminal trials. Alice's second problem involves her efforts to figure out the basic nature of Wonderland. So, for example, there is nothing Alice could observe that would allow her to prove whether Wonderland is simply a dream. The final problem is manifested by Alice's attempts to understand what the various residents of Wonderland mean when they speak to her. In Wonderland, "mock turtles" are real creatures and people go places with a "porpoise" (and not a purpose). All three of these problems concern Alice's attempts to infer information about unobserved events or objects from those she has observed. In philosophical terms, they all involve *induction*. In this essay, I will show how Alice's experiences can be used to clarify the relation between three more general problems related to induction. The first problem, which concerns our justification for beliefs about the future, is an instance of David Hume's classic *problem of induction*. Most of us believe that rabbits will not start talking tomorrow -- the problem of induction challenges us to justify this belief. Even if we manage to solve Hume's puzzle, however, we are left with what W.V.O. Quine calls the problems of *underdetermination* and *indeterminacy*. The former problem asks us to explain how we can determine *what the world is really like* based on *everything that could be observed about the world*. So, for example, it seems plausible that nothing that Alice could observe would allow her to determine whether eating mushrooms causes her to grow or the rest of the world to shrink. The latter problem, which might remain even if we resolve the first two, casts doubt on our capacity to determine *what a certain person means* based on *which words that person uses*. This problem is epitomized in the Queen's interpretation of the Knave's letter. The obstacles that Alice faces in getting around Wonderland are thus, in an important sense, the same types of obstacles we face in our own attempts to understand the world. Her successes and failures should therefore be of real interest.
This paper is about an overlooked aspect—the cognitive or epistemic aspect—of the moral demand we place on one another to be treated well. We care not only how people act towards us and what they say of us, but also what they believe of us. That we can feel hurt by what others believe of us suggests both that beliefs can wrong and that there is something we epistemically owe to each other. This proposal, however, surprises many theorists who claim it lacks both intuitive and theoretical support. This paper argues that the proposal has intuitive support and is not at odds with much contemporary theorizing about what we owe to each other.
Many people with religious beliefs, pro or con, are aware that those beliefs are denied by a great number of others who are as reasonable, intelligent, fair-minded, and relatively unbiased as they are. Such a realization often leads people to wonder, “How do I know I’m right and they’re wrong? How do I know that the basis for my belief is right and theirs is misleading?” In spite of that realization, most people stick with their admittedly controversial religious belief. This entry examines the epistemology of such belief retention, addressing issues of disagreement, agnosticism, skepticism, and the rationality of reflective religious belief.