Pettigrew offers new axiomatic constraints on legitimate measures of inaccuracy. His axiom called ‘Decomposition’ stipulates that legitimate measures of inaccuracy evaluate a credence function in part based on its level of calibration at a world. I argue that if calibration is valuable, as Pettigrew claims, then this fact is an explanandum for accuracy-first epistemologists, not an explanans, for three reasons. First, the intuitive case for the importance of calibration isn’t as strong as Pettigrew believes. Second, calibration is a perniciously global property that both contravenes Pettigrew’s own views about the nature of credence functions themselves and undercuts the achievements and ambitions of accuracy-first epistemology. Finally, Decomposition introduces a new kind of value compatible with but separate from accuracy-proper, in violation of Pettigrew’s alethic monism.
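For reference, the notion of calibration invoked here has a standard gloss (a sketch in conventional notation; Pettigrew’s own formulation may differ in detail). A credence function \(c\) is perfectly calibrated at a world \(w\) just in case, for each credence value \(x\) that \(c\) assigns,

\[
\frac{\bigl|\{X : c(X) = x \text{ and } X \text{ is true at } w\}\bigr|}{\bigl|\{X : c(X) = x\}\bigr|} = x,
\]

that is, among the propositions to which \(c\) assigns credence \(x\), the proportion that are true at \(w\) is exactly \(x\).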
There are many things—call them ‘experts’—that you should defer to in forming your opinions. The trouble is, many experts are modest: they’re less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular (“Reflection”-style) principles collapse to inconsistency, while their most popular (“New-Reflection”-style) variants allow you to defer to someone while regarding them as an anti-expert. We propose a middle way: deferring to someone involves preferring to make any decision using their opinions instead of your own. In a slogan, deferring opinions is deferring decisions. Generalizing the proposal of Dorst (2020a), we first formulate a new principle that shows exactly how your opinions must relate to an expert’s for this to be so. We then build off the results of Levinstein (2019) and Campbell-Moore (2020) to show that this principle is also equivalent to the constraint that you must always expect the expert’s estimates to be more accurate than your own. Finally, we characterize the conditions an expert’s opinions must meet to be worthy of deference in this sense, showing how they sit naturally between the too-strong constraints of Reflection and the too-weak constraints of New Reflection.
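For orientation, the “Reflection”-style principles at issue are standardly stated as follows (a textbook formulation, not the authors’ own notation). Where \(P\) is your credence function and \(\Pr_E\) is the expert’s,

\[
P\bigl(A \mid \Pr\nolimits_E(A) = x\bigr) = x \quad \text{for every proposition } A \text{ and value } x.
\]

When the expert is modest, i.e., less than certain of her own trustworthiness, deferring in this unrestricted way can be shown to force incoherence, which is the collapse described above.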
We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
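To fix ideas, a measure of inaccuracy \(I\) is strictly proper just in case every probability function uniquely expects itself to be least inaccurate (a standard definition; the Brier score below is the stock example, offered for illustration rather than as the rule Schervish’s theorem would select for any particular agent):

\[
\sum_w p(w)\, I(p, w) < \sum_w p(w)\, I(q, w) \quad \text{for all probability functions } q \neq p.
\]

The Brier score, for instance, sets \(I(q, w) = \sum_X \bigl(q(X) - v_w(X)\bigr)^2\), where \(v_w(X) = 1\) if \(X\) is true at \(w\) and \(0\) otherwise.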
Permissivism about rationality is the view that there is sometimes more than one rational response to a given body of evidence. In this paper I discuss the relationship between permissivism, deference to rationality, and peer disagreement. I begin by arguing that—contrary to popular opinion—permissivism supports at least a moderate version of conciliationism. I then formulate a worry for permissivism. I show that, given a plausible principle of rational deference, permissive rationality seems to become unstable and to collapse into unique rationality. I conclude with a formulation of a way out of this problem on behalf of the permissivist.
Some propositions are more epistemically important than others. Further, how important a proposition is is often a contingent matter—some propositions count more in some worlds than in others. Epistemic Utility Theory cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue there isn’t any good way out for EUT.
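To illustrate the structural point (a sketch under assumed notation, not the paper’s own example): let \(\lambda(X, w) > 0\) measure the importance of proposition \(X\) at world \(w\), and weight the Brier score accordingly:

\[
I(c, w) = \sum_X \lambda(X, w)\,\bigl(c(X) - v_w(X)\bigr)^2,
\]

where \(v_w(X)\) is \(X\)’s truth value at \(w\). With weights that are constant across worlds, this measure remains strictly proper; but once \(\lambda\) varies with \(w\), a probability function can come to expect some rival function to score better, and propriety fails.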
Consequentialist theories determine rightness solely based on real or expected consequences. Although such theories are popular, they often have difficulty with generalizing intuitions, which demand concern for questions like “What if everybody did that?” Rule consequentialism attempts to incorporate these intuitions by shifting the locus of evaluation from the consequences of acts to those of rules. However, detailed rule-consequentialist theories seem ad hoc or arbitrary compared to act consequentialist ones. We claim that generalizing can be better incorporated into consequentialism by keeping the locus of evaluation on acts but adjusting the decision theory behind act selection. Specifically, we should adjust which types of dependencies the theory takes to be decision-relevant. Using this strategy, we formulate a new theory, generalized act consequentialism, which we argue is more compelling than rule consequentialism both in modeling the actual reasoning of generalizers and in delivering correct verdicts.
This paper defends the view, put roughly, that to think that p is to guess that p is the answer to the question at hand, and that to think that p rationally is for one’s guess to that question to be in a certain sense non-arbitrary. Some theses that will be argued for along the way include: that thinking is question-sensitive and, correspondingly, that ‘thinks’ is context-sensitive; that it can be rational to think that p while having arbitrarily low credence that p; that, nonetheless, rational thinking is closed under entailment; that thinking does not supervene on credence; and that in many cases what one thinks on certain matters is, in a very literal sense, a choice. Finally, since there are strong reasons to believe that thinking just is believing, there are strong reasons to think that all this goes for belief as well.
The evil God challenge is an argumentative strategy that has been pursued by a number of philosophers in recent years. It is apt to be understood as a parody argument: a wholly evil, omnipotent and omniscient God is absurd, as both theists and atheists will agree. But according to the challenge, belief in evil God is about as reasonable as belief in a wholly good, omnipotent and omniscient God; the two hypotheses are roughly epistemically symmetrical. Given this symmetry thesis, belief in an evil God and belief in a good God are taken to be similarly preposterous. In this paper, we argue that the challenge can be met, suggesting why the three symmetries that need to hold between evil God and good God – intrinsic, natural theology and theodicy symmetries – can all be broken.
This article offers a normative analysis of some of the most controversial incidents involving police: what I call police-generated killings. In these cases, bad police tactics create a situation where deadly force becomes necessary, becomes perceived as necessary, or occurs unintentionally. Police deserve blame for such killings because they choose tactics that unnecessarily raise the risk of deadly force, thus violating their obligation to prioritize the protection of life. Since current law in the United States fails to ban many bad tactics, police-generated killings often are treated as “lawful but awful.” To address these killings, some call for changes to departmental policies or voluntary reparations by local governments, yet such measures leave in place a troubling gap between ethics and law. I argue that police-generated killings merit legal sanctions by appealing to a relevant analogy: self-generated self-defense, where the person who engages in self-defense started the trouble. The persistent lack of accountability for police-generated killings threatens life, police legitimacy, and trust in democratic institutions. The article closes by identifying tools in law and policy to address this challenge.
A puzzling feature of paradigmatic cases of dehumanization is that the perpetrators often attribute uniquely human traits to their victims. This has become known as the “paradox of dehumanization.” We address the paradox by arguing that the perpetrators think of their victims as human in one sense, while denying that they are human in another sense. We do so by providing evidence that people harbor a dual character concept of humanity. Research has found that dual character concepts have two independent sets of criteria for their application, one of which is descriptive and one of which is normative. Across four experiments, we found evidence that people deploy a descriptive criterion according to which being human is a matter of being a Homo sapiens; as well as a normative criterion according to which being human is a matter of possessing a deep-seated commitment to do the morally right thing. Importantly, we found that people are willing to affirm that someone is human in the descriptive sense, while denying that they are human in the normative sense, and vice versa. In addition to providing a solution to the paradox of dehumanization, these findings suggest that perceptions of moral character have a central role to play in driving dehumanization.
The philosophical study of well-being concerns what makes lives good for their subjects. It is now standard among philosophers to distinguish between two kinds of well-being: lifetime well-being, i.e., how good a person's life was for him or her considered as a whole, and temporal well-being, i.e., how well off someone was, or how they fared, at a particular moment in time or over a period of time longer than a moment but shorter than a whole life, say, a day, month, year, or chapter of a life. Many theories have been offered of each of these kinds of well-being. A common view is that lifetime well-being is in some way constructed out of temporal well-being. This book argues that much of this literature is premised on a mistake. Lifetime well-being cannot be constructed out of temporal well-being, because there is no such thing as temporal well-being. The only genuine kind of well-being is lifetime well-being. The Passing of Temporal Well-Being will prove essential reading for professional philosophers, especially in moral and political philosophy. It will also be of interest to welfare economists and policy-makers who appeal to well-being.
An overview (about 8,000 words) of act utilitarianism, covering the basic idea of the theory, historical examples, how it differs from rule utilitarianism and motive utilitarianism, supporting arguments, and standard objections. A closing section provides a brief introduction to indirect utilitarianism (i.e., a Hare- or Railton-style view distinguishing between a decision procedure and a criterion of rightness).
The historical consensus is that logical evidence is special. Whereas empirical evidence is used to support theories within both the natural and social sciences, logic answers solely to a priori evidence. Further, unlike other areas of research that rely upon a priori evidence, such as mathematics, logical evidence is basic. While we can assume the validity of certain inferences in order to establish truths within mathematics and test scientific theories, logicians cannot use results from mathematics or the empirical sciences without seemingly begging the question. Appeals to rational intuition and analyticity in order to account for logical knowledge are symptomatic of these commitments to the apriority and basicness of logical evidence. This chapter argues that these historically prevalent accounts of logical evidence are mistaken, and that if we take logical practice seriously we find that logical evidence is rather unexceptional, sharing many similarities to the types of evidence appealed to within other research areas.
In this paper, I set out and defend a new theory of value, whole-life welfarism. According to this theory, something is good only if it makes somebody better off in some way in his life considered as a whole. By focusing on lifetime, rather than momentary, well-being, a welfarist can solve two of the most vexing puzzles in value theory, The Badness of Death and The Problem of Additive Aggregation.
This paper considers some puzzling knowledge ascriptions and argues that they present prima facie counterexamples to credence, belief, and justification conditions on knowledge, as well as to many of the standard meta-semantic assumptions about the context-sensitivity of ‘know’. It argues that these ascriptions provide new evidence in favor of contextualist theories of knowledge—in particular those that take the interpretation of ‘know’ to be sensitive to the mechanisms of constraint.
This paper defends the simple view that in asserting that p, one lies iff one knows that p is false. Along the way it draws some morals about deception, knowledge, Gettier cases, belief, assertion, and the relationship between first- and higher-order norms.
The devastating impact of the COVID‐19 (coronavirus disease 2019) pandemic is prompting renewed scrutiny of practices that heighten the risk of infectious disease. One such practice is refusing available vaccines known to be effective at preventing dangerous communicable diseases. For reasons of preventing individual harm, avoiding complicity in collective harm, and fairness, there is a growing consensus among ethicists that individuals have a duty to get vaccinated. I argue that these same grounds establish an analogous duty to avoid buying and eating most meat sold today, based solely on a concern for human welfare. Meat consumption is a leading driver of infectious disease. Wildlife sales at wet markets, bushmeat hunting, and concentrated animal feeding operations (CAFOs) are all exceptionally risky activities that facilitate disease spread and impose immense harms on human populations. If there is a moral duty to vaccinate, we also should recognize a moral duty to avoid most meat. The paper concludes by considering the implications of this duty for policy.
The introduction (about 6,000 words) to _The Cambridge Companion to Utilitarianism_, in three sections: utilitarianism’s place in recent and contemporary moral philosophy (including the opinions of critics such as Rawls and Scanlon), a brief history of the view (again, including the opinions of critics, such as Marx and Nietzsche), and an overview of the chapters of the book.
A clear and provocative introduction to the ethics of COVID-19, suitable for university-level students, academics, and policymakers, as well as the general reader. It is also an original contribution to the emerging literature on this important topic. The author has made it available Open Access, so that it can be downloaded and read for free by all those who are interested in these issues. Key features include: a neat organisation of the ethical issues raised by the pandemic; an exploration of the many complex interconnections between these issues; a succinct case for a continued lockdown until we develop a vaccine; an original account of the Deep Moral Problem of the Pandemic, and a Revolutionary Argument for how we should change society post-pandemic; and references to, and engagement with, many of the best writings on the pandemic so far (both in popular media and academic journals). ISBN: 978-0-6489016-0-0.
A plausible principle about the felicitous use of indicative conditionals says that there is something strange about asserting an indicative conditional when you know whether its antecedent is true. But in most contexts there is nothing strange at all about asserting indicative conditionals like ‘If Oswald didn’t shoot Kennedy, then someone else did’. This paper argues that the only compelling explanation of these facts requires the resources of contextualism about knowledge.
According to hedonism about well-being, lives can go well or poorly for us just in virtue of our ability to feel pleasure and pain. Hedonism has had many advocates historically, but has relatively few nowadays. This is mainly due to three highly influential objections to it: The Philosophy of Swine, The Experience Machine, and The Resonance Constraint. In this paper, I attempt to revive hedonism. I begin by giving a precise new definition of it. I then argue that the right motivation for it is the ‘experience requirement’ (i.e., that something can benefit or harm a being only if it affects the phenomenology of her experiences in some way). Next, I argue that hedonists should accept a felt-quality theory of pleasure, rather than an attitude-based theory. Finally, I offer new responses to the three objections. Central to my responses are (i) a distinction between experiencing a pleasure (i.e., having some pleasurable phenomenology) and being aware of that pleasure, and (ii) an emphasis on diversity in one’s pleasures.
I examine the origins of ordinary racial thinking. In doing so, I argue against the thesis that it is the byproduct of a unique module. Instead, I defend a pluralistic thesis according to which different forms of racial thinking are driven by distinct mechanisms, each with their own etiology. I begin with the belief that visible features are diagnostic of race. I argue that the mechanisms responsible for face recognition have an important, albeit delimited, role to play in sustaining this belief. I then argue that essentialist beliefs about race are driven by some of the mechanisms responsible for “entitativity perception”: the tendency to perceive some aggregates of people as more genuine groups than others. Finally, I argue that coalitional thinking about race is driven by a distinctive form of entitativity perception. However, I suggest that more data is needed to determine the prevalence of this form of racial thinking.
The Latin translation of Maimonides’ Moreh Nevukhim | Guide for the Perplexed was the most influential Jewish work of the last millennium (Di Segni, 2019; Rubio, 2006; Wohlman, 1988, 1995; Kohler, 2017). It marked the beginning of scholasticism, a daughter of Judaism raised by Jewish thinkers, according to historian Heinrich Graetz (Geschichte der Juden, L. 6, Leipzig 1861, p. xii). Printed by Gutenberg's first mechanical press, its influence in the West went as far as the Fifth Lateran Council (1512–1517), "where scholars were encouraged to remove the difficulties which seemed to divide the whole of theology and philosophy" (Leibniz, Théodicée, 11). For centuries, the Guide revolutionized the curriculum of school instruction by reintegrating the natural laws of thought in the sphere of faith (the fourth of which became Leibniz’s Principle of Sufficient Reason). This complete collection of notes expounds the ideas of the Guide and features all the passages selected and rewritten by Leibniz, the famous mathematician and inventor of the binary arithmetic used by computers, often considered the last universal genius. The first complete annotated bilingual translation of the original Latin manuscripts in three centuries, it serves as an entry point to the faith in conformity with Reason. The front cover features Rembrandt's Philosopher in Meditation, and the back a recommendation from Leibniz himself: "Rabbi Maimonides' excellent book, A Guide for the Perplexed, is more philosophical than I imagined and worthy of careful reading." According to Malbim’s translator, Noah Rosenbloom, the book’s epigraph indicates that nineteenth-century Rabbi Leibush ben Yehiel Michel Wisser, also known as Malbim, was familiar with Leibniz’s Theodicy. In Leibniz’s anthology of the Guide, the reader can get a detailed glimpse of Leibniz’s impressions of Maimonides. The Foreword by Leibniz’s translator Lloyd Strickland suggests that there were sympathies and perhaps even overlaps between the thoughts of both universal luminaries.
There is significant controversy over whether patients have a ‘right not to know’ information relevant to their health. Some arguments for limiting such a right appeal to potential burdens on others that a patient’s avoidable ignorance might generate. This paper develops this argument by extending it to cases where refusal of relevant information may generate greater demands on a publicly funded healthcare system. In such cases, patients may have an ‘obligation to know’. However, we cannot infer from the fact that a patient has an obligation to know that she does not also have a right not to know. The right not to know is held against medical professionals at a formal institutional level. We have reason to protect patients’ control over the information that they receive, even if in individual instances patients exercise this control in ways that violate obligations.
What is it for a life to be meaningful? In this article, I defend what I call Consequentialism about Meaning in Life, the view that one's life is meaningful at time t just in case one's surviving at t would be good in some way, and one's life was meaningful considered as a whole just in case the world was made better in some way for one's having existed.
What is the role of pleasure in determining a person’s well-being? I start by considering the nature of pleasure (i.e., what pleasure is). I then consider what factors, if any, can affect how much a given pleasure adds to a person’s lifetime well-being other than its degree of pleasurableness (i.e., how pleasurable it is). Finally, I consider whether it is plausible that there is any other way to add to somebody’s lifetime well-being than by giving him some pleasure or helping him to avoid some pain.
In Liberalism without Perfection Jonathan Quong defends a form of political liberalism; that is, a political philosophy that answers ‘no’ to both of the following questions: 1. Must liberal political philosophy be based in some particular ideal of what constitutes a valuable or worthwhile human life, or other metaphysical beliefs? 2. Is it permissible for a liberal state to promote or discourage some activities, ideals, or ways of life on grounds relating to their inherent or intrinsic value, or on the basis of other metaphysical claims? In these remarks, I respond to Quong’s arguments against those of his rivals who answer ‘yes’ to his first question by dint of their comprehensive commitment to an ideal of individual autonomy. One of these, which Quong calls ‘comprehensive antiperfectionism’, answers ‘yes’ to Question 1 and ‘no’ to Question 2. The other, which answers ‘yes’ to both, he calls (comprehensive) ‘liberal perfectionism’. Quong presents these positions with a dilemma: they cannot consistently be both comprehensive (by retaining their commitment to autonomy) and liberal (by ruling out the sort of coercive interference in people’s choices which is beyond the liberal pale). I argue on the contrary that a comprehensive commitment to autonomy actually demands a general injunction against such coercive interference, because responsibility is an important component of the autonomous life, and coercion always undermines responsibility. So, Quong’s dilemma is unsuccessful.
Adaptive preference formation is the unconscious altering of our preferences in light of the options we have available. Jon Elster has argued that this is bad because it undermines our autonomy. I agree, but think that Elster's explanation of why is lacking. So, I draw on a richer account of autonomy to give the following answer. Preferences formed through adaptation are characterized by covert influence (that is, explanations of which an agent herself is necessarily unaware), and covert influence undermines our autonomy because it undermines the extent to which an agent's preferences are ones that she has decided upon for herself. This answer fills the lacuna in Elster's argument. It also allows us to draw a principled distinction between adaptive preference formation and the closely related phenomenon of character planning.
A book chapter (about 9,000 words, plus references) presenting an act-consequentialist approach to the ethics of climate change. It begins with an overview of act consequentialism, including a description of the view’s principle of rightness (an act is right if and only if it maximizes the good) and a conception of the good focusing on the well-being of sentient creatures and rejecting temporal discounting. Objections to act consequentialism, and replies, are also considered. Next, the chapter briefly suggests that act consequentialism could reasonably be regarded as the default moral theory of climate change, in the sense that a broadly act-consequentialist framework often seems implicit in both scholarly and casual discussions of the ethics of climate change. The remainder of the chapter explores three possible responses to the threat of climate change: having fewer children to reduce the number of people emitting greenhouse gases; taxing greenhouse-gas (GHG) emissions (commonly called a “carbon tax”) to discourage GHG-emitting behavior; and reducing poverty to lessen personal, familial, and community vulnerability to the harms of climate change.
I argue that in addressing worries about the validity and reliability of implicit measures of social cognition, theorists should draw on research concerning “entitativity perception.” In brief, an aggregate of people is perceived as highly “entitative” when its members exhibit a certain sort of unity. For example, think of the difference between the aggregate of people waiting in line at a bank versus a tight-knit group of friends: The latter seems more “groupy” than the former. I start by arguing that entitativity perception modulates the activation of implicit biases and stereotypes. I then argue that recognizing this modulatory role will help researchers to address concerns surrounding the validity and reliability of implicit measures.
A popular response to the Exclusion Argument for physicalism maintains that mental events depend on their physical bases in such a way that the causation of a physical effect by a mental event and its physical base needn’t generate any problematic form of causal overdetermination, even if mental events are numerically distinct from and irreducible to their physical bases. This paper presents and defends a form of dualism that implements this response by using a dispositional essentialist view of properties to argue that the psychophysical laws linking mental events to their physical bases are metaphysically necessary. I show the advantages of such a position over an alternative form of dualism that merely places more “modal weight” on psychophysical laws than on physical laws. The position is then defended against the objection that it is inconsistent with dualism. Lastly, some suggestions are made as to how dualists might clarify the contribution that mental causes make to their physical effects.
Virtue epistemology is among the dominant influences in mainstream epistemology today. An important commitment of one strand of virtue epistemology – responsibilist virtue epistemology (e.g., Montmarquet 1993; Zagzebski 1996; Battaly 2006; Baehr 2011) – is that it must provide regulative normative guidance for good thinking. Recently, a number of virtue epistemologists (most notably Baehr, 2013) have held not only that virtue epistemology can provide regulative normative guidance, but moreover that we should reconceive the primary epistemic aim of all education as the inculcation of the intellectual virtues. Baehr’s picture contrasts with another well-known position – that the primary aim of education is the promotion of critical thinking (Scheffler 1989; Siegel 1988; 1997; 2017). In this paper – which we hold makes a contribution to both philosophy of education and epistemology and, a fortiori, to the epistemology of education – we challenge this picture. We outline three criteria that any putative aim of education must meet and hold that it is the aim of critical thinking, rather than the aim of instilling intellectual virtue, that best meets these criteria. On this basis, we propose a new challenge for intellectual virtue epistemology, next to the well-known empirically-driven ‘situationist challenge’. What we call the ‘pedagogical challenge’ maintains that the intellectual virtues approach lacks a suitably effective pedagogy for the acquisition of intellectual virtue to qualify as the primary aim of education. This is because the pedagogic model of the intellectual virtues approach (borrowed largely from exemplarist thinking) is not properly action-guiding. Instead, we hold that, without much further development in virtue-based theory, logic and critical thinking must still play the primary role in the epistemology of education.
There is a simple but powerful argument against the human practice of raising and killing animals for food (RKF for short). It goes like this: (1) RKF is extremely bad for animals; (2) RKF is only trivially good for human beings; therefore, (3) RKF should be stopped. While many consider this argument decisive, not everyone is convinced. There have been four main lines of objection to it. In this paper, I provide new responses to these four objections.
Agentialist accounts of self-knowledge seek to do justice to the connection between our identities as rational agents and our capacity to know our own minds. There are two strategies that agentialists have employed in developing their position: substantive and non-substantive. My aim is to explicate and defend one particular example of the non-substantive strategy, namely, that proposed by Tyler Burge. In particular, my concern is to defend Burge's claim that critical reasoning requires a relation of normative directness between reviewing and reviewed perspectives. My defence will involve supplementing Burge's view with a substantive agentialist account of self-knowledge.
In this paper, I reconstruct Robert Nozick's experience machine objection to hedonism about well-being. I then explain and briefly discuss the most important recent criticisms that have been made of it. Finally, I question the conventional wisdom that the experience machine, while it neatly disposes of hedonism, poses no problem for desire-based theories of well-being.
The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is especially acute when progress beyond prevailing moral norms is urgent, as is currently the case given the inadequacy of those norms in the face of the climate and ecological crisis.
There are at least two traditional conceptions of numerical degree of similarity. According to the first, the degree of dissimilarity between two particulars is their distance apart in a metric space. According to the second, the degree of similarity between two particulars is a function of the number of (sparse) properties they have in common and not in common. This paper argues that these two conceptions are logically independent, but philosophically inconsonant.
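Stated more explicitly (a conventional formalization, offered as a sketch): on the first conception, dissimilarity is a metric \(d\), i.e., for all particulars \(a, b, c\),

\[
d(a,b) \ge 0, \qquad d(a,b) = 0 \leftrightarrow a = b, \qquad d(a,b) = d(b,a), \qquad d(a,c) \le d(a,b) + d(b,c);
\]

on the second, the degree of similarity is \(s(a,b) = f\bigl(|P_a \cap P_b|,\; |P_a \mathbin{\triangle} P_b|\bigr)\), where \(P_x\) is the set of sparse properties of \(x\) and \(f\) is naturally taken to increase with the number of shared properties and decrease with the number of unshared ones.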
Apocalypse, it seems, is everywhere. Preachers with vast followings proclaim the world's end and apocalyptic fears grip even the non-religious amid climate change, pandemics, and threats of nuclear war. But as these ideas pervade popular discourse, grasping their logic remains elusive. Ben Jones argues that we can gain insight into apocalyptic thought through secular thinkers. He starts with a puzzle: Why would secular thinkers draw on Christian apocalyptic beliefs--often dismissed as bizarre--to interpret politics? The apocalyptic tradition proves appealing in part because it theorizes a special relation between crisis and utopia. Apocalyptic thought points to crisis as the vehicle to bring the previously impossible within reach, thus offering apparent resources for navigating challenges in ideal theory, which tries to imagine the best and most just society. By examining apocalyptic thought's appeal and risks, this study arrives at new insights on the limits of ideal theory and utopian hope.
The idea of using responsibility in the allocation of healthcare resources has been criticized for, among other things, too readily abandoning people who are responsible for being very badly off. One response to this problem is that while responsibility can play a role in resource allocation, it cannot do so if it will leave those who are responsible below a “sufficiency” threshold. This paper considers first whether a view can be both distinctively sufficientarian and allow responsibility to play a role even for those who will be left with very poor health. It then draws several further distinctions that may affect the application of responsibility at this level. We conclude that a more plausible version of the sufficientarian view is to allow a role for responsibility where failure to do so will leave someone else who is not responsible below the sufficiency threshold. However, we suggest that individuals must exhibit “sufficient responsibility” in order for this to apply, involving both a sufficient level of control and an avoidable failure to respond adequately to reasons for action.
Suppose that we think it important that people have the chance to enjoy autonomous lives. An obvious corollary of this thought is that people should, if they want it, have control over the time and manner of their deaths, either by ending their own lives or by securing the help of others in doing so. So, generally, and even if we think overall that the practice should not be legalized on other grounds, it looks like common sense to think that considerations of autonomy tell at least somewhat in favour of legalizing at least some acts of suicide and voluntary euthanasia. In this paper, I argue that, in fact, when we scrutinize the reasons for most end-of-life decisions, it turns out that they are seriously problematic from the point of view of autonomy. Full autonomy requires that we are responsible for the consequences of our decisions, and responsibility is precluded by non-voluntariness, which is to say decisions made because there are no acceptable alternatives. Since most end-of-life decisions are made for precisely this reason, it looks as though most such decisions are non-voluntary, and therefore undermine our autonomy: a discomforting and paradoxical claim. I argue that we should respond by taking the paradox to illuminate the context required by an autonomy-respecting framework for legalizing assisted suicide and euthanasia. People should have a legal right to a reasonable choice about when and how to die. However, this right must go hand in hand with institutions that ensure, as far as possible, that people choose death clear-sightedly and not because nothing else is acceptable.
Why are pains bad for us? A natural answer is that it is just because of how they feel (or their felt-qualities). But this answer is cast into doubt by cases of people who are unbothered by certain pains of theirs. These pains retain their felt-qualities, but do not seem bad for the people in question. In this paper, I offer a new response to this problem. I argue that in such cases, the pains in question have become ‘just more of the same’, and for this reason have ceased to be bad for the relevant individuals. It is because they (implicitly) recognise this that they are unbothered by such pains.
This paper will outline a series of changes in the archaeological record related to hominins. I argue that these changes underlie the emergence of the capacity for strategic thinking. The paper will start by examining the foundation of technical skills found in primates, and then work through various phases of the archaeological and paleontological record. I argue that the key driver for the development of strategic thinking was the need to expand range sizes and cope with increasingly heterogeneous environments.
John Patrick Rudisill purports to identify various problems with my argument that the state promotion of autonomy is consistent with anti-perfectionism, viz., that it falsely pretends to be novel, is unacceptably counterintuitive because too restrictive and too permissive, and that it deploys a self-defeating formal apparatus. I argue, in reply, that my argument is more novel than Rudisill gives me credit for; that properly understood my anti-perfectionism implies neither the implausible restrictions nor the unpalatable permissions that Rudisill claims; and that my formal apparatus is innocent of the flaws imputed to it.
Judith Shklar, David Runciman, and others argue against what they see as excessive criticism of political hypocrisy. Such arguments often assume that communicating in an authentic manner is an impossible political ideal. This article challenges the characterization of authenticity as an unrealistic ideal and makes the case that its value can be grounded in a certain political realism sensitive to the threats posed by representative democracy. First, by analyzing authenticity’s demands for political discourse, I show that authenticity has greater flexibility than many assume in accommodating practices common to politics, such as deception, concealment, and persuasion through rhetoric. Second, I argue that a concern for authenticity in political discourse represents a virtue, not a distraction, for representative democracy. Authenticity takes on heightened importance when the public seeks information on how representatives will act in contexts where the public is absent and unable to influence decisions. Furthermore, given the psychological mechanisms behind hypocrisy, public criticism is a sensible response for trying to limit political hypocrisy. From the perspective of democratic theory and psychology, the public has compelling reasons to value authenticity in political discourse.
Take a strip of paper with ‘once upon a time there’ written on one side and ‘was a story that began’ on the other. Twisting the paper and joining the ends produces John Barth’s story Frame-Tale, which prefixes ‘once upon a time there was a story that began’ to itself. I argue that the ability to understand this sentence cannot be explained by tacit knowledge of a recursive theory of truth in English.
It is often thought that some version of what is generally called the publicity condition is a reasonable requirement to impose on moral theories. In this article, after formulating and distinguishing three versions of the publicity condition, I argue that the arguments typically used to defend them are unsuccessful and, moreover, that even in its most plausible version, the publicity condition ought to be rejected as both question-begging and unreasonably demanding.