In defending his interest-relative account of knowledge in Knowledge and Practical Interests (2005), Jason Stanley relies heavily on intuitions about several bank cases. We experimentally test the empirical claims that Stanley seems to make concerning our common-sense intuitions about these bank cases. Additionally, we test the empirical claims that Jonathan Schaffer seems to make in his critique of Stanley. We argue that our data impugn what both Stanley and Schaffer claim our intuitions about such cases are. To account for these results, one must develop a better conception of the connection between a subject's interests and her body of knowledge than those offered by Stanley and Schaffer.
The epistemology of risk examines how risks bear on epistemic properties. A common framework for examining the epistemology of risk holds that strength of evidential support is best modelled as numerical probability given the available evidence. In this essay I develop and motivate a rival ‘relevant alternatives’ framework for theorising about the epistemology of risk. I describe three loci for thinking about the epistemology of risk. The first locus concerns consequences of relying on a belief for action, where those consequences are significant if the belief is false. The second locus concerns whether beliefs themselves—regardless of action—can be risky, costly, or harmful. The third locus concerns epistemic risks we confront as social epistemic agents. I aim to motivate the relevant alternatives framework as a fruitful approach to the epistemology of risk. I first articulate a ‘relevant alternatives’ model of the relationship between stakes, evidence, and action. I then employ the relevant alternatives framework to undermine the motivation for moral encroachment. Finally, I argue the relevant alternatives framework illuminates epistemic phenomena such as gaslighting, conspiracy theories, and crying wolf, and I draw on the framework to diagnose the undue skepticism endemic to rape accusations.
According to David Lewis’s contextualist analysis of knowledge, there can be contexts in which a subject counts as knowing a proposition just because every possibility that this proposition might be false is irrelevant in those contexts. In this paper I argue that, in some cases at least, Lewis’s analysis results in granting people non-evidentially based knowledge of ordinary contingent truths which, intuitively, cannot be known except on the basis of appropriate evidence.
The main argument given for relevant alternatives theories of knowledge has been that they answer scepticism about the external world. I will argue that relevant alternatives theories also solve two other problems that have been much discussed in recent years: a) the bootstrapping problem and b) the apparent conflict between semantic externalism and armchair self-knowledge. Furthermore, I will argue that scepticism and Mooreanism can be embedded within the relevant alternatives framework.
One approach to knowledge, termed the relevant alternatives theory, stipulates that a belief amounts to knowledge if one can eliminate all relevant alternatives to the belief in the epistemic situation. This paper uses causal graphical models to formalize the relevant alternatives approach to knowledge. On this theory, an epistemic situation is encoded through the causal relationships between propositions, which determine which alternatives are relevant and irrelevant. This formalization entails that statistical evidence is not sufficient for knowledge, provides a simple way to incorporate epistemic contextualism, and can rule out many Gettier cases from knowledge. The interpretation in terms of causal models offers more precise predictions for the relevant alternatives theory, strengthening the case for it as a theory of knowledge.
According to a common conception of legal proof, satisfying a legal burden requires establishing a claim to a numerical threshold. Beyond reasonable doubt, for example, is often glossed as 90% or 95% likelihood given the evidence. Preponderance of evidence is interpreted as meaning at least 50% likelihood given the evidence. In light of problems with the common conception, I propose a new ‘relevant alternatives’ framework for legal standards of proof. Relevant alternatives accounts of knowledge state that a person knows a proposition when their evidence rules out all relevant error possibilities. I adapt this framework to model three legal standards of proof—the preponderance of evidence, clear and convincing evidence, and beyond reasonable doubt standards. I describe virtues of this framework. I argue that, by eschewing numerical thresholds, the relevant alternatives framework avoids problems inherent to rival models. I conclude by articulating aspects of legal normativity and practice illuminated by the relevant alternatives framework.
P. Kyle Stanford defends the problem of unconceived alternatives, which maintains that scientists are unlikely to conceive of all the scientifically plausible alternatives to the theories they accept. Stanford’s argument has been criticized on the grounds that the failure of individual scientists to conceive of relevant alternatives does not entail the failure of science as a corporate body to do so. I consider two replies to this criticism and find both lacking. In the process, I argue that Stanford does not provide evidence that there are likely scientifically plausible unconceived alternatives to scientific theories accepted now and in the future.
Probabilistic theories of “should” and “ought” face a predicament. At first blush, it seems that such theories must provide different lexical entries for the epistemic and the deontic interpretations of these modals. I show that there is a new style of premise semantics that can avoid this consequence in an attractively conservative way.
This paper aims to provide a unifying approach to the analysis of understanding coherencies (interrogative understanding, e.g. understanding why something is the case) and understanding subject matters (objectual understanding) by highlighting the contextualist nature of understanding. Inspired by the relevant alternatives contextualism about knowledge, I will argue that understanding (in the above-mentioned sense) inherently has context-sensitive features and that a theory of understanding that highlights those features can incorporate our intuitions towards understanding as well as consolidate the different accounts of how to analyse understanding. In developing a contextualist account of understanding, I will argue that an account of the features commonly taken to be central to understanding greatly benefits from a contextualist framework. Central to my analysis will be the claim that a person has to fulfil the function of a competent problem solver in order to qualify for the ascription of understanding. In addition to the theoretical elucidation of my contextualist approach to understanding, a demanding hypothetical scenario will be developed to function as a test case.
This article addresses and resolves an epistemological puzzle that has attracted much attention in the recent literature—namely, the puzzle arising from Moorean anti-sceptical reasoning and the phenomenon of transmission failure. The paper argues that an appealing account of Moorean reasoning can be given by distinguishing carefully between two subtly different ways of thinking about justification and evidence. Once the respective distinctions are in place we have a simple and straightforward way to model both the Wrightean position of transmission failure and the Moorean position of dogmatism. The approach developed in this article is, accordingly, ecumenical in that it allows us to embrace two positions that are widely considered to be incompatible. The paper further argues that the Moorean Puzzle can be resolved by noting the relevant distinctions and our insensitivity towards them: once we carefully tease apart the different senses of ‘justified’ and ‘evidence’ involved, the bewilderment caused by Moore’s anti-sceptical strategy subsides.
In this paper, I argue that morality might bear on belief in at least two conceptually distinct ways. The first is that morality might bear on belief by bearing on questions of justification. The claim that it does is the doctrine of moral encroachment. The second is that morality might bear on belief given the central role belief plays in mediating and thereby constituting our relationships with one another. The claim that it does is the doctrine of doxastic wronging. Though conceptually distinct, the two doctrines overlap in important ways. This paper provides clarification on the relationship between the two, providing reasons throughout that we should accept both.
This essay deals with a selected part of an epistemological controversy provided by Tūsī in response to the skeptical arguments reported by Rāzī that is related to what might be called "intellectual skepticism," or skepticism regarding the judgments of the intellect, particularly in connection with self-evident principles. It will be shown that Rāzī has cited and exposed a position that seems to be no less than a medieval version of empiricism. Tūsī, in contrast, has presented us with a position that rejects such empiricism. The comparative aim of this essay is to draw attention to some similarities as well as some points of divergence between the kind of skeptical debate we are focusing on here, and some relevant epistemological discussions in the later traditions in the West.
I’m going to argue for a set of restricted skeptical results: roughly put, we don’t know that fire engines are red, we don’t know that we sometimes have pains in our lower backs, we don’t know that John Rawls was kind, and we don’t even know that we believe any of those truths. However, people unfamiliar with philosophy and cognitive science do know all those things. The skeptical argument is traditional in form: here’s a skeptical hypothesis; you can’t epistemically neutralize it; you have to be able to neutralize it to know P; so you don’t know P. But the skeptical hypotheses I plug into it are “real, live” scientific-philosophical hypotheses often thought to be actually true, unlike any of the outrageous traditional skeptical hypotheses (e.g., ‘You’re a brain in a vat’). So I call the resulting skepticism Live Skepticism. Notably, the Live Skeptic’s argument goes through even if we adopt the clever anti-skeptical fixes thought up in recent years such as reliabilism, relevant alternatives theory, contextualism, and the rejection of epistemic closure. Furthermore, the scope of Live Skepticism is bizarre: although we don’t know the simple facts noted above, many of us do know that there are black holes and other amazing facts.
This paper concerns the semantic difference between strong and weak necessity modals. First we identify a number of explananda: the well-known intuitive difference in strength between ‘must’ and ‘ought’, as well as differences in connections to probabilistic considerations and acts of requiring and recommending. Here we argue that important extant analyses of the semantic differences, though tailored to account for some of these aspects, fail to account for all. We proceed to suggest that the difference between ‘ought’ and ‘must’ lies in how they relate to scalar and binary standards. Briefly put, must(φ) says that among the relevant alternatives, φ is selected by the relevant binary standard, whereas ought(φ) says that among the relevant alternatives, φ is selected by the relevant scale. Given independently plausible assumptions about how standards are provided by context, this explains the relevant differences discussed.
Common sense suggests that if a war is unjust, then there is a strong moral reason not to contribute to it. I argue that this presumption is mistaken. It can be permissible to contribute to an unjust war because, in general, whether it is permissible to perform an act often depends on the alternatives available to the actor. The relevant alternatives available to a government waging a war differ systematically from the relevant alternatives available to individuals in a position to contribute to the war. Hence the conditions determining whether it is permissible for a government to wage a war often differ from the conditions determining whether it is permissible for others to promote that war. This difference is manifest most often in unjust wars with putatively humanitarian aims—an increasingly common type of war.
The most prominent arguments for scepticism in modern epistemology employ closure principles of some kind. To begin my discussion of such arguments, consider Simple Knowledge Closure (SKC): (SKC) (Kxt[p] ∧ (p → q)) → Kxt[q]. Assuming its truth for the time being, the sceptic can use (SKC) to reason from the two assumptions that, firstly, we don’t know ¬sh and that, secondly, op entails ¬sh to the conclusion that we don’t know op, where ‘op’ and ‘sh’ are shorthand for ‘ordinary proposition’ and ‘sceptical hypothesis’ respectively. (SKC), however, fails for familiar reasons: since knowledge entails belief (KB), we can derive the falsity (F) from (SKC) by hypothetical syllogism, and thus reduce (SKC) to absurdity: (KB) Kxt[p] → Bxt[p]. (F) (Kxt[p] ∧ (p → q)) → Bxt[q].
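The hypothetical syllogism step here is short enough to check mechanically. As an illustration only (not part of the paper), writing K p for Kxt[p] and B p for Bxt[p] with the subject and time held fixed, the derivation of (F) from (SKC) and (KB) can be stated in Lean:

```lean
-- Sketch under the stated abbreviations: K and B are opaque
-- predicates on propositions standing in for Kxt[·] and Bxt[·].
variable (K B : Prop → Prop)

-- Hypothetical syllogism: chain (SKC) into (KB) to obtain (F).
theorem skc_reduces_to_F
    (SKC : ∀ p q : Prop, K p ∧ (p → q) → K q)
    (KB  : ∀ p : Prop, K p → B p) :
    ∀ p q : Prop, K p ∧ (p → q) → B q :=
  fun p q h => KB q (SKC p q h)
```

The proof term simply composes the two principles: (SKC) yields K q from the antecedent, and (KB) takes K q to B q, which is exactly the absurd (F).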
The paper takes as its starting point the observation that people can be led to retract knowledge claims when presented with previously ignored error possibilities, but offers a noncontextualist explanation of the data. Fallibilist epistemologies are committed to the existence of two kinds of Kp-falsifying contingencies: (i) Non-Ignorable contingencies [NI-contingencies] and (ii) Properly-Ignorable contingencies [PI-contingencies]. For S to know that p, S must be in an epistemic position to rule out all NI-contingencies, but she need not be able to rule out the PI-contingencies. What is required vis-à-vis PI-contingencies is that they all be false. In mentioning PI-contingencies, an interlocutor can lead S mistakenly to think that these contingencies are NI-contingencies, when in fact they are not. Since S cannot rule out these newly mentioned contingencies and since she mistakenly takes them to be NI-contingencies, it is quite natural that she retract her earlier knowledge claim. In short, mentioning PI-contingencies creates a distortion effect. It makes S think that the standards for knowledge are higher than they actually are, which in turn explains why she mistakenly thinks she lacks knowledge. Conclusion: The primary linguistic data offered in support of contextualism can be explained without resorting to contextualism.
A prominent type of scientific realism holds that some important parts of our best current scientific theories are at least approximately true. According to such realists, radically distinct alternatives to these theories or theory-parts are unlikely to be approximately true. Thus one might be tempted to argue, as the prominent anti-realist Kyle Stanford recently did, that realists of this kind have little or no reason to encourage scientists to attempt to identify and develop theoretical alternatives that are radically distinct from currently accepted theories in the relevant respects. In other words, it may seem that realists should recommend that scientists be relatively conservative in their theoretical endeavors. This paper aims to show that this argument is mistaken. While realists should indeed be less optimistic about finding radically distinct alternatives to replace current theories, realists also have greater reasons to value the outcomes of such searches. Interestingly, this holds both for successful and failed attempts to identify and develop such alternatives.
I investigate when we can (rationally) have attitudes, and when we cannot. I argue that a comprehensive theory must explain three phenomena. First, being related by descriptions or names to a proposition one has strong reason to believe is true does not guarantee that one can rationally believe that proposition. Second, such descriptions, etc. do enable individuals to rationally have various non-doxastic attitudes, such as hope and admiration. And third, even for non-doxastic attitudes like that, not just any description will allow it. I argue that we should think of attitude formation like we do (practical) choices among options. I motivate this view linguistically, extending ‘relevant alternatives’ theories of the attitudes to both belief and the other, non-doxastic attitudes. Given a natural principle governing choice, and some important differences between doxastic and non-doxastic ‘choices’, we can explain these puzzling phenomena.
According to the ‘grammatical account’, scalar implicatures are triggered by a covert exhaustification operator present in logical form. This account covers considerable empirical ground, but there is a peculiar pattern that resists treatment given its usual implementation. The pattern centers on odd assertions like #"Most lions are mammals" and #"Some Italians come from a beautiful country", which seem to trigger implicatures in contexts where the enriched readings conflict with information in the common ground. Magri (2009, 2011) argues that, to account for these cases, the basic grammatical approach has to be supplemented with the stipulations that exhaustification is obligatory and is based on formal computations which are blind to information in the common ground. In this paper, I argue that accounts of oddness should allow for the possibility of felicitous assertions that call for revision of the common ground, including explicit assertions of unusual beliefs such as "Most but not all lions are mammals" and "Some but not all Italians come from Italy". To adequately cover these and similar cases, I propose that Magri's version of the grammatical account should be refined with the novel hypothesis that exhaustification triggers a bifurcation between presupposed (the negated relevant alternatives) and at-issue (the prejacent) content. The explanation of the full oddness pattern, including cases of felicitous proposals to revise the common ground, follows from the interaction between presupposed and at-issue content with an independently motivated constraint on accommodation. Finally, I argue that treating the exhaustification operator as a presupposition trigger helps solve various independent puzzles faced by extant grammatical accounts, and motivates a substantial revision of standard accounts of the overt exhaustifier "only".
According to Stephen Finlay, ‘A ought to X’ means that X-ing is more conducive to contextually salient ends than relevant alternatives. This in turn is analysed in terms of probability. I show why this theory of ‘ought’ is hard to square with a theory of a reason’s weight which could explain why ‘A ought to X’ logically entails that the balance of reasons favours that A X-es. I develop two theories of weight to illustrate my point. I first look at the prospects of a theory of weight based on expected utility theory. I then suggest a simpler theory. Although neither allows that ‘A ought to X’ logically entails that the balance of reasons favours that A X-es, this price may be accepted. For there remains a strong pragmatic relation between these claims.
Semantic externalism holds that the content of at least some of our thoughts is partly constituted by external factors. Accordingly, it leads to the unintuitive consequence that we must then often be mistaken in what we are thinking, and any kind of claim of privileged access must be given up. Those who deny that semantic externalists can retain any account of self-knowledge are ‘incompatibilists’, while those who defend the compatibility of self-knowledge with semantic externalism are ‘compatibilists’. This paper examines the claim of compatibilism, focusing on Burge’s “Slow Switching Argument” and Boghossian’s “Objection of Relevant Alternatives”. I argue that compatibilism is false, and that semantic externalism is incompatible with self-knowledge.
Carter and Pritchard (2016) and Pritchard (2010, 2012, 2016) have tried to reconcile the intuition that perceptual knowledge requires only limited discriminatory abilities with the closure principle. To this end, they have introduced two theoretical innovations: a contrast between two ways of introducing error-possibilities and a distinction between discriminating and favoring evidence. I argue that their solution faces the “sufficiency problem”: it is unclear whether the evidence that is normally available to adult humans is sufficient to retain knowledge of the entailing proposition and come to know the entailed proposition. I submit that, on either infallibilist or fallibilist views of evidence, Carter and Pritchard have set the bar for deductive knowledge too low. At the end, I offer an alternative solution. I suggest that the knowledge-retention condition of the closure principle is not satisfied in zebra-like scenarios.
One of the most frequently voiced criticisms of free will skepticism is that it is unable to adequately deal with criminal behavior and that the responses it would permit as justified are insufficient for acceptable social policy. This concern is fueled by two factors. The first is that one of the most prominent justifications for punishing criminals, retributivism, is incompatible with free will skepticism. The second concern is that alternative justifications that are not ruled out by the skeptical view per se face significant independent moral objections. Yet despite these concerns, I maintain that free will skepticism leaves intact other ways to respond to criminal behavior—in particular preventive detention, rehabilitation, and alteration of relevant social conditions—and that these methods are both morally justifiable and sufficient for good social policy. The position I defend is similar to Derk Pereboom’s, taking as its starting point his quarantine analogy, but it sets out to develop the quarantine model within a broader justificatory framework drawn from public health ethics. The resulting model—which I call the public health-quarantine model—provides a framework for justifying quarantine and criminal sanctions that is more humane than retributivism and preferable to other non-retributive alternatives. It also provides a broader approach to criminal behavior than Pereboom’s quarantine analogy does on its own.
This paper proposes that the question “What should I believe?” is to be answered in the same way as the question “What should I do?,” a view I call Equal Treatment. After clarifying the relevant sense of “should,” I point out advantages that Equal Treatment has over both simple and subtle evidentialist alternatives, including versions that distinguish what one should believe from what one should get oneself to believe. I then discuss views on which there is a distinctively epistemic sense of should. Next I reply to an objection which alleges that non-evidential considerations cannot serve as reasons for which one believes. I then situate Equal Treatment in a broader theoretical framework, discussing connections to rationality, justification, knowledge, and theoretical vs. practical reasoning. Finally, I show how Equal Treatment has important implications for a wide variety of issues, including the status of religious belief, philosophical skepticism, racial profiling and gender stereotyping, and certain issues in psychology, such as depressive realism and positive illusions.
The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible assumptions, merely choosing to use a self-driving vehicle typically gives rise to a degree of liability, so that such vehicles should be programmed to shift the risk from bystanders to users, other things being equal. Insofar as vehicles cannot be programmed to take all the factors affecting liability into account, there is a pro tanto moral reason not to introduce them, or to restrict their use.
Past work has demonstrated that people’s moral judgments can influence their judgments in a number of domains that might seem to involve straightforward matters of fact, including judgments about freedom, causation, the doing/allowing distinction, and intentional action. The present studies explore whether the effect of morality in these four domains can be explained by changes in the relevance of alternative possibilities. More precisely, we propose that moral judgment influences the degree to which people regard certain alternative possibilities as relevant, which in turn impacts intuitions about freedom, causation, doing/allowing, and intentional action. Employing the stimuli used in previous research, Studies 1a, 2a, 3a, and 4a show that the relevance of alternatives is influenced by moral judgments and mediates the impact of morality on non-moral judgments. Studies 1b, 2b, 3b, and 4b then provide direct empirical evidence for the link between the relevance of alternatives and judgments in these four domains by manipulating (rather than measuring) the relevance of alternative possibilities. Lastly, Study 5 demonstrates that the critical mechanism is not whether alternative possibilities are considered, but whether they are regarded as relevant. These studies support a unified framework for understanding the impact of morality across these very different kinds of judgments.
A number of Bayesians claim that, if one has no evidence relevant to a proposition P, then one's credence in P should be spread over the interval [0, 1]. Against this, I argue: first, that it is inconsistent with plausible claims about comparative levels of confidence; second, that it precludes inductive learning in certain cases. Two motivations for the view are considered and rejected. A discussion of alternatives leads to the conjecture that there is an in-principle limitation on formal representations of belief: they cannot be both fully accurate and maximally specific.
Much recent philosophical work on social freedom focuses on whether freedom should be understood as non-interference, in the liberal tradition associated with Isaiah Berlin, or as non-domination, in the republican tradition revived by Philip Pettit and Quentin Skinner. We defend a conception of freedom that lies between these two alternatives: freedom as independence. Like republican freedom, it demands the robust absence of relevant constraints on action. Unlike republican, and like liberal freedom, it is not moralized. We show that freedom as independence retains the virtues of its liberal and republican counterparts while shedding their vices. Our aim is to put this conception of freedom more firmly on the map and to offer a novel perspective on the logical space in which different conceptions of freedom are located.
Epistemic contextualists think that the truth-conditions of ‘knowledge’ ascriptions depend in part on the context in which they are uttered. But what features of context play a role in determining truth-conditions? The idea that the making salient of error possibilities is a central part of the story has often been attributed to contextualists, and a number of contextualists seem to endorse it (see Cohen (Philos Perspect, 13:57–89, 1999) and Hawthorne (Knowledge and Lotteries, Oxford University Press, Oxford, 2004)). In this paper I argue that the focus on salience relations is a mistake. On the view I defend, the relevant features of context are facts about what error-possibilities and alternatives those in the context have a reason to consider, not facts about what error-possibilities and alternatives those in the context actually consider. As I will argue, this view has certain advantages over the standard view.
I develop an epistemic focal bias account of certain patterns of judgments about knowledge ascriptions by integrating it with a general dual process framework of human cognition. According to the focal bias account, judgments about knowledge ascriptions are generally reliable but systematically fallible because the cognitive processes that generate them are affected by what is in focus. I begin by considering some puzzling patterns of judgments about knowledge ascriptions and sketch how a basic focal bias account seeks to account for them. In doing so, I argue that the basic focal bias account should be integrated in a more general framework of human cognition. Consequently, I present some central aspects of a prominent general dual process theory of human cognition and discuss how focal bias may figure at various levels of processing. On the basis of this discussion, I attempt to categorize the relevant judgments about knowledge ascriptions. Given this categorization, I argue that the basic epistemic focal bias account of certain contrast effects and salient alternatives effects can be plausibly integrated with the dual process framework. Likewise, I try to explain the absence of strong intuitions in cases of far-fetched salient alternatives. By way of conclusion, I consider some methodological issues concerning the relationship between cognitive psychology, experimental data and epistemological theorizing.
Technology is a practically indispensable means for satisfying one’s basic interests in all central areas of human life including nutrition, habitation, health care, entertainment, transportation, and social interaction. It is impossible for any one person, even a well-trained scientist or engineer, to know enough about how technology works in these different areas to make a calculated choice about whether to rely on the vast majority of the technologies she/he in fact relies upon. Yet, there are substantial risks, uncertainties, and unforeseen practical consequences associated with the use of technological artifacts and systems. The salience of technological failure (both catastrophic and mundane), as well as technology’s sometimes unforeseeable influence on our behavior, makes it relevant to wonder whether we are really justified as individuals in our practical reliance on technology. Of course, even if we are not justified, we might nonetheless continue in our technological reliance, since the alternatives might not be attractive or feasible. In this chapter I argue that a conception of trust in technological artifacts and systems is plausible and helps us understand what is at stake philosophically in our reliance on technology. Such an account also helps us understand the relationship between trust and technological risk and the ethical obligations of those who design, manufacture, and deploy technological artifacts.
One of the most frequently voiced criticisms of free will skepticism is that it is unable to adequately deal with criminal behavior and that the responses it would permit as justified are insufficient for acceptable social policy. This concern is fueled by two factors. The first is that one of the most prominent justifications for punishing criminals, retributivism, is incompatible with free will skepticism. The second concern is that alternative justifications that are not ruled out by the skeptical view per se face significant independent moral objections (Pereboom 2014: 153). Despite these concerns, I maintain that free will skepticism leaves intact other ways to respond to criminal behavior—in particular incapacitation, rehabilitation, and alteration of relevant social conditions—and that these methods are both morally justifiable and sufficient for good social policy. The position I defend is similar to Derk Pereboom’s (2001, 2013, 2014), taking as its starting point his quarantine analogy, but it sets out to develop the quarantine model within a broader justificatory framework drawn from public health ethics. The resulting model—which I call the public health-quarantine model (Caruso 2016, 2017a)—provides a framework for justifying quarantine and criminal sanctions that is more humane than retributivism and preferable to other non-retributive alternatives. It also provides a broader approach to criminal behavior than Pereboom’s quarantine analogy does on its own since it prioritizes prevention and social justice. In Section 1, I begin by (very) briefly summarizing my arguments against free will and basic desert moral responsibility. In Section 2, I then introduce and defend my public health-quarantine model, which is a non-retributive alternative to criminal punishment that prioritizes prevention and social justice. In Sections 3 and 4, I take up and respond to two general objections to the public health-quarantine model.
Since objections by Michael Corrado (2016), John Lemos (2016), Saul Smilansky (2011, 2017), and Victor Tadros (2017) have been addressed in detail elsewhere (see Pereboom 2017a; Pereboom and Caruso 2018), I will here focus on objections that have not yet been addressed. In particular, I will respond to concerns about proportionality, human dignity, and victims’ rights. I will argue that each of these concerns can be met and that, in the end, the public health-quarantine model offers a superior alternative to retributive punishment and other non-retributive accounts.
John Taurek has argued that, where choices must be made between alternatives that affect different numbers of people, the numbers are not, by themselves, morally relevant. This is because we "must" take "losses-to" the persons into account (and these don't sum), but "must not" consider "losses-of" persons (because we must not treat persons like objects). I argue that the numbers are always ethically relevant, and that they may sometimes be the decisive consideration.
Mizrahi’s argument against Stanford’s challenge to scientific realism is analyzed. Mizrahi’s argument is worthy of attention for at least two reasons. First, unlike other criticisms that have been made of Stanford’s view so far, Mizrahi’s argument does not question any specific claim of Stanford’s argument; rather, it puts into question the very coherence of Stanford’s position, because it argues that since Stanford’s argument rests on the problem of the unconceived alternatives, Stanford’s argument is self-defeating. Thus, if Mizrahi’s argument is effective in countering Stanford’s view, it may be able to question the validity of other philosophical positions which similarly rest on the problem of the unconceived alternatives. Second, Mizrahi’s argument against Stanford’s view is in part based on the development of a Stanford-like argument for the field of philosophy. This makes Mizrahi’s argument potentially relevant to the metaphilosophical debate. After careful examination, Mizrahi’s argument against Stanford’s instrumentalism is found wanting. Moreover, a Stanford-like argument is developed, which aims at challenging the metaphilosophical stance implied by Mizrahi’s argument against Stanford’s instrumentalism.
In his critical notice entitled ‘An Improved Whole Life Satisfaction Theory of Happiness?’, focusing on my article previously published in this journal, Fred Feldman raises an important objection to a suggestion I made about how best to formulate the whole life satisfaction theories of happiness. According to my proposal, happiness is a matter of whether an idealised version of you would judge that your actual life corresponds to the life-plan which he or she has constructed for you on the basis of your cares and concerns. Feldman argues that either the idealised version will include in the relevant life-plan only actions that are possible for you to do, or he or she will also include actions and outcomes that are not available to you in the real world. He then uses examples to argue that both of these alternatives have implausible consequences. In response to this objection, I argue that what is included in the relevant life-plan depends on what you most fundamentally desire and that this constraint is enough to deal with Feldman’s new cases.
Our concept of choice is integral to the way we understand others and ourselves, especially when considering ourselves as free and responsible agents. Despite the importance of this concept, there has been little empirical work on it. In this paper we report four experiments that provide evidence for two concepts of choice—namely, a concept of choice that is operative in the phrase ‘having a choice’ and another that is operative in the phrase ‘making a choice’. The experiments indicate that the two concepts of choice can be differentiated from each other on the basis of the kind of alternatives to which each is sensitive. The results indicate that the folk concept of choice is more nuanced than has been assumed. This new, empirically informed understanding of the folk concept of choice has important implications for debates concerning free will, responsibility, and other debates spanning psychology and philosophy.

Specifically, ‘having a choice’ appears to require genuinely open alternatives, or alternative possibilities that are actually realizable, while ‘making a choice’ appears to require only psychologically open alternatives, or the ability to consider alternatives independent of whether these alternatives are actually realizable. We argue that these findings are relevant to the free will debate because choice is central to the folk concept of free will and to many philosophical analyses of free will. The kinds of alternatives required for having a choice appear to be incompatibilist in nature, while the kinds of alternatives required for making a choice appear to be compatibilist in nature. If free will requires having choices, then this is perhaps evidence against compatibilism. If free will requires making choices, then this is perhaps evidence in favor of, or at least consistent with, compatibilism.
This paper examines the moral force of exploitation in developing world research agreements. Taking for granted that some clinical research which is conducted in the developing world but funded by developed world sponsors is exploitative, it asks whether a third party would be morally justified in enforcing limits on research agreements in order to ensure more fair and less exploitative outcomes. This question is particularly relevant when such exploitative transactions are entered into voluntarily by all relevant parties, and both research sponsors and host communities benefit from the resulting agreements. I show that defenders of the claim that exploitation ought to be permitted rely on a mischaracterization of certain forms of interference as unjustly paternalistic and on two dubious empirical assumptions about the results of regulation. The view I put forward is that by evaluating a system of constraints on international research agreements, rather than individual transaction-level interference, we can better assess the alternatives to permitting exploitative research agreements.
I argue here for a view I call epistemic separabilism (ES), which states that there are two different ways we can be evaluated epistemically when we assert a proposition or treat a proposition as a reason for acting: one in terms of whether we have adhered to or violated the relevant epistemic norm, and another in terms of how epistemically well-positioned we are towards the fact that we have either adhered to or violated said norm. ES has been appealed to most prominently in order to explain why epistemic evaluations that conflict with the knowledge norm of assertion and practical reasoning nevertheless seem correct. Opponents of such a view are committed to what I call epistemic monism (EM), which states that there is only one way we can be properly evaluated as epistemically appropriate asserters and practical reasoners, namely in terms of whether we have adhered to or violated the relevant norm. Accepting ES over EM has two significant consequences: first, a “metaepistemological” consequence that the structure of normative epistemic evaluations parallels that found in other normative areas, and second, that the knowledge norms of assertion and practical reasoning are no worse off than any alternatives in terms of either explanatory power or simplicity.
The Problem of Nearly Convergent Knowledge is an updated and stronger version of the Problem of Convergent Knowledge, which presents a problem for the traditional, binary view of knowledge in which knowledge is a two-place relation between a subject and the known proposition. The problem supports Knowledge Contrastivism, the view that knowledge is a three-place relation between a subject, the known proposition, and a proposition that disjoins the alternatives relevant to what the subject knows. For example, if knowledge is contrastive, I do not simply know that the bird in front of me is a goldfinch; instead, I know that the bird in front of me is a goldfinch rather than a raven or eagle or falcon. There is, however, a binary view of knowledge that overcomes even the Problem of Nearly Convergent Knowledge. I will give this binary view, show that it is motivated by the same considerations that motivate Knowledge Contrastivism, and argue that it avoids problematic consequences for our epistemic lives that Knowledge Contrastivism cannot.
Thom Brooks's Hegel's Political Philosophy: A Systematic Reading of the Philosophy of Right presents a very clear and methodologically self-conscious series of discussions of key topics within Hegel's classic text. As one might expect for a ‘systematic’ reading, the main body of Brooks's text commences with an opening chapter on Hegel's system. Then follow seven chapters, the topics of which are encountered sequentially as one reads through the Philosophy of Right. Brooks's central claim is that too often Hegel's theories or views on any of these topics are misunderstood because of a tendency to isolate the relevant passages from the encompassing structure of the Philosophy of Right itself, and, in turn, from Hegel's system of philosophy as a whole, with its logical underpinnings. Brooks is clearly right in holding that Hegel had intended the Philosophy of Right to be read against the background of ‘the system’ and the ‘logic’ articulating it—nobody doubts that—but there is a further substantive issue here. Should contemporary readers heed Hegel's advice? Brooks's answer is emphatically in the affirmative, and what results is a series of illuminating discussions in which he makes a case for his own interpretations on the basis of systematic considerations, presented against a range of alternatives taken from the contemporary secondary literature, which is amply covered, often in the extensive endnotes to the book.
The social welfare functional approach to social choice theory fails to distinguish a genuine change in individual well-beings from a merely representational change due to the use of different measurement scales. A generalization of the concept of a social welfare functional is introduced that explicitly takes account of the scales that are used to measure well-beings so as to distinguish between these two kinds of changes. This generalization of the standard theoretical framework results in a more satisfactory formulation of welfarism, the doctrine that social alternatives are evaluated and socially ranked solely in terms of the well-beings of the relevant individuals. This scale-dependent form of welfarism is axiomatized using this framework. The implications of this approach for characterizing classes of social welfare orderings are also considered.
In an earlier discussion, I argued that Kant's moral theory satisfies some of the basic criteria for being a genuine theory: it includes testable hypotheses, nomological higher- and lower-level laws, theoretical constructs, internal principles, and bridge principles. I tried to show that Kant's moral theory is an ideal, descriptive deductive-nomological theory that explains the behavior of a fully rational being and generates testable hypotheses about the moral behavior of actual agents whom we initially assume to conform to its theoretical constructs. I argued that the moral "ought" is best understood as the "ought" of tentative prediction expressed in the range of uses of the German sollen; and that the degree to which such a theory is well-confirmed is a function of the degree to which we actually judge individual human agents, on a case-by-case basis, to be motivated by rationality, stupidity, or moral corruption in their actions. I assume that a similar case could be made for other major contenders, such as Utilitarianism or Aristotelianism. But there still remains unanswered the question of which of these theories is the best among the available alternatives. To answer this question, further criteria of selection must be invoked. Among these are structural elegance and explanatory simplicity, but even these do not exhaust the desiderata for an adequate moral theory. More pressing in the case of moral theory is the requirement that the theory enable us to understand all the available data of moral experience; that the theory be sufficiently inclusive that in the formulation of its descriptive laws and practical principles, it be capable of identifying as morally significant all the behavior to which moral praise, condemnation, or acquittal is a relevant and appropriate response.
A Kuhnian reformulation of the recent debate in psychiatric nosography suggested that the current psychiatric classification system (the DSM) is in crisis and that a sort of paradigm shift is awaited (Aragona, 2009). Among possible revolutionary alternatives, the proposed five-axes etiopathogenetic taxonomy (Charney et al., 2002) emphasizes the primacy of the genotype over the phenomenological level as the relevant basis for psychiatric nosography. Such a position is along the lines of the micro-reductionist perspective of E. Kandel (1998, 1999), which sees mental disorders as reducible to explanations at a fundamental epistemic level of genes and neurotransmitters. This form of micro-reductionism has been criticized as a form of genetic-molecular fundamentalism (e.g. Murphy, 2006), and a multi-level approach, in the form of the burgeoning Cognitive Neuropsychiatry, was proposed. This article focuses on multi-level mechanistic explanations, coming from Cognitive Science, as a possible alternative etiopathogenetic basis for psychiatric classification. The idea of a mechanistic approach to psychiatric taxonomy is here defended on the basis of a better conception of levels and causality. Nevertheless, some critical remarks on Mechanism as a general psychiatric view are also offered.
Income inequality in democratic societies with market economies is sizable and growing. One reason for this growth can be traced to unequal forms of compensation that employers pay workers. Democratic societies have tackled this problem by enforcing a wage standard paid to all workers regardless of education, skills, or contribution. This raises a novel question: Should there be equal pay for all workers? To answer it, we need to investigate some factors that are relevant to the unequal conditions of power and authority in which wage offers are made. By clarifying these, we can determine whether wage inequality is morally permissible. If not, then a case might be made to pay all workers the same regardless of education, skills, or contribution. Even if it is permissible, another question worth considering is whether there are limits to how much inequality is acceptable. The argument here proceeds along the following lines. First, I summarize the economic and non-economic factors that determine the value of wages in labor markets. Second, I examine a particular problem that concerns whether the conditions of wage labor are coercive because they restrict alternatives or otherwise include threats to the welfare of workers. If there is coercion, we have good reasons to establish a standard that improves these conditions. Finally, I claim that establishing this standard requires increasing the value of low-wage work. Doing so will not only expand the alternatives that are available to these workers, it will also diminish the potential threat to their welfare.
Is the use of animals in undergraduate education ethically justifiable? One way to answer this question is to focus on the factors relevant to those who serve on Institutional Animal Care and Use Committees. An analysis of the debate surrounding the practice of dissection at the undergraduate level helps shed light on these issues. Settling that debate hinges on claims about the kind of knowledge gained from dissection and other “hands-on” kinds of experiences, and whether such knowledge is needed to meet educational goals. Most undergraduate courses will probably lack a sufficient justification for the use of animals for dissection, since the educational goals can be met with non-animal alternatives. In addition, there are some general guidelines that can be extrapolated from this debate, which should be of assistance in deciding whether a sufficient justification has been given for animal use. One guideline is that justifications for animal use should require demonstrating that the use adds value to the educational experience in a way that is directly tied to the course objectives. Furthermore, the use of animals should not be simply built into the objectives of a course in such a way that the use is merely assumed, with no actual justification provided. One must beware of putative standards for justification that would fail to rule out any possible animal use.
According to contractualist theories in ethics, whether an action is wrong is determined by whether it could be justified to others on grounds no one could reasonably reject. Contractualists then think that the reasonable rejectability of principles depends on the strength of the personal objections individuals can make to them. There is, however, a deep disagreement between contractualists concerning the temporal perspective from which the relevant objections to different principles are to be made. Are they to be made on the basis of the prospects the principles give to different individuals ex ante, or on the basis of the outcomes of the principles ex post? Both answers have been found to be problematic. The ex ante views make irrelevant information about personal identity morally significant and lead to objectionable ex ante rules, whereas ex post views lead to counterintuitive results in the so-called different harm and social risk imposition cases. The aim of this article is to provide a new synthesis of these views that can avoid the problems of the previous alternatives. I call the proposal ‘risk-acknowledging’ ex post contractualism. The crux of the view is to take into account, in the comparisons of different objections, both the realized harms and the risks under which individuals have to live.
Elsewhere we have responded to the so-called demandingness objection to consequentialism – that consequentialism is excessively demanding and is therefore unacceptable as a moral theory – by introducing the theoretical position we call institutional consequentialism. This is a consequentialist view that, however, requires institutional systems, and not individuals, to follow the consequentialist principle. In this paper, we first introduce and explain the theory of institutional consequentialism and the main reasons that support it. In the remainder of the paper, we turn to the global dimension, where the first and foremost challenge is to explain how institutional consequentialism can deal with unsolved global problems such as poverty, war and climate change. In response, following the general idea of institutional consequentialism, we draw up three alternative routes: relying on existing national, transnational and supranational institutions; promoting gradual institutional reform; and advocating radical changes to the status quo. We evaluate these routes by describing normatively relevant properties of the existing global institutional system, as well as by showing what institutional consequentialism can say about alternatives to it: a world government; and a multi-layered sovereignty/neo-medieval system.
Causal Determinism (CD) entails that all of a person’s choices and actions are nomically related to events in the distant past, the approximate, but lawful, consequences of those occurrences. Assuming that history cannot be undone nor those (natural) relations altered, that whatever results from what is inescapable is itself inescapable, and the contrariety of inevitability and freedom, it follows that we are completely devoid of liberty: our choices are not freely made; our actions are not freely performed. Instead of disputing the soundness of this reasoning, some philosophers prefer to maintain that we could yet have a small measure of freedom were CD true of our world: although being unable to choose or act differently, one could at least under normal circumstances truly claim to be acting ‘on one’s own’, beyond the control of ‘outside forces’, in a word, autonomous. They further argue that being free in this sense suffices for moral responsibility. Call their philosophy ‘Autonomy Compatibilism’ (AC).

In adopting reactive attitudes here towards an agent, one is choosing to highlight the fact that the individual in question is of sound mind, reasoning and acting free from the interference of others. These facts alone, the adherent of AC claims, justify his stance, despite the necessity of the agent’s choices. Why would we not regard a sane individual who is not being coerced, intimidated, deceived or unduly put upon as in charge of his life so as to be responsible for his activities?

The Manipulation Argument (MA) is supposed to cut off this line of retreat. Its authors hold that, were CD true of our world, we would be no more autonomous than a victim of “covert, non-constraining control” (CNC): manipulation whereby one person causes another, through the use of methods such as brainwashing or circumspect operant conditioning, to ‘do his bidding’ without the latter being aware of his subjugation or feeling in any way coerced.
Since a CNC victim obviously lacks autonomy, then so must “persons” living in a deterministic universe. Defenders of AC have, then, the following argument with which to contend:

1. Victims of CNC (obviously) lack autonomy.
2. Thus, AC would be true only if some definition of autonomy succeeds in specifying a freedom-relevant difference between victims of CNC and agents whose choices/actions are necessary consequences of prior events.
3. There could be no such definition.
4. Therefore, AC must be false.

The challenge issued here is clear: find a way to refute the claim that being subject to natural laws would be tantamount to being a victim of CNC, to show that Nature is no manipulator. Moreover, this challenge cannot be met by responding with a Frankfurt case: a situation in which things have been surreptitiously arranged so that an agent is unable to avoid doing something that he manages to do ‘on his own’, thus being autonomous despite his inability to act otherwise. For, even if CD is not inconsistent with autonomy because it eliminates the ability to do otherwise per se, it may yet entail that no human agent ever does act of his own accord, an implication of which would be a lack of alternatives on anyone’s part. In other words, the fact that causally determined beings could never act differently than they do is perhaps only symptomatic of the reason why such beings would lack autonomy: forces beyond their control would have dominion over their psychological development. Thus, AC advocates must show that the way that an agent’s character would be shaped, were she (merely) subject to natural laws, would leave unimpaired an ability that CNC would destroy. What follows is a definition of this ability, which I then use to solve the Problem of Freedom and Foreknowledge.
JUNE 2015 UPDATE: A BIBLIOGRAPHY: JOHN CORCORAN’S PUBLICATIONS ON ARISTOTLE 1972–2015 By John Corcoran

This presentation includes a complete bibliography of John Corcoran’s publications relevant to his research on Aristotle’s logic. Sections I, II, III, and IV list 21 articles, 44 abstracts, 3 books, and 11 reviews. It starts with two watershed articles published in 1972: the Philosophy & Phenomenological Research article from Corcoran’s Philadelphia period that antedates his Aristotle studies and the Journal of Symbolic Logic article from his Buffalo period first reporting his original results; it ends with works published in 2015. A few of the items are annotated as listed or with endnotes connecting them with other work and pointing out passages that in retrospect are seen to be misleading and in a few places erroneous. In addition, Section V, “Discussions”, is a nearly complete secondary bibliography of works describing, interpreting, extending, improving, supporting, and criticizing Corcoran’s work: 8 items published in the 1970s, 23 in the 1980s, 42 in the 1990s, 56 in the 2000s, and 69 in the current decade. The secondary bibliography is also annotated as listed or with endnotes: some simply quoting from the cited item, but several answering criticisms and identifying errors. Section VI, “Alternatives”, lists recent works on Aristotle’s logic oblivious of Corcoran’s research and, more generally, of the Lukasiewicz-initiated tradition. As is evident from Section VII, “Acknowledgements”, Corcoran’s publications benefited from consultation with other scholars, most notably Timothy Smiley, Michael Scanlan, Roberto Torretti, and Kevin Tracy. All of Corcoran’s Greek translations were done in collaboration with two or more classicists. Corcoran never published a sentence without discussing it with his colleagues and students.

REQUEST: Please send errors, omissions, and suggestions. I am especially interested in citations made in non-English publications. Also, let me know of passages I should comment on.