In a recent issue of Faith and Philosophy, Steven Cowan calls into question our success in responding to what we called the “Problem of Heavenly Freedom” in our earlier “Incompatibilism, Sin, and Free Will in Heaven.” In this reply, we defend our view against Cowan’s criticisms.
Envy is, roughly, the disposition to desire that another lose a perceived good so that one can, by comparison, feel better about oneself. The divisiveness of envy follows not just from one’s willing against the good of the other, but also from the other vices that spring from it. It is for this second reason that envy is a capital vice. This chapter begins by arguing for a definition of envy similar to that given by Aquinas and then considers its relationship to other vices (e.g. jealousy, schadenfreude, and hate). At the heart of envy is a disposition to make relative comparisons which lead to a sense of inferiority. This in turn can lead a person to feel and act in ways destructive of community and the self. The present chapter also addresses recent work in both psychology and economics related to envy.
Our focus in this chapter will be the role that pride has played, both historically and in contemporary thought, in Christian theology and philosophical theology. We begin by delineating a number of different types of pride, since some types are positive (e.g., when a parent tells a daughter “I’m proud of you for being brave”), and others are negative (e.g., “Pride goes before a fall”) or even vicious. We then explore the role that the negative emotion and vice play in the history of Christianity, with particular attention to a number of influential figures. We conclude by exploring how pride connects with a number of other central issues in Christian theology.
This chapter canvasses a number of ways that issues surrounding disability intersect with social epistemology, particularly how dominant norms concerning communication and ability can epistemically disadvantage some disabled individuals. We begin with a discussion of how social epistemology as a field, and debates concerning epistemic injustice in particular, fail to take the problem of ableism seriously. In section two, we analyze the concept of an individual’s “knowledge capacity,” arguing that it can easily misconstrue the extended, social nature of both knowledge and capacity/ability. In section three, we turn to issues of testimony and their relation to debates concerning disability and well-being. We address how the regular lack of uptake of disabled people’s testimony can lead to a number of structural rather than merely individual epistemic injustices, and we also consider how the very nature of some disabilities makes testimonial issues more complicated. In our fourth and final section, we discuss various norms of social interaction and how they can systematically disadvantage Autistic people in particular.
In an earlier paper, I argued for an account of the metaphysics of grace which was libertarian in nature but also non-Pelagian. My goal in the present paper is to broaden my focus to how the human and divine wills relate in graced activities. While there is widespread agreement in Christian theology that the two do interact in an important way, what’s less clear is how the wills of two agents can be united in one of them performing a particular action via a kind of joint or unitive willing. Insofar as the goal in these unitive willings is to have the human will and the divine will operating together in the human bringing about a particular action, I refer to this kind of volition as ‘cooperative agency’. I explore two different models, an identificationist model and an incarnational model, regarding how the human agent is aligned with God in cooperative agency. I then argue that there are significant reasons for preferring the incarnational model over the identificationist model.
The Twin Earth thought experiment invites us to consider a liquid that has all of the superficial properties associated with water (clear, potable, etc.) but has entirely different deeper causal properties (composed of “XYZ” rather than of H2O). Although this thought experiment was originally introduced to illuminate questions in the theory of reference, it has also played a crucial role in empirically informed debates within the philosophy of psychology about people’s ordinary natural kind concepts. Those debates have sought to accommodate an apparent fact about ordinary people’s judgments: Intuitively, the Twin Earth liquid is not water. We present results from four experiments showing that people do not, in fact, have this intuition. Instead, people tend to have the intuition that there is a sense in which the liquid is not water but also a sense in which it is water. We explore the implications of this finding for debates about theories of natural kind concepts, arguing that it supports views positing two distinct criteria for membership in natural kind categories – one based on deeper causal properties, the other based on superficial, observable properties.
Replacing Truth. Kevin Scharp - 2007 - Inquiry: An Interdisciplinary Journal of Philosophy 50 (6): 606–621.
Of the dozens of purported solutions to the liar paradox published in the past fifty years, the vast majority are "traditional" in the sense that they reject one of the premises or inference rules that are used to derive the paradoxical conclusion. Over the years, however, several philosophers have developed an alternative to the traditional approaches; according to them, our very competence with the concept of truth leads us to accept that the reasoning used to derive the paradox is sound. That is, our conceptual competence leads us into inconsistency. I call this alternative the inconsistency approach to the liar. Although this approach has many positive features, I argue that several of the well-developed versions of it that have appeared recently are unacceptable. In particular, they do not recognize that if truth is an inconsistent concept, then we should replace it with new concepts that do the work of truth without giving rise to paradoxes. I outline an inconsistency approach to the liar paradox that satisfies this condition.
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
Phineas Gage’s story is typically offered as a paradigm example supporting the view that part of what matters for personal identity is a certain magnitude of similarity between earlier and later individuals. Yet, reconsidering a slight variant of Phineas Gage’s story indicates that it is not just magnitude of similarity, but also the direction of change that affects personal identity judgments; in some cases, changes for the worse are seen as more identity-severing than changes for the better of comparable magnitude. Ironically, thinking carefully about Phineas Gage’s story tells against the thesis it is typically taken to support.
The personal identity relation is of great interest to philosophers, who often consider fictional scenarios to test what features seem to make persons persist through time. But often real examples of neuroscientific interest also provide important tests of personal identity. One such example is the case of Phineas Gage – or at least the story often told about Phineas Gage. Many cite Gage’s story as an example of severed personal identity; Phineas underwent such a tremendous change that Gage “survived as a different man.” I discuss a recent empirical finding about judgments about this hypothetical. It is not just the magnitude of the change that affects identity judgments; it is also the negative direction of the change. I present an experiment suggesting that direction of change also affects neuroethical judgments. I conclude we should consider carefully the way in which improvements and deteriorations affect attributions of personal identity. This is particularly important since a number of the most crucial neuroethical decisions involve varieties of cognitive enhancements or deteriorations.
A realist theory of truth for a class of sentences holds that there are entities in virtue of which these sentences are true or false. We call such entities ‘truthmakers’ and contend that those for a wide range of sentences about the real world are moments (dependent particulars). Since moments are unfamiliar, we provide a definition and a brief philosophical history, anchoring them in our ontology by showing that they are objects of perception. The core of our theory is the account of truthmaking for atomic sentences, in which we expose a pervasive ‘dogma of logical form’, which says that atomic sentences cannot have more than one truthmaker. In contrast to this, we uphold the mutual independence of logical and ontological complexity, and we outline formal principles of truthmaking taking account of both kinds of complexity.
Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you’re rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. Surprisingly, it is very difficult to vindicate both (1) and (2). But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.
Causal selection is the cognitive process through which one or more elements in a complex causal structure are singled out as actual causes of a certain effect. In this paper, we report on an experiment in which we investigated the role of moral and temporal factors in causal selection. Our results are as follows. First, when presented with a temporal chain in which two human agents perform the same action one after the other, subjects tend to judge the later agent to be the actual cause. Second, the impact of temporal location on causal selection is almost canceled out if the later agent did not violate a norm while the former did. We argue that this is due to the impact that judgments of norm violation have on causal selection—even if the violated norm has nothing to do with the obtaining effect. Third, moral judgments about the effect influence causal selection even in the case in which agents could not have foreseen the effect and did not intend to bring it about. We discuss our findings in connection to recent theories of the role of moral judgment in causal reasoning, on the one hand, and to probabilistic models of temporal location, on the other.
During the realist revival in the early years of this century, philosophers of various persuasions were concerned to investigate the ontology of truth. That is, whether or not they viewed truth as a correspondence, they were interested in the extent to which one needed to assume the existence of entities serving some role in accounting for the truth of sentences. Certain of these entities, such as the Sätze an sich of Bolzano, the Gedanken of Frege, or the propositions of Russell and Moore, were conceived as the bearers of the properties of truth and falsehood. Some thinkers however, such as Russell, Wittgenstein in the Tractatus, and Husserl in the Logische Untersuchungen, argued that instead of, or in addition to, truth-bearers, one must assume the existence of certain entities in virtue of which sentences and/or propositions are true. Various names were used for these entities, notably 'fact', 'Sachverhalt', and 'state of affairs'. In order not to prejudge the suitability of these words we shall initially employ a more neutral terminology, calling any entities which are candidates for this role truth-makers.
This is a review article on Franz Brentano’s Descriptive Psychology published in 1982. We provide a detailed exposition of Brentano’s work on this topic, focusing on the unity of consciousness, the modes of connection and the types of part, including separable parts, distinctive parts, logical parts and what Brentano calls modificational quasi-parts. We also deal with Brentano’s account of the objects of sensation and the experience of time.
‘What is characteristic of every mental activity’, according to Brentano, is ‘the reference to something as an object. In this respect every mental activity seems to be something relational.’ But what sort of a relation, if any, is our cognitive access to the world? This question – which we shall call Brentano’s question – throws a new light on many of the traditional problems of epistemology. The paper defends a view of perceptual acts as real relations of a subject to an object. To make this view coherent, a theory of different types of relations is developed, resting on ideas on formal ontology put forward by Husserl in his Logical Investigations and on the theory of relations sketched in Smith's "Acta cum fundamentis in re". The theory is applied to the notion of a Cambridge change, which proves to have an unforeseen relevance to our understanding of perception.
KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations has recently emerged in its favour. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form ‘If I don’t know it, p’. Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. We survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either deny the possibility of harmony between knowledge and belief, or deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.
Willful ignorance is an important concept in criminal law and jurisprudence, though it has not received much discussion in philosophy. When it is mentioned, however, it is regularly assumed to be a kind of self-deception. In this article I will argue that self-deception and willful ignorance are distinct psychological kinds. First, some examples of willful ignorance are presented and discussed, and an analysis of the phenomenon is developed. Then it is shown that current theories of self-deception give no support to the idea that willful ignorance is a kind of self-deception. Afterwards an independent argument is adduced for excluding willful ignorance from this category. The crucial differences between the two phenomena are explored, as are the reasons why they are so easily conflated.
One popular conception of natural theology holds that certain purely rational arguments are insulated from empirical inquiry and independently establish conclusions that provide evidence, justification, or proof of God’s existence. Yet, some raise suspicions that philosophers and theologians’ personal religious beliefs inappropriately affect these kinds of arguments. I present an experimental test of whether philosophers and theologians’ argument analysis is influenced by religious commitments. The empirical findings suggest religious belief affects philosophical analysis and offer a challenge to theists and atheists alike: reevaluate the scope of natural theology’s conclusions or acknowledge and begin to address the influence of religious belief.
Vogel, Sosa, and Huemer have all argued that sensitivity is incompatible with knowing that you do not believe falsely, and that the sensitivity condition must therefore be false. I show that this objection misses its mark because it fails to take account of the basis of belief. Moreover, if the objection is modified to account for the basis of belief, then it collapses into the more familiar objection that sensitivity is incompatible with closure.
Assertions are the centre of gravity in social epistemology. They are the vehicles we use to exchange information within scientific groups and society as a whole. It is therefore essential to determine under which conditions we are permitted to make an assertion. In this paper we argue, and provide empirical evidence, for the view that the norm of assertion is justified belief: neither truth nor knowledge is required. Our results challenge the knowledge account advocated by, e.g., Williamson (1996) and, more specifically, call into question several studies conducted by Turri (2013, 2016) that support a knowledge norm of assertion. Instead, the justified belief account championed by, e.g., Douven (2006) seems to prevail.
Sosa, Pritchard, and Vogel have all argued that there are cases in which one knows something inductively but does not believe it sensitively, and that sensitivity therefore cannot be necessary for knowledge. I defend sensitivity by showing that inductive knowledge is sensitive.
You can perceive things, in many respects, as they really are. For example, you can correctly see a coin as circular from most angles. Nonetheless, your perception of the world is perspectival. The coin looks different when slanted than when head-on, and there is some respect in which the slanted coin looks similar to a head-on ellipse. Many hold that perception is perspectival because you perceive certain properties that correspond to the “looks” of things. I argue that this view is misguided. I consider the two standard versions of this view. What I call the PLURALIST APPROACH fails to give a unified account of the perspectival character of perception, while what I call the PERSPECTIVAL PROPERTIES APPROACH violates central commitments of contemporary psychology. I propose instead that perception is perspectival because of the way perceptual states are structured from their parts.
A classic debate concerns whether reasonableness should be understood statistically (e.g., reasonableness is what is common) or prescriptively (e.g., reasonableness is what is good). This Article elaborates and defends a third possibility. Reasonableness is a partly statistical and partly prescriptive “hybrid,” reflecting both statistical and prescriptive considerations. Experiments reveal that people apply reasonableness as a hybrid concept, and the Article argues that a hybrid account offers the best general theory of reasonableness.

First, the Article investigates how ordinary people judge what is reasonable. Reasonableness sits at the core of countless legal standards, yet little work has investigated how ordinary people (i.e., potential jurors) actually make reasonableness judgments. Experiments reveal that judgments of reasonableness are systematically intermediate between judgments of the relevant average and ideal across numerous legal domains. For example, participants’ mean judgment of the legally reasonable number of weeks’ delay before a criminal trial (ten weeks) falls between the judged average (seventeen weeks) and ideal (seven weeks). So too for the reasonable number of days to accept a contract offer, the reasonable rate of attorneys’ fees, the reasonable loan interest rate, and the reasonable annual number of loud events on a football field in a residential neighborhood. Judgment of reasonableness is better predicted by both statistical and prescriptive factors than by either factor alone.

This Article uses this experimental discovery to develop a normative view of reasonableness. It elaborates an account of reasonableness as a hybrid standard, arguing that this view offers the best general theory of reasonableness, one that applies correctly across multiple legal domains. Moreover, this hybrid feature is the historical essence of legal reasonableness: the original use of the “reasonable person” and the “man on the Clapham omnibus” aimed to reflect both statistical and prescriptive considerations. Empirically, reasonableness is a hybrid judgment. And normatively, reasonableness should be applied as a hybrid standard.
Stubborn belief, like self-deception, is a species of motivated irrationality. The nature of stubborn belief, however, has not been investigated by philosophers, and it is something that poses a challenge to some prominent accounts of self-deception. In this paper, I argue that the case of stubborn belief constitutes a counterexample to Alfred Mele’s proposed set of sufficient conditions for self-deception, and I attempt to distinguish between the two. The recognition of this phenomenon should force an amendment to this account, and should also make a Mele-style deflationist think more carefully about the kinds of motivational factors operating in self-deception.
Many current popular views in epistemology require a belief to be the result of a reliable process (aka ‘method of belief formation’ or ‘cognitive capacity’) in order to count as knowledge. This means that the generality problem rears its head, i.e. the kind of process in question has to be spelt out, and this looks difficult to do without being either over or under-general. In response to this problem, I propose that we should adopt a more fine-grained account of the epistemic basing relation, at which point the generality problem becomes easy to solve.
There is a long-running debate as to whether privacy is a matter of control or access. This has become more important following revelations made by Edward Snowden in 2013 regarding the collection of vast swathes of data from the Internet by signals intelligence agencies such as NSA and GCHQ. The nature of this collection is such that if the control account is correct then there has been a significant invasion of people's privacy. If, though, the access account is correct then there has not been an invasion of privacy on the scale suggested by the control account. I argue that the control account of privacy is mistaken. However, the consequences of this are not that seizing control of personal information is unproblematic. I argue that the control account, while mistaken, seems plausible for two reasons. The first is that a loss of control over my information entails harm to the rights and interests that privacy protects. The second is that a loss of control over my information increases the risk that my information will be accessed and that my privacy will be violated. Seizing control of another's information is therefore harmful, even though it may not entail a violation of privacy. Indeed, seizing control of another's information may be more harmful than actually violating their privacy.
Suppose you have recently gained a disposition for recognizing a high-level kind property, like the property of being a wren. Wrens might look different to you now. According to the Phenomenal Contrast Argument, such cases of perceptual learning show that the contents of perception can include high-level kind properties such as the property of being a wren. I detail an alternative explanation for the different look of the wren: a shift in one’s attentional pattern onto other low-level properties. Philosophers have alluded to this alternative before, but I provide a comprehensive account of the view, show how my account significantly differs from past claims, and offer a novel argument for the view. Finally, I show that my account puts us in a position to provide a new objection to the Phenomenal Contrast Argument.
In a recent paper, Melchior pursues a novel argumentative strategy against the sensitivity condition. His claim is that sensitivity suffers from a ‘heterogeneity problem’: although some higher-order beliefs are knowable, other, very similar, higher-order beliefs are insensitive and so not knowable. Similarly, the conclusions of some bootstrapping arguments are insensitive, but others are not. In reply, I show that sensitivity does not treat different higher-order beliefs differently in the way that Melchior states, and that while genuine bootstrapping arguments have insensitive conclusions, the cases that Melchior describes as sensitive ‘bootstrapping’ arguments don’t deserve the name, since they are a perfectly good way of getting to know their conclusions. In sum, sensitivity doesn’t have a heterogeneity problem.
Most plausible moral theories must address problems of partial acceptance or partial compliance. The aim of this paper is to examine some proposed ways of dealing with partial acceptance problems as well as to introduce a new Rule Utilitarian suggestion. Here I survey three forms of Rule Utilitarianism, each of which represents a distinct approach to solving partial acceptance issues. I examine Fixed Rate, Variable Rate, and Optimum Rate Rule Utilitarianism, and argue that a new approach, Maximizing Expectation Rate Rule Utilitarianism, better solves partial acceptance problems.
You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is not obvious. The Enkratic Intuition—that your first-order opinions must “line up” with your higher-order opinions—is incorrect; epistemic akrasia can be rational. If all this is right, then it leaves us without answers, but with a clear picture of the question, and a fruitful strategy for pursuing it.
Alfred Mele's deflationary account of self-deception has frequently been criticised for being unable to explain the ‘tension’ inherent in self-deception. These critics maintain that rival theories can better account for this tension, such as theories which suppose self-deceivers to have contradictory beliefs. However, there are two ways in which the tension idea has been understood. In this article, it is argued that on one such understanding, Mele's deflationism can account for this tension better than its rivals, but only if we reconceptualize the self-deceiver's attitude in terms of unwarranted degrees of conviction rather than unwarranted belief. This new way of viewing the self-deceiver's attitude will be informed by observations on experimental work done on the biasing influence of desire on belief, which suggests that self-deceivers don't manage to fully convince themselves of what they want to be true. On another way in which this tension has been understood, this account would not manage so well, since on this understanding the self-deceiver is best interpreted as knowing, but wishing to avoid, the truth. However, it is argued that we are under no obligation to account for this, since it is a characteristic of a different phenomenon than self-deception, namely escapism.
This response addresses the excellent responses to my book provided by Heather Douglas, Janet Kourany, and Matt Brown. First, I provide some comments and clarifications concerning a few of the highlights from their essays. Second, in response to the worries of my critics, I provide more detail than I was able to provide in my book regarding my three conditions for incorporating values in science. Third, I identify some of the most promising avenues for further research that flow out of this interchange.
This paper develops an account of the distinctive epistemic authority of avowals of propositional attitude, focusing on the case of belief. It is argued that such avowals are expressive of the very mental states they self-ascribe. This confers upon them a limited self-warranting status, and renders them immune to an important class of errors to which paradigm empirical (e.g., perceptual) judgments are liable.
Explanationism is a plausible view of epistemic justification according to which justification is a matter of explanatory considerations. Despite its plausibility, explanationism is not without its critics. In a recent issue of this journal T. Ryan Byerly and Kraig Martin have charged that explanationism fails to provide necessary or sufficient conditions for epistemic justification. In this article I examine Byerly and Martin’s arguments and explain where they go wrong.
We provide a detailed exposition of Brentano’s descriptive psychology, focusing on the unity of consciousness, the modes of connection and the types of part, including separable parts, distinctive parts, logical parts and what Brentano calls modificational quasi-parts. We also deal with Brentano’s account of the objects of sensation and the experience of time.
Explanationists about epistemic justification hold that justification depends upon explanatory considerations. After a bit of a lull, there has recently been a resurgence of defenses of such views. Despite the plausibility of these defenses, explanationism still faces challenges. Recently, T. Ryan Byerly and Kraig Martin have argued that explanationist views fail to provide either necessary or sufficient conditions for epistemic justification. I argue that Byerly and Martin are mistaken on both counts.
We argue that philosophers ought to distinguish epistemic decision theory and epistemology, in just the way ordinary decision theory is distinguished from ethics. Once one does this, the internalist arguments that motivate much of epistemic decision theory make sense, given specific interpretations of the formalism. Making this distinction also causes trouble for the principle called Propriety, which says, roughly, that the only acceptable epistemic utility functions make probabilistically coherent credence functions immodest. We cast doubt on this requirement, but then argue that epistemic decision theorists should never have wanted such a strong principle in any case.
Schwenkler (2012) criticizes a 2011 experiment by R. Held and colleagues purporting to answer Molyneux’s question. Schwenkler proposes two ways to re-run the original experiment: either by allowing subjects to move around the stimuli, or by simplifying the stimuli to planar objects rather than three-dimensional ones. In Schwenkler (2013) he expands on and defends the former. I argue that this way of re-running the experiment is flawed, since it relies on a questionable assumption that newly sighted subjects will be able to appreciate depth cues. I then argue that the second way of re-running the experiment succeeds both in avoiding the flaw of the original Held experiment and in avoiding the problem with the first way of re-running the experiment.
Despite recent growth in surveillance capabilities there has been little discussion regarding the ethics of surveillance. Much of the research that has been carried out has tended to lack a coherent structure or fails to address key concerns. I argue that the just war tradition should be used as an ethical framework which is applicable to surveillance, providing the questions which should be asked of any surveillance operation. In this manner, when considering whether to employ surveillance, one should take into account the reason for the surveillance, the authority of the surveillant, whether there has been a declaration of intent, whether surveillance is an act of last resort, the likelihood of success of the operation, and whether surveillance is a proportionate response. Once underway, the methods of surveillance should be proportionate to the occasion and seek to target appropriate people while limiting surveillance of those deemed inappropriate. By drawing on the just war tradition, ethical questions regarding surveillance can draw on a long and considered discourse while gaining a framework which, I argue, raises all the key concerns and misses none.
Rule-Consequentialism faces “the problem of partial acceptance”: How should the ideal code be selected given the possibility that its rules may not be universally accepted? A new contender, “Calculated Rates” Rule-Consequentialism, claims to solve this problem. However, I argue that Calculated Rates merely relocates the partial acceptance question. Nevertheless, there is a significant lesson from this failure of Calculated Rates. Rule-Consequentialism’s problem of partial acceptance is more helpfully understood as an instance of the broader problem of selecting the ideal code given various assumptions—assumptions about who will accept and comply with the rules, but also about how the rules will be taught and enforced, and how similar the future will be. Previous rich discussions about partial acceptance provide a taxonomy and groundwork for formulating the best version of Rule-Consequentialism.
It is often claimed that surveillance should be proportionate, but it is rarely made clear exactly what proportionate surveillance would look like beyond an intuitive sense of an act being excessive. I argue that surveillance should indeed be proportionate and draw on Thomas Hurka’s work on proportionality in war to inform the debate on surveillance. After distinguishing between the proportionality of surveillance per se, and surveillance as a particular act, I deal with objections to using proportionality as a legitimate ethical measure. From there I argue that only certain benefits and harms should be counted in any determination of proportionality. Finally, I look at how context can affect the proportionality of a particular method of surveillance. In conclusion, I hold that proportionality is not only a morally relevant criterion by which to assess surveillance, but that it is a necessary criterion. Furthermore, while granting that it is difficult to assess, that difficulty should not prevent our trying to do so.
James Woodward’s Making Things Happen presents the most fully developed version of a manipulability theory of causation. Although the ‘interventionist’ account of causation that Woodward defends in Making Things Happen has many admirable qualities, Michael Strevens argues that it has a fatal flaw. Strevens maintains that Woodward’s interventionist account of causation renders facts about causation relative to an individual’s perspective. In response to this charge, Woodward claims that although on his account X might be a relativized cause of Y relative to some perspective, this does not lead to the problematic relativity that Strevens claims. Roughly, Woodward argues this is so because if X is a relativized cause of Y with respect to some perspective, then X is a cause of Y simpliciter. So, the truth of whether X is a cause of Y is not relative to one’s perspective. Strevens counters by arguing that Woodward’s response fails because relativized causation is not monotonic. In this paper I argue that Strevens’ argument that relativized causation is not monotonic is unsound.
A prevalent assumption among philosophers who believe that people can intentionally deceive themselves (intentionalists) is that they accomplish this by controlling what evidence they attend to. This article is concerned primarily with the evaluation of this claim, which we may call ‘attentionalism’. According to attentionalism, when one justifiably believes/suspects that not-p but wishes to make oneself believe that p, one may do this by shifting attention away from the considerations supportive of the belief that not-p and onto considerations supportive of the belief that p. The details of this theory are elaborated, its theoretical importance is pointed out, and it is argued that the strategy is supposed to work by leading to the repression of one’s knowledge of the unwelcome considerations. However, I then show that the assumption that this is possible is opposed by the balance of a relevant body of empirical research, namely, the thought-suppression literature, and so intentionalism about self-deception cannot find vindication in the attentional theory.
Scalar Utilitarianism eschews foundational notions of rightness and wrongness in favor of evaluative comparisons of outcomes. I defend Scalar Utilitarianism from two critiques, the first against an argument for the thesis that Utilitarianism's commitments are fundamentally evaluative, and the second that Scalar Utilitarianism does not issue demands or sufficiently guide action. These defenses suggest a variety of more plausible Scalar Utilitarian interpretations, and I argue for a version that best represents a moral theory founded on evaluative notions, and offers better answers to demandingness concerns than does the ordinary Scalar Utilitarian response. If Utilitarians seek reasonable development and explanation of their basic commitments, they may wish to reconsider Scalar Utilitarianism.
Ernst Mach's atomistic theory of sensation faces problems in doing justice to our ability to perceive and remember complex phenomena such as melodies and shapes. Christian von Ehrenfels attempted to solve these problems with his theory of "Gestalt qualities", which he sees as entities depending one-sidedly on the corresponding simple objects of sensation. We explore the theory of dependence relations advanced by Ehrenfels and show how it relates to the views on the objects of perception advanced by Husserl and by the Gestalt psychologists.