Scalar Utilitarianism eschews foundational notions of rightness and wrongness in favor of evaluative comparisons of outcomes. I defend Scalar Utilitarianism from two critiques, the first against an argument for the thesis that Utilitarianism's commitments are fundamentally evaluative, and the second that Scalar Utilitarianism does not issue demands or sufficiently guide action. These defenses suggest a variety of more plausible Scalar Utilitarian interpretations, and I argue for a version that best represents a moral theory founded on evaluative notions, and offers better answers to demandingness concerns than does the ordinary Scalar Utilitarian response. If Utilitarians seek reasonable development and explanation of their basic commitments, they may wish to reconsider Scalar Utilitarianism.
Many philosophers claim to employ intuitions in their philosophical arguments. Others contend that intuitions are used rarely, if at all, in philosophy. This article suggests and defends a conception of intuitions as part of the philosophical method: intuitions are special types of philosophical assumptions to which we are invited to assent, often as premises in argument, that may serve an independent function in philosophical argument and that are not formed through a purely inferential process. A series of philosophical case studies shows that intuitions in these arguments contain the relevant features. The view has implications for philosophical method, offering a compromise between opponents in the divisive debate over the merits of experimental philosophy: experimental philosophy plays an especially useful role in philosophical assumption analysis.
The Twin Earth thought experiment invites us to consider a liquid that has all of the superficial properties associated with water (clear, potable, etc.) but has entirely different deeper causal properties (composed of “XYZ” rather than of H2O). Although this thought experiment was originally introduced to illuminate questions in the theory of reference, it has also played a crucial role in empirically informed debates within the philosophy of psychology about people’s ordinary natural kind concepts. Those debates have sought to accommodate an apparent fact about ordinary people’s judgments: Intuitively, the Twin Earth liquid is not water. We present results from four experiments showing that people do not, in fact, have this intuition. Instead, people tend to have the intuition that there is a sense in which the liquid is not water but also a sense in which it is water. We explore the implications of this finding for debates about theories of natural kind concepts, arguing that it supports views positing two distinct criteria for membership in natural kind categories – one based on deeper causal properties, the other based on superficial, observable properties.
Phineas Gage’s story is typically offered as a paradigm example supporting the view that part of what matters for personal identity is a certain magnitude of similarity between earlier and later individuals. Yet, reconsidering a slight variant of Phineas Gage’s story indicates that it is not just magnitude of similarity, but also the direction of change that affects personal identity judgments; in some cases, changes for the worse are seen as more identity-severing than changes for the better of comparable magnitude. Ironically, thinking carefully about Phineas Gage’s story tells against the thesis it is typically taken to support.
The personal identity relation is of great interest to philosophers, who often consider fictional scenarios to test what features seem to make persons persist through time. But often real examples of neuroscientific interest also provide important tests of personal identity. One such example is the case of Phineas Gage – or at least the story often told about Phineas Gage. Many cite Gage’s story as an example of severed personal identity; Phineas underwent such a tremendous change that Gage “survived as a different man.” I discuss a recent empirical finding about judgments about this hypothetical. It is not just the magnitude of the change that affects identity judgments; it is also the negative direction of the change. I present an experiment suggesting that direction of change also affects neuroethical judgments. I conclude that we should consider carefully the way in which improvements and deteriorations affect attributions of personal identity. This is particularly important since a number of the most crucial neuroethical decisions involve varieties of cognitive enhancements or deteriorations.
One popular conception of natural theology holds that certain purely rational arguments are insulated from empirical inquiry and independently establish conclusions that provide evidence, justification, or proof of God’s existence. Yet, some raise suspicions that philosophers and theologians’ personal religious beliefs inappropriately affect these kinds of arguments. I present an experimental test of whether philosophers and theologians’ argument analysis is influenced by religious commitments. The empirical findings suggest religious belief affects philosophical analysis and offer a challenge to theists and atheists alike: reevaluate the scope of natural theology’s conclusions or acknowledge and begin to address the influence of religious belief.
Most plausible moral theories must address problems of partial acceptance or partial compliance. The aim of this paper is to examine some proposed ways of dealing with partial acceptance problems as well as to introduce a new Rule Utilitarian suggestion. Here I survey three forms of Rule Utilitarianism, each of which represents a distinct approach to solving partial acceptance issues. I examine Fixed Rate, Variable Rate, and Optimum Rate Rule Utilitarianism, and argue that a new approach, Maximizing Expectation Rate Rule Utilitarianism, better solves partial acceptance problems.
A classic debate concerns whether reasonableness should be understood statistically (e.g., reasonableness is what is common) or prescriptively (e.g., reasonableness is what is good). This Article elaborates and defends a third possibility. Reasonableness is a partly statistical and partly prescriptive “hybrid,” reflecting both statistical and prescriptive considerations. Experiments reveal that people apply reasonableness as a hybrid concept, and the Article argues that a hybrid account offers the best general theory of reasonableness.

First, the Article investigates how ordinary people judge what is reasonable. Reasonableness sits at the core of countless legal standards, yet little work has investigated how ordinary people (i.e., potential jurors) actually make reasonableness judgments. Experiments reveal that judgments of reasonableness are systematically intermediate between judgments of the relevant average and ideal across numerous legal domains. For example, participants’ mean judgment of the legally reasonable number of weeks’ delay before a criminal trial (ten weeks) falls between the judged average (seventeen weeks) and ideal (seven weeks). So too for the reasonable number of days to accept a contract offer, the reasonable rate of attorneys’ fees, the reasonable loan interest rate, and the reasonable annual number of loud events on a football field in a residential neighborhood. Judgment of reasonableness is better predicted by both statistical and prescriptive factors than by either factor alone.

This Article uses this experimental discovery to develop a normative view of reasonableness. It elaborates an account of reasonableness as a hybrid standard, arguing that this view offers the best general theory of reasonableness, one that applies correctly across multiple legal domains. Moreover, this hybrid feature is the historical essence of legal reasonableness: the original use of the “reasonable person” and the “man on the Clapham omnibus” aimed to reflect both statistical and prescriptive considerations. Empirically, reasonableness is a hybrid judgment. And normatively, reasonableness should be applied as a hybrid standard.
Two separate research programs have revealed two different factors that feature in our judgments of whether some entity persists. One program—inspired by Knobe—has found that normative considerations affect persistence judgments. For instance, people are more inclined to view a thing as persisting when the changes it undergoes lead to improvements. The other program—inspired by Kelemen—has found that teleological considerations affect persistence judgments. For instance, people are more inclined to view a thing as persisting when it preserves its purpose. Our goal in this paper is to determine what causes persistence judgments. Across four studies, we pit normative considerations against teleological considerations. And using causal modeling procedures, we find a consistent, robust pattern with teleological and not normative considerations directly causing persistence judgments. Our findings put teleology in the driver’s seat, while at the same time shedding further light on our folk notion of an object.
Rule-Consequentialism faces “the problem of partial acceptance”: How should the ideal code be selected given the possibility that its rules may not be universally accepted? A new contender, “Calculated Rates” Rule-Consequentialism claims to solve this problem. However, I argue that Calculated Rates merely relocates the partial acceptance question. Nevertheless, there is a significant lesson from this failure of Calculated Rates. Rule-Consequentialism’s problem of partial acceptance is more helpfully understood as an instance of the broader problem of selecting the ideal code given various assumptions—assumptions about who will accept and comply with the rules, but also about how the rules will be taught and enforced, and how similar the future will be. Previous rich discussions about partial acceptance provide a taxonomy and groundwork for formulating the best version of Rule-Consequentialism.
Our aim in this entry is to articulate the state of the art in the moral psychology of personal identity. We begin by discussing the major philosophical theories of personal identity, including their shortcomings. We then turn to recent psychological work on personal identity and the self, investigations that often illuminate our person-related normative concerns. We conclude by discussing the implications of this psychological work for some contemporary philosophical theories and suggesting fruitful areas for future work on personal identity.
A growing body of research has examined how people judge the persistence of identity over time—that is, how they decide that a particular individual is the same entity from one time to the next. While a great deal of progress has been made in understanding the types of features that people typically consider when making such judgments, to date, existing work has not explored how these judgments may be shaped by normative considerations. The present studies demonstrate that normative beliefs do appear to play an important role in people's beliefs about persistence. Specifically, people are more likely to judge that the identity of a given entity remains the same when its features improve than when its features deteriorate. Study 1 provides a basic demonstration of this effect. Study 2 shows that this effect is moderated by individual differences in normative beliefs. Study 3 examines the underlying mechanism, which is the belief that, in general, various entities are essentially good. Study 4 directly manipulates beliefs about essence to show that the positivity bias regarding essences is causally responsible for the effect.
Statistical evidence is crucial throughout disparate impact’s three-stage analysis: during (1) the plaintiff’s prima facie demonstration of a policy’s disparate impact; (2) the defendant’s job-related business necessity defense of the discriminatory policy; and (3) the plaintiff’s demonstration of an alternative policy without the same discriminatory impact. The circuit courts are split on a vital question about the “practical significance” of statistics at Stage 1: Are “small” impacts legally insignificant? For example, is an employment policy that causes a one percent disparate impact an appropriate policy for redress through disparate impact litigation? This circuit split calls for a comprehensive analysis of practical significance testing across disparate impact’s stages. Importantly, courts and commentators use “practical significance” ambiguously between two aspects of practical significance: the magnitude of an effect and confidence in statistical evidence. For example, at Stage 1 courts might ask whether statistical evidence supports a disparate impact (a confidence inquiry) and whether such an impact is large enough to be legally relevant (a magnitude inquiry). Disparate impact’s texts, purposes, and controlling interpretations are consistent with confidence inquiries at all three stages, but not magnitude inquiries. Specifically, magnitude inquiries are inappropriate at Stages 1 and 3—there is no discriminatory impact or reduction too small or subtle for the purposes of the disparate impact analysis. Magnitude inquiries are appropriate at Stage 2, when an employer defends a discriminatory policy on the basis of its job-related business necessity.
In Replacing Truth, Scharp takes the concept of truth to be fundamentally incoherent. As such, Scharp reckons it to be unsuited for systematic philosophical theorising and in need of replacement – at least for regions of thought and talk which permit liar sentences and their ilk to be formulated. This replacement methodology is radical because it not only recommends that the concept of truth be replaced, but that the word ‘true’ be replaced too. Only Tarski has attempted anything like it before. I dub such a view Conceptual Marxism. In assessing this view, my goals are fourfold: to summarise the many components of Scharp’s theory of truth; to highlight what I take to be some of the excess baggage carried by the view; to assess whether, and to what extent, the extreme methodology on offer is at all called for; finally, to briefly propose a less radical replacement strategy for resolving the liar paradox.
In Tobia (2016), Kevin P. Tobia tests for bias using two ontological arguments claimed to be symmetrical and of equal strength. We show they are neither.
You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is not obvious. The Enkratic Intuition—that your first-order opinions must “line up” with your higher-order opinions—is incorrect; epistemic akrasia can be rational. If all this is right, then it leaves us without answers—but with a clear picture of the question, and a fruitful strategy for pursuing it.
This paper considers Norton’s Material Theory of Induction. The material theory aims inter alia to neutralize Hume’s Problem of Induction. The purpose of the paper is to evaluate the material theory’s capacity to achieve this end. After pulling apart two versions of the theory, I argue that neither version satisfactorily neutralizes the problem.
European Computing and Philosophy conference, 2–4 July, Barcelona. The Seventh ECAP (European Computing and Philosophy) conference was organized by Jordi Vallverdu at the Autonomous University of Barcelona. The conference started with the IACAP (The International Association for CAP) presidential address by Luciano Floridi, focusing on mechanisms of knowledge production in informational networks. The first keynote, delivered by Klaus Mainzer, set a frame for the rest of the conference by elucidating the fundamental role of complexity of informational structures that can be analyzed on different levels of organization, giving rise to a variety of possible approaches which converge in this cross-disciplinary and multi-disciplinary research field. Keynotes by Kevin Warwick about re-embodiment of rats’ neurons into robots, Raymond Turner on syntax and semantics in programming languages, Roderic Guigo on Biocomputing Sciences and Francesco Subirada on the past and future of supercomputing presented different topics of philosophical as well as practical aspects of computing. Conference tracks included: Philosophy of Information (Patrick Allo), Philosophy of Computer Science (Raymond Turner), Computer and Information Ethics (Johnny Søraker and Alison Adam), Computational Approaches to the Mind (Ruth Hagengruber), IT and Cultural Diversity (Jutta Weber and Charles Ess), Crossroads (David Casacuberta), Robotics, AI & Ambient Intelligence (Thomas Roth-Berghofer), Biocomputing, Evolutionary and Complex Systems (Gordana Dodig Crnkovic and Søren Brier), E-learning, E-science and Computer-Supported Cooperative Work (Annamaria Carusi) and Technological Singularity and Acceleration Studies (Amnon Eden).
The relations between Searle, Derrida, CP and phenomenology are complex. The writings of Derrida, the most influential figure within CP, are inseparably bound up with phenomenology and with the transformation of phenomenology effected by Heidegger. Indeed a large part of CP grew out of phenomenology. It has often been claimed that Searle's own contributions to the philosophy of mind advance claims already put forward by the phenomenologists, and Searle himself has given his own account of phenomenology, in particular of the role of idealism in phenomenology. In what follows I argue that the preoccupations of early phenomenology are often those of later analytic philosophers - a point that remains invisible so long as phenomenology is looked at from the point of view of what phenomenology became - but that Searle's philosophy of mind differs on most central points from that given by Husserl. On the other hand, Searle's criticisms of Derrida and of the philosophical parts of postmodernism do indeed have much in common with the criticisms put forward by the early phenomenologists and by Husserl himself of what they saw as phenomenology's gradual transformation and degeneration and of related irrationalisms. A grasp of these similarities will suggest the beginnings of an answer to the question why Searle's anti-Derridas and anti-postmodernisms are such splendidly isolated examples of the genre.
The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
At the most general level, "manipulation" refers to one of many ways of influencing behavior, along with (but to be distinguished from) other such ways, such as coercion and rational persuasion. Like these other ways of influencing behavior, manipulation is of crucial importance in various ethical contexts. First, there are important questions concerning the moral status of manipulation itself; manipulation seems to be morally problematic in ways in which (say) rational persuasion does not. Why is this so? Furthermore, the notion of manipulation has played an increasingly central role in debates about free will and moral responsibility. Despite its significance in these (and other) contexts, however, the notion of manipulation itself remains deeply vexed. I would say notoriously vexed, but in fact direct philosophical treatments of the notion of manipulation are few and far between, and those that do exist are notable for the sometimes widely divergent conclusions they reach concerning what it is. I begin by addressing (though certainly not resolving) the conceptual issue of how to distinguish manipulation from other ways of influencing behavior. Along the way, I also briefly address the (intimately related) question of the moral status of manipulation: what, if anything, makes it morally problematic? Then I discuss the controversial ways in which the notion of manipulation has been employed in contemporary debates about free will and moral responsibility.
A realist theory of truth for a class of sentences holds that there are entities in virtue of which these sentences are true or false. We call such entities ‘truthmakers’ and contend that those for a wide range of sentences about the real world are moments (dependent particulars). Since moments are unfamiliar, we provide a definition and a brief philosophical history, anchoring them in our ontology by showing that they are objects of perception. The core of our theory is the account of truthmaking for atomic sentences, in which we expose a pervasive ‘dogma of logical form’, which says that atomic sentences cannot have more than one truthmaker. In contrast to this, we uphold the mutual independence of logical and ontological complexity, and outline formal principles of truthmaking that take account of both kinds of complexity.
Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you’re rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. Surprisingly, it is very difficult to vindicate both (1) and (2). But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.
The Swiss philosopher Anton Marty (Schwyz, 1847 - Prague, 1914) belongs, with Carl Stumpf, to the first circle of Brentano’s pupils. Within Brentano’s school (and, to some extent, in the secondary literature), Marty has often been considered (in particular by Meinong) a kind of would-be epigone of his master (Fisette & Fréchette 2007: 61-2). There is no doubt that Brentano’s doctrine often provides Marty with his philosophical starting points. But Marty often arrives at original conclusions which are diametrically opposed to Brentano’s views. This is true of his views about space and time and about judgment, emotions and intentionality. In the latter case, for example, Marty develops Brentano’s view and its implications in great detail (Mulligan 1989; Rollinger 2004), but uses them to formulate a very unBrentanian account of intentionality as a relation of ideal assimilation (Chrudzimski 1999; Cesalli & Taieb 2013). Marty’s philosophy of language, on the other hand, is one of the first philosophies of language worthy of the name. In what follows, we contrast briefly their accounts of (i) judgment and states of affairs and of (ii) emotings and value (two topics of foremost significance, for Brentano and Marty’s theoretical and practical philosophies respectively) (§1), and their philosophies of language (§2). Brentano’s view of language is based on his philosophy of mind. Marty takes over the latter and turns a couple of claims by Brentano about language into a sophisticated philosophy of language of a kind made familiar much later by Grice. Marty’s philosophy of states of affairs and value and of the mind’s relations to these also takes off from views sketched by the early Brentano, views forcefully rejected by the later Brentano.
Replacing Truth. Kevin Scharp - 2007 - Inquiry: An Interdisciplinary Journal of Philosophy 50 (6): 606–621.
Of the dozens of purported solutions to the liar paradox published in the past fifty years, the vast majority are "traditional" in the sense that they reject one of the premises or inference rules that are used to derive the paradoxical conclusion. Over the years, however, several philosophers have developed an alternative to the traditional approaches; according to them, our very competence with the concept of truth leads us to accept that the reasoning used to derive the paradox is sound. That is, our conceptual competence leads us into inconsistency. I call this alternative the inconsistency approach to the liar. Although this approach has many positive features, I argue that several of the well-developed versions of it that have appeared recently are unacceptable. In particular, they do not recognize that if truth is an inconsistent concept, then we should replace it with new concepts that do the work of truth without giving rise to paradoxes. I outline an inconsistency approach to the liar paradox that satisfies this condition.
During the realist revival in the early years of this century, philosophers of various persuasions were concerned to investigate the ontology of truth. That is, whether or not they viewed truth as a correspondence, they were interested in the extent to which one needed to assume the existence of entities serving some role in accounting for the truth of sentences. Certain of these entities, such as the Sätze an sich of Bolzano, the Gedanken of Frege, or the propositions of Russell and Moore, were conceived as the bearers of the properties of truth and falsehood. Some thinkers however, such as Russell, Wittgenstein in the Tractatus, and Husserl in the Logische Untersuchungen, argued that instead of, or in addition to, truth-bearers, one must assume the existence of certain entities in virtue of which sentences and/or propositions are true. Various names were used for these entities, notably 'fact', 'Sachverhalt', and 'state of affairs'. (1) In order not to prejudge the suitability of these words we shall initially employ a more neutral terminology, calling any entities which are candidates for this role truth-makers.
KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations has recently emerged in its favour. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form ‘If I don’t know it, p’. Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. We survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either deny the possibility of harmony between knowledge and belief, or deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.
Willful ignorance is an important concept in criminal law and jurisprudence, though it has not received much discussion in philosophy. When it is mentioned, however, it is regularly assumed to be a kind of self-deception. In this article I will argue that self-deception and willful ignorance are distinct psychological kinds. First, some examples of willful ignorance are presented and discussed, and an analysis of the phenomenon is developed. Then it is shown that current theories of self-deception give no support to the idea that willful ignorance is a kind of self-deception. Afterwards an independent argument is adduced for excluding willful ignorance from this category. The crucial differences between the two phenomena are explored, as are the reasons why they are so easily conflated.
In an earlier paper, I argued for an account of the metaphysics of grace which was libertarian in nature but also non-Pelagian. My goal in the present paper is to broaden my focus on how the human and divine wills relate in graced activities. While there is widespread agreement in Christian theology that the two do interact in an important way, what’s less clear is how the wills of two agents can be united in one of them performing a particular action via a kind of joint or unitive willing. Insofar as the goal in these unitive willings is to have the human will and the divine will operating together in the human bringing about a particular action, I refer to this kind of volition as ‘cooperative agency’. I explore two different models – an identificationist model and an incarnational model – regarding how the human agent is aligned with God in cooperative agency. I then argue that there are significant reasons for preferring the incarnational model over the identificationist model.
Suppose you have recently gained a disposition for recognizing a high-level kind property, like the property of being a wren. Wrens might look different to you now. According to the Phenomenal Contrast Argument, such cases of perceptual learning show that the contents of perception can include high-level kind properties such as the property of being a wren. I detail an alternative explanation for the different look of the wren: a shift in one’s attentional pattern onto other low-level properties. Philosophers have alluded to this alternative before, but I provide a comprehensive account of the view, show how my account significantly differs from past claims, and offer a novel argument for the view. Finally, I show that my account puts us in a position to provide a new objection to the Phenomenal Contrast Argument.
You can perceive things, in many respects, as they really are. For example, you can correctly see a coin as circular from most angles. Nonetheless, your perception of the world is perspectival. The coin looks different when slanted than when head-on, and there is some respect in which the slanted coin looks similar to a head-on ellipse. Many hold that perception is perspectival because you perceive certain properties that correspond to the “looks” of things. I argue that this view is misguided. I consider the two standard versions of this view. What I call the PLURALIST APPROACH fails to give a unified account of the perspectival character of perception, while what I call the PERSPECTIVAL PROPERTIES APPROACH violates central commitments of contemporary psychology. I propose instead that perception is perspectival because of the way perceptual states are structured from their parts.
There is a long-running debate as to whether privacy is a matter of control or access. This has become more important following revelations made by Edward Snowden in 2013 regarding the collection of vast swathes of data from the Internet by signals intelligence agencies such as the NSA and GCHQ. The nature of this collection is such that if the control account is correct then there has been a significant invasion of people's privacy. If, though, the access account is correct then there has not been an invasion of privacy on the scale suggested by the control account. I argue that the control account of privacy is mistaken. However, the consequence of this is not that seizing control of personal information is unproblematic. I argue that the control account, while mistaken, seems plausible for two reasons. The first is that a loss of control over my information entails harm to the rights and interests that privacy protects. The second is that a loss of control over my information increases the risk that my information will be accessed and that my privacy will be violated. Seizing control of another's information is therefore harmful, even though it may not entail a violation of privacy. Indeed, seizing control of another's information may be more harmful than actually violating their privacy.
An early, very preliminary edition of this book was circulated in 1962 under the title Set-theoretical Structures in Science. There are many reasons for maintaining that such structures play a role in the philosophy of science. Perhaps the best is that they provide the right setting for investigating problems of representation and invariance in any systematic part of science, past or present. Examples are easy to cite. Sophisticated analysis of the nature of representation in perception is to be found already in Plato and Aristotle. One of the great intellectual triumphs of the nineteenth century was the mechanical explanation of such familiar concepts as temperature and pressure by their representation in terms of the motion of particles. A more disturbing change of viewpoint was the realization at the beginning of the twentieth century that the separate invariant properties of space and time must be replaced by the space-time invariants of Einstein's special relativity. Another example, the focus of the longest chapter in this book, is the controversy, extending over several centuries, on the proper representation of probability. The six major positions on this question are critically examined. Topics covered in other chapters include an unusually detailed treatment of theoretical and experimental work on visual space, the two senses of invariance represented by weak and strong reversibility of causal processes, and the representation of hidden variables in quantum mechanics. The final chapter concentrates on different kinds of representations of language, concluding with some empirical results on brain-wave representations of words and sentences.
Alfred Mele's deflationary account of self-deception has frequently been criticised for being unable to explain the 'tension' inherent in self-deception. These critics maintain that rival theories can better account for this tension, such as theories which suppose self-deceivers to have contradictory beliefs. However, there are two ways in which the tension idea has been understood. In this article, it is argued that on one such understanding, Mele's deflationism can account for this tension better than its rivals, but only if we reconceptualize the self-deceiver's attitude in terms of unwarranted degrees of conviction rather than unwarranted belief. This new way of viewing the self-deceiver's attitude will be informed by observations on experimental work done on the biasing influence of desire on belief, which suggests that self-deceivers don't manage to fully convince themselves of what they want to be true. On another way in which this tension has been understood, this account would not manage so well, since on this understanding the self-deceiver is best interpreted as knowing, but wishing to avoid, the truth. However, it is argued that we are under no obligation to account for this, since it is characteristic of a different phenomenon than self-deception, namely, escapism.
This is a review article on Franz Brentano's Descriptive Psychology, published in 1982. We provide a detailed exposition of Brentano's work on this topic, focusing on the unity of consciousness, the modes of connection and the types of part, including separable parts, distinctive parts, logical parts and what Brentano calls modificational quasi-parts. We also deal with Brentano's account of the objects of sensation and the experience of time.
Causal selection is the cognitive process through which one or more elements in a complex causal structure are singled out as actual causes of a certain effect. In this paper, we report on an experiment in which we investigated the role of moral and temporal factors in causal selection. Our results are as follows. First, when presented with a temporal chain in which two human agents perform the same action one after the other, subjects tend to judge the later agent to be the actual cause. Second, the impact of temporal location on causal selection is almost canceled out if the later agent did not violate a norm while the former did. We argue that this is due to the impact that judgments of norm violation have on causal selection—even if the violated norm has nothing to do with the obtaining effect. Third, moral judgments about the effect influence causal selection even in the case in which agents could not have foreseen the effect and did not intend to bring it about. We discuss our findings in connection to recent theories of the role of moral judgment in causal reasoning, on the one hand, and to probabilistic models of temporal location, on the other.
‘What is characteristic of every mental activity’, according to Brentano, is ‘the reference to something as an object. In this respect every mental activity seems to be something relational.’ But what sort of a relation, if any, is our cognitive access to the world? This question – which we shall call Brentano’s question – throws a new light on many of the traditional problems of epistemology. The paper defends a view of perceptual acts as real relations of a subject to an object. To make this view coherent, a theory of different types of relations is developed, resting on ideas on formal ontology put forward by Husserl in his Logical Investigations and on the theory of relations sketched in my "Acta cum fundamentis in re". The theory is applied to the notion of a Cambridge change, which proves to have an unforeseen relevance to our understanding of perception.
This response addresses the excellent responses to my book provided by Heather Douglas, Janet Kourany, and Matt Brown. First, I provide some comments and clarifications concerning a few of the highlights from their essays. Second, in response to the worries of my critics, I provide more detail than I was able to provide in my book regarding my three conditions for incorporating values in science. Third, I identify some of the most promising avenues for further research that flow out of this interchange.
Vogel, Sosa, and Huemer have all argued that sensitivity is incompatible with knowing that you do not believe falsely, and that the sensitivity condition must therefore be false. I show that this objection misses its mark because it fails to take account of the basis of belief. Moreover, if the objection is modified to account for the basis of belief, then it collapses into the more familiar objection that sensitivity is incompatible with closure.
Recently, philosophers have turned their attention to the question, not of when a given agent is blameworthy for what she does, but of when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”:

1. One’s blame would not be “hypocritical”.
2. One is not oneself “involved in” the target agent’s wrongdoing.
3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing.
4. The target’s wrongdoing must be “one’s business”.

These conditions are often proposed both as conditions on one and the same thing and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than are conditions (1) and (2). Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: after we clarify the nature of the non-hypocrisy condition, we will have a unified account of the moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our "moral fragility", the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the "equality of persons".
Explanationism is a plausible view of epistemic justification according to which justification is a matter of explanatory considerations. Despite its plausibility, explanationism is not without its critics. In a recent issue of this journal T. Ryan Byerly and Kraig Martin have charged that explanationism fails to provide necessary or sufficient conditions for epistemic justification. In this article I examine Byerly and Martin’s arguments and explain where they go wrong.
This paper develops an account of the distinctive epistemic authority of avowals of propositional attitude, focusing on the case of belief. It is argued that such avowals are expressive of the very mental states they self-ascribe. This confers upon them a limited self-warranting status, and renders them immune to an important class of errors to which paradigm empirical (e.g., perceptual) judgments are liable.
Explanationists about epistemic justification hold that justification depends upon explanatory considerations. After a bit of a lull, there has recently been a resurgence of defenses of such views. Despite the plausibility of these defenses, explanationism still faces challenges. Recently, T. Ryan Byerly and Kraig Martin have argued that explanationist views fail to provide either necessary or sufficient conditions for epistemic justification. I argue that Byerly and Martin are mistaken on both counts.
We argue that philosophers ought to distinguish epistemic decision theory and epistemology, in just the way ordinary decision theory is distinguished from ethics. Once one does this, the internalist arguments that motivate much of epistemic decision theory make sense, given specific interpretations of the formalism. Making this distinction also causes trouble for the principle called Propriety, which says, roughly, that the only acceptable epistemic utility functions make probabilistically coherent credence functions immodest. We cast doubt on this requirement, but then argue that epistemic decision theorists should never have wanted such a strong principle in any case.
Schwenkler (2012) criticizes a 2011 experiment by R. Held and colleagues purporting to answer Molyneux’s question. Schwenkler proposes two ways to re-run the original experiment: either by allowing subjects to move around the stimuli, or by simplifying the stimuli to planar objects rather than three-dimensional ones. In Schwenkler (2013), he expands on and defends the former. I argue that this way of re-running the experiment is flawed, since it relies on a questionable assumption that newly sighted subjects will be able to appreciate depth cues. I then argue that the second way of re-running the experiment succeeds both in avoiding the flaw of the original Held experiment and in avoiding the problem with the first way of re-running the experiment.
Sosa, Pritchard, and Vogel have all argued that there are cases in which one knows something inductively but does not believe it sensitively, and that sensitivity therefore cannot be necessary for knowledge. I defend sensitivity by showing that inductive knowledge is sensitive.
Despite recent growth in surveillance capabilities there has been little discussion regarding the ethics of surveillance. Much of the research that has been carried out has tended to lack a coherent structure or has failed to address key concerns. I argue that the just war tradition should be used as an ethical framework applicable to surveillance, providing the questions which should be asked of any surveillance operation. In this manner, when considering whether to employ surveillance, one should take into account the reason for the surveillance, the authority of the surveillant, whether or not there has been a declaration of intent, whether surveillance is an act of last resort, the likelihood of success of the operation, and whether surveillance is a proportionate response. Once underway, the methods of surveillance should be proportionate to the occasion and should seek to target appropriate people while limiting surveillance of those deemed inappropriate. By drawing on the just war tradition, ethical questions regarding surveillance can draw on a long and considered discourse while gaining a framework which, I argue, raises all the key concerns and misses none.