The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different from what many have thought, and in fact supports a metaphysical reduction of belief to credence.
A realist theory of truth for a class of sentences holds that there are entities in virtue of which these sentences are true or false. We call such entities ‘truthmakers’ and contend that those for a wide range of sentences about the real world are moments (dependent particulars). Since moments are unfamiliar, we provide a definition and a brief philosophical history, anchoring them in our ontology by showing that they are objects of perception. The core of our theory is the account of truthmaking for atomic sentences, in which we expose a pervasive ‘dogma of logical form’, which says that atomic sentences cannot have more than one truthmaker. In contrast to this, we uphold the mutual independence of logical and ontological complexity, and we outline formal principles of truthmaking that take account of both kinds of complexity.
Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you’re rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. Surprisingly, it is very difficult to vindicate both (1) and (2). But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.
Replacing Truth. Kevin Scharp - 2007 - Inquiry: An Interdisciplinary Journal of Philosophy 50 (6): 606–621.
Of the dozens of purported solutions to the liar paradox published in the past fifty years, the vast majority are "traditional" in the sense that they reject one of the premises or inference rules that are used to derive the paradoxical conclusion. Over the years, however, several philosophers have developed an alternative to the traditional approaches; according to them, our very competence with the concept of truth leads us to accept that the reasoning used to derive the paradox is sound. That is, our conceptual competence leads us into inconsistency. I call this alternative the inconsistency approach to the liar. Although this approach has many positive features, I argue that several of the well-developed versions of it that have appeared recently are unacceptable. In particular, they do not recognize that if truth is an inconsistent concept, then we should replace it with new concepts that do the work of truth without giving rise to paradoxes. I outline an inconsistency approach to the liar paradox that satisfies this condition.
During the realist revival in the early years of this century, philosophers of various persuasions were concerned to investigate the ontology of truth. That is, whether or not they viewed truth as a correspondence, they were interested in the extent to which one needed to assume the existence of entities serving some role in accounting for the truth of sentences. Certain of these entities, such as the Sätze an sich of Bolzano, the Gedanken of Frege, or the propositions of Russell and Moore, were conceived as the bearers of the properties of truth and falsehood. Some thinkers, however, such as Russell, Wittgenstein in the Tractatus, and Husserl in the Logische Untersuchungen, argued that instead of, or in addition to, truth-bearers, one must assume the existence of certain entities in virtue of which sentences and/or propositions are true. Various names were used for these entities, notably 'fact', 'Sachverhalt', and 'state of affairs'. In order not to prejudge the suitability of these words we shall initially employ a more neutral terminology, calling any entities which are candidates for this role truth-makers.
The Twin Earth thought experiment invites us to consider a liquid that has all of the superficial properties associated with water (clear, potable, etc.) but has entirely different deeper causal properties (composed of “XYZ” rather than of H2O). Although this thought experiment was originally introduced to illuminate questions in the theory of reference, it has also played a crucial role in empirically informed debates within the philosophy of psychology about people’s ordinary natural kind concepts. Those debates have sought to accommodate an apparent fact about ordinary people’s judgments: Intuitively, the Twin Earth liquid is not water. We present results from four experiments showing that people do not, in fact, have this intuition. Instead, people tend to have the intuition that there is a sense in which the liquid is not water but also a sense in which it is water. We explore the implications of this finding for debates about theories of natural kind concepts, arguing that it supports views positing two distinct criteria for membership in natural kind categories – one based on deeper causal properties, the other based on superficial, observable properties.
Phineas Gage’s story is typically offered as a paradigm example supporting the view that part of what matters for personal identity is a certain magnitude of similarity between earlier and later individuals. Yet, reconsidering a slight variant of Phineas Gage’s story indicates that it is not just magnitude of similarity, but also the direction of change that affects personal identity judgments; in some cases, changes for the worse are seen as more identity-severing than changes for the better of comparable magnitude. Ironically, thinking carefully about Phineas Gage’s story tells against the thesis it is typically taken to support.
The personal identity relation is of great interest to philosophers, who often consider fictional scenarios to test what features seem to make persons persist through time. But often real examples of neuroscientific interest also provide important tests of personal identity. One such example is the case of Phineas Gage – or at least the story often told about Phineas Gage. Many cite Gage’s story as an example of severed personal identity; Phineas underwent such a tremendous change that Gage “survived as a different man.” I discuss a recent empirical finding about judgments about this case. It is not just the magnitude of the change that affects identity judgments; it is also the negative direction of the change. I present an experiment suggesting that direction of change also affects neuroethical judgments. I conclude that we should consider carefully the way in which improvements and deteriorations affect attributions of personal identity. This is particularly important since a number of the most crucial neuroethical decisions involve varieties of cognitive enhancements or deteriorations.
KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations has recently emerged in its favour. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form ‘If I don’t know it, p’. Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. We survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either deny the possibility of harmony between knowledge and belief, or deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.
Willful ignorance is an important concept in criminal law and jurisprudence, though it has not received much discussion in philosophy. When it is mentioned, however, it is regularly assumed to be a kind of self-deception. In this article I will argue that self-deception and willful ignorance are distinct psychological kinds. First, some examples of willful ignorance are presented and discussed, and an analysis of the phenomenon is developed. Then it is shown that current theories of self-deception give no support to the idea that willful ignorance is a kind of self-deception. Afterwards an independent argument is adduced for excluding willful ignorance from this category. The crucial differences between the two phenomena are explored, as are the reasons why they are so easily conflated.
Suppose you have recently gained a disposition for recognizing a high-level kind property, like the property of being a wren. Wrens might look different to you now. According to the Phenomenal Contrast Argument, such cases of perceptual learning show that the contents of perception can include high-level kind properties such as the property of being a wren. I detail an alternative explanation for the different look of the wren: a shift in one’s attentional pattern onto other low-level properties. Philosophers have alluded to this alternative before, but I provide a comprehensive account of the view, show how my account significantly differs from past claims, and offer a novel argument for the view. Finally, I show that my account puts us in a position to provide a new objection to the Phenomenal Contrast Argument.
You can perceive things, in many respects, as they really are. For example, you can correctly see a coin as circular from most angles. Nonetheless, your perception of the world is perspectival. The coin looks different when slanted than when head-on, and there is some respect in which the slanted coin looks similar to a head-on ellipse. Many hold that perception is perspectival because you perceive certain properties that correspond to the “looks” of things. I argue that this view is misguided. I consider the two standard versions of this view. What I call the PLURALIST APPROACH fails to give a unified account of the perspectival character of perception, while what I call the PERSPECTIVAL PROPERTIES APPROACH violates central commitments of contemporary psychology. I propose instead that perception is perspectival because of the way perceptual states are structured from their parts.
This is a review article on Franz Brentano’s Descriptive Psychology, published in 1982. We provide a detailed exposition of Brentano’s work on this topic, focusing on the unity of consciousness, the modes of connection and the types of part, including separable parts, distinctive parts, logical parts and what Brentano calls modificational quasi-parts. We also deal with Brentano’s account of the objects of sensation and the experience of time.
Causal selection is the cognitive process through which one or more elements in a complex causal structure are singled out as actual causes of a certain effect. In this paper, we report on an experiment in which we investigated the role of moral and temporal factors in causal selection. Our results are as follows. First, when presented with a temporal chain in which two human agents perform the same action one after the other, subjects tend to judge the later agent to be the actual cause. Second, the impact of temporal location on causal selection is almost canceled out if the later agent did not violate a norm while the former did. We argue that this is due to the impact that judgments of norm violation have on causal selection—even if the violated norm has nothing to do with the obtaining effect. Third, moral judgments about the effect influence causal selection even in the case in which agents could not have foreseen the effect and did not intend to bring it about. We discuss our findings in connection with recent theories of the role of moral judgment in causal reasoning, on the one hand, and with probabilistic models of temporal location, on the other.
‘What is characteristic of every mental activity’, according to Brentano, is ‘the reference to something as an object. In this respect every mental activity seems to be something relational.’ But what sort of a relation, if any, is our cognitive access to the world? This question – which we shall call Brentano’s question – throws a new light on many of the traditional problems of epistemology. The paper defends a view of perceptual acts as real relations of a subject to an object. To make this view coherent, a theory of different types of relations is developed, resting on ideas on formal ontology put forward by Husserl in his Logical Investigations and on the theory of relations sketched in my "Acta cum fundamentis in re". The theory is applied to the notion of a Cambridge change, which proves to have an unforeseen relevance to our understanding of perception.
There is a long-running debate as to whether privacy is a matter of control or access. This has become more important following revelations made by Edward Snowden in 2013 regarding the collection of vast swathes of data from the Internet by signals intelligence agencies such as NSA and GCHQ. The nature of this collection is such that if the control account is correct then there has been a significant invasion of people's privacy. If, though, the access account is correct then there has not been an invasion of privacy on the scale suggested by the control account. I argue that the control account of privacy is mistaken. However, the consequences of this are not that seizing control of personal information is unproblematic. I argue that the control account, while mistaken, seems plausible for two reasons. The first is that a loss of control over my information entails harm to the rights and interests that privacy protects. The second is that a loss of control over my information increases the risk that my information will be accessed and that my privacy will be violated. Seizing control of another's information is therefore harmful, even though it may not entail a violation of privacy. Indeed, seizing control of another's information may be more harmful than actually violating their privacy.
You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is not obvious. The Enkratic Intuition—that your first-order opinions must “line up” with your higher-order opinions—is incorrect; epistemic akrasia can be rational. If all this is right, then it leaves us without answers—but with a clear picture of the question, and a fruitful strategy for pursuing it.
Vogel, Sosa, and Huemer have all argued that sensitivity is incompatible with knowing that you do not believe falsely, and that the sensitivity condition must therefore be false. I show that this objection misses its mark because it fails to take account of the basis of belief. Moreover, if the objection is modified to account for the basis of belief then it collapses into the more familiar objection that sensitivity is incompatible with closure.
One popular conception of natural theology holds that certain purely rational arguments are insulated from empirical inquiry and independently establish conclusions that provide evidence, justification, or proof of God’s existence. Yet, some raise suspicions that philosophers and theologians’ personal religious beliefs inappropriately affect these kinds of arguments. I present an experimental test of whether philosophers and theologians’ argument analysis is influenced by religious commitments. The empirical findings suggest religious belief affects philosophical analysis and offer a challenge to theists and atheists alike: reevaluate the scope of natural theology’s conclusions or acknowledge and begin to address the influence of religious belief.
Sosa, Pritchard, and Vogel have all argued that there are cases in which one knows something inductively but does not believe it sensitively, and that sensitivity therefore cannot be necessary for knowledge. I defend sensitivity by showing that inductive knowledge is sensitive.
Alfred Mele's deflationary account of self-deception has frequently been criticised for being unable to explain the 'tension' inherent in self-deception. These critics maintain that rival theories can better account for this tension, such as theories which suppose self-deceivers to have contradictory beliefs. However, there are two ways in which the tension idea has been understood. In this article, it is argued that on one such understanding, Mele's deflationism can account for this tension better than its rivals, but only if we reconceptualize the self-deceiver's attitude in terms of unwarranted degrees of conviction rather than unwarranted belief. This new way of viewing the self-deceiver's attitude will be informed by observations on experimental work done on the biasing influence of desire on belief, which suggests that self-deceivers don't manage to fully convince themselves of what they want to be true. On another way in which this tension has been understood, this account would not manage so well, since on this understanding the self-deceiver is best interpreted as knowing, but wishing to avoid, the truth. However, it is argued that we are under no obligation to account for this since it is a characteristic of a different phenomenon than self-deception, namely, escapism.
This response addresses the excellent responses to my book provided by Heather Douglas, Janet Kourany, and Matt Brown. First, I provide some comments and clarifications concerning a few of the highlights from their essays. Second, in response to the worries of my critics, I provide more detail than I was able to provide in my book regarding my three conditions for incorporating values in science. Third, I identify some of the most promising avenues for further research that flow out of this interchange.
Many current popular views in epistemology require a belief to be the result of a reliable process (aka ‘method of belief formation’ or ‘cognitive capacity’) in order to count as knowledge. This means that the generality problem rears its head, i.e. the kind of process in question has to be spelt out, and this looks difficult to do without being either over- or under-general. In response to this problem, I propose that we should adopt a more fine-grained account of the epistemic basing relation, at which point the generality problem becomes easy to solve.
Assertions are the centre of gravity in social epistemology. They are the vehicles we use to exchange information within scientific groups and society as a whole. It is therefore essential to determine under which conditions we are permitted to make an assertion. In this paper we argue and provide empirical evidence for the view that the norm of assertion is justified belief: truth or even knowledge are not required. Our results challenge the knowledge account advocated by, e.g., Williamson (1996) in general and, more specifically, call into question several studies conducted by Turri (2013, 2016) that support a knowledge norm of assertion. Instead, the justified belief account championed by, e.g., Douven (2006), seems to prevail.
In a recent paper, Melchior pursues a novel argumentative strategy against the sensitivity condition. His claim is that sensitivity suffers from a ‘heterogeneity problem:’ although some higher-order beliefs are knowable, other, very similar, higher-order beliefs are insensitive and so not knowable. Similarly, the conclusions of some bootstrapping arguments are insensitive, but others are not. In reply, I show that sensitivity does not treat different higher-order beliefs differently in the way that Melchior states and that while genuine bootstrapping arguments have insensitive conclusions, the cases that Melchior describes as sensitive ‘bootstrapping’ arguments don’t deserve the name, since they are a perfectly good way of getting to know their conclusions. In sum, sensitivity doesn’t have a heterogeneity problem.
Explanationism is a plausible view of epistemic justification according to which justification is a matter of explanatory considerations. Despite its plausibility, explanationism is not without its critics. In a recent issue of this journal T. Ryan Byerly and Kraig Martin have charged that explanationism fails to provide necessary or sufficient conditions for epistemic justification. In this article I examine Byerly and Martin’s arguments and explain where they go wrong.
This paper develops an account of the distinctive epistemic authority of avowals of propositional attitude, focusing on the case of belief. It is argued that such avowals are expressive of the very mental states they self-ascribe. This confers upon them a limited self-warranting status, and renders them immune to an important class of errors to which paradigm empirical (e.g., perceptual) judgments are liable.
Most plausible moral theories must address problems of partial acceptance or partial compliance. The aim of this paper is to examine some proposed ways of dealing with partial acceptance problems as well as to introduce a new Rule Utilitarian suggestion. Here I survey three forms of Rule Utilitarianism, each of which represents a distinct approach to solving partial acceptance issues. I examine Fixed Rate, Variable Rate, and Optimum Rate Rule Utilitarianism, and argue that a new approach, Maximizing Expectation Rate Rule Utilitarianism, better solves partial acceptance problems.
Explanationists about epistemic justification hold that justification depends upon explanatory considerations. After a bit of a lull, there has recently been a resurgence of defenses of such views. Despite the plausibility of these defenses, explanationism still faces challenges. Recently, T. Ryan Byerly and Kraig Martin have argued that explanationist views fail to provide either necessary or sufficient conditions for epistemic justification. I argue that Byerly and Martin are mistaken on both counts.
We argue that philosophers ought to distinguish epistemic decision theory and epistemology, in just the way ordinary decision theory is distinguished from ethics. Once one does this, the internalist arguments that motivate much of epistemic decision theory make sense, given specific interpretations of the formalism. Making this distinction also causes trouble for the principle called Propriety, which says, roughly, that the only acceptable epistemic utility functions make probabilistically coherent credence functions immodest. We cast doubt on this requirement, but then argue that epistemic decision theorists should never have wanted such a strong principle in any case.
Schwenkler (2012) criticizes a 2011 experiment by R. Held and colleagues purporting to answer Molyneux’s question. Schwenkler proposes two ways to re-run the original experiment: either by allowing subjects to move around the stimuli, or by simplifying the stimuli to planar objects rather than three-dimensional ones. In Schwenkler (2013) he expands on and defends the former. I argue that this way of re-running the experiment is flawed, since it relies on a questionable assumption that newly sighted subjects will be able to appreciate depth cues. I then argue that the second way of re-running the experiment is successful both in avoiding the flaw of the original Held experiment, and in avoiding the problem with the first way of re-running the experiment.
Despite recent growth in surveillance capabilities there has been little discussion regarding the ethics of surveillance. Much of the research that has been carried out has tended to lack a coherent structure or fails to address key concerns. I argue that the just war tradition should be used as an ethical framework which is applicable to surveillance, providing the questions which should be asked of any surveillance operation. In this manner, when considering whether to employ surveillance, one should take into account the reason for the surveillance, the authority of the surveillant, whether or not there has been a declaration of intent, whether surveillance is an act of last resort, the likelihood of success of the operation, and whether surveillance is a proportionate response. Once underway, the methods of surveillance should be proportionate to the occasion and seek to target appropriate people while limiting surveillance of those deemed inappropriate. By drawing on the just war tradition, ethical questions regarding surveillance can draw on a long and considered discourse while gaining a framework which, I argue, raises all the key concerns and misses none.
It is often claimed that surveillance should be proportionate, but it is rarely made clear exactly what proportionate surveillance would look like beyond an intuitive sense of an act being excessive. I argue that surveillance should indeed be proportionate and draw on Thomas Hurka’s work on proportionality in war to inform the debate on surveillance. After distinguishing between the proportionality of surveillance per se, and surveillance as a particular act, I deal with objections to using proportionality as a legitimate ethical measure. From there I argue that only certain benefits and harms should be counted in any determination of proportionality. Finally I look at how context can affect the proportionality of a particular method of surveillance. In conclusion, I hold that proportionality is not only a morally relevant criterion by which to assess surveillance, but that it is a necessary criterion. Furthermore, while granting that it is difficult to assess, that difficulty should not prevent our trying to do so.
A prevalent assumption among philosophers who believe that people can intentionally deceive themselves (intentionalists) is that they accomplish this by controlling what evidence they attend to. This article is concerned primarily with the evaluation of this claim, which we may call ‘attentionalism’. According to attentionalism, when one justifiably believes/suspects that not-p but wishes to make oneself believe that p, one may do this by shifting attention away from the considerations supportive of the belief that not-p and onto considerations supportive of the belief that p. The details of this theory are elaborated, its theoretical importance is pointed out, and it is argued that the strategy is supposed to work by leading to the repression of one’s knowledge of the unwelcome considerations. However, I then show that the assumption that this is possible is opposed by the balance of a relevant body of empirical research, namely, the thought-suppression literature, and so intentionalism about self-deception cannot find vindication in the attentional theory.
In the case of ventriloquism, seeing the movement of the ventriloquist dummy’s mouth changes your experience of the auditory location of the vocals. Some have argued that cases like ventriloquism provide evidence for the view that at least some of the content of perception is fundamentally multimodal. In the ventriloquism case, this would mean your experience has constitutively audio-visual content (not just a conjunction of an audio content and visual content). In this paper, I argue that cases like ventriloquism do not in fact warrant that conclusion. I then try to make sense of crossmodal cases without appealing to fundamentally multimodal content.
Ernst Mach's atomistic theory of sensation faces problems in doing justice to our ability to perceive and remember complex phenomena such as melodies and shapes. Christian von Ehrenfels attempted to solve these problems with his theory of "Gestalt qualities", which he sees as entities depending one-sidedly on the corresponding simple objects of sensation. We explore the theory of dependence relations advanced by Ehrenfels and show how it relates to the views on the objects of perception advanced by Husserl and by the Gestalt psychologists.
Stubborn belief, like self-deception, is a species of motivated irrationality. The nature of stubborn belief, however, has not been investigated by philosophers, and it is something that poses a challenge to some prominent accounts of self-deception. In this paper, I argue that the case of stubborn belief constitutes a counterexample to Alfred Mele’s proposed set of sufficient conditions for self-deception, and I attempt to distinguish between the two. The recognition of this phenomenon should force an amendment in this account, and should also make a Mele-style deflationist think more carefully about the kinds of motivational factors operating in self-deception.
James Woodward’s Making Things Happen presents the most fully developed version of a manipulability theory of causation. Although the ‘interventionist’ account of causation that Woodward defends in Making Things Happen has many admirable qualities, Michael Strevens argues that it has a fatal flaw. Strevens maintains that Woodward’s interventionist account of causation renders facts about causation relative to an individual’s perspective. In response to this charge, Woodward claims that although on his account X might be a relativized cause of Y relative to some perspective, this does not lead to the problematic relativity that Strevens claims. Roughly, Woodward argues this is so because if X is a relativized cause of Y with respect to some perspective, then X is a cause of Y simpliciter. So, the truth of whether X is a cause of Y is not relative to one’s perspective. Strevens counters by arguing that Woodward’s response fails because relativized causation is not monotonic. In this paper I argue that Strevens’ argument that relativized causation is not monotonic is unsound.
A recent debate in Kant scholarship concerns the role of concepts in Kant's theory of perception. Roughly, proponents of a conceptualist interpretation argue that for Kant, the possession of concepts is a prior condition for perception, while nonconceptualist interpreters deny this. The debate has two parts. One part concerns whether possessing empirical concepts is a prior condition for having empirical intuitions. A second part concerns whether Kant allows empirical intuitions without a priori concepts. Outside of Kant interpretation, the contemporary debate about conceptualism concerns whether perception requires empirical concepts. But, as I argue, the debate about whether Kant allows intuitions without empirical concepts does not show whether Kant is a conceptualist. Even if Kant allows intuitions without empirical concepts, it could still be that a priori concepts are required. While the debate could show that Kant is a conceptualist, I argue it does not. Finally, I sketch a novel way that the conceptualist interpreter might win the debate—roughly, by arguing that possessing a priori concepts is a prior condition for having appearances.
A classic debate concerns whether reasonableness should be understood statistically (e.g., reasonableness is what is common) or prescriptively (e.g., reasonableness is what is good). This Article elaborates and defends a third possibility. Reasonableness is a partly statistical and partly prescriptive “hybrid,” reflecting both statistical and prescriptive considerations. Experiments reveal that people apply reasonableness as a hybrid concept, and the Article argues that a hybrid account offers the best general theory of reasonableness.

First, the Article investigates how ordinary people judge what is reasonable. Reasonableness sits at the core of countless legal standards, yet little work has investigated how ordinary people (i.e., potential jurors) actually make reasonableness judgments. Experiments reveal that judgments of reasonableness are systematically intermediate between judgments of the relevant average and ideal across numerous legal domains. For example, participants’ mean judgment of the legally reasonable number of weeks’ delay before a criminal trial (ten weeks) falls between the judged average (seventeen weeks) and ideal (seven weeks). So too for the reasonable number of days to accept a contract offer, the reasonable rate of attorneys’ fees, the reasonable loan interest rate, and the reasonable annual number of loud events on a football field in a residential neighborhood. Judgments of reasonableness are better predicted by statistical and prescriptive factors together than by either factor alone.

This Article uses this experimental discovery to develop a normative view of reasonableness. It elaborates an account of reasonableness as a hybrid standard, arguing that this view offers the best general theory of reasonableness, one that applies correctly across multiple legal domains.
Moreover, this hybrid feature is the historical essence of legal reasonableness: the original use of the “reasonable person” and the “man on the Clapham omnibus” aimed to reflect both statistical and prescriptive considerations. Empirically, reasonableness is a hybrid judgment. And normatively, reasonableness should be applied as a hybrid standard.
In this paper I critique the ethical implications of automating CCTV surveillance. I consider three modes of CCTV with respect to automation: manual, fully automated, and partially automated. In each of these I examine concerns posed by processing capacity, prejudice towards and profiling of surveilled subjects, and false positives and false negatives. While it might seem as if fully automated surveillance is an improvement over the manual alternative in these areas, I demonstrate that this is not necessarily the case. In preference to the extremes I argue in favour of partial automation in which the system integrates a human CCTV operator with some level of automation. To assess the degree to which such a system should be automated I draw on the further issues of privacy and distance. Here I argue that the privacy of the surveilled subject can benefit from automation, while the distance between the surveilled subject and the CCTV operator introduced by automation can have both positive and negative effects. I conclude that in at least the majority of cases more automation is preferable to less within a partially automated system where this does not impinge on efficacy.
Suppose that you are at a live jazz show. The drummer begins a solo. You see the cymbal jolt and you hear the clang. But in addition to seeing the cymbal jolt and hearing the clang, you are also aware that the jolt and the clang are part of the same event. Casey O’Callaghan (forthcoming) calls this awareness “intermodal feature binding awareness.” Psychologists have long assumed that multimodal perceptions such as this one are the result of a subpersonal feature binding mechanism (see Vatakis and Spence, 2007, Kubovy and Schutz, 2010, Pourtois et al., 2000, and Navarra et al., 2012). I present new evidence against this. I argue that there is no automatic feature binding mechanism that couples features like the jolt and the clang together. Instead, when you experience the jolt and the clang as part of the same event, this is the result of an associative learning process. The cymbal’s jolt and the clang are best understood as a single learned perceptual unit, rather than as automatically bound. I outline the specific learning process in perception called “unitization,” whereby we come to “chunk” the world into multimodal units. Unitization has never before been applied to multimodal cases. Yet I argue that this learning process can do the same work that intermodal binding would do, and that this issue has important philosophical implications. Specifically, whether we take multimodal cases to involve a binding mechanism or an associative process will have an impact on philosophical issues from Molyneux’s question to the question of how active or passive we consider perception to be.
We provide a detailed exposition of Brentano’s descriptive psychology, focusing on the unity of consciousness, the modes of connection and the types of part, including separable parts, distinctive parts, logical parts and what Brentano calls modificational quasi-parts. We also deal with Brentano’s account of the objects of sensation and the experience of time.
The self-deception debate often appears polarized between those who think that self-deceivers intentionally deceive themselves (‘intentionalists’), and those who think that intentional actions are not significantly involved in the production of self-deceptive beliefs at all. In this paper I develop a middle position between these views, according to which self-deceivers do end up self-deceived as a result of their own intentional actions, but where the intention these actions are done with is not an intention to deceive oneself. This account thus keeps agency at the heart of self-deception, while also avoiding the paradox associated with other agency-centered views.
Our ability to tell stories about ourselves has captivated many theorists, and some have taken these developments for an opportunity to answer long-standing questions about the nature of personhood. In this essay I employ two skeptical arguments to show that this move was a mistake. The first argument rests on the observation that storytelling is revisionary. The second implies that our stories about ourselves are biased in regard to our existing self-image. These arguments undercut narrative theories of identity, but they leave room for a theory of narrative self-knowledge. The theory accommodates the first skeptical argument because there are event descriptions with retrospective assertibility conditions, and it accommodates the second argument by denying us epistemic privilege in regard to our own past. The result is that we do know our past through storytelling, but that it is a contingent feature of some of our stories that they are about ourselves.
Most advocates of the so-called “neologicist” movement in the philosophy of mathematics identify themselves as “Neo-Fregeans” (e.g., Hale and Wright): presenting an updated and revised version of Frege’s form of logicism. Russell’s form of logicism is scarcely discussed in this literature, and when it is, it is often dismissed as not really logicism at all (in light of its assumption of axioms of infinity, reducibility, and so on). In this paper I have three aims: firstly, to identify more clearly the primary metaontological and methodological differences between Russell’s logicism and the more recent forms; secondly, to argue that Russell’s form of logicism offers more elegant and satisfactory solutions to a variety of problems that continue to plague the neo-logicist movement (the bad company objection, the embarrassment of riches objection, worries about a bloated ontology, etc.); thirdly, to argue that Neo-Russellian forms of neologicism remain viable positions for current philosophers of mathematics.
The notion of basic action has recently come under attack based on the idea that any putative basic action can always be divided into more basic sub-actions. In this paper it is argued that this criticism ignores a key aspect of the idea of basic action, namely, the ‘anything else’ part of the idea that basic actions are not done by doing anything else. This aspect is clarified, and it is argued that doing the sub-actions of which a putative basic action consists does not amount to doing something different from doing that putative basic action.
The paper seeks to develop an account of indexical phenomena based on the highly general theory of structure and dependence set forth by Husserl in his Logical Investigations. Husserl here defends an Aristotelian theory of meaning, viewing meanings as species or universals having as their instances certain sorts of concrete meaning acts. Indexical phenomena are seen to involve the combination of such acts of meaning with acts of perception, a thesis here developed in some detail and contrasted with accounts of indexicals suggested by Frege, Wittgenstein and by the later Husserl himself in his Ideas I. Implications are drawn also for our understanding of the categorial grammar sketched by Husserl in his 4th Logical Investigation, as also for our understanding of the nature of proper names and other candidate indexical expressions.