Psychopathy refers to a range of complex behaviors and personality traits, including callousness and antisocial behavior, typically studied in criminal populations. Recent studies have used self-reports to examine psychopathic traits among noncriminal samples. The goal of the current study was to examine the underlying factor structure of the Self-Report of Psychopathy Scale–Short Form (SRP-SF) across complementary samples and examine the impact of gender on factor structure. We examined the structure of the SRP-SF among 2,554 young adults from three undergraduate samples and a high-risk young adult sample. Using confirmatory factor analysis, a four-correlated factor model and a four-bifactor model showed good fit to the data. Evidence of weak invariance was found for both models across gender. These findings highlight that the SRP-SF is a useful measure of low-level psychopathic traits in noncriminal samples, although the underlying factor structure may not fully translate across men and women.
This essay experiments with Kant’s writings on rational religion distilled through the Strange Case of Dr Jekyll and Mr Hyde as canonical confrontations with primal problems of evil. It suggests boundaries between Stevenson’s characters and their occupations comparable to those conflicted in the Kantian university, namely, law, medicine, theology, and philosophy (which makes a short anticipatory appearance in his earlier text on rational religion). With various faculties it investigates diffuse comprehensions—respectively, legal crime, biogenetic transmission, and original sin—of key ethical modes: will, inheritance, incorporation, freedom, duty, obligation, love, living, and killing, to conclude on the possible logic of evil (or evils of logic) collateral and possibly innate to Kant’s comprehension of radical evil.
I argue that there are non-trivial objective chances (that is, objective chances other than 0 and 1) even in deterministic worlds. The argument is straightforward. I observe that there are probabilistic special scientific laws even in deterministic worlds. These laws project non-trivial probabilities for the events that they concern. And these probabilities play the chance role and so should be regarded as chances as opposed, for example, to epistemic probabilities or credences. The supposition of non-trivial deterministic chances might seem to land us in contradiction. The fundamental laws of deterministic worlds project trivial probabilities for the very same events that are assigned non-trivial probabilities by the special scientific laws. I argue that any appearance of tension is dissolved by recognition of the level-relativity of chances. There is therefore no obstacle to accepting non-trivial chance-role-playing deterministic probabilities as genuine chances.
The unity of consciousness has so far been studied only as a relation holding among the many experiences of a single subject. I investigate whether this relation could hold between the experiences of distinct subjects, considering three major arguments against the possibility of such ‘between-subjects unity’. The first argument, based on the popular idea that unity implies subsumption by a composite experience, can be deflected by allowing for limited forms of ‘experience-sharing’, in which the same token experience belongs to more than one subject. The second argument, based on the phenomenological claim that unified experiences have interdependent phenomenal characters, I show to rest on an equivocation. Finally, the third argument accuses between-subjects unity of being unimaginable, or more broadly a formal possibility corresponding to nothing we can make sense of. I argue that the familiar experience of perceptual co-presentation gives us an adequate phenomenological grasp on what between-subjects unity might be like.
I discuss the apparent discrepancy between the qualitative diversity of consciousness and the relative qualitative homogeneity of the brain's basic constituents, a discrepancy that has been raised as a problem for identity theorists by Maxwell and Lockwood (as one element of the ‘grain problem’), and more recently as a problem for panpsychists (under the heading of ‘the palette problem’). The challenge posed to panpsychists by this discrepancy is to make sense of how a relatively small ‘palette’ of basic qualities could give rise to the bewildering diversity of qualities we, and presumably other creatures, experience. I argue that panpsychists can meet this challenge, though it requires taking contentious stands on certain phenomenological questions, in particular on whether any familiar qualities are actual examples of ‘phenomenal blending’, and whether any other familiar qualities have a positive ‘phenomenologically simple character’. Moreover, it requires accepting an eventual theory most elements of which are in a certain explicable sense unimaginable, though not for that reason inconceivable. Nevertheless, I conclude that there are no conclusive reasons to reject such a theory, and so philosophers whose prior commitments motivate them to adopt it can do so without major theoretical cost.
John Broome has argued that value incommensurability is vagueness, by appeal to a controversial ‘collapsing principle’ about comparative indeterminacy. I offer a new counterexample to the collapsing principle. That principle allows us to derive an outright contradiction from the claim that some object is a borderline case of some predicate. But if there are no borderline cases, then the principle is empty. The collapsing principle is either false or empty.
The starting point in the development of probabilistic analyses of token causation has usually been the naïve intuition that, in some relevant sense, a cause raises the probability of its effect. But there are well-known examples both of non-probability-raising causation and of probability-raising non-causation. Sophisticated extant probabilistic analyses treat many such cases correctly, but only at the cost of excluding the possibilities of direct non-probability-raising causation, failures of causal transitivity, action-at-a-distance, prevention, and causation by absence and omission. I show that an examination of the structure of these problem cases suggests a different treatment, one which avoids the costs of extant probabilistic analyses.
This thesis explores the possibility of composite consciousness: phenomenally conscious states belonging to a composite being in virtue of the consciousness of, and relations among, its parts. We have no trouble accepting that a composite being has physical properties entirely in virtue of the physical properties of, and relations among, its parts. But a longstanding intuition holds that consciousness is different: my consciousness cannot be understood as a complex of interacting component consciousnesses belonging to parts of me. I ask why: what is it about consciousness that makes us think it so different from matter? And should we accept this apparent difference?
Philosophical exploration of individualism and externalism in the cognitive sciences has most recently focused on general evaluations of these two views (Adams & Aizawa 2008, Rupert 2008, Wilson 2004, Clark 2008). Here we return to broaden an earlier phase of the debate between individualists and externalists about cognition, one that considered in detail particular theories, such as those in developmental psychology (Patterson 1991) and the computational theory of vision (Burge 1986, Segal 1989). Music cognition is an area in the cognitive sciences that has received little attention from philosophers, though it has relatively recently been thrown into the externalist spotlight (Cochrane 2008, Krueger 2014, Kersten forthcoming). Given that individualism can be thought of as a kind of paradigm for research on cognition, we provide a brief overview of the field of music cognition and individualistic tendencies within the field (sections 2 and 3) before turning to consider externalist alternatives to individualistic paradigms (sections 4-5) and then arguing for a qualified form of externalism about music cognition (section 6).
Extended cognition holds that cognitive processes sometimes leak into the world (Dawson, 2013). A recent trend among proponents of extended cognition has been to put pressure on phenomena thought to be safe havens for internalists (Sneddon, 2011; Wilson, 2010; Wilson & Lenart, 2014). This paper attempts to continue this trend by arguing that music perception is an extended phenomenon. It is claimed that because music perception involves the detection of musical invariants within an “acoustic array”, the interaction between the auditory system and the musical invariants can be characterized as an extended computational cognitive system. In articulating this view, the work of J. J. Gibson (1966, 1986) and Robert Wilson (1994, 1995, 2004) is drawn on. The view is defended from several objections and its implications outlined. The paper concludes with a comparison to Krueger’s (2014) view of the “musically extended emotional mind”.
In Making Things Happen, James Woodward influentially combines a causal modeling analysis of actual causation with an interventionist semantics for the counterfactuals encoded in causal models. This leads to circularities, since interventions are defined in terms of both actual causation and interventionist counterfactuals. Circularity can be avoided by instead combining a causal modeling analysis with a semantics along the lines of that given by David Lewis, on which counterfactuals are to be evaluated with respect to worlds in which their antecedents are realized by miracles. I argue, pace Woodward, that causal modeling analyses perform just as well when combined with the Lewisian semantics as when combined with the interventionist semantics. Reductivity therefore remains a reasonable hope.
Two options are ‘incommensurate’ when neither is better than the other, but they are not equally good. Typically, we will say that one option is better in some ways, and the other in others, but neither is better ‘all things considered’. It is tempting to think that incommensurability is vagueness—that it is indeterminate which is better—but this ‘vagueness view’ of incommensurability has not proven popular. I set out the vagueness view and its implications in more detail, and argue that it can explain most of the puzzling features of incommensurability. This argument proceeds without appeal to John Broome’s ‘collapsing principle’.
Sergio Tenenbaum and Diana Raffman contend that ‘vague projects’ motivate radical revisions to orthodox, utility-maximising rational choice theory. Their argument cannot succeed if such projects merely ground instances of the paradox of the sorites, or heap. Tenenbaum and Raffman are not blind to this, and argue that Warren Quinn’s Puzzle of the Self-Torturer does not rest on the sorites. I argue that their argument both fails to generalise to most vague projects, and is ineffective in the case of the Self-Torturer itself.
The assumption that psychological states and processes are computational in character pervades much of cognitive science, what many call the computational theory of mind. In addition to occupying a central place in cognitive science, the computational theory of mind has also had a second life supporting “individualism”, the view that psychological states should be taxonomized so as to supervene only on the intrinsic, physical properties of individuals. One response to individualism has been to raise the prospect of “wide computational systems”, in which some computational units are instantiated outside the individual. “Wide computationalism” attempts to sever the link between individualism and computational psychology by enlarging the concept of computation. However, in spite of its potential interest to cognitive science, wide computationalism has received little attention in philosophy of mind and cognitive science. This paper aims to revisit the prospect of wide computationalism. It is argued that by appropriating a mechanistic conception of computation wide computationalism can overcome several issues that plague initial formulations. The aim is to show that cognitive science has overlooked an important and viable option in computational psychology. The paper marshals empirical support and responds to possible objections.
We often have some reason to do actions insofar as they promote outcomes or states of affairs, such as the satisfaction of a desire. But what is it to promote an outcome? I defend a new version of 'probabilism about promotion'. According to Minimal Probabilistic Promotion, we promote some outcome when we make that outcome more likely than it would have been if we had done something else. This makes promotion easy and reasons cheap.
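To make the comparative reading concrete, here is a minimal Python sketch of how Minimal Probabilistic Promotion might be checked against toy numbers. The function name, the probabilities, and the 'more likely than under at least one alternative' reading are illustrative assumptions rather than the paper's formal apparatus.

```python
def promotes(p_outcome_given_act, p_outcome_given_alternatives):
    """Return True if the act makes the outcome more likely than it would
    have been under at least one alternative act.

    One reading of Minimal Probabilistic Promotion (illustrative only; the
    paper's official formulation may differ in detail).
    """
    return any(p_outcome_given_act > p_alt
               for p_alt in p_outcome_given_alternatives)

# Hypothetical numbers: donating gives the charity's success a 0.22 chance,
# versus 0.20 if I do nothing and 0.21 if I donate elsewhere.
print(promotes(0.22, [0.20, 0.21]))  # True: even a small bump counts
```

On this reading, almost any act that raises an outcome's probability relative to some alternative promotes it, which is one way of seeing why promotion comes out 'easy' and reasons 'cheap'.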
An influential tradition in the philosophy of causation has it that all token causal facts are, or are reducible to, facts about difference-making. Challenges to this tradition have typically focused on pre-emption cases, in which a cause apparently fails to make a difference to its effect. However, a novel challenge to the difference-making approach has recently been issued by Alyssa Ney. Ney defends causal foundationalism, which she characterizes as the thesis that facts about difference-making depend upon facts about physical causation. She takes this to imply that causation is not fundamentally a matter of difference-making. In this paper, I defend the difference-making approach against Ney’s argument. I also offer some positive reasons for thinking, pace Ney, that causation is fundamentally a matter of difference-making.
Is it morally important to vote? It is common to think so, but both consequentialist and deontological strategies for defending that intuition are weak. In response, some theorists have turned to a role-based strategy, arguing that it is morally important to be an excellent citizen, and that excellent citizens vote. But there is a lingering puzzle: an individual vote changes very little (virtually nothing in large-scale elections), so why would the excellent citizen be so concerned to cast a ballot? Why bother with something that has so little effect on the common good? This paper answers by developing the idea of respect for a practice, and then arguing that respect for democracy will often require citizens to vote.
The psychology and phenomenology of our knowledge of other minds is not well captured by describing it simply as perception, nor by describing it simply as inference. A better description, I argue, is that our knowledge of other minds involves both, through ‘perceptual co-presentation’, in which we experience objects as having aspects that are not revealed. This allows us to say that we perceive other minds, but perceive them as private, i.e. imperceptible, just as we routinely perceive aspects of physical objects as unperceived. I discuss existing versions of this idea, particularly Joel Smith’s, on which it is taken to imply that our knowledge of other minds is, in these cases, perceptual and not inferential. Against this, I argue that perceptual co-presentation in general, and mind-perception in particular, yields knowledge that is simultaneously both perceptual and inferential.
Chalmers (2002) argues against physicalism in part using the premise that no truth about consciousness can be deduced a priori from any set of purely structural truths. Chalmers (2012) elaborates a detailed definition of what it is for a truth to be structural, which turns out to include spatiotemporal truths. But Chalmers (2012) then proposes to define spatiotemporal terms by reference to their role in causing spatial and temporal experiences. Stoljar (2015) and Ebbers (Ms) argue that this definition of spatiotemporal terms allows for the trivial falsification of Chalmers (2002)’s premise about structure and consciousness. I show that this result can be avoided by tweaking the relevant premise, and moreover that this tweak is well-motivated and not ad hoc.
‘Radical enactivism’ (Hutto and Myin 2013, 2017) eschews representational content for all ‘basic’ mental activities. Critics have argued that this view cannot make sense of the workings of the imagination. In their recent book (2017), Hutto and Myin respond to these critics, arguing that some imaginings can be understood without attributing them any representational content. Their response relies on the claim that a system can exploit a structural isomorphism between two things without either of those things being a semantically evaluable representation of the other. I argue that even if this claim is granted, there remains a problem for radically enactive accounts of imagining, namely that the active establishing and maintenance of a structural isomorphism seems to require representational content even if the exploitation of such an isomorphism, when established, does not.
Joint actions often require agents to track others’ actions while planning and executing physically incongruent actions of their own. Previous research has indicated that this can lead to visuomotor interference effects when it occurs outside of joint action. How is this avoided or overcome in joint actions? We hypothesized that when joint action partners represent their actions as interrelated components of a plan to bring about a joint action goal, each partner’s movements need not be represented in relation to distinct, incongruent proximal goals. Instead they can be represented in relation to a single proximal goal – especially if the movements are, or appear to be, mechanically linked to a more distal joint action goal. To test this, we implemented a paradigm in which participants produced finger movements that were either congruent or incongruent with those of a virtual partner, and either with or without a joint action goal (the joint flipping of a switch, which turned on two light bulbs). Our findings provide partial support for the hypothesis that visuomotor interference effects can be reduced when two physically incongruent actions are represented as mechanically interdependent contributions to a joint action goal.
In an illuminating article, Claus Beisbart argues that the recently-popular thesis that the probabilities of statistical mechanics (SM) are Best System chances runs into a serious obstacle: there is no one axiomatization of SM that is robustly best, as judged by the theoretical virtues of simplicity, strength, and fit. Beisbart takes this 'no clear winner' result to imply that the probabilities yielded by the competing axiomatizations simply fail to count as Best System chances. In this reply, we express sympathy for the 'no clear winner' thesis. However, we argue that an importantly different moral should be drawn from this. We contend that the implication for Humean chances is not that there are no SM chances, but rather that SM chances fail to be sharp.
I analyse the meaning of a popular idiom among consciousness researchers, in which an individual's consciousness is described as a 'field'. I consider some of the contexts where this idea appears, in particular discussions of attention and the unity of consciousness. In neither case, I argue, do authors provide the resources to cash out all the implications of field-talk: in particular, they do not give sense to the idea of conscious elements being arrayed along multiple dimensions. I suggest ways to extend and generalize the attentional construal of 'field-talk' to provide a genuine multiplicity of dimensions, through the notions of attentional proximity and causal proximity: the degree to which two experiential elements are disposed to bring one another into attention when attended, or to interact in other distinctively mental ways. I conclude that if consciousness is a field, it is one organized by attentional and/or causal proximity.
In this book, Mumford and Anjum advance a theory of causation based on a metaphysics of powers. The book is for the most part lucidly written, and contains some interesting contributions: in particular on the necessary connection between cause and effect and on the perceivability of the causal relation. I do, however, have reservations about some of the book’s central theses: in particular, that cause and effect are simultaneous, and that causes can fruitfully be represented as vectors.
We introduce a family of rules for adjusting one's credences in response to learning the credences of others. These rules have a number of desirable features. 1. They yield the posterior credences that would result from updating by standard Bayesian conditionalization on one's peers' reported credences if one's likelihood function takes a particular simple form. 2. In the simplest form, they are symmetric among the agents in the group. 3. They map neatly onto the familiar Condorcet voting results. 4. They preserve shared agreement about independence in a wide range of cases. 5. They commute with conditionalization and with multiple peer updates. Importantly, these rules have a surprising property that we call synergy - peer testimony of credences can provide mutually supporting evidence raising an individual's credence higher than any peer's initial prior report. At first, this may seem to be a strike against them. We argue, however, that synergy is actually a desirable feature and the failure of other updating rules to yield synergy is a strike against them.
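To illustrate the synergy property, here is a minimal Python sketch of one simple multiplicative (odds-based) pooling rule of the broad kind the abstract describes. The function and the numbers are assumptions introduced for illustration and are not presented as the authors' exact rule.

```python
from math import prod

def multiplicative_pool(own, peers):
    """Pool an agent's credence in H with peers' reported credences by
    multiplying the weights for H and for not-H, then renormalizing.

    A toy instance of a multiplicative updating rule (illustrative only).
    """
    weight_h = own * prod(peers)                            # support for H
    weight_not_h = (1 - own) * prod(1 - p for p in peers)   # support for not-H
    return weight_h / (weight_h + weight_not_h)

# Synergy: peers who each report 0.6 push the pooled credence above 0.6.
print(round(multiplicative_pool(0.6, [0.6]), 3))       # 0.692
print(round(multiplicative_pool(0.6, [0.6, 0.6]), 3))  # 0.771
```

The toy rule exhibits the feature described above: mutually supporting peer reports can drive the pooled credence higher than any individual's initial report.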
The traditional problem of evil sets theists the task of reconciling two things: God and evil. I argue that theists face the more difficult task of reconciling God and evils that God is specially obligated to prevent. Because of His authority, God's obligation to curtail evil goes far beyond our Samaritan duty to prevent evil when doing so isn't overly hard. Authorities owe their subjects a positive obligation to prevent certain evils; we have a right against our authorities that they protect us. God's apparent mistake is not merely the impersonal wrong of failing to do enough good — though it is that too. It is the highly personal wrong of failing to live up to a moral requirement that comes bundled with authority over persons. To make my argument, I use the resources of political philosophy and defend a novel change to the orthodox account of authority.
This paper explores what the epistemic account of vagueness means for theories of legal interpretation. The thesis of epistemicism is that vague statements are true or false even though it is impossible to know which. I argue that if epistemicism is accepted within the domain of the law, then the following three conditions must be satisfied: (1) interpretative reasoning within the law must adhere to the principle of bivalence and the law of excluded middle; (2) interpretative reasoning within the law must construe vague statements as an epistemic phenomenon; and (3) epistemicism must be expanded to include normative considerations in order to account for legal theories that are consistent with the first two conditions. The first two conditions are internal to a particular theory of legal interpretation, while the third condition is external to a particular theory of legal interpretation. My conclusion shows that there are legal theories that are internally consistent with the fundamental features of epistemicism. However, within the domain of law—and specifically in the case of legal theories that are internally consistent with epistemicism—I show that vagueness cannot be explained simply by our ignorance of the meaning and use of vague expressions. Rather, epistemicism must also account for ignorance of the requisite normative considerations in legal theories with which it is otherwise consistent.
Interpreting the content of the law is not limited to what a relevant lawmaker utters. This paper examines the extent to which implied and implicit content is part of the law, and specifically whether the Gricean concept of conversational implicature is relevant in determining the content of law. Recent work has focused on how this question relates to acts of legislation. This paper extends the analysis to case law and departs from the literature on several key issues. The paper's argument is based upon two points: (1) precedent-setting judicial opinions may consist of multiple conversations, of which some entail opposing implicata; and (2) if a particular precedent-setting judicial opinion consists of multiple conversations, of which some entail opposing implicata, then no meaningful conversational implicatum is part of the content of that particular precedent-setting opinion. Nevertheless, the paper's conclusion leaves open the prospect of gleaning something in between conversational implicature and what is literally said, namely, conversational impliciture.
Though almost forty years have elapsed since its first publication, it is a testament to the philosophical acumen of its author that 'The Matter of Chance' contains much that is of continued interest to the philosopher of science. Mellor advances a sophisticated propensity theory of chance, arguing that this theory makes better sense than its rivals (in particular subjectivist, frequentist, logical and classical theories) of ‘what professional usage shows to be thought true of chance’ (p. xi) – in particular ‘that chance is objective, empirical and not relational, and that it applies to the single case’ (ibid.). The book is short and dense, with the serious philosophical content delivered thick and fast. There is little by way of road-mapping or summarising to assist the reader: the introduction is hardly expansive and the concluding paragraph positively perfunctory. The result is that the book is often difficult going, and the reader is made to work hard to ensure correct understanding of the views expressed. On the other hand, the author’s avoidance of unnecessary use of formalism and jargon ensures that the book is still reasonably accessible. In the following, I shall first summarise the key features of Mellor’s propensity theory, and then offer a few critical remarks.
We investigate whether standard counterfactual analyses of causation (CACs) imply that the outcomes of space-like separated measurements on entangled particles are causally related. Although it has sometimes been claimed that standard CACs imply such a causal relation, we argue that a careful examination of David Lewis’s influential counterfactual semantics casts doubt on this. We discuss ways in which Lewis’s semantics and standard CACs might be extended to the case of space-like correlations.
In their article 'Causes and Explanations: A Structural-Model Approach. Part I: Causes', Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of 'actual cause'. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
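For readers unfamiliar with the framework, the following Python sketch shows the bare bones of a structural equation model together with a simple but-for (counterfactual dependence) check. It is a toy illustration of the modelling setup only: the variables are made up, and the full Halpern-Pearl definition of actual cause involves further clauses (notably contingencies holding other variables at their actual values) that are not captured here.

```python
# Toy structural equation model: FIRE := MATCH_STRUCK and OXYGEN.

def evaluate(exogenous, interventions=None):
    """Compute all variables from the structural equations, letting an
    intervention override an equation with a fixed value."""
    interventions = interventions or {}
    v = dict(exogenous)

    def setvar(name, value):
        v[name] = interventions.get(name, value)

    setvar("match_struck", v["u_match"])               # set by its exogenous parent
    setvar("oxygen", v["u_oxygen"])
    setvar("fire", v["match_struck"] and v["oxygen"])  # structural equation for FIRE
    return v

actual_context = {"u_match": True, "u_oxygen": True}
actual = evaluate(actual_context)
counterfactual = evaluate(actual_context, interventions={"match_struck": False})

# But-for dependence: under the intervention on the match, the fire does not occur.
print(actual["fire"], counterfactual["fire"])  # True False
```

One natural way to move to the probabilistic setting gestured at in the abstract, presumably, is to let the structural equations fix probability distributions over the endogenous variables rather than determinate values.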
Special science generalizations admit of exceptions. Among the class of non-exceptionless special science generalizations, I distinguish minutis rectis (mr) generalizations from the more familiar category of ceteris paribus (cp) generalizations. I argue that the challenges involved in showing that mr generalizations can play the law role are underappreciated, and quite different from those involved in showing that cp generalizations can do so. I outline a strategy for meeting the challenges posed by mr generalizations.
Felix Pinkert has proposed a solution to the no-difference problem for act consequentialism (AC). He argues that AC should be supplemented with a requirement that agents’ optimal acts be modally robust. We disagree.
An actual cause of some token effect is itself a token event that helped to bring about that effect. The notion of an actual cause is different from that of a potential cause – for example a pre-empted backup – which had the capacity to bring about the effect, but which wasn't in fact operative on the occasion in question. Sometimes actual causes are also distinguished from mere background conditions: as when we judge that the struck match was a cause of the fire, while the presence of oxygen was merely part of the relevant background against which the struck match operated. Actual causation is also to be distinguished from type causation: actual causation holds between token events in a particular, concrete scenario; type causation, by contrast, holds between event kinds in scenario kinds.
Luke, steeped in the Old Testament, makes clear that to understand what God was doing in Christ, one has to know Scripture, and especially the Book of Isaiah.
This book provides a comprehensive examination of the police role from within a broader philosophical context. Contending that the police are in the midst of an identity crisis that exacerbates unjustified law enforcement tactics, Luke William Hunt examines various major conceptions of the police—those seeing them as heroes, warriors, and guardians. The book looks at the police role, considering the overarching societal goal of justice, and seeks to present a synthetic theory that draws upon history, law, society, psychology, and philosophy. Each major conception of the police role is examined in light of how it affects the pursuit of justice, and how it may be contrary to seeking justice holistically and collectively. The book sets forth a conception of the police role that is consistent with the basic values of a constitutional democracy in the liberal tradition. Hunt’s intent is that clarifying the police role will likewise elucidate any constraints upon policing strategies, including algorithmic strategies such as predictive policing. This book is essential reading for thoughtful policing and legal scholars as well as those interested in political philosophy, political theory, psychology, and related areas. Now more than ever, the nature of the police role is a philosophical topic that is relevant not just to police officials and social scientists, but to everyone.
A new formulation of the Fine-Tuning Argument (FTA) for the existence of God is offered, which avoids a number of commonly raised objections. I argue that we can and should focus on the fundamental constants and initial conditions of the universe, and show how physics itself provides the probabilities that are needed by the argument. I explain how this formulation avoids a number of common objections, specifically the possibility of deeper physical laws, the multiverse, normalisability, whether God would fine-tune at all, whether the universe is too fine-tuned, and whether the likelihood of God creating a life-permitting universe is inscrutable.
In his defence of an error theory for normative judgements, Bart Streumer presents a new 'reduction' argument against nonreductive normative realism. Streumer claims that unlike previous versions, his 'simple moral theory' version of the argument doesn’t rely on the supervenience of the normative on the descriptive. But this is incorrect; without supervenience the argument does not succeed.
Actual causes - e.g. Suzy's being exposed to asbestos - often bring about their effects - e.g. Suzy's suffering mesothelioma - probabilistically. I use probabilistic causal models to tackle one of the thornier difficulties for traditional accounts of probabilistic actual causation: namely probabilistic preemption.
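By way of orientation, here is a toy numerical illustration in Python of the kind of preemption structure at issue, with made-up probabilities; it shows the familiar point that a preempted backup can raise the probability of the effect without being an actual cause of it. This is only a sketch of the difficulty, not the paper's model or analysis.

```python
# Suzy throws first and her rock hits with probability 0.9; Billy's rock
# arrives later and hits with probability 0.8 only if Suzy's rock missed.
# (All probabilities are invented for illustration.)

def p_shatter(billy_throws):
    p_suzy_hits = 0.9
    p_billy_hits_given_suzy_misses = 0.8 if billy_throws else 0.0
    return p_suzy_hits + (1 - p_suzy_hits) * p_billy_hits_given_suzy_misses

print(round(p_shatter(billy_throws=True), 2))   # 0.98
print(round(p_shatter(billy_throws=False), 2))  # 0.9
# Billy's throw raises the probability of shattering, yet on occasions where
# Suzy's rock in fact hits, Billy's throw is preempted and is intuitively not
# an actual cause of the shattering.
```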
Reprinted with modification and permission from the Kennedy Institute of Ethics Journal. The phenomenon of medical overtesting in general, and specifically in the emergency room, is well known and regarded as harmful to both the patient and the healthcare system. Although the implications of this problem raise myriad ethical concerns, this chapter explores the extent to which overtesting might mitigate race-based health inequalities. Given that medical malpractice and error greatly increase when patients belong to a racial minority, it is no surprise that the mortality rate for these patients is similarly elevated relative to white patients. For these populations, an environment that emphasizes medical overtesting may well be the desirable medical environment until care evens out among races and ethnicities; additionally, efforts to lower overtesting, in conjunction with a high rate of racist medical mythology, may cause harm by reducing testing when it is actually warranted. Furthermore, medical overtesting may help to assuage racial distrust. This paper ultimately concludes that an environment of medical overtesting may be less pernicious than the alternative.
Our understanding of folk and scientific psychology often informs the law’s conclusions regarding questions about the voluntariness of a defendant’s action. The field of psychology plays a direct role in the law’s conclusions about a defendant’s guilt, innocence, and term of incarceration. However, physical sciences such as neuroscience increasingly deny the intuitions behind psychology. This paper examines contemporary biases against the autonomy of psychology and responds with considerations that cast doubt upon the legitimacy of those biases. The upshot is that if reasonable doubt is established regarding whether psychology’s role in the law should be displaced, then there is room for future work to be done with respect to the truth of psychology’s conclusions about criminal responsibility.
Hobbes is known for bridging natural and political philosophy, but less attention has been given to how this distinguishes the Hobbesian conception of the self from individualist strands of liberalism. First, Hobbes’s determinism suggests a conception of the self in which externalities determine the will and what the self is at every moment. Second, there is no stable conception of the self because externalities keep it in a constant state of flux. The metaphysical underpinnings of his project downplay the notion of a purely individualistic conception of the self, pointing to a positivist theory of criminology relying upon external forces. This theory is especially prescient with respect to twentieth-century variants of positivism that focus upon how social organization affects personality. In a sense, then, modern criminological theory is indebted to Hobbes’s focus upon the connections between externalities and the self; a focus that illuminates new ways of viewing responsibility and accountability.
Essay exploring the extent to which certain agreements between the police and informants are an affront (both procedurally and substantively) to basic tenets of the liberal tradition in legal and political philosophy.
Short online essay on the state of policing in liberal societies, discussing how executive discretionary power has grown to such a degree that it has trended toward illiberal practices and policies.