In their article 'Causes and Explanations: A Structural-Model Approach. Part I: Causes', Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of 'actual cause'. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
We investigate whether standard counterfactual analyses of causation (CACs) imply that the outcomes of space-like separated measurements on entangled particles are causally related. Although it has sometimes been claimed that standard CACs imply such a causal relation, we argue that a careful examination of David Lewis’s influential counterfactual semantics casts doubt on this. We discuss ways in which Lewis’s semantics and standard CACs might be extended to the case of space-like correlations.
Special science generalizations admit of exceptions. Among the class of non-exceptionless special science generalizations, I distinguish minutis rectis generalizations from the more familiar category of ceteris paribus generalizations. I argue that the challenges involved in showing that mr generalizations can play the law role are underappreciated, and quite different from those involved in showing that cp generalizations can do so. I outline a strategy for meeting the challenges posed by mr generalizations.
We introduce a family of rules for adjusting one's credences in response to learning the credences of others. These rules have a number of desirable features. 1. They yield the posterior credences that would result from updating by standard Bayesian conditionalization on one's peers' reported credences if one's likelihood function takes a particular simple form. 2. In the simplest form, they are symmetric among the agents in the group. 3. They map neatly onto the familiar Condorcet voting results. 4. They preserve shared agreement about independence in a wide range of cases. 5. They commute with conditionalization and with multiple peer updates. Importantly, these rules have a surprising property that we call synergy: peer testimony of credences can provide mutually supporting evidence, raising an individual's credence higher than any peer's initial report. At first, this may seem to be a strike against them. We argue, however, that synergy is actually a desirable feature and that the failure of other updating rules to yield synergy is a strike against them.
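For readers who want a concrete feel for the synergy property, here is a minimal sketch of one simple multiplicative-odds pooling rule. This is an illustrative assumption on our part, not the authors' own family of rules, though it shows the same qualitative behaviour: mutually supporting peer reports can push the pooled credence above any single report.

```python
# Hedged sketch: a simple multiplicative-odds pooling rule, assumed for
# illustration only; the paper's rules may differ in detail.

def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1.0 + o)

def pool(prior, peer_credences):
    """Multiply the prior odds by each peer's reported odds."""
    pooled = odds(prior)
    for c in peer_credences:
        pooled *= odds(c)
    return prob(pooled)

# Synergy: starting from a 0.5 prior, two peers who each report 0.8
# push the pooled credence to about 0.94, above either individual report.
print(pool(0.5, [0.8, 0.8]))
```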
Actual causes - e.g. Suzy's being exposed to asbestos - often bring about their effects - e.g. Suzy's suffering mesothelioma - probabilistically. I use probabilistic causal models to tackle one of the thornier difficulties for traditional accounts of probabilistic actual causation: namely probabilistic preemption.
I argue that there are non-trivial objective chances (that is, objective chances other than 0 and 1) even in deterministic worlds. The argument is straightforward. I observe that there are probabilistic special scientific laws even in deterministic worlds. These laws project non-trivial probabilities for the events that they concern. And these probabilities play the chance role and so should be regarded as chances as opposed, for example, to epistemic probabilities or credences. The supposition of non-trivial deterministic chances might seem to land us in contradiction. The fundamental laws of deterministic worlds project trivial probabilities for the very same events that are assigned non-trivial probabilities by the special scientific laws. I argue that any appearance of tension is dissolved by recognition of the level-relativity of chances. There is therefore no obstacle to accepting non-trivial chance-role-playing deterministic probabilities as genuine chances.
The starting point in the development of probabilistic analyses of token causation has usually been the naïve intuition that, in some relevant sense, a cause raises the probability of its effect. But there are well-known examples both of non-probability-raising causation and of probability-raising non-causation. Sophisticated extant probabilistic analyses treat many such cases correctly, but only at the cost of excluding the possibilities of direct non-probability-raising causation, failures of causal transitivity, action-at-a-distance, prevention, and causation by absence and omission. I show that an examination of the structure of these problem cases suggests a different treatment, one which avoids the costs of extant probabilistic analyses.
In Making Things Happen, James Woodward influentially combines a causal modeling analysis of actual causation with an interventionist semantics for the counterfactuals encoded in causal models. This leads to circularities, since interventions are defined in terms of both actual causation and interventionist counterfactuals. Circularity can be avoided by instead combining a causal modeling analysis with a semantics along the lines of that given by David Lewis, on which counterfactuals are to be evaluated with respect to worlds in which their antecedents are realized by miracles. I argue, pace Woodward, that causal modeling analyses perform just as well when combined with the Lewisian semantics as when combined with the interventionist semantics. Reductivity therefore remains a reasonable hope.
An influential tradition in the philosophy of causation has it that all token causal facts are, or are reducible to, facts about difference-making. Challenges to this tradition have typically focused on pre-emption cases, in which a cause apparently fails to make a difference to its effect. However, a novel challenge to the difference-making approach has recently been issued by Alyssa Ney. Ney defends causal foundationalism, which she characterizes as the thesis that facts about difference-making depend upon facts about physical causation. She takes this to imply that causation is not fundamentally a matter of difference-making. In this paper, I defend the difference-making approach against Ney’s argument. I also offer some positive reasons for thinking, pace Ney, that causation is fundamentally a matter of difference-making.
An actual cause of some token effect is itself a token event that helped to bring about that effect. The notion of an actual cause is different from that of a potential cause – for example a pre-empted backup – which had the capacity to bring about the effect, but which wasn't in fact operative on the occasion in question. Sometimes actual causes are also distinguished from mere background conditions: as when we judge that the struck match was a cause of the fire, while the presence of oxygen was merely part of the relevant background against which the struck match operated. Actual causation is also to be distinguished from type causation: actual causation holds between token events in a particular, concrete scenario; type causation, by contrast, holds between event kinds in scenario kinds.
In an illuminating article, Claus Beisbart argues that the recently-popular thesis that the probabilities of statistical mechanics (SM) are Best System chances runs into a serious obstacle: there is no one axiomatization of SM that is robustly best, as judged by the theoretical virtues of simplicity, strength, and fit. Beisbart takes this 'no clear winner' result to imply that the probabilities yielded by the competing axiomatizations simply fail to count as Best System chances. In this reply, we express sympathy for the 'no clear winner' thesis. However, we argue that an importantly different moral should be drawn from this. We contend that the implication for Humean chances is not that there are no SM chances, but rather that SM chances fail to be sharp.
In this book, Mumford and Anjum advance a theory of causation based on a metaphysics of powers. The book is for the most part lucidly written, and contains some interesting contributions: in particular on the necessary connection between cause and effect and on the perceivability of the causal relation. I do, however, have reservations about some of the book’s central theses: in particular, that cause and effect are simultaneous, and that causes can fruitfully be represented as vectors.
Though almost forty years have elapsed since its first publication, it is a testament to the philosophical acumen of its author that 'The Matter of Chance' contains much that is of continued interest to the philosopher of science. Mellor advances a sophisticated propensity theory of chance, arguing that this theory makes better sense than its rivals (in particular subjectivist, frequentist, logical and classical theories) of ‘what professional usage shows to be thought true of chance’ (p. xi) – in particular ‘that chance is objective, empirical and not relational, and that it applies to the single case’ (ibid.). The book is short and dense, with the serious philosophical content delivered thick and fast. There is little by way of road-mapping or summarising to assist the reader: the introduction is hardly expansive and the concluding paragraph positively perfunctory. The result is that the book is often difficult going, and the reader is made to work hard to ensure correct understanding of the views expressed. On the other hand, the author’s avoidance of unnecessary use of formalism and jargon ensures that the book is still reasonably accessible. In the following, I shall first summarise the key features of Mellor’s propensity theory, and then offer a few critical remarks.
Bayesian models of legal arguments generally aim to produce a single integrated model, combining each of the legal arguments under consideration. This combined approach implicitly assumes that variables and their relationships can be represented without any contradiction or misalignment, and in a way that makes sense with respect to the competing argument narratives. This paper describes a novel approach to compare and ‘average’ Bayesian models of legal arguments that have been built independently and with no attempt to make them consistent in terms of variables, causal assumptions or parameterization. The approach involves assessing whether competing models of legal arguments explain or predict facts uncovered before or during the trial process. Those models that are more heavily disconfirmed by the facts are given lower weight, as model plausibility measures, in the Bayesian model comparison and averaging framework adopted. In this way a plurality of arguments is allowed, yet a single judgement based on all arguments is possible and rational.
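To make the averaging idea concrete, here is a minimal sketch of generic Bayesian model averaging over independently built argument models. The function name and example numbers are our illustrative assumptions, not the paper's implementation; the point is only that models heavily disconfirmed by the uncovered facts receive lower weight in the single combined judgement.

```python
# Hedged sketch of Bayesian model averaging over independently built
# argument models; names and example numbers are illustrative assumptions.

def model_average(prior_weights, fact_likelihoods, query_probs):
    """prior_weights[i]: prior plausibility of argument model i
    fact_likelihoods[i]: how well model i explains/predicts the uncovered facts
    query_probs[i]: probability model i assigns to the verdict-relevant query
    Returns a single model-averaged probability for the query."""
    unnormalised = [w * l for w, l in zip(prior_weights, fact_likelihoods)]
    total = sum(unnormalised)
    weights = [u / total for u in unnormalised]
    return sum(w * q for w, q in zip(weights, query_probs))

# Two competing argument models: the second is heavily disconfirmed by the
# facts, so it is down-weighted in the combined judgement (result ~0.65).
print(model_average([0.5, 0.5], [0.9, 0.1], [0.7, 0.2]))
```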
A new formulation of the Fine-Tuning Argument (FTA) for the existence of God is offered, which avoids a number of commonly raised objections. I argue that we can and should focus on the fundamental constants and initial conditions of the universe, and show how physics itself provides the probabilities that are needed by the argument. I explain how this formulation avoids a number of common objections, specifically the possibility of deeper physical laws, the multiverse, normalisability, whether God would fine-tune at all, whether the universe is too fine-tuned, and whether the likelihood of God creating a life-permitting universe is inscrutable.
Some researchers and autistic activists have recently suggested that because some ‘autism-related’ behavioural atypicalities have a function or purpose they may be desirable rather than undesirable. Examples of such behavioural atypicalities include hand-flapping, repeatedly ordering objects (e.g., toys) in rows, and profoundly restricted routines. A common view, as represented in the Diagnostic and Statistical Manual of Mental Disorders (DSM) IV-TR (APA, 2000), is that many of these behaviours lack adaptive function or purpose, interfere with learning, and constitute the non-social behavioural dysfunctions of those disorders making up the Autism Spectrum. As the DSM IV-TR continues to be the reference source of choice for professionals working with individuals with psychiatric difficulties, its characterization of the Autism Spectrum holds significant sway. We will suggest Extended Mind and Enactive Cognition Theories, which theorize that mind (or cognition) is embodied and environmentally embedded, as coherent conceptual and theoretical spaces within which to investigate the possibility that certain repetitive behaviours exhibited by autistics possess functions or purposes that make them desirable. As lenses through which to re-examine ‘autism-related’ behavioral atypicalities, these theories not only open up explanatory possibilities underdeveloped in the research literature, but also cohere with how some autistics describe their own experience. Our position navigates a middle way between the view of autism as understood in terms of impairment, deficit and dysfunction and one that seeks to de-pathologize the Spectrum. In so doing we seek to contribute to a continuing dialogue between researchers, clinicians and self- or parent advocates.
Philosophical exploration of individualism and externalism in the cognitive sciences most recently has been focused on general evaluations of these two views (Adams & Aizawa 2008, Rupert 2008, Wilson 2004, Clark 2008). Here we return to broaden an earlier phase of the debate between individualists and externalists about cognition, one that considered in detail particular theories, such as those in developmental psychology (Patterson 1991) and the computational theory of vision (Burge 1986, Segal 1989). Music cognition is an area in the cognitive sciences that has received little attention from philosophers, though it has relatively recently been thrown into the externalist spotlight (Cochrane 2008, Kruger 2014, Kersten forthcoming). Given that individualism can be thought of as a kind of paradigm for research on cognition, we provide a brief overview of the field of music cognition and individualistic tendencies within the field (sections 2 and 3) before turning to consider externalist alternatives to individualistic paradigms (sections 4-5) and then arguing for a qualified form of externalism about music cognition (section 6).
John Broome has argued that value incommensurability is vagueness, by appeal to a controversial ‘collapsing principle’ about comparative indeterminacy. I offer a new counterexample to the collapsing principle. That principle allows us to derive an outright contradiction from the claim that some object is a borderline case of some predicate. But if there are no borderline cases, then the principle is empty. The collapsing principle is either false or empty.
Extended cognition holds that cognitive processes sometimes leak into the world (Dawson, 2013). A recent trend among proponents of extended cognition has been to put pressure on phenomena thought to be safe havens for internalists (Sneddon, 2011; Wilson, 2010; Wilson & Lenart, 2014). This paper attempts to continue this trend by arguing that music perception is an extended phenomenon. It is claimed that because music perception involves the detection of musical invariants within an “acoustic array”, the interaction between the auditory system and the musical invariants can be characterized as an extended computational cognitive system. In articulating this view, the work of J. J. Gibson (1966, 1986) and Robert Wilson (1994, 1995, 2004) is drawn on. The view is defended from several objections and its implications outlined. The paper concludes with a comparison to Krueger’s (2014) view of the “musically extended emotional mind”.
Two options are ‘incommensurate’ when neither is better than the other, but they are not equally good. Typically, we will say that one option is better in some ways, and the other in others, but neither is better ‘all things considered’. It is tempting to think that incommensurability is vagueness—that it is (perhaps) indeterminate which is better—but this ‘vagueness view’ of incommensurability has not proven popular. I set out the vagueness view and its implications in more detail, and argue that it can explain most of the puzzling features of incommensurability. This argument proceeds without appeal to John Broome’s ‘collapsing principle’.
The unity of consciousness has so far been studied only as a relation holding among the many experiences of a single subject. I investigate whether this relation could hold between the experiences of distinct subjects, considering three major arguments against the possibility of such ‘between-subjects unity’. The first argument, based on the popular idea that unity implies subsumption by a composite experience, can be deflected by allowing for limited forms of ‘experience-sharing’, in which the same token experience belongs to more than one subject. The second argument, based on the phenomenological claim that unified experiences have interdependent phenomenal characters, I show to rest on an equivocation. Finally, the third argument accuses between-subjects unity of being unimaginable, or more broadly a formal possibility corresponding to nothing we can make sense of. I argue that the familiar experience of perceptual co-presentation gives us an adequate phenomenological grasp on what between-subjects unity might be like.
The assumption that psychological states and processes are computational in character pervades much of cognitive science; this is what many call the computational theory of mind. In addition to occupying a central place in cognitive science, the computational theory of mind has also had a second life supporting “individualism”, the view that psychological states should be taxonomized so as to supervene only on the intrinsic, physical properties of individuals. One response to individualism has been to raise the prospect of “wide computational systems”, in which some computational units are instantiated outside the individual. “Wide computationalism” attempts to sever the link between individualism and computational psychology by enlarging the concept of computation. However, in spite of its potential interest to cognitive science, wide computationalism has received little attention in philosophy of mind and cognitive science. This paper aims to revisit the prospect of wide computationalism. It is argued that by appropriating a mechanistic conception of computation wide computationalism can overcome several issues that plague initial formulations. The aim is to show that cognitive science has overlooked an important and viable option in computational psychology. The paper marshals empirical support and responds to possible objections.
This chapter offers an overview and analysis of policing, the area of criminal justice associated primarily with law enforcement. The study of policing spans a variety of disciplines, including criminology, law, philosophy, politics, and psychology, among other fields. Although research on policing is broad in scope, it has become an especially notable area of study in contemporary legal and social philosophy given recent police controversies.
We often have some reason to do actions insofar as they promote outcomes or states of affairs, such as the satisfaction of a desire. But what is it to promote an outcome? I defend a new version of 'probabilism about promotion'. According to Minimal Probabilistic Promotion, we promote some outcome when we make that outcome more likely than it would have been if we had done something else. This makes promotion easy and reasons cheap.
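One way the core definition might be rendered formally (our illustrative gloss, not necessarily the author's official statement) is as a comparison of conditional probabilities across available alternatives:

```latex
% A possible rendering of Minimal Probabilistic Promotion (illustrative gloss):
% an act A promotes an outcome O just in case some available alternative A'
% would have left O less likely.
\[
  \mathrm{Promotes}(A, O) \;\iff\; \exists A' \in \mathrm{Alt}(A) :\; P(O \mid A) > P(O \mid A')
\]
```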
Sergio Tenenbaum and Diana Raffman contend that ‘vague projects’ motivate radical revisions to orthodox, utility-maximising rational choice theory. Their argument cannot succeed if such projects merely ground instances of the paradox of the sorites, or heap. Tenenbaum and Raffman are not blind to this, and argue that Warren Quinn’s Puzzle of the Self-Torturer does not rest on the sorites. I argue that their argument both fails to generalise to most vague projects, and is ineffective in the case of the Self-Torturer itself.
I discuss the apparent discrepancy between the qualitative diversity of consciousness and the relative qualitative homogeneity of the brain's basic constituents, a discrepancy that has been raised as a problem for identity theorists by Maxwell and Lockwood (as one element of the ‘grain problem’), and more recently as a problem for panpsychists (under the heading of ‘the palette problem’). The challenge posed to panpsychists by this discrepancy is to make sense of how a relatively small ‘palette’ of basic qualities could give rise to the bewildering diversity of qualities we, and presumably other creatures, experience. I argue that panpsychists can meet this challenge, though it requires taking contentious stands on certain phenomenological questions, in particular on whether any familiar qualities are actual examples of ‘phenomenal blending’, and whether any other familiar qualities have a positive ‘phenomenologically simple character’. Moreover, it requires accepting an eventual theory most elements of which are in a certain explicable sense unimaginable, though not for that reason inconceivable. Nevertheless, I conclude that there are no conclusive reasons to reject such a theory, and so philosophers whose prior commitments motivate them to adopt it can do so without major theoretical cost.
The phenomenon of medical overtesting in general, and specifically in the emergency room, is well-known and regarded as harmful to both the patient and the healthcare system. Although the implications of this problem raise myriad ethical concerns, this paper explores the extent to which overtesting might mitigate race-based health inequalities. Given that medical malpractice and error greatly increase when patients belong to a racial minority, it is no surprise that the mortality rate for these patients is correspondingly higher than that of white patients. For these populations, an environment that emphasizes medical overtesting may well be the desirable medical environment until care evens out among races and ethnicities; additionally, efforts to lower overtesting, in conjunction with a high rate of racist medical mythology, may cause harm by lowering testing when it is actually warranted. Furthermore, medical overtesting may help to assuage racial distrust. This paper ultimately concludes that an environment of medical overtesting may be less pernicious than the alternative.
Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity — for years, AI experts and non-experts alike have fretted (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI — specifically of the machine learning type that’s able to take on new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance from Jeopardy-winning IBM machines to the massive AI language model GPT-3 is taking humanity one step closer to an existential threat. We’re literally building our soon-to-be-sentient successors. Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.
This thesis explores the possibility of composite consciousness: phenomenally conscious states belonging to a composite being in virtue of the consciousness of, and relations among, its parts. We have no trouble accepting that a composite being has physical properties entirely in virtue of the physical properties of, and relations among, its parts. But a longstanding intuition holds that consciousness is different: my consciousness cannot be understood as a complex of interacting component consciousnesses belonging to parts of me. I ask why: what is it about consciousness that makes us think it so different from matter? And should we accept this apparent difference?
Automated vehicles promise much in the way of both economic boons and increased personal safety. For better or worse, the effects of automating personal vehicles will not be felt for some time. In contrast, the effects of automated work vehicles, like semi-trucks, will be felt much sooner—within the next decade. The costs and benefits of automation will not be distributed evenly; while most of us will be positively affected by the lower prices overall, those losing their livelihoods to the automated semi-trucks and other similar work vehicles will be much worse off. This sets up a classic distributive justice problem: how do we balance the harms and benefits of automation? In this paper, the authors recommend work alternatives: policies that the government would enact in order to ensure that each person could live a decent life even if one could not work, especially for reasons like having one’s job automated. After arguing for this position, we propose some policy guidelines.
Some philosophical theories of consciousness imply consciousness in things we would never intuitively think are conscious—most notably, panpsychism implies that consciousness is pervasive, even outside complex brains. Is this a reductio ad absurdum for such theories, or does it show that we should reject our original intuitions? To understand the stakes of this question as clearly as possible, we analyse the structured pattern of intuitions that panpsychism conflicts with. We consider a variety of ways that the tension between these intuitions and panpsychism could be resolved, ranging from complete rejection of the theory to complete dismissal of the intuitions, but argue in favour of more nuanced approaches which try to reconcile the two.
Why does institutional police brutality continue so brazenly? Criminologists and other social scientists typically theorize about the causes of such violence, but less attention is given to normative questions regarding the demands of justice. Some philosophers have taken a teleological approach, arguing that social institutions such as the police exist to realize collective ends and goods based upon the idea of collective moral responsibility. Others have approached normative questions in policing from a more explicit social-contract perspective, suggesting that legitimacy is derived by adhering to (limited) authority. This article examines methodologies within political philosophy for analyzing police injustice. The methodological inquiry leads to an account of how justice constrains the police through both special (or positional) moral requirements that officers assume voluntarily, as well as general moral requirements in virtue of a polity’s commitment to moral, political and legal values beyond law enforcement and crime reduction. The upshot is a conception of a police role that is constrained by justice from multiple foundational stances.
Talk of levels is everywhere in cognitive science. Whether it is in terms of adjudicating longstanding debates or motivating foundational concepts, one cannot go far without hearing about the need to talk at different ‘levels’. Yet in spite of its widespread application and use, the concept of levels has received little sustained attention within cognitive science. This paper provides an analysis of the various ways the notion of levels has been deployed within cognitive science. The paper begins by introducing and motivating discussion via four representative accounts of levels. It then turns to outlining and relating the four accounts using two dimensions of comparison. The result is the creation of a conceptual framework that maps the logical space of levels talk, which offers an important step toward making sense of levels talk within cognitive science.
‘Radical enactivism’ (Hutto and Myin 2013, 2017) eschews representational content for all ‘basic’ mental activities. Critics have argued that this view cannot make sense of the workings of the imagination. In their recent book (2017), Hutto and Myin respond to these critics, arguing that some imaginings can be understood without attributing them any representational content. Their response relies on the claim that a system can exploit a structural isomorphism between two things without either of those things being a semantically evaluable representation of the other. I argue that even if this claim is granted, there remains a problem for radically enactive accounts of imagining, namely that the active establishing and maintenance of a structural isomorphism seems to require representational content even if the exploitation of such an isomorphism, when established, does not.
Is it morally important to vote? It is common to think so, but both consequentialist and deontological strategies for defending that intuition are weak. In response, some theorists have turned to a role-based strategy, arguing that it is morally important to be an excellent citizen, and that excellent citizens vote. But there is a lingering puzzle: an individual vote changes very little (virtually nothing in large-scale elections), so why would the excellent citizen be so concerned to cast a ballot? Why bother with something that has so little effect on the common good? This paper answers by developing the idea of respect for a practice, and then arguing that respect for democracy will often require citizens to vote.
Policing in many parts of the world—the United States in particular—has embraced an archetypal model: a conception of the police based on the tenets of individuated archetypes, such as the heroic police “warrior” or “guardian.” Such policing has in part motivated moves to (1) a reallocative model: reallocating societal resources such that the police are no longer needed in society (defunding and abolishing) because reform strategies cannot fix the way societal problems become manifest in (archetypal) policing; and (2) an algorithmic model: subsuming policing into technocratic judgements encoded in algorithms through strategies such as predictive policing (mitigating archetypal bias). This paper begins by considering the normative basis of the relationship between political community and policing. It then examines the justification of reallocative and algorithmic models in light of the relationship between political community and police. Given commitments to the depth and distribution of security—and proscriptions against dehumanizing strategies—the paper concludes that a nonideal-theory priority rule promoting respect for personhood (manifest in community and dignity-promoting policing strategies) is a necessary condition for the justification of the above models.
Recently, some authors have begun to raise questions about the potential unity of 4E (enactive, embedded, embodied, extended) cognition as a distinct research programme within cognitive science. Two tensions, in particular, have been raised: (i) that the body-centric claims of embodied cognition militate against the distributed tendencies of extended cognition, and (ii) that the body/environment distinction emphasized by enactivism stands in tension with the world-spanning claims of extended cognition. The goal of this paper is to resolve tensions (i) and (ii). The proposal is that a form of ‘wide computationalism’ can be used to reconcile the two tensions and, in so doing, articulate a common theoretical core for 4E cognition.
This paper explores what the epistemic account of vagueness means for theories of legal interpretation. The thesis of epistemicism is that vague statements are true or false even though it is impossible to know which. I argue that if epistemicism is accepted within the domain of the law, then the following three conditions must be satisfied: (1) interpretative reasoning within the law must adhere to the principle of bivalence and the law of excluded middle, (2) interpretative reasoning within the law must construe vague statements as an epistemic phenomenon, and (3) epistemicism must be expanded to include normative considerations in order to account for legal theories that are consistent with the first two conditions. The first two conditions are internal to a particular theory of legal interpretation, while the third condition is external to a particular theory of legal interpretation. My conclusion shows that there are legal theories that are internally consistent with the fundamental features of epistemicism. However, within the domain of law—and specifically in the case of legal theories that are internally consistent with epistemicism—I show that vagueness cannot be explained simply by our ignorance of the meaning and use of vague expressions. Rather, epistemicism must also account for ignorance of the requisite normative considerations in legal theories with which it is otherwise consistent.
One method for uncovering the subprocesses of mental processes is the “Additive Factors Method” (AFM). The AFM uses reaction time data from factorial experiments to infer the presence of separate processing stages. This paper investigates the conceptual status of the AFM. It argues that one of the AFM’s underlying assumptions is problematic in light of recent developments in cognitive neuroscience. Discussion begins by laying out the basic logic of the AFM, followed by an analysis of the challenge presented by neural reuse. Following this, implications are analysed and avenues of response considered.
This book provides a comprehensive examination of the police role from within a broader philosophical context. Contending that the police are in the midst of an identity crisis that exacerbates unjustified law enforcement tactics, Luke William Hunt examines various major conceptions of the police—those seeing them as heroes, warriors, and guardians. The book looks at the police role in view of the overarching societal goal of justice and seeks to present a synthetic theory that draws upon history, law, society, psychology, and philosophy. Each major conception of the police role is examined in light of how it affects the pursuit of justice, and how it may be contrary to seeking justice holistically and collectively. The book sets forth a conception of the police role that is consistent with the basic values of a constitutional democracy in the liberal tradition. Hunt’s intent is that clarifying the police role will likewise elucidate any constraints upon policing strategies, including algorithmic strategies such as predictive policing. This book is essential reading for thoughtful policing and legal scholars as well as those interested in political philosophy, political theory, psychology, and related areas. Now more than ever, the nature of the police role is a philosophical topic that is relevant not just to police officials and social scientists, but to everyone.
Interpreting the content of the law is not limited to what a relevant lawmaker utters. This paper examines the extent to which implied and implicit content is part of the law, and specifically whether the Gricean concept of conversational implicature is relevant in determining the content of law. Recent work has focused on how this question relates to acts of legislation. This paper extends the analysis to case law and departs from the literature on several key issues. The paper's argument is based upon two points: (1) precedent-setting judicial opinions may consist of multiple conversations, of which some entail opposing implicata; and (2) if a particular precedent-setting judicial opinion consists of multiple conversations, of which some entail opposing implicata, then no meaningful conversational implicatum is part of the content of that particular precedent-setting opinion. Nevertheless, the paper's conclusion leaves open the prospect of gleaning something in between conversational implicature and what is literally said, namely, conversational impliciture.
One particularly successful approach to modeling within cognitive science is computational psychology. Computational psychology explores psychological processes by building and testing computational models with human data. In this paper, it is argued that a specific approach to understanding computation, what is called the ‘narrow conception’, has problematically limited the kinds of models, theories, and explanations that are offered within computational psychology. After raising two problems for the narrow conception, an alternative, ‘wide approach’ to computational psychology is proposed.
Chalmers (2002) argues against physicalism in part using the premise that no truth about consciousness can be deduced a priori from any set of purely structural truths. Chalmers (2012) elaborates a detailed definition of what it is for a truth to be structural, which turns out to include spatiotemporal truths. But Chalmers (2012) then proposes to define spatiotemporal terms by reference to their role in causing spatial and temporal experiences. Stoljar (2015) and Ebbers (Ms) argue that this definition of spatiotemporal terms allows for the trivial falsification of Chalmers's (2002) premise about structure and consciousness. I show that this result can be avoided by tweaking the relevant premise, and moreover that this tweak is well-motivated and not ad hoc.
The psychology and phenomenology of our knowledge of other minds is not well captured either by describing it simply as perception or by describing it simply as inference. A better description, I argue, is that our knowledge of other minds involves both, through ‘perceptual co-presentation’, in which we experience objects as having aspects that are not revealed. This allows us to say that we perceive other minds, but perceive them as private, i.e. imperceptible, just as we routinely perceive aspects of physical objects as unperceived. I discuss existing versions of this idea, particularly Joel Smith’s, on which it is taken to imply that our knowledge of other minds is, in these cases, perceptual and not inferential. Against this, I argue that perceptual co-presentation in general, and mind-perception in particular, yields knowledge that is simultaneously both perceptual and inferential.
I analyse the meaning of a popular idiom among consciousness researchers, in which an individual's consciousness is described as a 'field'. I consider some of the contexts where this idea appears, in particular discussions of attention and the unity of consciousness. In neither case, I argue, do authors provide the resources to cash out all the implications of field-talk: in particular, they do not give sense to the idea of conscious elements being arrayed along multiple dimensions. I suggest ways to extend and generalize the attentional construal of 'field-talk' to provide a genuine multiplicity of dimensions, through the notions of attentional proximity and causal proximity: the degree to which two experiential elements are disposed to bring one another into attention when attended, or to interact in other distinctively mental ways. I conclude that if consciousness is a field, it is one organized by attentional and/or causal proximity.
The traditional problem of evil sets theists the task of reconciling two things: God and evil. I argue that theists face the more difficult task of reconciling God and evils that God is specially obligated to prevent. Because of His authority, God's obligation to curtail evil goes far beyond our Samaritan duty to prevent evil when doing so isn't overly hard. Authorities owe their subjects a positive obligation to prevent certain evils; we have a right against our authorities that they protect us. God's apparent mistake is not merely the impersonal wrong of failing to do enough good — though it is that too. It is the highly personal wrong of failing to live up to a moral requirement that comes bundled with authority over persons. To make my argument, I use the resources of political philosophy and defend a novel change to the orthodox account of authority.