There is an inherent link between colonisation and carceral institutions, and in this paper I aim to illuminate and critically review the philosophical implications of prison structures in relation to coloniality. I draw on the work of Lewis Gordon, Frantz Fanon and Nelson Maldonado-Torres in arguing that physical incarceration colonises not only the body but also the mind, as a form of structural violence. In order to establish an existential phenomenological framework for coloniality in incarceration, I also make reference to Hannah Arendt. Her work on both totalitarianism and the banality of evil helps to develop the framework, and I further utilise Lisa Guenther's work on solitary confinement in taking a phenomenological approach to thinking about incarceration. After this critical discussion of the coloniality of incarceration, I apply the framework to the New Zealand context, where Māori are hugely overrepresented in prisons. Invoking Sharon Shalev's recent report into solitary confinement in Aotearoa, I argue that the disproportionate numbers of Māori in New Zealand prisons are symptomatic of the inherently colonial nature of carceral institutions, and I maintain that understanding this fundamentally colonial nature is key to ensuring that the Crown and government fulfil their obligations to Māori under the Treaty of Waitangi.
In this existential reading of Kim Kardashian-West's International Women's Day selfie of 2016, I focus on the rise of selfie culture and public discourse around emerging digital representations of women's bodies. The selfie is a relatively new phenomenon, and is particularly curious because of the subject/object paradox it creates: in taking a selfie, a person asserts control over their own image but, at the same time, becomes an object in their own gaze. My argument is that selfies, like other assertions of bodily subjectivity in digital spaces, are a threat to patriarchal structures that paint women as immanent, as object, as reflected in public discourse around Kardashian-West's International Women's Day selfie. I draw on both Jean-Paul Sartre's and Simone de Beauvoir's work on subjectivity in existentialism and phenomenology, as well as Amy Shields Dobson's work on post-feminism and young women's projections of self, in order to delineate what it is about the selfie that creates this paradox. I also make reference to the work of Elizabeth Grosz and Frantz Fanon in relation to a colonial hierarchy that prioritises mind over body, as well as Laura Mulvey's work on the male gaze.
According to Heidegger's Being and Time, social relations are constitutive of the core features of human agency. On this view, which I call a 'strong conception' of sociality, the core features of human agency cannot obtain in an individual subject independently of social relations to others. I explain the strong conception of sociality captured by Heidegger's underdeveloped notion of 'being-with' by reconstructing Heidegger's critique of the 'weak conception' of sociality characteristic of Kant's theory of agency. According to a weak conception, sociality is a mere aggregation of individual subjects, and the core features of human agency are built into each individual mind. The weak conception of sociality remains widely taken for granted today. I show that Christine Korsgaard, one of the most creative contemporary appropriators of Kant, operates with a weak conception of sociality, and that this produces a problematic explanatory deficiency in her view: she is unable to explain the peculiar motivational efficacy of shared social norms. Heidegger's view is tailor-made to explain this phenomenon. I end by sketching how Heidegger provides a social explanation of a major systematic concern animating Korsgaard: the importance of individual autonomy and answerability in human life.
Both Martin Heidegger and Harry Frankfurt have argued that the fundamental feature of human identity is care. Both contend that caring is bound up with the fact that we are finite beings related to our own impending death, and both argue that caring has a distinctive temporal structure, circular and non-instantaneous. In this paper, I explore the way Heidegger and Frankfurt each understand the relations among care, death, and time, and I argue for the superiority of the Heideggerian version of this nest of claims. Frankfurt claims that we should conceive of the most basic commitments which practically orient a person in the world and define his identity ("volitional necessities") as naturalistic facts, foundational for and located completely outside the normative space of reasons. In support of this he appeals to the supposedly foundational role played in human life by the instinct for self-preservation, what Frankfurt calls the "love of living." The claim is that in questions of practical identity there is a definite priority of the factual over the normative. Frankfurt's naturalistic model of volitional necessity is motivated by a misunderstanding of the temporal structure of care, a misunderstanding that helps lead him to an implausible conception of the basic structures of human identity. Heidegger advances an anti-naturalistic conception of caring, one bound up with his way of understanding how human beings relate to their own future. I argue that the existential, temporal, and normative significance that Frankfurt attributes to the naturalized "love of living" is better captured by the Heideggerian claim that human identity is defined by being "for-the-sake-of" certain projects and commitments, a way of being lived out in what Heidegger calls "being-towards-death."
In common with traditional forms of epistemic internalism, epistemological disjunctivism attempts to incorporate an awareness condition on justification. Unlike traditional forms of internalism, however, epistemological disjunctivism rejects the so-called New Evil Genius thesis. In so far as it rejects the New Evil Genius thesis, it is revisionary.

After explaining what epistemological disjunctivism is, and how it relates to traditional forms of epistemic internalism and externalism, I shall argue that the epistemological disjunctivist's account of the intuitions underlying the New Evil Genius thought experiment is at best incomplete. As presented, therefore, epistemological disjunctivism is unable to accommodate the core guiding intuitions of epistemic internalism. Given the stated aim of not being revisionary on this score, the view is at a dialectical disadvantage relative to the traditional forms of epistemic internalism it is meant to replace. Unfortunately, therefore, at present the impasse between internalism and externalism remains.
The New Evil Demon problem has been hotly debated since the case was introduced in the early 1980s (e.g. Lehrer and Cohen 1983; Cohen 1984), and interest in the topic has recently increased. In a forthcoming collection of papers on the New Evil Demon problem (Dutant and Dorsch, forthcoming), at least two of the papers, both by prominent epistemologists, attempt to resist the problem by appealing to the distinction between justification and excuses. My primary aim here is to critically evaluate this new excuse maneuver as a response to the New Evil Demon problem.

This response attempts to give us reason to reject the idea that victims of the New Evil Demon have justification for believing as they do. I shall argue that the approach is ultimately unsuccessful, though much of value can be learned from these attempts. In particular, progress in the debate can be made by following those who advance the excuse maneuver in making explicit the connection between epistemic justification and epistemic norms. Doing so clarifies both the questions being debated and the methodology being used to answer them.
You may not know me well enough to evaluate me in terms of my moral character, but I take it you believe I can be evaluated: it sounds strange to say that I am indeterminate, neither good nor bad nor intermediate. Yet I argue that the claim that most people are indeterminate is the conclusion of a sound argument—the indeterminacy paradox—with two premises: (1) most people are fragmented (they would behave deplorably in many situations and admirably in many others); (2) fragmentation entails indeterminacy. I support (1) by examining psychological experiments in which most participants behave deplorably (e.g., by maltreating "prisoners" in a simulated prison) or admirably (e.g., by intervening in a simulated theft). I support (2) by arguing that, according to certain plausible conceptions, character evaluations presuppose behavioral consistency (lack of fragmentation). Possible reactions to the paradox include: (a) denying that the experiments are relevant to character; (b) upholding conceptions according to which character evaluations do not presuppose consistency; (c) granting that most people are indeterminate and explaining why it appears otherwise. I defend (c) against (a) and (b).
I defend the following version of the ought-implies-can principle: (OIC) by virtue of conceptual necessity, an agent at a given time has an (objective, pro tanto) obligation to do only what the agent at that time has the ability and opportunity to do. In short, obligations correspond to ability plus opportunity. My argument has three premises: (1) obligations correspond to reasons for action; (2) reasons for action correspond to potential actions; (3) potential actions correspond to ability plus opportunity. In the bulk of the paper I address six objections to OIC: three based on putative counterexamples, and three based on arguments to the effect that OIC conflicts with the is/ought thesis, the possibility of hard determinism, and the denial of the Principle of Alternate Possibilities.
Does the recent success of Podemos and Syriza herald a new era of inclusive, egalitarian left populism? Because leaders of both parties are former students of Ernesto Laclau and cite his account of populism as guiding their political practice, this essay considers whether his theory supports hope for a new kind of populism. For Laclau, the essence of populism is an "empty signifier" that provides a means by which anyone can identify with the people as a whole. However, the concept of the empty signifier is not as neutral as he assumes. As I show by analyzing the role of race in his theory, some subjects are constituted in a way that prevents their unmediated identification with the people. Consequently, Laclau's view should be read as symptomatic of the problems with populist logic if its adherents are to avoid reproducing its exclusions and practice a more inclusive politics.
In this article I argue that the value of epistemic justification cannot be adequately explained as being instrumental to truth. I intend to show that false belief, which is no means to truth, can nevertheless still be of epistemic value. This in turn will make a good prima facie case that justification is valuable for its own sake. If this is right, we will also have found reason to think that truth-value monism is false: assuming that true belief does have value, there is more of final epistemic value than mere true belief.
The current resurgence of interest in cognition and in the nature of cognitive processing has brought with it a renewed interest in the early work of Husserl, which contains one of the most sustained attempts to come to grips with the problems of logic from a cognitive point of view. Logic, for Husserl, is a theory of science; but it is a theory which takes seriously the idea that scientific theories are constituted by the mental acts of cognitive subjects. The present essay begins with an exposition of Husserl's act-based conception of what a science is, and goes on to consider his account of the role of linguistic meanings, of the ontology of scientific objects, and of evidence and truth. The essay concentrates almost exclusively on the Logical Investigations of 1900/01. This is not only because this work, surely Husserl's single most important masterpiece, has been overshadowed first by his Ideas I and later by the Crisis. It is also because the Investigations contain, in a peculiarly clear and pregnant form, a whole panoply of ideas on logic and cognitive theory which either simply disappeared in Husserl's own later writings or became obfuscated by an admixture of that great mystery which is 'transcendental phenomenology'.
What is it to be a woman? What is it to be a man? We start by laying out desiderata for an analysis of 'woman' and 'man': descriptively, it should link these gender categories to sex biology without reducing them to sex biology; politically, it should help us explain and combat traditional sexism while also allowing us to make sense of the activist view that gendering should be consensual. Using a Putnam-style 'Twin Earth' example, we argue that none of the existing analyses in the feminist literature succeeds in meeting all of our desiderata. Finally, we propose a positive account that we believe can satisfy all the desiderata outlined. According to our theory, the genders 'woman' and 'man' are individuated not by their contemporary connections to sex biology, but by their historical continuity with classes that were originally closely connected to sex biology.
In virtue of what are a later and an earlier group members of one and numerically the same tradition? Gallie was one of the few philosophers to have engaged with issues surrounding this question. My article is not a faithful exegesis of Gallie but develops a terminology, based on some of his insights, in which to discuss issues surrounding the numerical identity of a tradition over time.
A uniform theory of conditionals is one which compositionally captures the behavior of both indicative and subjunctive conditionals without positing ambiguities. This paper raises new problems for the closest thing to a uniform analysis in the literature (Stalnaker, Philosophia, 5, 269–286, 1975) and develops a new theory which solves them. I also show that this new analysis provides an improved treatment of three phenomena: the import-export equivalence, reverse Sobel sequences, and disjunctive antecedents. While these results concern central issues in the study of conditionals, broader themes in the philosophy of language and formal semantics are also engaged here. The new analysis exploits a dynamic conception of meaning, on which the meaning of a symbol is its potential to change an agent's mental state (or the state of a conversation) rather than the symbol's content (e.g. the proposition it expresses). The analysis of conditionals is also built on the idea that the contrast between subjunctive and indicative conditionals parallels a contrast between revising and consistently extending a body of information.
This chapter first surveys general issues in the epistemic internalism/externalism debate: what the distinction is, what motivates it, and what arguments can be given on both sides.

The second part of the chapter examines the internalism/externalism debate with regard to the specific case of the epistemology of memory belief.
We presuppose a position of scientific realism to the effect (i) that the world exists and (ii) that, through the working out of ever more sophisticated theories, our scientific picture of reality will approximate ever more closely to the world as it really is. Against this background, consider now the following question: do the empirical theories with the help of which we seek to approximate a good or true picture of reality rest on any non-empirical presuppositions? One can answer this question with either a 'yes' or a 'no'. 'No' is the preferred answer of most contemporary methodologists (Murray Rothbard is one distinguished counterexample to this trend), who maintain that empirical theories are completely free of non-empirical ('a priori') admixtures and who see science as a matter of the gathering of pure 'data' obtained through simple observation. From such data scientific propositions are then supposed to be somehow capable of being established.
Transactive memory theory describes the processes by which benefits for memory can occur when remembering is shared in dyads or groups. In contrast, cognitive psychology experiments demonstrate that social influences on memory disrupt and inhibit individual recall. However, most research in cognitive psychology has focused on groups of strangers recalling relatively meaningless stimuli. In the current study, we examined social influences on memory in groups with a shared history, who were recalling a range of stimuli, from word lists to personal, shared memories. We focused in detail on the products and processes of remembering during in-depth interviews with 12 older married couples. These interviews consisted of three recall tasks: (1) word list recall; (2) personal list recall, where stimuli were relevant to the couples' shared past; and (3) an open-ended autobiographical interview. We conducted these tasks individually and then collaboratively two weeks later. Across each of the tasks, although some couples demonstrated collaborative inhibition, others demonstrated collaborative facilitation. We identified a number of factors that predicted collaborative success, in particular group-level strategy use. Our results show that collaboration may help or hinder memory, and that certain interactions are more likely to produce collaborative benefits.
In his Edifying Discourses, Søren Kierkegaard published a sermon entitled 'The Unchangeableness of God', in which he reiterated the dogma which dominated Catholic, Protestant and even Jewish expressions of classical supernaturalist theology from the first century A.D. until the advent of process theology in the twentieth century. The dogma that, as a perfect being, God must be totally unchanging in every conceivable respect was expressed by Kierkegaard in such ways as: He changes all, Himself unchanged. When everything seems stable, and in the overturn of all things, He remains equally unchanged; no change touches Him, not even the shadow of a change; in unaltered clearness He, the father of lights, remained eternally unchanged.
This paper assesses branching spacetime theories in light of metaphysical considerations concerning time. I present the A, B, and C series in terms of the temporal structure they impose on sets of events, and raise problems for two elements of extant branching spacetime theories—McCall's 'branch attrition' and the 'no backward branching' feature of Belnap's 'branching space-time'—in terms of their respective A- and B-theoretic nature. I argue that McCall's presentation of branch attrition can only be coherently formulated on a model with at least two temporal dimensions, and that this results in severing the link between branch attrition and the flow of time. I argue that 'no backward branching' prohibits Belnap's theory from capturing the modal content of indeterministic physical theories, and results in its ascribing to the world a time-asymmetric modal structure that lacks physical justification.
In general, epistemic internalists hold that an individual's justification for a belief is exhausted by her reflectively accessible reasons for thinking that the contents of her beliefs are true. Applied to the epistemology of testimony, a hearer's justification for beliefs acquired through testimony is exhausted by her reflectively accessible reasons to think that the contents of the speaker's testimony are true. A consequence of internalism is that subjects who are alike with respect to their reflectively accessible reasons are alike with respect to what they have justification to believe. Testimony should be no different: hearers who are alike with respect to reflectively accessible reasons to think that a speaker's testimony is true are alike with respect to their justification for beliefs based upon that testimony. But it has recently been argued that this view faces powerful counterexamples. So the central question is this: assuming that a hearer can acquire justification to believe a proposition through the testimony of a speaker, can epistemic internalism provide the resources to explain how such justification is possible? My aim in this paper is to address these counterexamples and, in so doing, defend epistemic internalist accounts of testimony.
Imperatives cannot be true or false, so they are shunned by logicians. And yet imperatives can be combined by logical connectives: "kiss me and hug me" is the conjunction of "kiss me" with "hug me". This example may suggest that declarative and imperative logic are isomorphic: just as the conjunction of two declaratives is true exactly if both conjuncts are true, the conjunction of two imperatives is satisfied exactly if both conjuncts are satisfied—what more is there to say? Much more, I argue. "If you love me, kiss me", a conditional imperative, mixes a declarative antecedent ("you love me") with an imperative consequent ("kiss me"); it is satisfied if you love and kiss me, violated if you love but don't kiss me, and avoided if you don't love me. So we need a logic of three-valued imperatives which mixes declaratives with imperatives. I develop such a logic.
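The three-valued evaluation of the conditional imperative described in this abstract can be sketched in code. This is a minimal illustration only: the names (`Val`, `conditional`, `conj`) and the conjunction clause for mixed cases are my assumptions, not the paper's formal apparatus; only the satisfied/violated/avoided verdicts for "If you love me, kiss me" come from the text above.

```python
from enum import Enum

class Val(Enum):
    SATISFIED = "satisfied"
    VIOLATED = "violated"
    AVOIDED = "avoided"

def conditional(antecedent_true: bool, consequent_obeyed: bool) -> Val:
    """'If you love me, kiss me': satisfied if you love and kiss me,
    violated if you love but don't kiss me, avoided if you don't love me."""
    if not antecedent_true:
        return Val.AVOIDED
    return Val.SATISFIED if consequent_obeyed else Val.VIOLATED

def conj(a: Val, b: Val) -> Val:
    """One natural conjunction (an assumption, not the paper's definition):
    violated if either conjunct is violated, satisfied if both are
    satisfied, avoided otherwise."""
    if Val.VIOLATED in (a, b):
        return Val.VIOLATED
    if a is Val.SATISFIED and b is Val.SATISFIED:
        return Val.SATISFIED
    return Val.AVOIDED
```

The point of the third value is visible in `conditional(False, ...)`: when the declarative antecedent is false, the imperative is neither satisfied nor violated, which a two-valued obeyed/disobeyed scheme cannot express.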
What makes an intellectual virtue a virtue? A straightforward and influential answer to this question has been given by virtue-reliabilists: a trait is a virtue only insofar as it is truth-conducive. In this paper I shall contend that recent arguments advanced by Jack Kwong in defence of the reliabilist view are good as far as they go, in that they advance the debate by usefully clarifying how best to understand the nature of open-mindedness. But I shall argue that these considerations do not establish the desired conclusion that open-mindedness is truth-conducive. To establish this much stronger conclusion we would need an adequate reply to what I shall call Montmarquet's objection. I argue that Linda Zagzebski's reply to Montmarquet's objection, to which Kwong defers, is inadequate. I conclude that it is contingent whether open-mindedness is truth-conducive, and that if a necessary tie to truth is what makes an intellectual virtue a virtue, then the status of open-mindedness as an intellectual virtue is jeopardised. We either need an adequate reliabilist response to Montmarquet's objection, or we must seek alternative accounts of what it is that makes a virtue a virtue. I conclude by briefly outlining some alternatives.
This paper introduces a new, expanded range of relevant cognitive psychological research on collaborative recall and social memory to the philosophical debate on extended and distributed cognition. We start by examining the case for extended cognition based on the complementarity of inner and outer resources, by which neural, bodily, social, and environmental resources with disparate but complementary properties are integrated into hybrid cognitive systems, transforming or augmenting the nature of remembering or decision-making. Adams and Aizawa, noting this distinctive complementarity argument, say that they agree with it completely, but they describe it as "a non-revolutionary approach" which leaves "the cognitive psychology of memory as the study of processes that take place, essentially without exception, within nervous systems." In response, we carve out, on distinct conceptual and empirical grounds, a rich middle ground between internalist forms of cognitivism and radical anti-cognitivism. Drawing both on the extended cognition literature and on Sterelny's account of the "scaffolded mind" (this issue), we develop a multidimensional framework for understanding varying relations between agents and external resources, both technological and social. On this basis we argue that, independent of any more "revolutionary" metaphysical claims about the partial constitution of cognitive processes by external resources, a thesis of scaffolded or distributed cognition can substantially influence or transform explanatory practice in cognitive science. Critics also cite various empirical results as evidence against the idea that remembering can extend beyond skull and skin. We respond with a more principled, representative survey of the scientific psychology of memory, focussing in particular on robust recent empirical traditions for the study of collaborative recall and transactive social memory.
We describe our own empirical research on socially distributed remembering, aimed at identifying conditions for mnemonic emergence in collaborative groups. Philosophical debates about extended, embedded, and distributed cognition can thus make richer, mutually beneficial contact with independently motivated research programs in the cognitive psychology of memory.
Why does classical equilibrium statistical mechanics work? Malament and Zabell (1980) noticed that, for ergodic dynamical systems, the unique absolutely continuous invariant probability measure is the microcanonical. Earman and Rédei (1996) replied that systems of interest are very probably not ergodic, so that absolutely continuous invariant probability measures very distant from the microcanonical exist. In response, I define the generalized properties of epsilon-ergodicity and epsilon-continuity, review computational evidence indicating that systems of interest are epsilon-ergodic, adapt Malament and Zabell's defense of absolute continuity to support epsilon-continuity, and prove that, for epsilon-ergodic systems, every epsilon-continuous invariant probability measure is very close to the microcanonical.
Mitochondrial DNA (mtDNA) diseases are a group of neuromuscular diseases that often cause suffering and premature death. New mitochondrial replacement techniques (MRTs) may offer women with mtDNA diseases the opportunity to have healthy offspring to whom they are genetically related. MRTs will likely be ready to license for clinical use in the near future, and a discussion of the ethics of the clinical introduction of MRTs is needed. This paper begins by evaluating three concerns about the safety of MRTs for clinical use on humans: (1) Is it ethical to use MRTs if safe alternatives exist? (2) Would persons with three genetic contributors be at risk of suffering? and (3) Can society trust that MRTs will be made available for humans only once adequate safety testing has taken place, and that MRTs will only be licensed for clinical use in a way that minimises risks? It is then argued that the ethics debate about MRTs should be reoriented towards recommending ways to reduce the possible risks of MRT use on humans. Two recommendations are made: (1) licensed clinical access to MRTs should only be granted to prospective parents if they intend to tell their children about their MRT conception by adulthood; and (2) sex selection should be used in conjunction with the clinical use of MRTs, in order to reduce transgenerational health risks.
The current assessment of behaviors in the inventories used to diagnose autism spectrum disorders (ASD) focuses on observation and discrete categorizations. Behaviors require movements, yet measurements of physical movements are seldom included. Their inclusion, however, could provide an objective characterization of behavior to help unveil interactions between the peripheral and the central nervous systems. Such interactions are critical for the development and maintenance of spontaneous autonomy, self-regulation and voluntary control. At present, these approaches cannot deal with the heterogeneous, dynamic and stochastic nature of development. Accordingly, they leave no avenues for real-time or longitudinal assessments of change in a coping system continuously adapting and developing compensatory mechanisms. We offer a new unifying statistical framework to reveal re-afferent kinesthetic features of the individual with ASD. The new methodology is based on the non-stationary stochastic patterns of minute fluctuations (micro-movements) inherent to our natural actions. Such patterns of behavioral variability provide re-entrant sensory feedback contributing to the autonomous regulation and coordination of the motor output. From an early age, this feedback supports centrally driven volitional control and fluid, flexible transitions between intentional and spontaneous behaviors. We show that in ASD there is a disruption in the maturation of this form of proprioception. Despite this disturbance, each individual has unique adaptive compensatory capabilities that we can unveil and exploit to evoke faster and more accurate decisions. Measuring the kinesthetic re-afference in tandem with stimuli variations, we can detect changes in micro-movements indicative of a more predictive and reliable kinesthetic percept.
Our methods address the heterogeneity of ASD with a personalized approach grounded in the inherent sensory-motor abilities that the individual has already developed.
Imperatives cannot be true, but they can be obeyed or binding: "Surrender!" is obeyed if you surrender and is binding if you have a reason to surrender. A pure declarative argument—whose premisses and conclusion are declaratives—is valid exactly if, necessarily, its conclusion is true if the conjunction of its premisses is true; similarly, I suggest, a pure imperative argument—whose premisses and conclusion are imperatives—is obedience-valid (alternatively: bindingness-valid) exactly if, necessarily, its conclusion is obeyed (alternatively: binding) if the conjunction of its premisses is. I argue that there are two kinds of bindingness, and that a vacillation between two corresponding variants of bindingness-validity largely explains conflicting intuitions concerning the validity of some pure imperative arguments. I prove that for each of those two variants of bindingness-validity there is an equivalent variant of obedience-validity. Finally, I address alternative accounts of pure imperative inference.
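The definition of obedience-validity above admits a small toy-model sketch. The modeling choices here are mine, not the paper's: worlds are represented as sets of actions performed, and an imperative is identified with its "obedience set", the set of worlds in which it is obeyed; obedience-validity then becomes a subset check.

```python
# Worlds: frozensets of atomic actions performed.
# An imperative is modeled by its obedience set (a hypothetical simplification).

def obedience_valid(premises, conclusion):
    """Obedience-valid: every world that obeys all premises obeys the
    conclusion. Assumes at least one premise."""
    obeying_all = set.intersection(*premises)
    return obeying_all <= conclusion

worlds = [frozenset(s) for s in ([], ["surrender"], ["fight"],
                                 ["surrender", "fight"])]
surrender = {w for w in worlds if "surrender" in w}
surrender_or_fight = {w for w in worlds if w & {"surrender", "fight"}}

# From 'Surrender!' one may infer 'Surrender or fight!'...
assert obedience_valid([surrender], surrender_or_fight)
# ...but not conversely: a world where you only fight obeys the disjunction
# without obeying 'Surrender!'.
assert not obedience_valid([surrender_or_fight], surrender)
```

This sketch captures only the obedience side; the paper's bindingness-validity, which quantifies over reasons rather than compliance, is not modeled here.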
Epistemic internalism, by stressing the indispensability of the subject's perspective, strikes many as plausible at first blush. However, many have rejected the position because certain kinds of beliefs are thought to pose special problems for it. For example, internalists tend to hold that so long as a justifier is available to the subject either immediately or upon introspection, it can serve to justify beliefs. Many have thought it obvious that no such view can be correct, as it has been alleged that internalism cannot account for the possibility of the justification of beliefs stored in memory.

My aim in this paper is to offer a response that explains how memory justification is possible in a way that is consistent with epistemic internalism and an awareness condition on justification. Specifically, I explore the plausibility of various options open to internalists, including both foundationalist and non-foundationalist approaches to the structure of justification. I intend to show that despite other difficult challenges that epistemic internalism might face, memory belief poses no special problems that the resources of internalism cannot adequately address.
Edited proceedings of an interdisciplinary symposium on consciousness held at the University of Cambridge in January 1978. Includes a foreword by Freeman Dyson. Chapter authors: G. Vesey, R.L. Gregory, H.C. Longuet-Higgins, N.K. Humphrey, H.B. Barlow, D.M. MacKay, B.D. Josephson, M. Roth, V.S. Ramachandran, S. Padfield, and (editorial summary only) E. Noakes. A scanned pdf is available from this web site (philpapers.org), while alternative versions more suitable for copying text are available from https://www.repository.cam.ac.uk/handle/1810/245189.

Page numbering convention for the pdf version viewed in a pdf viewer is as follows: 'go to page n' accesses the pair of scanned pages 2n and 2n+1. Applicable licence: CC Attribution-NonCommercial-ShareAlike 2.0.
"Surrender; therefore, surrender or fight" is apparently an argument corresponding to an inference from an imperative to an imperative. Several philosophers, however (Williams 1963; Wedeking 1970; Harrison 1991; Hansen 2008), have denied that imperative inferences exist, arguing that (1) no such inferences occur in everyday life, (2) imperatives cannot be premises or conclusions of inferences because it makes no sense to say, for example, "since surrender" or "it follows that surrender or fight", and (3) distinct imperatives have conflicting permissive presuppositions ("surrender or fight" permits you to fight without surrendering, but "surrender" does not), so issuing distinct imperatives amounts to changing one's mind and thus cannot be construed as making an inference. In response I argue inter alia that, on a reasonable understanding of 'inference', some everyday-life inferences do have imperatives as premises and conclusions, and that issuing imperatives with conflicting permissive presuppositions does not amount to changing one's mind.
The discipline of applied ethics already has a certain familiarity in the Anglo-Saxon world, above all through the work of Peter Singer. Applied ethics uses the tools of moral philosophy to resolve practical problems of the sort which arise, for example, in the running of hospitals. On April 24-25, 1998, the University at Buffalo (New York) hosted the world's first conference on a new, sister discipline: applied ontology. Applied ontologists seek to apply ontological tools to solving practical problems of the sort which arise in various extra-philosophical domains, including law and government administration.
This paper proposes a semantics for free choice permission that explains both the non-classical behavior of modals and disjunction in sentences used to grant permission, and their classical behavior under negation. It also explains why permissions can expire when new information comes in and why free choice arises even when modals scope under disjunction. On the proposed approach, deontic modals update preference orderings, and connectives operate on these updates rather than propositions. The success of this approach stems from its capacity to capture the difference between expressing the preferences that give rise to permissions and conveying propositions about those preferences.
This paper presents a new taxonomy of sex/gender concepts based on the idea of starting with a few basic components of the sex/gender system, and exhausting the possible types of simple associations and identities based on these. The resulting system is significantly more fine-grained than most competitors, and helps to clarify a number of points of confusion and conceptual tension in academic and activist conversations about feminism, transgender politics, and the social analysis of gender.
According to Hempel's paradox, evidence (E) that an object is a nonblack nonraven confirms the hypothesis (H) that every raven is black. According to the standard Bayesian solution, E does confirm H but only to a minute degree. This solution relies on the almost never explicitly defended assumption that the probability of H should not be affected by evidence that an object is nonblack. I argue that this assumption is implausible, and I propose a way out for Bayesians. Sections: Introduction; Hempel's paradox, the standard Bayesian solution, and the disputed assumption; Attempts to defend the disputed assumption; Attempts to refute the disputed assumption; A way out for Bayesians; Conclusion.
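The "minute degree" of confirmation at issue in the standard Bayesian solution can be illustrated with a toy calculation. The two-hypothesis model and the numbers below are my own for illustration, not the paper's: learning that a randomly selected nonblack object is a nonraven raises the probability of H only negligibly, because ravens are assumed to be extremely rare among nonblack things.

```python
# Toy model (illustrative numbers, not from the paper): H = "every raven
# is black". We learn E: a randomly chosen nonblack object is a nonraven.
prior_H = 0.5
# Assumed: if H is false, a nonblack object has a tiny chance of being a raven.
p_raven_given_nonblack_notH = 1e-6

p_E_given_H = 1.0  # under H, every nonblack thing is a nonraven
p_E_given_notH = 1.0 - p_raven_given_nonblack_notH
p_E = prior_H * p_E_given_H + (1.0 - prior_H) * p_E_given_notH

posterior_H = prior_H * p_E_given_H / p_E
print(posterior_H - prior_H)  # a minute positive increment
```

On these numbers E does confirm H (the increment is positive), but only on the order of one part in a million, which is the sense in which the standard solution calls the confirmation minute.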
Truthmaker says that things, broadly construed, are the ontological grounds of truth and, therefore, that things make truths true. Recently, there have been a number of arguments purporting to show that if one embraces Truthmaker, then one ought to embrace Truthmaker Maximalism—the view that all non-analytic propositions have truthmakers. But then if one embraces Truthmaker, one ought to think that negative existentials have truthmakers. I argue that this is false. I begin by arguing that recent attempts by Ross Cameron and Jonathan Schaffer to provide negative existentials with truthmakers fail. I then argue that the conditional—if one embraces Truthmaker, then one ought to embrace Truthmaker Maximalism—is false by considering worlds where very little, if anything at all, exists. The conclusion is that thinking that negative existentials do not have truthmakers, and therefore rejecting Truthmaker Maximalism, need not worry Truthmaker embracers.
Past work has demonstrated that people’s moral judgments can influence their judgments in a number of domains that might seem to involve straightforward matters of fact, including judgments about freedom, causation, the doing/allowing distinction, and intentional action. The present studies explore whether the effect of morality in these four domains can be explained by changes in the relevance of alternative possibilities. More precisely, we propose that moral judgment influences the degree to which people regard certain alternative possibilities as relevant, which in turn impacts intuitions about freedom, causation, doing/allowing, and intentional action. Employing the stimuli used in previous research, Studies 1a, 2a, 3a, and 4a show that the relevance of alternatives is influenced by moral judgments and mediates the impact of morality on non-moral judgments. Studies 1b, 2b, 3b, and 4b then provide direct empirical evidence for the link between the relevance of alternatives and judgments in these four domains by manipulating (rather than measuring) the relevance of alternative possibilities. Lastly, Study 5 demonstrates that the critical mechanism is not whether alternative possibilities are considered, but whether they are regarded as relevant. These studies support a unified framework for understanding the impact of morality across these very different kinds of judgments.
Analytic epistemologists agree that, whatever else is true of epistemic justification, it is distinct from knowledge. However, if recent work by Jonathan Sutton is correct, this view is deeply mistaken, for according to Sutton justification is knowledge. That is, a subject is justified in believing that p iff he knows that p. Sutton further claims that there is no concept of epistemic justification distinct from knowledge. Since knowledge is factive, a consequence of Sutton’s view is that there are no false justified beliefs.

Following Sutton, I will begin by outlining kinds of beliefs that do not constitute knowledge but that seem to be justified. I will then be in a position to critically evaluate Sutton’s arguments for his position that justification is knowledge, concluding that he fails to establish his bold thesis. In the course of so doing, I will defend the following rule of assertion: (The JBK-rule) One must: assert p only if one has justification to believe that one knows that p.
No existing conditional semantics captures the dual role of 'if' in embedded interrogatives — 'X wonders if p' — and conditionals. This paper presses the importance and extent of this challenge, linking it to cross-linguistic patterns and other phenomena involving conditionals. Among these other phenomena are conditionals with multiple 'if'-clauses in the antecedent — 'if p and if q, then r' — and relevance conditionals — 'if you are hungry, there is food in the cupboard'. Both phenomena are shown to be problematic for existing analyses. Surprisingly, the decomposition of conditionals needed to capture the link with interrogatives provides a new analysis that captures all three phenomena. The model-theoretic semantics offered here relies on a dynamic conception of meaning and compositionality, a feature discussed throughout.
The Principal Principle (PP) says that, for any proposition A, given any admissible evidence and the proposition that the chance of A is x%, one's conditional credence in A should be x%. Humean Supervenience (HS) claims that, among possible worlds like ours, no two differ without differing in the spacetime-point-by-spacetime-point arrangement of local properties. David Lewis (1986b, 1994a) has argued that PP contradicts HS, and the validity of his argument has been endorsed by Bigelow et al. (1993), Thau (1994), Hall (1994), Strevens (1995), Ismael (1996), Hoefer (1997), and Black (1998). Against this consensus, I argue that PP might not contradict HS: Lewis's argument is invalid, and every attempt – within a broad class of attempts – to amend the argument fails.
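One familiar consequence of PP, sketched here as my own illustration rather than as any part of Lewis's argument, is that an agent's unconditional credence in A should equal her expectation of A's chance, weighted by her credences in the rival chance hypotheses:

```python
# Illustration (my own toy numbers): by PP, credence in A conditional on
# "chance(A) = x" is x, so unconditional credence in A is the expectation
# of chance over one's credences in the chance hypotheses.
credences_in_chance_hypotheses = {0.2: 0.5, 0.8: 0.5}  # chance(A) -> credence

credence_A = sum(credence * chance
                 for chance, credence in credences_in_chance_hypotheses.items())
print(credence_A)
```

Here an agent who splits her credence evenly between a 20% and an 80% chance hypothesis ends up with credence 0.5 in A, the chance expectation.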
An epistemic duty would be a duty to believe, disbelieve, or withhold judgment from a proposition, and it would be grounded in purely evidential or epistemic considerations. If I promise to believe it is raining, my duty to believe is not epistemic. If my evidence is so good that, in light of it alone, I ought to believe it is raining, then my duty to believe supposedly is epistemic. I offer a new argument for the claim that there are no epistemic duties. Though people do sometimes have duties to believe, disbelieve, or withhold judgment from propositions, those duties are never grounded in purely epistemic considerations.
Divine Simplicity has it that God is absolutely simple. God exhibits no metaphysical complexity; he has neither proper parts nor distinct intrinsic properties. Recently, Jeffrey Brower has put forward an account of divine simplicity that has it that God is the truthmaker for all intrinsic essential predications about him. This allows Brower to preserve the intuitive thought that God is not a property but a concrete being. In this paper, I provide two objections to Brower’s account that are meant to show that whatever merits this account of divine simplicity has, plausibility is not one of them.
Kadri Vihvelin, in "What time travelers cannot do" (Philos Stud 81: 315-330, 1996), argued that no time traveler can kill the baby who in fact is her younger self, because (V1) "if someone would fail to do something, no matter how hard or how many times she tried, then she cannot do it", and (V2) if a time traveler tried to kill her baby self, she would always fail. Theodore Sider (Philos Stud 110: 115-138, 2002) criticized Vihvelin's argument, and Ira Kiourti (Philos Stud 139: 343-352, 2008) criticized both Vihvelin's argument and Sider's critique. I present a critique of Vihvelin's argument different from both Sider's and Kiourti's critiques: I argue in a novel way that both V1 and V2 are false. Since Vihvelin's argument might be understood as providing a challenge to the possibility of time travel, if my critique succeeds then time travel survives such a challenge unscathed.
On the most popular account of material constitution, it is common for a material object to coincide precisely with one or more other material objects, ones that are composed of just the same matter but differ from it in sort. I argue that there is nothing that could ground the alleged difference in sort and that the account must be rejected.
"Procedural Justice" offers a theory of procedural fairness for civil dispute resolution. The core idea behind the theory is the procedural legitimacy thesis: participation rights are essential for the legitimacy of adjudicatory procedures. The theory yields two principles of procedural justice: the accuracy principle and the participation principle. The two principles require a system of procedure to aim at accuracy and to afford reasonable rights of participation qualified by a practicability constraint. The Article begins in Part I, Introduction, with two observations. First, the function of procedure is to particularize general substantive norms so that they can guide action. Second, the hard problem of procedural justice corresponds to the following question: How can we regard ourselves as obligated by legitimate authority to comply with a judgment that we believe (or even know) to be in error with respect to the substantive merits? The theory of procedural justice is developed in several stages, beginning with some preliminary questions and problems. The first question - what is procedure? - is the most difficult and requires an extensive answer: Part II, Substance and Procedure, defines the subject of the inquiry by offering a new theory of the distinction between substance and procedure that acknowledges the entanglement of the action-guiding roles of substantive and procedural rules while preserving the distinction between two ideal types of rules. The key to the development of this account of the nature of procedure is a thought experiment, in which we imagine a world with the maximum possible acoustic separation between substance and procedure. Part III, The Foundations of Procedural Justice, lays out the premises of general jurisprudence that ground the theory and answers a series of objections to the notion that the search for a theory of procedural justice is a worthwhile enterprise.
Sections II and III set the stage for the more difficult work of constructing a theory of procedural legitimacy. Part IV, Views of Procedural Justice, investigates the theories of procedural fairness found explicitly or implicitly in case law and commentary. After a preliminary inquiry that distinguishes procedural justice from other forms of justice, Part IV focuses on three models or theories. The first, the accuracy model, assumes that the aim of civil dispute resolution is correct application of the law to the facts. The second, the balancing model, assumes that the aim of civil procedure is to strike a fair balance between the costs and benefits of adjudication. The third, the participation model, assumes that the very idea of a correct outcome must be understood as a function of process that guarantees fair and equal participation. Part IV demonstrates that none of these models provides the basis for a fully adequate theory of procedural justice. In Part V, The Value of Participation, the lessons learned from analysis and critique of the three models are then applied to the question whether a right of participation can be justified for reasons that are not reducible to either its effect on the accuracy or its effect on the cost of adjudication. The most important result of Part V is the Participatory Legitimacy Thesis: it is (usually) a condition for the fairness of a procedure that those who are to be finally bound shall have a reasonable opportunity to participate in the proceedings. The central normative thrust of Procedural Justice is developed in Part VI, Principles of Procedural Justice. The first principle, the Participation Principle, stipulates a minimum (and minimal) right of participation, in the form of notice and an opportunity to be heard, that must be satisfied (if feasible) in order for a procedure to be considered fair. 
The second principle, the Accuracy Principle, specifies the achievement of legally correct outcomes as the criterion for measuring procedural fairness, subject to four provisos, each of which sets out circumstances under which a departure from the goal of accuracy is justified by procedural fairness itself. In Part VII, The Problem of Aggregation, the Participation Principle and the Accuracy Principle are applied to the central problem of contemporary civil procedure - the aggregation of claims in mass litigation. Part VIII offers some concluding observations about the point and significance of Procedural Justice.