Two major philosophers of the twentieth century, the German existential phenomenologist Martin Heidegger and the seminal Japanese Kyoto School philosopher Nishida Kitarō, are examined here in an attempt to discern to what extent their ideas may converge. Both are viewed as expressing, each through the lens of his own tradition, a world in transition with the rise of modernity in the West and its subsequent globalization. The popularity of Heidegger's thought among Japanese philosophers, despite its own admitted limitation to the Western "history of being," is connected to Nishida's opening of a uniquely Japanese path in its confrontation with Western philosophy. The focus is primarily on their later works (the post-Kehre Heidegger and the works of Nishida that have been designated "Nishida philosophy"), in which each in his own way attempts to overcome the subject-object dichotomy inherited from the tradition of Western metaphysics by looking to a deeper structure from out of which both subjectivity and objectivity are derived and which embraces both. For Heidegger, the answer lies in being as the opening of unconcealment, from out of which beings emerge, and for Nishida, it is the place of nothingness within which beings are co-determined in their oppositions and relations. Concepts such as Nishida's "discontinuous continuity," "absolutely self-contradictory identity" (between one and many, whole and part, world and things), the mutual interdependence of individuals, and the self-determination of the world through the co-relative self-determination of individuals, and Heidegger's "simultaneity" (zugleich) and "within one another" (ineinander) (of unconcealment and concealment, presencing and absencing), and their "between" (Zwischen) and "jointure" (Fuge), are examined. Through a discussion of these ideas, the suggestion is made of a possible "transition" (Übergang) of both Western and Eastern thinking, in their mutual encounter, both in relation to each other and each in relation to its own past history, leading both to a self-discovery in the other and to a simultaneous self-reconstitution.
In a recent issue of Philosophy East and West, Douglas Berger defends a new reading of Mūlamadhyamakakārikā XXIV: 18, arguing that most contemporary translators mistranslate the important term prajñaptir upādāya, misreading it as a compound indicating "dependent designation" or something of the sort, instead of taking it simply to mean "this notion, once acquired." He attributes this alleged error, pervasive in modern scholarship, to Candrakīrti, who, Berger correctly notes, argues for the interpretation he rejects. Berger's analysis, and the reading of the text he suggests on the basis of that analysis, are insightful and fascinating, and certainly generate an understanding of Nāgārjuna's enterprise that is welcome …
David Rosenthal explains conscious mentality in terms of two independent, though complementary, theories—the higher-order thought (“HOT”) theory of consciousness and quality-space theory (“QST”) about mental qualities. It is natural to understand this combination of views as constituting a kind of representationalism about experience—that is, a version of the view that an experience’s conscious character is identical with certain of its representational properties. At times, however, Rosenthal seems to resist this characterization of his view. We explore here whether and to what extent it makes sense to construe Rosenthal’s views as representationalist. Our goal is not merely terminological—discerning how best to use the expression ‘representationalism’. Rather, we argue that understanding Rosenthal’s account as a kind of representationalism permits us not only to make sense of broader debates within the philosophy of mind, but also to extend and clarify aspects of the view itself.
According to Rosenthal's higher-order thought (HOT) theory of consciousness, one is in a conscious mental state if and only if one is aware of oneself as being in that state via a suitable HOT. Several critics have argued that the possibility of so-called targetless HOTs (that is, HOTs that represent one as being in a state that does not exist) undermines the theory. Recently, Wilberg (2010) has argued that HOT theory can offer a straightforward account of such cases: since consciousness is a property of mental state tokens, and since there are no states to exhibit consciousness, one is not in conscious states in virtue of targetless HOTs. In this paper, I argue that Wilberg's account is problematic and that Rosenthal's version of HOT theory, according to which a suitable HOT is both necessary and sufficient for consciousness, is to be preferred to Wilberg's account. I then argue that Rosenthal's account can comfortably accommodate targetless HOTs because consciousness is best understood as a property of individuals, not a property of states.
I offer here a new hypothesis about the nature of implicit attitudes. Psychologists and philosophers alike often distinguish implicit from explicit attitudes by maintaining that we are aware of the latter, but not aware of the former. Recent experimental evidence, however, seems to challenge this account. It would seem, for example, that participants are frequently quite adept at predicting their own performances on measures of implicit attitudes. I propose here that most theorists in this area have nonetheless overlooked a commonsense distinction regarding how we can be aware of attitudes, a difference that fundamentally distinguishes implicit and explicit attitudes. Along the way, I discuss the implications that this distinction may hold for future debates about and experimental investigations into the nature of implicit attitudes.
Many have claimed that epistemic rationality sometimes requires us to have imprecise credal states (i.e. credal states representable only by sets of credence functions) rather than precise ones (i.e. credal states representable by single credence functions). Some writers have recently argued that this claim conflicts with accuracy-centered epistemology, i.e., the project of justifying epistemic norms by appealing solely to the overall accuracy of the doxastic states they recommend. But these arguments are far from decisive. In this essay, we prove some new results, which show that there is little hope for reconciling the rationality of credal imprecision with accuracy-centered epistemology.
Relationalism holds that perceptual experiences are relations between subjects and perceived objects. But much evidence suggests that perceptual states can be unconscious. We argue here that unconscious perception raises difficulties for relationalism. Relationalists would seem to have three options. First, they may deny that there is unconscious perception or question whether we have sufficient evidence to posit it. Second, they may allow for unconscious perception but deny that the relationalist analysis applies to it. Third, they may offer a relationalist explanation of unconscious perception. We argue that each of these strategies is questionable.
While there seems to be much evidence that perceptual states can occur without being conscious, some theorists have recently expressed scepticism about unconscious perception. We explore here two kinds of such scepticism: Megan Peters and Hakwan Lau's experimental work regarding the well-known problem of the criterion -- which seems to show that many purported instances of unconscious perception go unreported but are weakly conscious -- and Ian Phillips' theoretical consideration, which he calls the 'problem of attribution' -- the worry that many purported examples of unconscious perception are not perceptual, but rather merely informational and subpersonal. We argue that these concerns do not undermine the evidence for unconscious perception and that this sceptical approach results in a dilemma for the sceptic, who must either deny that there is unconscious mentality generally or explain why perceptual states are unique in the mind such that they cannot occur unconsciously. Both options, we argue, are problematic.
Representationalism holds that a perceptual experience's qualitative character is identical with certain of its representational properties. To date, most representationalists endorse atomistic theories of perceptual content, according to which an experience's content, and thus character, does not depend on its relations to other experiences. David Rosenthal, by contrast, proposes a view that is naturally construed as a version of representationalism on which experiences’ relations to one another determine their contents and characters. I offer here a new defense of this holistic representationalism, arguing that some objections to atomistic views are best interpreted as supporting it.
According to a traditional view, perceptual experiences are composites of distinct sensory and cognitive components. This dual-component theory has many benefits; in particular, it purports to offer a way forward in the debate over what kinds of properties perceptual experiences represent. On this kind of view, the issue reduces to the questions of what the sensory and cognitive components respectively represent. Here, I focus on the former topic. I propose a theory of the contents of the sensory aspects of perceptual experience that provides clear criteria for identifying what kinds of properties they represent.
I discuss here the nature of nonconscious mental states and the ways in which they may differ from their conscious counterparts. I first survey reasons to think that mental states can and often do occur without being conscious. Then, insofar as the nature of nonconscious mentality depends on how we understand the nature of consciousness, I review some of the major theories of consciousness and explore what restrictions they may place on the kinds of states that can occur nonconsciously. I close with a discussion of what makes a state mental, if consciousness is not the mark of the mental.
Perceptual experiences justify beliefs. A perceptual experience of a dog justifies the belief that there is a dog present. But there is much evidence that perceptual states can occur without being conscious, as in experiments involving masked priming. Do unconscious perceptual states provide justification as well? The answer depends on one’s theory of justification. While most varieties of externalism seem compatible with unconscious perceptual justification, several theories have recently afforded to consciousness a special role in perceptual justification. We argue that such views face a dilemma: either consciousness should be understood in functionalist terms, in which case our best current theories of consciousness do not seem to imbue consciousness with any special epistemic features, or it should not, in which case it is mysterious why only conscious states are justificatory. We conclude that unconscious perceptual justification is quite plausible.
I provide a comprehensive metaphysics of causation based on the idea that fundamentally things are governed by the laws of physics, and that derivatively difference-making can be assessed in terms of what fundamental laws of physics imply for hypothesized events. Highlights include a general philosophical methodology, the fundamental/derivative distinction, and my mature account of causal asymmetry.
In a recent paper, Berger and Nanay consider, and reject, three ways of addressing the phenomenon of unconscious perception within a naïve realist framework. Since these three approaches seem to exhaust the options open to naïve realists, and since there is said to be excellent evidence that perception of the same fundamental kind can occur both consciously and unconsciously, this is seen to present a problem for the view. We take this opportunity to show that all three approaches considered remain perfectly plausible ways of addressing unconscious perception within a naïve realist framework. So far from undermining the credibility of naïve realism, Berger and Nanay simply draw our attention to an important question to be considered by naïve realists in future work. Namely, which of the approaches considered is most likely to provide an accurate account of unconscious perception in each of its purported incarnations?
Mandik (2012) understands color-consciousness conceptualism to be the view that one deploys in a conscious qualitative state concepts for every color consciously discriminated by that state. Some argue that the experimental evidence that we can consciously discriminate barely distinct hues that are presented together but cannot do so when those hues are presented in short succession suggests that we can consciously discriminate colors that we do not conceptualize. Mandik maintains, however, that this evidence is consistent with our deploying a variety of nondemonstrative concepts for those colors and so does not pose a threat to conceptualism. But even if Mandik has shown that we deploy such concepts in these experimental conditions, there are cases of conscious states that discriminate colors but do not involve concepts of those colors. Mandik’s arguments sustain only a theory in the vicinity of conceptualism: the view that we possess concepts for every color we can discriminate consciously, but need not deploy those concepts in every conscious act of color discrimination.
This chapter reviews recent philosophical and neuroethical literature on the morality of moral neuroenhancements. It first briefly outlines the main moral arguments that have been made concerning moral status neuroenhancements. These are neurointerventions that would augment the moral status of human persons. It then surveys recent debate regarding moral desirability neuroenhancements: neurointerventions that augment the moral desirability of human character traits, motives or conduct. This debate has contested, among other claims, (i) Ingmar Persson and Julian Savulescu’s contention that there is a moral imperative to pursue the development of moral desirability neuroenhancements, (ii) Thomas Douglas’ claim that voluntarily undergoing moral desirability neuroenhancements would often be morally permissible, and (iii) David DeGrazia’s claim that moral desirability neuroenhancements would often be morally desirable. The chapter discusses a number of concerns that have been raised regarding moral desirability neuroenhancements, including concerns that they would restrict freedom, would produce only a superficial kind of moral improvement, would rely on technologies that are liable to be misused, and would frequently misfire, resulting in moral deterioration rather than moral improvement.
According to David Rosenthal’s higher-order thought (HOT) theory of consciousness, a mental state is conscious just in case one is aware of being in that state via a suitable HOT. Jesse Mulder (2016) recently objects: though HOT theory holds that conscious states are states that it seems to one that one is in, the view seems unable to explain how HOTs engender such seemings. I clarify here how HOT theory can adequately explain the relevant mental appearances, illustrating the explanatory power of HOT theory.
It seems that we can be directly accountable for our reasons-responsive attitudes—e.g., our beliefs, desires, and intentions. Yet, we rarely, if ever, have volitional control over such attitudes, volitional control being the sort of control that we exert over our intentional actions. This presents a trilemma: (Horn 1) deny that we can be directly accountable for our reasons-responsive attitudes, (Horn 2) deny that φ’s being under our control is necessary for our being directly accountable for φ-ing, or (Horn 3) deny that the relevant sort of control is volitional control. This paper argues that we should take Horn 3.
Using tools like argument diagrams and profiles of dialogue, this paper studies a number of examples of everyday conversational argumentation where determination of relevance and irrelevance can be assisted by means of adopting a new dialectical approach. According to the new dialectical theory, dialogue types are normative frameworks with specific goals and rules that can be applied to conversational argumentation. This paper shows how such dialectical models of reasonable argumentation can be applied to determine whether an argument in a specific case is relevant or not in these examples. The approach is based on a linguistic account of dialogue and text from congruity theory, and on the notion of a dialectical shift. Such a shift occurs where an argument starts out as fitting into one type of dialogue, but then only continues to make sense as a coherent argument if it is taken to be a part of a different type of dialogue.
I assess the thesis that counterfactual asymmetries are explained by an asymmetry of the global entropy at the temporal boundaries of the universe, by developing a method of evaluating counterfactuals that includes, as a background assumption, the low entropy of the early universe. The resulting theory attempts to vindicate the common practice of holding the past mostly fixed under counterfactual supposition while at the same time allowing the counterfactual's antecedent to obtain by a natural physical development. Although the theory has some success in evaluating a wide variety of ordinary counterfactuals, it fails as an explanation of counterfactual asymmetry.
This paper explains the importance of classifying argumentation schemes, and outlines how schemes are being used in current research in artificial intelligence and computational linguistics on argument mining. It provides a survey of the literature on scheme classification. The schemes so far generally taken to be the most widely useful defeasible argumentation schemes are surveyed and explained systematically, including some that are difficult to classify. A new classification system covering these centrally important schemes is built.
In his paper “There It Is” and his précis “There It Was,” Benj Hellie develops a sophisticated semantics for perceptual justification according to which perceptions in good cases can be explained by intentional psychology and can justify beliefs, whereas bad cases of perception are defective and so cannot justify beliefs. Importantly, Hellie also affords consciousness a central role in rationality insofar as only those good cases of perception within consciousness can play a justificatory function. In this commentary, I reserve judgment regarding Hellie’s treatment of the rational difference between good and bad cases, but I argue there can be what he views as good cases of perceptual justification outside of consciousness.
Maximalism is the view that an agent is permitted to perform a certain type of action if and only if she is permitted to perform some instance of this type, where φ-ing is an instance of ψ-ing if and only if φ-ing entails ψ-ing but not vice versa. Now, the aim of this paper is not to defend maximalism, but to defend a certain account of our options that, when combined with maximalism, results in a theory that accommodates the idea that a moral theory ought to be morally harmonious—that is, ought to be such that the agents who satisfy the theory, whoever and however numerous they may be, are guaranteed to produce the morally best world that they have the option of producing. I argue that, for something to count as an option for an agent, it must, in the relevant sense, be under her control. And I argue that the relevant sort of control is the sort that we exercise over our reasons-responsive attitudes by being both receptive and reactive to reasons. I call this sort of control rational control, and I call the view that φ-ing is an option for a subject if and only if she has rational control over whether she φs rationalism. When we combine this view with maximalism, we get rationalist maximalism, which I argue is a promising moral theory.
Working memory, an important posit in cognitive science, allows one to temporarily store and manipulate information in the service of ongoing tasks. Working memory has been traditionally classified as an explicit memory system – that is, as operating on and maintaining only consciously perceived information. Recently, however, several studies have questioned this assumption, purporting to provide evidence for unconscious working memory. In this paper, we focus on visual working memory and critically examine these studies as well as studies of unconscious perception that seem to provide indirect evidence for unconscious working memory. Our analysis indicates that current evidence does not support an unconscious working memory store, though we offer independent reasons to think that working memory may operate on unconsciously perceived information.
In this issue, Elizabeth Shaw and Gulzaar Barn offer a number of replies to my arguments in ‘Criminal Rehabilitation Through Medical Intervention: Moral Liability and the Right to Bodily Integrity’ (Journal of Ethics). In this article, I respond to some of their criticisms.
In this paper, I make a presumptive case for moral rationalism: the view that agents can be morally required to do only what they have decisive reason to do, all things considered. And I argue that this view leads us to reject all traditional versions of act-consequentialism. I begin by explaining how moral rationalism leads us to reject utilitarianism.
This article analyses the fallacy of wrenching from context, using the dialectical notions of commitment and implicature as tools. The data, a set of key examples, is used to sharpen the conceptual borderlines around the related fallacies of straw man, accent, misquotation, and neglect of qualifications. According to the analysis, the main characteristics of wrenching from context are the manipulation of the meaning of the other’s statement through devices such as the use of misquotations, selective quotations, and quoting out of context. The theoretical tools employed in the analysis are pragmatic theories of meaning and a dialectical model of commitment, used to explain how and why a standpoint is distorted. The analysis is based on a conception of fallacies as deceptive strategic moves in a game of dialogue. As a consequence, our focus is not only on misquotations as distortions of meaning, but on how they are used as dialectical tools to attack an opponent or win a dispute. Wrenching from context is described as a fallacy of unfairly attributing a commitment to another party that he never held. Its power as a deceptive argumentation tactic is based on complex mechanisms of implicit commitments and on their misemployment to improperly suggest an attribution of commitment.
On what I take to be the standard account of supererogation, an act is supererogatory if and only if it is morally optional and there is more moral reason to perform it than to perform some permissible alternative. And, on this account, an agent has more moral reason to perform one act than to perform another if and only if she morally ought to prefer how things would be if she were to perform the one to how things would be if she were to perform the other. I argue that this account has two serious problems. The first, which I call the latitude problem, is that it has counterintuitive implications in cases where the duty to be exceeded is one that allows for significant latitude in how to comply with it. The second, which I call the transitivity problem, is that it runs afoul of the plausible idea that the one-reason-morally-justifies-acting-against-another relation is transitive. What’s more, I argue that both problems can be overcome by an alternative account, which I call the maximalist account.
The aim of the paper is to present a typology of argument schemes. In the first place, we found it helpful to define what an argument scheme is. Since many argument schemes found in contemporary theories stem from the ancient tradition, we took into consideration classical and medieval dialectical studies and their relation with argumentation theory. This overview of the main works on topics and schemes provides a summary of the main principles of classification. In the second section, Walton’s theory is briefly explained to introduce the classification of schemes and its different levels. Finally, the last part shows the main applications of the schemes in computing and AI.
Australia, Canada, and New Zealand currently apply health requirements to prospective immigrants, denying residency to those with health conditions that are likely to impose an “excessive demand” on their publicly funded health and social service programs. In this paper, I investigate the charge that such policies are wrongfully discriminatory against persons with disabilities. I first provide a freedom-based account of the wrongness of discrimination according to which discrimination is wrong when and because it involves disadvantaging people in the exercise of their freedom on the basis of morally arbitrary features of their identity. Discrimination is permissible, I suggest, when it is necessary to advance a valuable exercise of the discriminating agent’s freedom. I then apply this account to the case of social cost health requirements. Against critics of these requirements, I argue that it is sometimes permissible for states to discriminate against prospective immigrants with disabilities. States may do so, I suggest, when such discriminatory treatment is necessary to prevent an increase in rates of mortality and/or morbidity amongst citizens. Alongside critics of social cost health requirements, however, I argue that existing policies are likely a form of wrongful discrimination insofar as they are too broad to satisfy this standard.
Synthetic biologists aim to generate biological organisms according to rational design principles. Their work may have many beneficial applications, but it also raises potentially serious ethical concerns. In this article, we consider what attention the discipline demands from bioethicists. We argue that the most important issue for ethicists to examine is the risk that knowledge from synthetic biology will be misused, for example, in biological terrorism or warfare. To adequately address this concern, bioethics will need to broaden its scope, contemplating not just the means by which scientific knowledge is produced, but also what kinds of knowledge should be sought and disseminated.
In most academic and non-academic circles throughout history, the world and its operation have been viewed in terms of cause and effect. The principles of causation have been applied, fruitfully, across the sciences, law, medicine, and in everyday life, despite the lack of any agreed-upon framework for understanding what causation ultimately amounts to. In this engaging and accessible introduction to the topic, Douglas Kutach explains and analyses the most prominent theories and examples in the philosophy of causation. The book is organized so as to respect the various cross-cutting and interdisciplinary concerns about causation, such as the reducibility of causation, its application to scientific modeling, its connection to influence and laws of nature, and its role in causal explanation. Kutach begins by presenting the four recurring distinctions in the literature on causation, proceeding through an exploration of various accounts of causation including determination, difference making and probability-raising. He concludes by carefully considering their application to the mind-body problem. _Causation_ provides a straightforward and compact survey of contemporary approaches to causation and serves as a friendly and clear guide for anyone interested in exploring the complex jungle of ideas that surround this fundamental philosophical topic.
After raising some minor philosophical points about Kevin Elliott’s A Tapestry of Values (2017), I argue that we should expand on the themes raised in the book and that philosophers of science need to pay as much attention to the loom of science (i.e., the institutional structures which guide the pursuit of science) as to the tapestry of science. The loom of science includes such institutional aspects as patents, funding sources, and evaluation regimes that shape how science gets pursued; attending to these aspects will enable us to provide more robust guidance on the values that infuse the tapestry of science.
There has been much debate regarding the 'double effect' of sedatives and analgesics administered at the end of life, and the possibility that health professionals using these drugs are performing 'slow euthanasia.' On the one hand, analgesics and sedatives can do much to relieve suffering in the terminally ill. On the other hand, they can hasten death. According to a standard view, the administration of analgesics and sedatives amounts to euthanasia when the drugs are given with an intention to hasten death. In this paper we report a small qualitative study based on interviews with 8 Australian general physicians regarding their understanding of intention in the context of questions about voluntary euthanasia, assisted suicide and particularly the use of analgesic and sedative infusions (including the possibility of voluntary or non-voluntary 'slow euthanasia'). We found a striking ambiguity and uncertainty regarding intentions amongst the doctors interviewed. Some were explicit in describing a 'grey' area between palliation and euthanasia, or a continuum between the two. Not one of the respondents was consistent in distinguishing between a foreseen death and an intended death. A major theme was that 'slow euthanasia' may be more psychologically acceptable to doctors than active voluntary euthanasia by bolus injection, partly because the former would usually only result in a small loss of 'time' for patients already very close to death, but also because of the desirable ambiguities surrounding causation and intention when an infusion of analgesics and sedatives is used. The empirical and philosophical implications of these findings are discussed.
There is considerable disagreement about the epistemic value of novel predictive success, i.e. when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the reasons for valuing prediction available. A third path is presented, Pluralist Instrumental Predictivism; PIP for short.
Biomedical technologies can increasingly be used not only to combat disease, but also to augment the capacities or traits of normal, healthy people – a practice commonly referred to as biomedical enhancement. Perhaps the best‐established examples of biomedical enhancement are cosmetic surgery and doping in sports. But most recent scientific attention and ethical debate focuses on extending lifespan, lifting mood, and augmenting cognitive capacities.
A scientific community can be modeled as a collection of epistemic agents attempting to answer questions, in part by communicating about their hypotheses and results. We can treat the pathways of scientific communication as a network. When we do, it becomes clear that the interaction between the structure of the network and the nature of the question under investigation affects epistemic desiderata, including accuracy and speed to community consensus. Here we build on previous work, both our own and others’, in order to get a firmer grasp on precisely which features of scientific communities interact with which features of scientific questions in order to influence epistemic outcomes.
Blame is multifarious. It can be passionate or dispassionate. It can be expressed or kept private. We blame both the living and the dead. And we blame ourselves as well as others. What’s more, we blame ourselves, not only for our moral failings, but also for our non-moral failings: for our aesthetic bad taste, gustatory self-indulgence, or poor athletic performance. And we blame ourselves both for things over which we exerted agential control (e.g., our voluntary acts) and for things over which we lacked such control (e.g., our desires, beliefs, and intentions). I argue that, despite this manifest diversity in our blaming practices, it’s possible to provide a comprehensive account of blame. Indeed, I propose a set of necessary and sufficient conditions that aims to specify blame’s extension in terms of its constitution as opposed to its function. And I argue that this proposal has a number of advantages beyond accounting for blame in all its disparate forms. For one, it can account for the fact that one’s having had control over whether one was to φ is a necessary condition for one’s being fittingly blamed for having φ-ed. For another, it can account for why, unlike fitting shame, fitting blame is always deserved, which in turn explains why there is something morally problematic about ridding oneself of one’s fitting self-blame (e.g., one’s fitting guilt).
I defend what may loosely be called an eliminativist account of causation by showing how several of the main features of causation, namely asymmetry, transitivity, and necessitation, arise from the combination of fundamental dynamical laws and a special constraint on the macroscopic structure of matter in the past. At the microscopic level, the causal features of necessitation and transitivity are grounded, but not the asymmetry. At the coarse-grained level of the macroscopic physics, the causal asymmetry is grounded, but not the necessitation or transitivity. Thus, at no single level of description does the physics justify the conditions that are taken to be constitutive of causation. Nevertheless, if we mix our reasoning about the microscopic and macroscopic descriptions, the structure provided by the dynamics and special initial conditions can justify the folk concept of causation to a significant extent. I explain why our causal concept works so well even though at bottom it is comprised of a patchwork of principles that don't mesh well.
In some jurisdictions, the institutions of criminal justice may subject individuals who have committed crimes to preventive detention. By this, I mean detention of criminal offenders (i) who have already been punished to (or beyond) the point that no further punishment can be justified on general deterrent, retributive, restitutory, communicative or other backward-looking grounds, (ii) for preventive purposes—that is, for the purposes of preventing the detained individual from engaging in further criminal or otherwise socially costly conduct. Preventive detention, thus understood, shares many features with the quarantine measures sometimes employed in the context of infectious disease control. Both interventions involve imposing (usually severe) constraints on freedom of movement and association. Both interventions are standardly undeserved: in quarantine, the detained individual deserves no detention (or so I will, for the moment, assume), and in preventive detention, the individual has already endured any detention that can be justified by reference to desert. Both interventions are, in contrast to civil commitment under mental health legislation, normally imposed on more-or-less fully autonomous individuals. And both interventions are intended to reduce the risk that the constrained individual poses to the public. Yet despite these similarities, preventive detention and quarantine have received rather different moral report cards.
Violence risk assessment tools are increasingly used within criminal justice and forensic psychiatry; however, there is little relevant, reliable and unbiased data regarding their predictive accuracy. We argue that such data are needed to (i) prevent excessive reliance on risk assessment scores, (ii) allow matching of different risk assessment tools to different contexts of application, (iii) protect against problematic forms of discrimination and stigmatisation, and (iv) ensure that contentious demographic variables are not prematurely removed from risk assessment tools.
Nicholas Agar argues that enhancement technologies could be used to create post-persons—beings of higher moral status than ordinary persons—and that it would be wrong to create such beings. I am sympathetic to the first claim. However, I wish to take issue with the second. Agar's second claim is grounded on the prediction that the creation of post-persons would, with at least moderate probability, harm those who remain mere persons. The harm that Agar has in mind here is a kind of meta-harm: the harm of being made more susceptible to being permissibly harmed—more liable to harm. Agar suggests that, if post-persons existed, mere persons could frequently be permissibly sacrificed in order to provide benefits to the post-persons. For instance, perhaps they could be permissibly used in lethal medical experiments designed to develop medical treatments for post-persons. By contrast, he suggests that mere persons typically cannot be permissibly sacrificed to provide benefits to other mere persons. He thus claims that mere persons would be more liable to sacrifice if post-persons existed than they are in the absence of post-persons. The creation of post-persons would make them worse off in at least this one respect. Agar then argues that, since this meta-harm imposed on mere persons would not be compensated, it would be wrong to create post-persons. It is here that I believe his argument begins to go awry. According to the concept of compensation that Agar deploys, a harm imposed on X is compensated just in …
Many high-income countries have skill-selective immigration policies, favoring prospective immigrants who are highly skilled. I investigate whether it is permissible for high-income countries to adopt such policies. Adopting what Joseph Carens calls a "realistic approach" to the ethics of immigration, I argue first that it is in principle permissible for high-income countries to take skill as a consideration in favor of selecting one prospective immigrant rather than another. I argue second that high-income countries must ensure that their skill-selective immigration policies do not contribute to the non-fulfillment of their duty to aid residents of low- and middle-income countries.
The problem of standard of care in clinical research concerns the level of treatment that investigators must provide to subjects in clinical trials. Commentators often formulate answers to this problem by appealing to two distinct types of obligations: professional obligations and natural duties. In this article, I investigate whether investigators also possess institutional obligations that are directly relevant to the problem of standard of care, that is, those obligations a person has because she occupies a particular institutional role. I examine two types of institutional contexts: (1) public research agencies – agencies or departments of states that fund or conduct clinical research in the public interest; and (2) private-for-profit corporations. I argue that investigators who are employed or have their research sponsored by the former have a distinctive institutional obligation to conduct their research in a way that is consistent with the state's duty of distributive justice to provide its citizens with access to basic health care, and its duty to aid citizens of lower income countries. By contrast, I argue that investigators who are employed or have their research sponsored by private-for-profit corporations do not possess this obligation nor any other institutional obligation that is directly relevant to the ethics of RCTs. My account of the institutional obligations of investigators aims to contribute to the development of a reasonable, distributive justice-based account of standard of care.
Sport is one of the first areas in which enhancement has become commonplace. It is also one of the first areas in which the use of enhancement technologies has been heavily regulated. Some have thus seen sport as a testing ground for arguments about whether to permit enhancement. However, I argue that there are fairness-based objections to enhancement in sport that do not apply as strongly in some other areas of human activity. Thus, I claim that there will often be a stronger case for permitting enhancement outside of sport than for permitting enhancement in sport. I end by considering some methodological implications of this conclusion.
In this paper, I take it for granted both that there are two types of blameworthiness—accountability blameworthiness and attributability blameworthiness—and that avoidability is necessary only for the former. My task, then, is to explain why avoidability is necessary for accountability blameworthiness but not for attributability blameworthiness. I argue that what explains this is both the fact that these two types of blameworthiness make different sorts of reactive attitudes fitting and the fact that only one of these two types of attitudes requires having been able to refrain from φ-ing in order to be fitting.
One of the most promising theories of consciousness currently available is higher-order thought (“HOT”) theory, according to which consciousness consists in having suitable HOTs regarding one’s mental life. But critiques of HOT theory abound. We explore here three recent objections to the theory, which we argue at bottom founder for the same reason. While many theorists today assume that consciousness is a feature of the actually existing mental states in virtue of which one has experiences, this assumption is in tension with the underlying motivations for HOT theory and arguably false. We urge that these objections, though sophisticated, trade on this questionable conception of consciousness, thereby begging the question against HOT theory. We then explain how HOT theory might instead understand consciousness.