As art produced by autonomous machines becomes increasingly common, and as such machines grow increasingly sophisticated, we risk a confusion between art produced by a person but mediated by a machine, and art produced by what might be legitimately considered a machine artist. This distinction will be examined here. In particular, my argument seeks to close a gap between, on one hand, a philosophically grounded theory of art and, on the other hand, theories concerned with behavior, intentionality, expression, and creativity in natural and artificial agents. This latter set of theories in some cases addresses creative behavior in relation to visual art, music, and literature, in the frequently overlapping contexts of philosophy of mind, artificial intelligence, and cognitive science. However, research in these areas does not typically address problems in the philosophy of art as a central line of inquiry. Similarly, the philosophy of art does not typically address issues pertaining to artificial agents.
Proceedings of the papers presented at the Symposium on "Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World" at the 2012 AISB and IACAP Symposium, held in the Turing year 2012, 2–6 July at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test --- Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents - Michael Zillich: My Robot is Smarter than Your Robot: On the Need for a Total Turing Test for Robots - Adam Linson, Chris Dobbyn and Robin Laney: Interactive Intelligence: Behaviour-based AI, Musical HCI and the Turing Test - Javier Insa, Jose Hernandez-Orallo, Sergio España, David Dowe and M. Victoria Hernandez-Lloreda: The anYnt Project Intelligence Test (Demo) - Jose Hernandez-Orallo, Javier Insa, David Dowe and Bill Hibbard: Turing Machines and Recursive Turing Tests - Francesco Bianchini and Domenica Bruni: What Language for Turing Test in the Age of Qualia? - Paul Schweizer: Could there be a Turing Test for Qualia? - Antonio Chella and Riccardo Manzotti: Jazz and Machine Consciousness: Towards a New Turing Test - William York and Jerry Swan: Taking Turing Seriously (But Not Literally) - Hajo Greif: Laws of Form and the Force of Function: Variations on the Turing Test.
This is a reply to de Sousa's 'Emotional Truth', in which he argues that emotions can be objective, as propositional truths are. I say that it is better to distinguish between truth and accuracy, and agree with de Sousa to the extent of arguing that emotions can be more or less accurate, that is, based on the facts as they are.
Intellectual autonomy has long been identified as an epistemic virtue, one that has been championed influentially by Kant, Hume and Emerson. Manifesting intellectual autonomy, at least in a virtuous way, does not require that we form our beliefs in cognitive isolation. Rather, as Roberts and Wood note, intellectually virtuous autonomy involves reliance and outsourcing to an appropriate extent, while at the same time maintaining intellectual self-direction. In this essay, I want to investigate the ramifications for intellectual autonomy of a particular kind of epistemic dependence: cognitive enhancement. Cognitive enhancements involve the use of technology and medicine to improve cognitive capacities in healthy individuals, through mechanisms ranging from smart drugs to brain-computer interfaces. With reference to case studies in bioethics, as well as the philosophy of mind and cognitive science, it is shown that epistemic dependence, in this extreme form, poses a prima facie threat to the retention of intellectual autonomy, specifically, by threatening to undermine our intellectual self-direction. My aim will be to show why certain kinds of cognitive enhancements are subject to this objection from self-direction, while others are not. Once this is established, we'll see that even some extreme kinds of cognitive enhancement might be not merely compatible with, but constitutive of, virtuous intellectual autonomy.
Develops an empirical argument against naive realism-disjunctivism: if naive realists accept "internal dependence", then they cannot explain the evolution of perceptual success. Also presents a puzzle about our knowledge of universals.
The dead donor rule (DDR) prohibits retrieval protocols that would be lethal to the donor. Some argue that compliance with it can be maintained by satisfying the requirements of Double-Effect Reasoning (DER). If successful, one could support organ donation without reference to the definition of death while being faithful to an ethic that prohibits intentionally killing innocent human life. On the contrary, I argue that DER cannot make lethal organ donation compatible with the DDR, because there are plausible ways it fails DER's requirements. A key takeaway is that the theories of intention and proportionality assumed in DER matter for its plausibility as a constraint on practical reasoning.
When Adam Smith published his celebrated writings on economics and moral philosophy he famously referred to the operation of an invisible hand. Adam Smith's Political Philosophy makes visible the invisible hand by examining its significance in Smith's political philosophy and relating it to similar concepts used by other philosophers, revealing a distinctive approach to social theory that stresses the significance of the unintended consequences of human action. This book introduces greater conceptual clarity to the discussion of the invisible hand and the related concept of unintended order in the work of Smith and in political theory more generally. By examining the application of spontaneous order ideas in the work of Smith, Hume, Hayek and Popper, Adam Smith's Political Philosophy traces similarities in approach and from these builds a conceptual, composite model of an invisible hand argument. While setting out a clear model of the idea of spontaneous order the book also builds the case for using the idea of spontaneous order as an explanatory social theory, with chapters on its application in the fields of science, moral philosophy, law and government.
This paper elaborates on an argument in my book *Perception*. It has two parts. In the first part, I argue against what I call "basic" naïve realism, on the grounds that it fails to accommodate what I call "internal dependence" and it requires an empirically implausible theory of sensible properties. Then I turn to Craig French and Ian Phillips' modified naïve realism as set out in their recent paper "Austerity and Illusion". It accommodates internal dependence. But it may retain the empirically implausible theory of sensible properties. And it faces other empirical problems. Representationalism about experiences avoids those problems and is to be preferred.
In this paper I argue that both defence and criticism of the claim that humans act 'under the guise of the good' neglect the metaphysical roots of the theory. I begin with an overview of the theory and its modern commentators, with critics noting the apparent possibility of acting against the good, and supporters claiming that such actions are instances of error. These debates reduce the 'guise of the good' to a claim about intention and moral action, and in so doing have become divorced from the theory's roots in classical and medieval philosophy. Aristotle and Aquinas' 'guise of the good' is primarily a metaphysical claim resting on the equivalence between actuality and goodness, from which conclusions about moral action are derived. I show the reasoning behind their theory and how it forms the basis for the claims about intention and action at the centre of the modern debate. Finally, I argue that the absence of its original foundation is apparent in recent attacks on the 'guise of the good'. It is unsurprising that modern action theory and ethics have not always been able to comfortably accommodate the 'guise of the good'; they are only telling half of the story.
In this paper I will present a puzzle about visual appearance. There are certain necessary constraints on how things can visually appear. The puzzle is about how to explain them. I have no satisfying solution. My main thesis is simply that the puzzle is a puzzle. I will develop the puzzle as it arises for representationalism about experience because it is currently the most popular theory of experience and I think it is along the right lines. However, everyone faces a form of the puzzle, including the naïve realist. In §1 I explain representationalism about experience. In §§2-3 I develop the puzzle and criticize a response due to Ned Block and Jeff Speaks and another response based on a novel form of representationalism ("sensa representationalism"). In §4 I argue that defenders of "perceptual confidence" (Morrison, Munton, my earlier self) face an instance of the puzzle. In §5 I suggest that everyone faces a form of the puzzle.
Joshua Mezrich is a practicing transplant surgeon who draws on his experiences, and those of his patients, to provide a "here's where we're at" moment in the story of transplant medicine. In so doing, he explains what it is like to practice while telling the stories of his patients, donors, and the pioneering surgeons who persisted in the face of failure to make what Mezrich does a work of healing. Written for a popular audience, When Death Becomes Life is perhaps the most accessible work yet on the modern history of organ transplantation and what the current "standard of care" actually looks like. Indeed, it rounds out a "trinity" of quality books about the transplant experience, this one from the surgeon's...
The relationship between Adam Smith's official methodology and his own actual theoretical practice as a social scientist may be grasped only against the background of the Humean project of a Moral Newtonianism. The main features in Smith's methodology are: (i) the provisional character of explanatory principles; (ii) 'internal' criteria of truth; (iii) the acknowledgement of an imaginative aspect in principles, with the related problem of the relationship between internal truth and external truth, in terms of mirroring of 'real' causes. Smith's Newtonian (as opposed to Cartesian) methodology makes room for progress in social theorizing in so far as it allows for a decentralization of the various fields of the Moral Science, contributing to the shaping of political economy. On the other hand, the Cartesian legacy in Smith's Newtonian methodology makes the relationship between phenomena and theoretical principles highly problematic.
The question of why humanity first chose to sin is an extension of the problem of evil to which the free-will defence does not easily apply. In De Libero Arbitrio and elsewhere Augustine argues that as an instance of evil, the fall is necessarily inexplicable. In this article, I identify the problems with this response and attempt to construct an alternative based on Peter van Inwagen's free will 'mysterianism'. I will argue that the origin of evil is inexplicable not because it is an instance of evil, but because it is an instance of free will.
This paper considers the prospect of moral transhumanism from the perspective of theological virtue ethics. I argue that the pursuit of goodness inherent to moral transhumanism means that there is a compelling prima facie case for moral enhancement. However, I also show that the proposed enhancements would not by themselves allow us to achieve a life of virtue, as they appear unable to create or enhance prudence, the situational judgement essential for acting in accordance with virtue. I therefore argue that moral enhancement technologies should take a limited or supporting role in moral development, which I call 'moral supplementation'.
People's beliefs about normality play an important role in many aspects of cognition and life (e.g., causal cognition, linguistic semantics, cooperative behavior). But how do people determine what sorts of things are normal in the first place? Past research has studied both people's representations of statistical norms (e.g., the average) and their representations of prescriptive norms (e.g., the ideal). Four studies suggest that people's notion of normality incorporates both of these types of norms. In particular, people's representations of what is normal were found to be influenced both by what they believed to be descriptively average and by what they believed to be prescriptively ideal. This is shown across three domains: people's use of the word "normal" (Study 1), their use of gradable adjectives (Study 2), and their judgments of concept prototypicality (Study 3). A final study investigated the learning of normality for a novel category, showing that people actively combine statistical and prescriptive information they have learned into an undifferentiated notion of what is normal (Study 4). Taken together, these findings may help to explain how moral norms impact the acquisition of normality and, conversely, how normality impacts the acquisition of moral norms.
Support for the biological concept of race declined slowly but steadily during the second half of the twentieth century. However, debate about the validity of the race concept has recently been reignited. Genetic-clustering studies have shown that despite the small proportion of genetic variation separating continental populations, it is possible to assign some individuals to their continents of origin, based on genetic data alone. Race naturalists have interpreted these studies as empirically confirming the existence of human subspecies, and by extension biological races. However, the new racial naturalism is not convincing. The continental clusters appealed to by race naturalists are arbitrary and superficial groupings, which should not be elevated to subspecies status. Moreover, the criteria applied to humans are not consistent with those used to define subspecies in nonhuman animals, and no rationale has been given for this differential treatment.
Models as Make-Believe offers a new approach to scientific modelling by looking to an unlikely source of inspiration: the dolls and toy trucks of children's games of make-believe.
In "Radical Interpretation" (1974), David Lewis asked: by what constraints, and to what extent, do the non-intentional, physical facts about Karl determine the intentional facts about him? There are two popular approaches: the reductive externalist program and the phenomenal intentionality program. I argue against both approaches. Then I sketch an alternative multistage account incorporating ideas from both camps. If we start with Karl's conscious experiences, we can appeal to Lewisian ideas to explain his other intentional states. This account develops the multistage Lewisian approach presented at the end of my earlier "Does Phenomenology Ground Mental Content?" (2013).
The distinction between true belief and knowledge is one of the most fundamental in philosophy, and a remarkable effort has been dedicated to formulating the conditions on which true belief constitutes knowledge. For decades, much of this epistemological undertaking has been dominated by a single strategy, referred to here as the modal approach. Shared by many of the most widely influential constraints on knowledge, including the sensitivity, safety, and anti-luck/risk conditions, this approach rests on a key underlying assumption — the modal profiles available to known and unknown beliefs are in some way asymmetrical. The first aim of this paper is to deconstruct this assumption, identifying its plausibility with the way in which epistemologists frequently conceptualize human perceptual systems as excluding certain varieties of close error under conditions conducive to knowledge acquisition. The second aim of this paper is to then argue that a neural phase phenomenon indicates that this conceptualization is quite likely mistaken. This argument builds on the previous introduction of this neural phase to the context of epistemology, expanding the use of neural phase cases beyond relatively narrow questions about epistemic luck to a much more expansive critique of the modal approach as a whole.
A critical exposition of plans to colonize other planets, especially Mars, and their costs. The final chapter links with issues about the value and future of human life. See the extended summary uploaded to this site.
A new way to transpose the virtue epistemologist's 'knowledge = apt belief' template to the collective level, as a thesis about group knowledge, is developed. In particular, it is shown how specifically judgmental belief can be realised at the collective level in a way that is structurally analogous, on a telic theory of epistemic normativity (e.g., Sosa 2020), to how it is realised at the individual level—viz., through a (collective) intentional attempt to get it right aptly (whether p) by alethically affirming that p. An advantage of the proposal developed is that it is shown to be compatible with competing views—viz., joint acceptance accounts and social-distributive accounts—of how group members must interact in order to materially realise a group belief. I conclude by showing how the proposed judgment-focused collective (telic) virtue epistemology has important advantages over a rival version of collective virtue epistemology defended in recent work by Jesper Kallestrup (2016).
From folk tales to movies, stories possess features which naturally suit them to contribute to the growth of virtue. In this article I show that fictional exemplars help the learner to grasp the moral importance of internal states and resolve a tension between existing kinds of exemplars discussed by virtue ethicists. Stories also increase the information conveyed by virtue terms and aid the growth of prudence. Stories can provide virtuous exemplars, inform learners as to the nature of the virtues and offer practice in developing situational judgement. As such they are a significant resource for virtue ethics and moral education.
I develop several new arguments against claims about "cognitive phenomenology" and its alleged role in grounding thought content. My arguments concern "absent cognitive qualia cases", "altered cognitive qualia cases", and "disembodied cognitive qualia cases". However, at the end, I sketch a positive theory of the role of phenomenology in grounding content, drawing on David Lewis's work on intentionality. I suggest that within Lewis's theory the subject's total evidence plays the central role in fixing mental content and ruling out deviant interpretations. However, I point out a huge unnoticed problem, the problem of evidence: Lewis really has no theory of sensory content and hence no theory of what fixes evidence. I suggest a way of plugging this hole in Lewis's theory. On the resulting theory, which I call "phenomenal functionalism", there is a sense in which sensory phenomenology is the source of all determinate intentionality. Phenomenal functionalism has similarities to the theories of Chalmers and Schwitzgebel.
It is widely held in philosophy that knowing is not a state of mind. On this view, rather than knowledge itself constituting a mental state, when we know, we occupy a belief state that exhibits some additional non-mental characteristics. Fascinatingly, however, new empirical findings from cognitive neuroscience and experimental philosophy now offer direct, converging evidence that the brain can—and often does—treat knowledge as if it is a mental state in its own right. While some might be tempted to keep the metaphysics of epistemic states separate from the neurocognitive mechanics of our judgements about them, here I will argue that these empirical findings give us sufficient reason to conclude that knowledge is at least sometimes a mental state. The basis of this argument is the epistemological principle of neurocognitive parity—roughly, if the contents of a given judgement reflect the structure of knowledge, so do the neurocognitive mechanics that produced them. This principle, which I defend here, straightforwardly supports the inference from the empirical observation that the brain sometimes treats knowledge like a mental state to the epistemological conclusion that knowledge is at least sometimes a mental state. All told, the composite, belief-centric metaphysics of knowledge widely assumed in epistemology is almost certainly mistaken.
It would be good to have a Bayesian decision theory that assesses our decisions and thinking according to everyday standards of rationality — standards that do not require logical omniscience (Garber 1983, Hacking 1967). To that end we develop a "fragmented" decision theory in which a single state of mind is represented by a family of credence functions, each associated with a distinct choice condition (Lewis 1982, Stalnaker 1984). The theory imposes a local coherence assumption guaranteeing that as an agent's attention shifts, successive batches of "obvious" logical information become available to her. A rule of expected utility maximization can then be applied to the decision of what to attend to next during a train of thought. On the resulting theory, rationality requires ordinary agents to be logically competent and to often engage in trains of thought that increase the unification of their states of mind. But rationality does not require ordinary agents to be logically omniscient.
Adam Smith is respected as the father of contemporary economics for his work on systemizing classical economics as an independent field of study in The Wealth of Nations. But he was also a significant moral philosopher of the Scottish Enlightenment, with its characteristic concern for integrating sentiments and rationality. This article considers Adam Smith as a key moral philosopher of commercial society whose critical reflection upon the particular ethical challenges posed by the new pressures and possibilities of commercial society remains relevant today. The discussion has three parts. First I address the artificial separation between self-interest and morality often attributed to Smith, in which his work on economics is stripped of its ethical context. Second I outline Smith's ethical approach to economics, focusing on his vigorous but qualified defence of commercial society for its contributions to prosperity, justice, and freedom. Third I outline Smith's moral philosophy proper as combining a naturalistic account of moral psychology with a virtue ethics based on propriety in commercial society.
I explore the idea that the state should love its citizens. It should not be indifferent towards them. Nor should it merely respect them. It should love them. We begin by looking at the bases of this idea. First, it can be grounded by a concern with state subordination. The state has enormous power over its citizens. This threatens them with subordination. Love ameliorates this threat. Second, it can be grounded by the state's lack of moral status. We all have reason to love everyone. But we, beings with moral status, have an excuse for not loving everyone: we have our own lives to lead. The state has no such excuse. So, the state should love everyone. We then explore the nature of the loving state. I argue that the loving state is a liberal state. It won't interfere in its citizens' personal spheres. It is a democratic state. It will adopt its citizens' ends as its own. It is a welfare state. It will be devoted to its citizens' well-being. And it is an egalitarian state. It will treat all its citizens equally. This constitutes a powerful third argument, an abductive argument, for the ideal of the loving state.
Adam Smith's account of sympathy or 'fellow feeling' has recently become exceedingly popular. It has been used as an antecedent of the concept of simulation: understanding, or attributing mental states to, other people by means of simulating them. It has also been singled out as the first correct account of empathy. Finally, to make things even more complicated, some of Smith's examples for sympathy or 'fellow feeling' have been used as the earliest expression of emotional contagion. The aim of the paper is to suggest a new interpretation of Smith's concept of sympathy and point out that on this interpretation some of the contemporary uses of this concept, as a precursor of simulation and empathy, are misleading. My main claim is that Smith's concept of sympathy, unlike simulation and empathy, does not imply any correspondence between the mental states of the sympathizer and of the person she is sympathizing with.
In this paper I defend the metaphysics of race as a valuable philosophical project against deflationism about race. The deflationists argue that metaphysical debate about the reality of race amounts to a non-substantive verbal dispute that diverts attention from ethical and practical issues to do with 'race.' In response, I show that the deflationists mischaracterize the field and fail to capture what most metaphysicians of race actually do in their work, which is almost always pluralist and very often normative and explicitly political. Even if debates about the reality of race turn out to be verbal disputes, they are substantive, and worth having.
Many contemporary democratic theorists are democratic egalitarians. They think that the distinctive value of democracy lies in equality. Yet this position faces a serious problem. All contemporary democracies are representative democracies. Such democracies are highly unequal: representatives have much more power than do ordinary citizens. So, it seems that democratic egalitarians must condemn representative democracies. In this paper, I present a solution to this problem. My solution invokes popular control. If representatives are under popular control, then their extra power is not objectionable. Unfortunately, so I argue, in the United States representatives are under only loose popular control.
A recent wave of scholarship has challenged the traditional way of understanding self-command in Adam Smith's Theory of Moral Sentiments as 'Stoic' self-command. But the two most thorough alternative interpretations maintain a strong connection between self-command and rationalism, and thus apparently stand opposed to Smith's overt allegiance to sentimentalism. In this paper I argue that we can and should interpret self-command in the context of Smith's larger sentimentalist framework, and that when we do, we can see that self-command is 'sentimentalized'. I offer an interpretation of Smithian self-command, arguing that self-command has its motivational basis in the natural desire for the pleasure of mutual sympathy; that self-command is guided by the sentimental standard of propriety; and that self-command works through the psychological mechanism of the 'supposed' impartial spectator. And I show that Smithian self-command is a home-grown, sentimentalist virtue and not an awkward rationalistic transplant.
The biological race debate is at an impasse. Issues surrounding hereditarianism aside, there is little empirical disagreement left between race naturalists and anti-realists about biological race. The disagreement is now primarily semantic. This would seem to uniquely qualify philosophers to contribute to the biological race debate. However, philosophers of race are reluctant to focus on semantics, largely because of their worries about the 'flight to reference'. In this paper, I show how philosophers can contribute to the debate without taking the flight to reference. Drawing on the theory of reference literature and the history of meaning change in science, I develop some criteria for dealing with cases where there is uncertainty about reference. I then apply these criteria to the biological race debate. All of the criteria I develop for eliminating putative kinds are met in the case of 'race' as understood by twentieth century geneticist Theodosius Dobzhansky and his contemporary proponents, suggesting that we should eliminate it from our biological ontology.
Orthodox decision theory gives no advice to agents who hold two goods to be incommensurate in value because such agents will have incomplete preferences. According to standard treatments, rationality requires complete preferences, so such agents are irrational. Experience shows, however, that incomplete preferences are ubiquitous in ordinary life. In this paper, we aim to do two things: (1) show that there is a good case for revising decision theory so as to allow it to apply non-vacuously to agents with incomplete preferences, and (2) identify one substantive criterion that any such non-standard decision theory must obey. Our criterion, Competitiveness, is a weaker version of a dominance principle. Despite its modesty, Competitiveness is incompatible with prospectism, a recently developed decision theory for agents with incomplete preferences. We spend the final part of the paper showing why Competitiveness should be retained, and prospectism rejected.
The Humean Theory of Reasons, according to which all of our reasons for action are explained by our desires, has been criticized for not being able to account for "moral reasons," namely, overriding reasons to act on moral demands regardless of one's desires. My aim in this paper is to utilize ideas from Adam Smith's moral philosophy in order to offer a novel and alternative account of moral reasons that is both desire-based and accommodating of an adequate version of the requirement that moral demands have overriding reason-giving force. In particular, I argue that the standpoint of what Smith calls "the impartial spectator" can both determine what is morally appropriate and inappropriate and provide the basis for normative reasons for action—including reasons to act on moral demands—to nearly all reason-responsive agents and, furthermore, that these reasons have the correct weight. The upshot of the proposed account is that it offers an interesting middle road out of a dilemma pertaining to the explanatory and normative dimensions of reasons for informed-desire Humean theorists.
Slurs possess interesting linguistic properties and so have recently attracted the attention of linguists and philosophers of language. For instance, the racial slur "nigger" is explosively derogatory, enough so that just hearing it mentioned can leave one feeling as if they have been made complicit in a morally atrocious act. Indeed, the very taboo nature of these words makes discussion of them typically prohibited or frowned upon. Although it is true that the utterance of slurs is illegitimate and derogatory in most contexts, sufficient evidence suggests that slurs are not always or exclusively used to derogate. In fact, slurs are frequently picked up and appropriated by the very in-group members that the slur was originally intended to target. This might be done, for instance, as a means for like speakers to strengthen in-group solidarity. So an investigation into the meaning and use of slurs can give us crucial insight into how words can be used with such derogatory impact, and how they can be turned around and appropriated as vehicles of rapport in certain contexts among in-group speakers. In this essay I will argue that slurs are best characterized as being of a mixed descriptive/expressive type. Next, I will review the most influential accounts of slurs offered thus far, explain their shortcomings, then provide a new analysis of slurs and explain in what ways it is superior to others. Finally, I suggest that a family-resemblance conception of category membership can help us achieve a clearer understanding of the various ways in which slurs, for better or worse, are actually put to use in natural language discourse.
Experimental research suggests that people draw a moral distinction between bad outcomes brought about as a means versus as a side effect (or byproduct). Such findings have informed multiple psychological and philosophical debates about moral cognition, including its computational structure, its sensitivity to the famous Doctrine of Double Effect, its reliability, and its status as a universal and innate mental module akin to universal grammar. But some studies have failed to replicate the means/byproduct effect, especially in the absence of other factors, such as personal contact. So we aimed to determine how robust the means/byproduct effect is by conducting a meta-analysis of both published and unpublished studies (k = 101; 24,058 participants). We found that while there is an overall small difference between moral judgments of means and byproducts (standardized mean difference = 0.87, 95% CI 0.67 – 1.06; standardized mean change = 0.57, 95% CI 0.44 – 0.69; log odds ratio = 1.59, 95% CI 1.15 – 2.02), the mean effect size is primarily moderated by whether the outcome is brought about by personal contact, which typically involves the use of personal force.
Unnaturalised Racial Naturalism. Adam Hochman - 2014 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 46 (1):79-87.
Quayshawn Spencer (2014) misunderstands my treatment of racial naturalism. I argued that racial naturalism must entail a strong claim, such as “races are subspecies”, if it is to be a substantive position that contrasts with anti-realism about biological race. My recognition that not all race naturalists make such a strong claim is evident throughout the article Spencer reviews (Hochman, 2013a). Spencer seems to agree with me that there are no human subspecies, and he endorses a weaker form of racial naturalism. However, he supports his preferred version of ‘racial naturalism’ with arguments that are not well described as ‘naturalistic’. I argue that Spencer offers us an unnaturalised racial naturalism.
This paper defends the concept of racialization against its critics. As the concept has become increasingly popular, questions about its meaning and value have been raised, and a backlash against its use has occurred. I argue that when “racialization” is properly understood, criticisms of the concept are unsuccessful. I defend a definition of racialization and identify its companion concept, “racialized group.” Racialization is often used as a synonym for “racial formation.” I argue that this is a mistake. Racial formation theory is committed to racial ontology, but racialization is best understood as the process through which racialized – rather than racial – groups are formed. “Racialization” plays a unique role in the conceptual landscape, and it is a key concept for race eliminativists and anti-realists about race.
Phenomenal intentionality is irreducible. Empirical investigation shows it is internally dependent, so our usual externalist (causal, etc.) theories do not apply here. Internalist views of phenomenal intentionality (e.g., interpretationism) also fail. The resulting primitivist view avoids Papineau's worry that terms for consciousness are highly indeterminate: since conscious properties are extremely natural (despite having unnatural supervenience bases), they are 'reference magnets'.
In this paper I propose an account of representation for scientific models based on Kendall Walton’s ‘make-believe’ theory of representation in art. I first set out the problem of scientific representation and respond to a recent argument due to Craig Callender and Jonathan Cohen, which aims to show that the problem may be easily dismissed. I then introduce my account of models as props in games of make-believe and show how it offers a solution to the problem. Finally, I demonstrate an important advantage my account has over other theories of scientific representation. All existing theories analyse scientific representation in terms of relations, such as similarity or denotation. By contrast, my account does not take representation in modelling to be essentially relational. For this reason, it can accommodate a group of models often ignored in discussions of scientific representation, namely models which are representational but which represent no actual object.
My aim in this paper is to assess the viability of a perceptual epistemology based on what Anil Gupta calls the “hypothetical given”. On this account, experience alone yields no unconditional entitlement to perceptual beliefs. Experience functions instead to establish relations of rational support between what Gupta calls “views” and perceptual beliefs. I argue that the hypothetical given is a genuine alternative to the prevailing theories of perceptual justification but that the account faces a dilemma: on a natural assumption about the epistemic significance of support relations, any perceptual epistemology based on the hypothetical given results in either rationalism or skepticism. I conclude by examining the prospects for avoiding the dilemma. One option is to combine the hypothetical given with a form of holism. Another is to combine the view with a form of hinge epistemology. But neither offers a simple fix.
A preview of my book *Perception*. Discusses the relationship between perception and the physical world and the issue of whether reality is as it appears. Useful examples are included throughout the book to illustrate the puzzles of perception, including hallucinations, illusions, the laws of appearance, blindsight, and neuroscientific explanations of our experience of pain, smell and color. The book covers both traditional philosophical arguments and more recent empirical arguments deriving from research in psychophysics and neuroscience. The addition of chapter summaries, suggestions for further reading and a glossary of terms makes *Perception* essential reading for anyone studying the topic in detail, as well as for students of philosophy of mind, philosophy of psychology and metaphysics.
Many favor representationalism about color experience. To a first approximation, this view holds that experiencing is like believing. In particular, like believing, experiencing is a matter of representing the world to be a certain way. Once you view color experience along these lines, you face a big question: do our color experiences represent the world as it really is? For instance, suppose you see a tomato. Representationalists claim that having an experience with this sensory character is necessarily connected with representing a distinctive quality as pervading a round area out there in external space. Let us call it “sensible redness” to highlight the fact that the representation of this property is necessarily connected with the sensory character of the experience. Is this property, sensible redness, really co-instantiated with roundness out there in the space before you?
In this paper I defend anti-realism about race and a new theory of racialization. I argue that there are no races, only racialized groups. Many social constructionists about race have adopted racial formation theory to explain how ‘races’ are formed. However, anti-realists about race cannot adopt racial formation theory, because it assumes the reality of race. I introduce interactive constructionism about racialized groups as a theory of racialization for anti-realists about race. Interactive constructionism moves the discussion away from the dichotomous (social vs. biological) metaphysics that has marred this debate, and posits that racialized groups are the joint products of a broad range of non-racial factors, which interact.
In this paper, I do a few things. I develop a (largely) empirical argument against naïve realism (Campbell, Martin, others) and for representationalism. I answer Papineau’s recent paper “Against Representationalism (about Experience)”. And I develop a new puzzle for representationalists.
I discuss a large number of emotions that are relevant to performance at epistemic tasks. My central concern is the possibility that it is not the emotions that are most relevant to success at these tasks but associated virtues. I present cases in which it does seem to be the emotions, rather than the virtues, that are doing the work. I end the paper by mentioning the connections between desirable and undesirable epistemic emotions.
As belief in the reality of race as a biological category among U.S. anthropologists has fallen, belief in the reality of race as a social category has risen in its place. The view that race simply does not exist—that it is a myth—is treated with suspicion. While racial classification is linked to many of the worst evils of recent history, it is now widely believed to be necessary to fight back against racism. In this article, I argue that race is indeed a biological fiction, but I critique the claim that race is socially real. I defend a form of anti‐realist reconstructionism about race, which says that there are no races, only racialized groups—groups mistakenly believed to be races. I argue that this is the most attractive position about race from a metaphysical perspective, and that it is also the position most conducive to public understanding and social justice.