I develop an epistemic focal bias account of certain patterns of judgments about knowledge ascriptions by integrating it with a general dual process framework of human cognition. According to the focal bias account, judgments about knowledge ascriptions are generally reliable but systematically fallible because the cognitive processes that generate them are affected by what is in focus. I begin by considering some puzzling patterns of judgments about knowledge ascriptions and sketch how a basic focal bias account seeks to account for them. In doing so, I argue that the basic focal bias account should be integrated into a more general framework of human cognition. Consequently, I present some central aspects of a prominent general dual process theory of human cognition and discuss how focal bias may figure at various levels of processing. On the basis of this discussion, I attempt to categorize the relevant judgments about knowledge ascriptions. Given this categorization, I argue that the basic epistemic focal bias account of certain contrast effects and salient alternatives effects can be plausibly integrated with the dual process framework. Likewise, I try to explain the absence of strong intuitions in cases of far-fetched salient alternatives. By way of conclusion, I consider some methodological issues concerning the relationship between cognitive psychology, experimental data and epistemological theorizing.
Here I review Robert Trivers' 2011 book _The Folly of Fools_, in which he advocates the evolutionary theory of deceit and self-deception that he pioneered in his famous foreword to Richard Dawkins' _The Selfish Gene_. Although the book contains a wealth of interesting discussion on topics ranging from warfare to immunology, I find it lacking on two major fronts. First, it fails to give a proper argument for its central thesis--namely, that self-deception evolved to facilitate deception of others. Second, the book lacks conceptual clarity with respect to the focal term "self-deception."
It has been argued that implicit biases are operative in philosophy and lead to significant epistemic costs in the field. Philosophers working on this issue have focussed mainly on implicit gender and race biases. They have overlooked ideological bias, which targets political orientations. Psychologists have found ideological bias in their field and have argued that it has negative epistemic effects on scientific research. I relate this debate to the field of philosophy and argue that if, as some studies suggest, the same bias also exists in philosophy, then it will lead to hitherto unrecognised epistemic hazards in the field. Furthermore, the bias is epistemically different from the more familiar biases in respects that are important for epistemology, ethics, and metaphilosophy.
Our focus here is on whether behavioural dispositions, when influenced by implicit biases, should be understood as being part of a person’s character: whether they are part of the agent that can be morally evaluated. We frame this issue in terms of control. If a state, process, or behaviour is not something that the agent can, in the relevant sense, control, then it is not something that counts as part of her character. A number of theorists have argued that individuals do not have control, in the relevant sense, over the operation of implicit bias. We will argue that this claim is mistaken. We articulate and develop a notion of control that individuals have with respect to implicit bias, and argue that this kind of control can ground character-based evaluation of such behavioural dispositions.
Are individuals morally responsible for their implicit biases? One reason to think not is that implicit biases are often advertised as unconscious, ‘introspectively inaccessible’ attitudes. However, recent empirical evidence consistently suggests that individuals are aware of their implicit biases, although often in partial and inarticulate ways. Here I explore the implications of this evidence of partial awareness for individuals’ moral responsibility. First, I argue that responsibility comes in degrees. Second, I argue that individuals’ partial awareness of their implicit biases makes them (partially) morally responsible for them. I argue by analogy to a close relative of implicit bias: moods.
This paper examines the role of prestige bias in shaping academic philosophy, with a focus on its demographics. I argue that prestige bias exacerbates the structural underrepresentation of minorities in philosophy. It works as a filter against (among others) philosophers of color, women philosophers, and philosophers of low socio-economic status. As a consequence of prestige bias our judgments of philosophical quality become distorted. I outline ways in which prestige bias in philosophy can be mitigated.
If you care about securing knowledge, what is wrong with being biased? Often it is said that we are less accurate and reliable knowers due to implicit biases. Likewise, many people think that biases reflect inaccurate claims about groups, are based on limited experience, and are insensitive to evidence. Chapter 3 investigates objections such as these with the help of two popular metaphors: bias as fog and bias as shortcut. Guiding readers through these metaphors, I argue that they clarify the range of knowledge-related objections to implicit bias. They also suggest that there will be no unifying problem with bias from the perspective of knowledge. That is, they tell us that implicit biases can be wrong in different ways for different reasons. Finally, and perhaps most importantly, the metaphors reveal a deep—though perhaps not intractable—disagreement among theorists about whether implicit biases can be good in some cases when it comes to knowledge.
Can we consciously see more items at once than can be held in visual working memory? This question has eluded resolution because the ultimate evidence is subjects’ reports, in which phenomenal consciousness is filtered through working memory. However, a new technique makes use of the fact that unattended ‘ensemble properties’ can be detected ‘for free’ without decreasing working memory capacity.
The term 'implicit bias' has very swiftly been incorporated into philosophical discourse. Our aim in this paper is to scrutinise the phenomena that fall under the rubric of implicit bias. The term is often used in a rather broad sense, to capture a range of implicit social cognitions, and this is useful for some purposes. However, we here articulate some of the important differences between phenomena identified as instances of implicit bias. We caution against ignoring these differences: it is likely they have considerable significance, not least for the sorts of normative recommendations being made concerning how to mitigate the bad effects of implicit bias.
Humans typically display hindsight bias. They are more confident that the evidence available beforehand made some outcome probable when they know the outcome occurred than when they don't. There is broad consensus that hindsight bias is irrational, but this consensus is wrong. Hindsight bias is generally rationally permissible and sometimes rationally required. The fact that a given outcome occurred provides both evidence about what the total evidence available ex ante was, and also evidence about what that evidence supports. Even if you in fact evaluate the ex ante evidence correctly, you should not be certain of this. Then, learning the outcome provides evidence that, if you erred, you are more likely to have erred low rather than high in estimating the degree to which the ex ante evidence supported the hypothesis that that outcome would occur.
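A toy Bayesian calculation can make that last step concrete (this is my illustration, not the paper's model; the three-point prior and the calibration assumption P(outcome | q) = q are stipulated for the example):

```python
# You estimated that the ex ante evidence supported the outcome to degree 0.4,
# but you might have erred low or high: spread your credence symmetrically.
prior = {0.3: 1/3, 0.4: 1/3, 0.5: 1/3}

# Assume (illustratively) well-calibrated support: P(outcome occurs | support q) = q.
# Bayes' rule: posterior(q) is proportional to prior(q) * q.
unnormalized = {q: p * q for q, p in prior.items()}
total = sum(unnormalized.values())
posterior = {q: w / total for q, w in unnormalized.items()}

print(posterior)  # {0.3: 0.25, 0.4: 0.333, 0.5: 0.417} -- "erred low" now more likely
print(sum(q * p for q, p in posterior.items()))  # 0.417 > 0.4: a rational hindsight shift
```

Because the outcome is more probable the higher the true support q, conditioning on the outcome shifts weight toward higher values of q, which is exactly the rational analogue of hindsight bias the abstract describes.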
Recent empirical research has substantiated the finding that very many of us harbour implicit biases: fast, automatic, and difficult to control processes that encode stereotypes and evaluative content, and influence how we think and behave. Since it is difficult to be aware of these processes - they have sometimes been referred to as operating 'unconsciously' - we may not know that we harbour them, nor be alert to their influence on our cognition and action. And since they are difficult to control, considerable work is required to prevent their influence. We here focus on the implications of these findings for epistemology. We first look at ways in which implicit biases thwart our knowledge seeking practices (sections 1 & 2). Then we set out putative epistemic benefits of implicit bias, before considering ways in which epistemic practices might be improved (section 3). Finally, we consider the distinctive challenges that the findings about implicit bias pose to us as philosophers, in the context of feminist philosophy in particular (section 4).
The underrepresentation of women, people of color, and especially women of color—and the corresponding overrepresentation of white men—is more pronounced in philosophy than in many of the sciences. I suggest that part of the explanation for this lies in the role played by the idealized rational self, a concept that is relatively influential in philosophy but rarely employed in the sciences. The idealized rational self models the mind as consistent, unified, rationally transcendent, and introspectively transparent. I hypothesize that acceptance of the idealized rational self leads philosophers to underestimate the influence of implicit bias on their own judgments and prevents them from enacting the reforms necessary to minimize the effects of implicit bias on institutional decision-making procedures. I consider recent experiments in social psychology that suggest that an increased sense of one’s own objectivity leads to greater reliance on bias in hiring scenarios, and I hypothesize how these results might be applied to philosophers’ evaluative judgments. I discuss ways that the idealized rational self is susceptible to broader critiques of ideal theory, and I consider some of the ways that the picture functions as a tool of active ignorance and color-evasive racism.
It has widely been assumed, by philosophers, that our first-person preferences regarding pleasurable and painful experiences exhibit a bias toward the future (positive and negative hedonic future-bias), and that our preferences regarding non-hedonic events (both positive and negative) exhibit no such bias (non-hedonic time-neutrality). Further, it has been assumed that our third-person preferences are always time-neutral. Some have attempted to use these (presumed) differential patterns of future-bias—different across kinds of events and perspectives—to argue for the irrationality of hedonic future-bias. This paper experimentally tests these descriptive hypotheses. While as predicted we found first-person hedonic future-bias, we did not find that participants were time-neutral in all other conditions. Hence, the presumed asymmetry of hedonic/non-hedonic and first/third-person preferences cannot be used to argue for the irrationality of future-bias, since no such asymmetries exist. Instead, we develop a more fine-grained approach, according to which three factors—positive/negative valence, first/third-person, and hedonic/non-hedonic—each independently influence, but do not determine, whether an event is treated in a future-biased or time-neutral way. We discuss the upshots of these results for the debate over the rationality of future-bias.
Why does social injustice exist? What role, if any, do implicit biases play in the perpetuation of social inequalities? Individualistic approaches to these questions explain social injustice as the result of individuals’ preferences, beliefs, and choices. For example, they explain racial injustice as the result of individuals acting on racial stereotypes and prejudices. In contrast, structural approaches explain social injustice in terms of beyond-the-individual features, including laws, institutions, city layouts, and social norms. Often these two approaches are seen as competitors. Framing them as competitors suggests that only one approach can win and that the loser offers worse explanations of injustice. In this essay, we explore each approach and compare them. Using implicit bias as an example, we argue that the relationship between individualistic and structural approaches is more complicated than it may first seem. Moreover, we contend that each approach has its place in analyses of injustice and raise the possibility that they can work together—synergistically—to produce deeper explanations of social injustice. If so, the approaches may be complementary, rather than competing.
Research programs in empirical psychology from the past two decades have revealed implicit biases. Although implicit processes are pervasive, unavoidable, and often useful aspects of our cognitions, they may also lead us into error. The most problematic forms of implicit cognition are those which target social groups, encoding stereotypes or reflecting prejudicial evaluative hierarchies. Despite intentions to the contrary, implicit biases can influence our behaviours and judgements, contributing to patterns of discriminatory behaviour. These patterns of discrimination are obviously wrong and unjust. But in remedying such wrongs, one question to be addressed concerns responsibility for implicit bias. Unlike some paradigmatic forms of wrongdoing, such discrimination is often unintentional, unendorsed, and perpetrated without awareness; and the harms are particularly damaging because they are cumulative and collectively perpetrated. So, what are we to make of questions of responsibility? In this article, we outline some of the main lines of recent philosophical thought, which address questions of responsibility for implicit bias. We focus on (a) the kind of responsibility at issue; (b) revisionist versus nonrevisionist conceptions of responsibility as applied to implicit bias; and (c) individual, institutional, and collective responsibility for implicit bias.
Philosophers who have written about implicit bias have claimed or implied that individuals are not responsible, and therefore not blameworthy, for their implicit biases, and that this is a function of the nature of implicit bias as implicit: below the radar of conscious reflection, out of the control of the deliberating agent, and not rationally revisable in the way many of our reflective beliefs are. I argue that close attention to the findings of empirical psychology, and to the conditions for blameworthiness, does not support these claims. I suggest that the arguments for the claim that individuals are not liable for blame are invalid, and that there is some reason to suppose that individuals are, at least sometimes, liable to blame for the extent to which they are influenced in behaviour and judgment by implicit biases. I also argue against the claim that it is counter-productive to see bias as something for which individuals are blameworthy; rather, understanding implicit bias as something for which we are liable to blame could be constructive.
Nearly everyone prefers pain to be in the past rather than the future. This seems like a rationally permissible preference. But I argue that appearances are misleading, and that future-biased preferences are in fact irrational. My argument appeals to trade-offs between hedonic experiences and other goods. I argue that we are rationally required to adopt an exchange rate between a hedonic experience and another type of good that stays fixed, regardless of whether the hedonic experience is in the past or future.
The overwhelming majority of those who theorize about implicit biases posit that these biases are caused by some sort of association. However, what exactly this claim amounts to is rarely specified. In this paper, I distinguish between different understandings of association, and I argue that the crucial senses of association for elucidating implicit bias are the cognitive structure and mental process senses. A hypothesis is subsequently derived: if associations really underpin implicit biases, then implicit biases should be modulated by counterconditioning or extinction but should not be modulated by rational argumentation or logical interventions. This hypothesis is false; implicit biases are not predicated on any associative structures or associative processes but instead arise because of unconscious propositionally structured beliefs. I conclude by discussing how the case study of implicit bias illuminates problems with popular dual-process models of cognitive architecture.
In the philosophy of science, it is a common proposal that values are illegitimate in science and should be counteracted whenever they drive inquiry to the confirmation of predetermined conclusions. Drawing on recent cognitive scientific research on human reasoning and confirmation bias, I argue that this view should be rejected. Advocates of it have overlooked that values that drive inquiry to the confirmation of predetermined conclusions can contribute to the reliability of scientific inquiry at the group level even when they negatively affect an individual’s cognition. This casts doubt on the proposal that such values should always be illegitimate in science. It also suggests that advocates of that proposal assume a narrow, individualistic account of science that threatens to undermine their own project of ensuring reliable belief formation in science.
Many economists and philosophers assume that status quo bias is necessarily irrational. I argue that, in some cases, status quo bias is fully rational. I discuss the rationality of status quo bias on both subjective and objective theories of the rationality of preferences. I argue that subjective theories cannot plausibly condemn this bias as irrational. I then discuss one kind of objective theory, which holds that a conservative bias toward existing things of value is rational. This account can fruitfully explain some compelling aspects of common sense morality, and it may justify status quo bias.
We argue that work on norms provides a way to move beyond debates between proponents of individualist and structuralist approaches to bias, oppression, and injustice. We briefly map out the geography of that debate before presenting Charlotte Witt’s view, showing how her position, and the normative ascriptivism at its heart, seamlessly connects individuals to the social reality they inhabit. We then describe recent empirical work on the psychology of norms and locate the notions of informal institutions and soft structures with respect to it. Finally, we argue that the empirical resources enrich Witt’s ascriptivism, and that the resulting picture shows theorists need not, indeed should not, choose between the individualist and structuralist camps.
Many philosophical thought experiments and arguments involve unusual cases. We present empirical reasons to doubt the reliability of intuitive judgments and conclusions about such cases. Inferences and intuitions prompted by verbal case descriptions are influenced by routine comprehension processes which invoke stereotypes. We build on psycholinguistic findings to determine conditions under which the stereotype associated with the most salient sense of a word predictably supports inappropriate inferences from descriptions of unusual (stereotype-divergent) cases. We conduct an experiment that combines plausibility ratings with pupillometry to document this “salience bias.” We find that under certain conditions, competent speakers automatically make stereotypical inferences they know to be inappropriate.
This chapter is centered around an apparent tension that research on implicit bias raises between virtue and social knowledge. Research suggests that simply knowing what the prevalent stereotypes are leads individuals to act in prejudiced ways—biasing decisions about whom to trust and whom to ignore, whom to promote and whom to imprison—even if they reflectively reject those stereotypes. Because efforts to combat discrimination obviously depend on knowledge of stereotypes, a question arises about what to do next. This chapter argues that the obstacle to virtue is not knowledge of stereotypes as such, but the “accessibility” of such knowledge to the agent who has it. “Accessibility” refers to how easily knowledge comes to mind. Social agents can acquire the requisite knowledge of stereotypes while resisting their pernicious influence, so long as that knowledge remains, in relevant contexts, relatively inaccessible.
It has recently been argued that beliefs formed on the basis of implicit biases pose a challenge for accessibilism, since implicit biases are consciously inaccessible, yet they seem to be relevant to epistemic justification. Recent empirical evidence suggests, however, that while we may typically lack conscious access to the source of implicit attitudes and their impact on our beliefs and behaviour, we do have access to their content. In this paper, I discuss the notion of accessibility required for this argument to work vis-à-vis these empirical results and offer two ways in which the accessibilist could meet the challenge posed by implicit biases. Ultimately both strategies fail, but the way in which they do, I conclude, reveals something general and important about our epistemic obligations and about the intuitions that inform the role of implicit biases in accessibilist justification.
It has recently been suggested that politically motivated cognition leads progressive individuals to form beliefs that underestimate real differences between social groups and to process information selectively to support these beliefs and an egalitarian outlook. I contend that this tendency, which I shall call ‘egalitarian confirmation bias’, is often ‘Mandevillian’ in nature. That is, while it is epistemically problematic in one’s own cognition, it often has effects that significantly improve other people’s truth tracking, especially that of stigmatized individuals in academia. Due to its Mandevillian character, egalitarian confirmation bias isn’t only epistemically but also ethically beneficial, as it helps decrease social injustice. Moreover, since egalitarian confirmation bias has Mandevillian effects especially in academia, and since progressives are particularly likely to display the bias, there is an epistemic reason for maintaining the often-noted political majority of progressives in academia. That is, while many researchers hold that diversity in academia is epistemically beneficial because it helps reduce bias, I argue that precisely because political diversity would help reduce egalitarian confirmation bias, it would in fact in one important sense be epistemically costly.
In 2006, in a special issue of this journal, several authors explored what they called the dual nature of artefacts. The core idea is simple, but attractive: to make sense of an artefact, one needs to consider both its physical nature—its being a material object—and its intentional nature—its being an entity designed to further human ends and needs. The authors construe the intentional component quite narrowly, though: it just refers to the artefact’s function, its being a means to realize a certain practical end. Although such strong focus on functions is quite natural, I argue in this paper that an artefact’s intentional nature is not exhausted by functional considerations. Many non-functional properties of artefacts—such as their marketability and ease of manufacture—testify to the intentions of their users/designers; and I show that if these sorts of considerations are included, one gets much more satisfactory explanations of artefacts, their design, and normativity.
Migrant women are often stereotyped. Some scholars associate the feminization of migration with domestic work and criticize the “care drain” as a new form of imperialism that the First World imposes on the Third World. However, migrant women employed as domestic workers in Northern America and Europe represent only 2% of migrant women worldwide and cannot be seen as characterizing the “feminization of migration”. Why are migrant domestic workers overestimated? This paper explores two possible sources of bias. The first is sampling: conclusions about “care drain” are often generalized from small samples of domestic workers. The second stems from the affect heuristic: imagining children left behind by migrant mothers provokes strong feelings of injustice which trump other considerations. The paper argues that neither source of bias is unavoidable and finds evidence of gender stereotypes in the “care drain” construal.
The twin goals of this essay are: to investigate a family of cases in which the goal of guaranteed convergence to the truth is beyond our reach; and to argue that each of three strands prominent in contemporary epistemological thought has undesirable consequences when confronted with the existence of such problems. Approaches that follow Reichenbach in taking guaranteed convergence to the truth to be the characteristic virtue of good methods face a vicious closure problem. Approaches on which there is a unique rational doxastic response to any given body of evidence can avoid incoherence only by rendering epistemology a curiously limited enterprise. Bayesian approaches rule out humility about one’s prospects of success in certain situations in which failure is typical.
Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.
In this paper, I will discuss the relevance of epistemology of disagreement to political disagreement. The two major positions in the epistemology of disagreement literature are the steadfast and the conciliationist approaches: while the conciliationist says that disagreement with one’s epistemic equals should compel one to epistemically “split the difference” with those peers, the steadfast approach claims that one can maintain one’s antecedent position even in the face of such peer disagreement. Martin Ebeling applies a conciliationist approach to democratic deliberations, arguing that deliberative participants ought to pursue full epistemic conciliation when disagreeing with their peers on political questions. I argue that this epistemic “splitting the difference” could make participants vulnerable to certain cognitive biases. We might avoid these biases by paying more attention to the deliberative environment in which disagreement takes place.
Aid and Bias. Keith Horton - 2004 - Inquiry: An Interdisciplinary Journal of Philosophy 47 (6): 545–561.
Over the last few decades, psychologists have amassed a great deal of evidence that our thinking is strongly influenced by a number of biases. This research appears to have important implications for moral methodology. It seems likely that these biases affect our thinking about moral issues, and a fuller awareness of them might help us to find ways to counteract their influence, and so to improve our moral thinking. And yet there is little or no reference to such biases in the philosophical literature on many pressing, substantive moral questions. In this paper, I make a start on repairing this omission in relation to one such question, the 'Aid Question', which concerns how much, if anything, we are morally required to give to aid agencies. I begin by sketching a number of biases that seem particularly likely to affect our thinking about that question. I then go on to review the psychological research on 'debiasing' - that is, on attempts to counteract the influence of such biases. And finally I discuss and illustrate certain strategies for counteracting the influence of the biases in question on our thinking about the Aid Question.
Garrett Cullity contends that fairness is appropriate impartiality (Chapters 8 and 10). Cullity deploys his account of fairness as a means of limiting the extreme moral demand to make sacrifices in order to aid others that was posed by Peter Singer in his seminal article ‘Famine, Affluence and Morality’. My paper is founded upon the combination of the observation that the idea that fairness consists in appropriate impartiality is very vague and the fact that psychological studies show that the self-serving bias is especially likely to infect one’s judgements when the ideas involved are vague. I argue that Cullity’s solution to extreme moral demandingness is threatened by these findings. I then comment on whether some other theories of fairness are vulnerable to the same objection.
David DeGrazia tentatively defends what he calls the Interests Model of moral status (see page 135). On this model all sentient beings have the same moral status, though some are owed more than others in virtue of having more or stronger interests. The proponent of this model can accept, say, that one should normally save the life of a human in preference to that of a dog. But she denies that we should save the human because he has higher moral status. Instead, the human should be saved because he has more at stake—he may, for example, have a stronger interest in continued existence. In defending the Interests Model, DeGrazia cuts against the grain of recent theorising on moral status, which has instead favoured what he calls the Respect Model.
Psychologists and philosophers have not yet resolved what they take implicit attitudes to be; and some, concerned about limitations in the psychometric evidence, have even challenged the predictive and theoretical value of positing implicit attitudes in explanations for social behavior. In the midst of this debate, prominent stakeholders in science have called for scientific communities to recognize and countenance implicit bias in STEM fields. In this paper, I stake out a stakeholder conception of implicit bias that responds to these challenges in ways that are responsive to the psychometric evidence, while also being resilient to the sorts of disagreements and scientific progress that would not undermine the soundness of this call. Along the way, my account advocates for attributing collective (group-level) implicit attitudes rather than individual-level implicit attitudes. This position raises new puzzles for future research on the relationship (metaphysical, epistemic, and ethical) between collective implicit attitudes and individual-level attitudes.
Future-biased agents care not only about what experiences they have, but also when they have them. Many believe that A-theories of time justify future bias. Although presentism is an A-theory of time, some argue that it nevertheless negates the justification for future bias. Here, I claim that the alleged discrepancy between presentism and future bias is a special case of the cross-time relations problem. To resolve the discrepancy, I propose an account of future bias as a preference for certain tensed truths properly relativized to the present.
Modeling social interactions based on individual behavior has always been an area of interest, but prior literature generally presumes rational behavior. Thus, such models may miss out on capturing the effects of biases humans are susceptible to. This work presents a method to model egocentric bias, the real-life tendency to emphasize one's own opinion heavily when presented with multiple opinions. We use a symmetric distribution, centered at an agent's own opinion, as opposed to the Bounded Confidence (BC) model used in prior work. We consider a game of iterated interactions where an agent cooperates based on its opinion about an opponent. Our model also includes the concept of domain-based self-doubt, which varies with the success or failure of interactions. An increase in doubt makes an agent reduce its egocentricity in subsequent interactions, thus enabling the agent to learn reactively. The agent system is modeled with factions not having a single leader, to overcome some of the issues associated with leader-follower factions. We find that agents belonging to factions perform better than individual agents. We observe that an intermediate level of egocentricity helps the agent perform at its best, which concurs with conventional wisdom that neither overconfidence nor low self-esteem brings benefits.
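A minimal sketch of the mechanism this abstract describes (the Gaussian-shaped weighting kernel, the doubt step sizes, and all parameter names are illustrative assumptions on my part, not details taken from the paper):

```python
import math

class Agent:
    """Sketch of an egocentrically biased agent; illustrative, not the paper's code."""

    BASE_EGOCENTRICITY = 5.0  # assumed scale: higher -> distant opinions discounted faster

    def __init__(self, opinion):
        self.opinion = opinion  # opinion in [0, 1]
        self.doubt = 0.0        # domain-based self-doubt in [0, 1]

    def _weight(self, other_opinion):
        # Symmetric (Gaussian-shaped) weighting centred on the agent's OWN opinion,
        # in contrast to Bounded Confidence, which hard-thresholds distant opinions.
        k = self.BASE_EGOCENTRICITY * (1.0 - self.doubt)  # doubt damps egocentricity
        return math.exp(-k * (other_opinion - self.opinion) ** 2)

    def update_opinion(self, peer_opinions):
        # Move to the weighted average of own and peers' opinions (own weight is 1).
        opinions = [self.opinion] + list(peer_opinions)
        weights = [self._weight(o) for o in opinions]
        self.opinion = sum(w * o for w, o in zip(weights, opinions)) / sum(weights)

    def record_interaction(self, succeeded):
        # Failed interactions raise self-doubt; successes lower it (step sizes assumed).
        delta = -0.05 if succeeded else 0.10
        self.doubt = min(1.0, max(0.0, self.doubt + delta))

# Usage: a failed interaction raises doubt, so the next update weighs peers more.
a = Agent(opinion=0.2)
a.update_opinion([0.3, 0.7, 0.9])   # small shift: distant opinions heavily discounted
a.record_interaction(succeeded=False)
a.update_opinion([0.3, 0.7, 0.9])   # larger shift toward the peers
print(round(a.opinion, 3), round(a.doubt, 2))
```

The key design point is that the weight an opinion receives falls off smoothly with its distance from the agent's own view, and self-doubt flattens that fall-off, which is one natural way to realize "reducing egocentricity" after failures.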
Social psychologists often describe “implicit” racial biases as entirely unconscious, and as mere associations between groups and traits, which lack intentional content, e.g., we associate “black” and “athletic” in much the same way we associate “salt” and “pepper.” However, recent empirical evidence consistently suggests that individuals are aware of their implicit biases, albeit in partial, inarticulate, or even distorted ways. Moreover, evidence suggests that implicit biases are not “dumb” semantic associations, but instead reflect our skillful, norm-sensitive, and embodied engagement with social reality. This essay draws on phenomenological and hermeneutic methods and concepts to better understand what social-psychological research has begun to reveal about the conscious access individuals have to their own racial attitudes, as well as the intentional contents of the attitudes themselves. First, I argue that implicit racial biases form part of the “background” of social experience. That is, while they exert a pervasive influence on our perceptions, judgments, and actions, they are frequently felt but not noticed, or noticed but misinterpreted. Second, I argue that our unreflective racial attitudes are neither mere associations nor fully articulated, propositionally structured beliefs or emotions. Their intentional contents are fundamentally indeterminate. For example, when a white person experiences a “gut feeling” of discomfort during an interaction with a black person, there is a question about the meaning or nature of that discomfort. Is it a fear of black people? Is it anxiety about appearing racist? There is, I argue, no general, determinate answer to such questions. The contents of our unreflective racial attitudes are fundamentally vague and open-ended, although I explain how they nevertheless take on particular shapes and implications—that is, their content can become determinate—depending on context, social meaning, and structural power relations. (If, for example, a perceived authority figure, such as a politician, parent, or scientist, encourages you to believe that your uncomfortable gut feeling is a justified fear of other social groups, then that is what your gut feeling is likely to become.)
Having a confirmation bias sometimes leads us to hold inaccurate beliefs. So, the puzzle goes: why do we have it? According to the influential argumentative theory of reasoning, confirmation bias emerges because the primary function of reason is not to form accurate beliefs, but to convince others that we’re right. A crucial prediction of the theory, then, is that confirmation bias should be found only in the reasoning domain. In this article, we argue that there is evidence that confirmation bias does exist outside the reasoning domain. This undermines the main evidential basis for the argumentative theory of reasoning. In presenting the relevant evidence, we explore why having such confirmation bias may not be maladaptive.
There is almost a consensus among philosophers that indicative conditionals are not material. Their thought hinges on the idea that if conditionals were material, A → B could be vacuously true even if the truth of A would lead to the falsity of B. But since this consequence is implausible, the material account must be false. I will argue that this point of view is mistaken, since it is motivated by the grammatical form of conditional sentences and the symbols used to represent their logical form, which misleadingly suggest an inferential direction from A to B. That conditional sentences mislead us into a directionality bias is a phenomenon that is well-documented in the literature about conditional reasoning. However, this directional appearance is deceptive and does not reflect the underlying truth conditions of conditional sentences. When this illusion is dispelled, we can recognise conditionals for what they are: material truth-functions.
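For reference, a standard truth table for the material conditional (textbook material, not notation drawn from the paper) makes the "vacuous truth" rows explicit: whenever A is false, A → B comes out true, exactly matching ¬A ∨ B.

```latex
\begin{array}{cc|cc}
A & B & A \to B & \lnot A \lor B \\ \hline
T & T & T & T \\
T & F & F & F \\
F & T & T & T \\
F & F & T & T \\
\end{array}
```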
Accounts of arguments from expert opinion take it for granted that expert judgments count as (defeasible) evidence for propositions, and so an argument that proceeds from premises about what an expert judges to a conclusion that the expert is probably right is a strong argument. In Mizrahi (2013), I consider a potential justification for this assumption, namely, that expert judgments are significantly more likely to be true than novice judgments, and find it wanting because of empirical evidence suggesting that expert judgments under uncertainty are not significantly more likely to be true than novice judgments or even chance. In this paper, I consider another potential justification for this assumption, namely, that expert judgments are not influenced by the cognitive biases novice judgments are influenced by, and find it wanting, too, because of empirical evidence suggesting that experts are vulnerable to pretty much the same cognitive biases that novices are vulnerable to. If this is correct, then the basic assumption at the core of accounts of arguments from expert opinion, namely, that expert judgments count as (defeasible) evidence for propositions, remains unjustified.
This paper offers an unorthodox appraisal of empirical research bearing on the question of the low representation of women in philosophy. It contends that fashionable views in the profession concerning implicit bias and stereotype threat are weakly supported, that philosophers often fail to report the empirical work responsibly, and that the standards for evidence are set very low—so long as you take a certain viewpoint.
Philosophers have long noted, and empirical psychology has lately confirmed, that most people are ‘biased toward the future’: we prefer to have positive experiences in the future, and negative experiences in the past. At least two explanations have been offered for this bias: (i) belief in temporal passage (or related theses in temporal metaphysics) and (ii) the practical irrelevance of the past resulting from our inability to influence past events. We set out to test the latter explanation. In a large survey (n = 1462) we find that participants exhibit significantly less future bias when asked to consider scenarios where they can affect their own past experiences. This supports the ‘practical irrelevance’ explanation of future bias. It also suggests that future bias is not an inflexible preference hardwired by evolution, but results from a more general disposition to ‘accept the things we cannot change’. However, participants still exhibited substantial future bias in scenarios in which they could affect the past, leaving room for complementary explanations. Beyond the main finding, our results also indicate that future bias is stake-sensitive (i.e., that at least some people discount past experience rather than disregarding it entirely) and that participants endorse the normative correctness of their future-biased preferences and choices. In combination, these results shed light on philosophical debates over the rationality of future bias, suggesting that it may be a rational (reasons-responsive) response to empirical realities rather than a brute, arational disposition.
Some recent work in philosophy of religion addresses what can be called the “axiological question,” i.e., regardless of whether God exists, would it be good or bad if God exists? Would the existence of God make the world a better or a worse place? Call the view that the existence of God would make the world a better place “Pro-Theism.” We argue that Pro-Theism is not implausible, and moreover, many Theists, at least, (often implicitly) think that it is true. That is, many Theists think that various good outcomes would arise if Theism is true. We then discuss work in cognitive science concerning human cognitive bias, before discussing two noteworthy attempts to show that at least some religious beliefs arise because of cognitive bias: Hume’s, and Draper’s and Nichols’s. We then argue that, as a result of certain cognitive biases that result when good outcomes might be at stake, Pro-Theism causes many Theists to inflate the epistemic probability that God exists, and as a result, Theists should lower the probability they assign to God’s existence. Finally, based on our arguments, we develop a novel objection to Pascal’s wager.
(This contribution is primarily based on "Implicit Bias, Moods, and Moral Responsibility," (2018) Pacific Philosophical Quarterly. This version has been shortened and significantly revised to be more accessible and student-oriented.) Are individuals morally responsible for their implicit biases? One reason to think not is that implicit biases are often advertised as unconscious. However, recent empirical evidence consistently suggests that individuals are aware of their implicit biases, although often in partial and inarticulate ways. Here I explore the implications of this evidence of partial awareness for individuals’ moral responsibility. First, I argue that responsibility comes in degrees. Second, I argue that individuals’ partial awareness of their implicit biases makes them (partially) morally responsible for them. I argue by analogy to a close relative of implicit bias: moods.
What is the mental representation that is responsible for implicit bias? What is this representation that mediates between the trigger and the biased behavior? My claim is that this representation is neither a propositional attitude nor a mere association (as the two major accounts of implicit bias claim). Rather, it is mental imagery: perceptual processing that is not triggered by corresponding sensory stimulation. I will argue that this view captures the advantages of the two standard accounts without inheriting their disadvantages. It also explains why manipulating mental imagery is among the most efficient ways of counteracting implicit bias.
High percentages of submitted papers are rejected at the editorial level, without offering authors a second chance by sending their papers out for peer review. In most cases, the rejections are quick replies without helpful argumentation related to the content of the rejected material. More surprisingly, some journals vaunt their high rejection rates as a “mark of prestige”! However, journals that reject high percentages of submitted papers have built their prominent positions on a flawed measure, the impact factor, and on a long and favorable historical context. Their shareholders may think that they can afford large rejection rates without affecting their sponsorship or funding sources, thanks to an anchorage extending back tens, or in some cases hundreds, of years, compared to unknown or new journals that struggle to pave a way in the world of scientific publication. The historical anchorage of some journals also makes it unfair to compare old and new journals.