The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally investigate whether the folk think that certain (hypothetical) robots made of silicon and steel would have the same conscious states as certain familiar biological beings with the same patterns of dispositions to peripheral behavior as the robots. Our findings provide evidence that the folk largely reject the view that silicon-based robots would have the sensations that they, the folk, attribute to the biological beings in question.
Jerry Fodor, by common agreement, is one of the world’s leading philosophers. At the forefront of the cognitive revolution since the 1960s, his work has determined much of the research agenda in the philosophy of mind and the philosophy of psychology for well over 40 years. This special issue dedicated to his work is intended both as a tribute to Fodor and as a contribution to the fruitful debates that his work has generated. One philosophical thesis that has dominated Fodor’s work since the 1960s is realism about the mental. Are there really mental states, events and processes? From his first book, Psychological Explanation (1968), onwards, Fodor has always answered this question with a resolute yes. From his early rejection of Wittgensteinian and behaviourist conceptions of the mind, to his later disputes with philosophers of mind of the eliminativist ilk, he has always been opposed to views that try to explain away mental phenomena. On his view, there are minds, and minds can change the world.
Recently four different papers have suggested that the supervaluational solution to the Problem of the Many is flawed. Stephen Schiffer (1998, 2000a, 2000b) has argued that the theory cannot account for reports of speech involving vague singular terms. Vann McGee and Brian McLaughlin (2000) say that the theory cannot, yet, account for vague singular beliefs. Neil McKinnon (2002) has argued that we cannot provide a plausible theory of when precisifications are acceptable, which the supervaluational theory needs. And Roy Sorensen (2000) argues that supervaluationism is inconsistent with a directly referential theory of names. McGee and McLaughlin see the problem they raise as a cause for further research, but the other authors all take the problems they raise to provide sufficient reasons to jettison supervaluationism. I will argue that none of these problems provide such a reason, though the arguments are valuable critiques. In many cases, we must make some adjustments to the supervaluational theory to meet the posed challenges. The goal of this paper is to make those adjustments, and meet the challenges.
Suppose a rational agent S has some evidence E that bears on p, and on that basis makes a judgment about p. For simplicity, we’ll normally assume that she judges that p, though we’re also interested in cases where the agent makes other judgments, such as that p is probable, or that p is well-supported by the evidence. We’ll also assume, again for simplicity, that the agent knows that E is the basis for her judgment. Finally, we’ll assume that the judgment is a rational one to make, though we won’t assume the agent knows this. Indeed, whether the agent can always know that she’s making a rational judgment when in fact she is will be of central importance in some of the debates that follow.
When visual attention is directed away from a stimulus, neural processing is weak and the strength and precision of sensory data decrease. From a computational perspective, in such situations observers should give more weight to prior expectations in order to behave optimally during a discrimination task. Here we test a signal detection theoretic model that counter-intuitively predicts subjects will do just the opposite in a discrimination task with two stimuli, one attended and one unattended: when subjects are probed to discriminate the unattended stimulus, they rely less on prior information about the probed stimulus’ identity. The model is in part inspired by recent findings that attention reduces trial-by-trial variability of the neuronal population response and that observers use a common criterion for attended and unattended trials. In five different visual discrimination experiments, when attention was directed away from the target stimulus, subjects did not adjust their response bias in reaction to a change in stimulus presentation frequency despite being fully informed and despite the presence of performance feedback and monetary and social incentives. This indicates that subjects did not rely more on the priors under conditions of inattention as would be predicted by a Bayes-optimal observer model. These results inform and constrain future models of Bayesian inference in the human brain.
I argue with my friends a lot. That is, I offer them reasons to believe all sorts of philosophical conclusions. Sadly, despite the quality of my arguments, and despite their apparent intelligence, they don’t always agree. They keep insisting on principles in the face of my wittier and wittier counterexamples, and they keep offering their own dull alleged counterexamples to my clever principles. What is a philosopher to do in these circumstances? (And I don’t mean get better friends.) One popular answer these days is that I should, to some extent, defer to my friends. If I look at a batch of reasons and conclude p, and my equally talented friend reaches an incompatible conclusion q, I should revise my opinion so I’m now undecided between p and q. I should, in the preferred lingo, assign equal weight to my view as to theirs. This is despite the fact that I’ve looked at their reasons for concluding q and found them wanting. If I hadn’t, I would have already concluded q. The mere fact that a friend (from now on I’ll leave off the qualifier ‘equally talented and informed’, since all my friends satisfy that) reaches a contrary opinion should be reason to move me. Such a position is defended by Richard Feldman (2006a, 2006b), David Christensen (2007) and Adam Elga (forthcoming). This equal weight view, hereafter EW, is itself a philosophical position. And while some of my friends believe it, some of my friends do not. (Nor, I should add for your benefit, do I.) This raises an odd little dilemma. If EW is correct, then the fact that my friends disagree about it means that I shouldn’t be particularly confident that it is true, since EW says that I shouldn’t be too confident about any position on which my friends disagree. But, as I’ll argue below, to consistently implement EW, I have to be maximally confident that it is true. So to accept EW, I have to inconsistently both be very confident that it is true and not very confident that it is true. 
This seems like a problem, and a reason not to accept EW.
Epistemologists have become increasingly interested in the practical role of knowledge. One prominent principle, which I call PREMISE, states that if you know that p, then you are justified in using p as a premise in your reasoning. In response, a number of critics have proposed a variety of counter-examples. In order to evaluate these problem cases, we need to consider the broader context in which this principle is situated by specifying in greater detail the types of activity that the principle governs. I argue that if PREMISE is interpreted as governing deductive reasoning, then the examples lose their force. In addition, I consider the cases, discussed by Keith DeRose, where the subject is in more than one practical context at the same time. In order to account for these latter cases, we need to further specify the scope of PREMISE. I distinguish two ways of understanding PREMISE, as a knowledge-action principle and as a knowledge-deliberation principle. I conclude by arguing for the knowledge-deliberation version of the principle and by exploring what this principle says about the practical role of knowledge.
Many epistemologists hold that an agent can come to justifiably believe that p is true by seeing that it appears that p is true, without having any antecedent reason to believe that visual impressions are generally reliable. Certain reliabilists think this, at least if the agent’s vision is generally reliable. And it is a central tenet of dogmatism (as described by Pryor (2000) and Pryor (2004)) that this is possible. Against these positions it has been argued (e.g. by Cohen (2005) and White (2006)) that this violates some principles from probabilistic learning theory. To see the problem, let’s note what the dogmatist thinks we can learn by paying attention to how things appear. (The reliabilist says the same things, but we’ll focus on the dogmatist.) Suppose an agent receives an appearance that p, and comes to believe that p. Letting Ap be the proposition that it appears to the agent that p, and → be the material implication, we can say that the agent learns that p, and hence is in a position to infer Ap → p, once they receive the evidence Ap. This is surprising, because we can prove the following.
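The provable fact gestured at here is a standard observation from the probability calculus; the derivation below is the usual one found in this literature (as in White's discussion), not a quotation from the paper:

```latex
\begin{align*}
P(Ap \to p) - P(p \mid Ap)
  &= P(\neg Ap) + P(Ap \wedge p) - P(p \mid Ap) \\
  &= P(\neg Ap) + P(Ap)\,P(p \mid Ap) - P(p \mid Ap) \\
  &= P(\neg Ap)\,\bigl(1 - P(p \mid Ap)\bigr) \;\geq\; 0.
\end{align*}
```

Since P(Ap → p | Ap) = P(p | Ap), conditionalizing on Ap can never raise, and typically lowers, the probability of the material conditional Ap → p; so the claim that one learns Ap → p by receiving the evidence Ap sits badly with Bayesian updating.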
P. F. Strawson's influential article "Freedom and Resentment" has been much commented on, and one of the most trenchant commentaries is Rajendra Prasad's, "Reactive Attitudes, Rationality, and Determinism." In his article, Prasad contests the significance of the reactive attitude over a precise theory of determinism, concluding that Strawson's argument is ultimately unconvincing. In this article, I evaluate Prasad's challenges to Strawson by summarizing and categorizing all of the relevant arguments in both Strawson's and Prasad's pieces. Strawson offers four types of arguments to demonstrate that determinism and free agency cannot be incompatible, showing that the reactive attitude is natural and desirable and the objective attitude is not natural, not desirable, not sustainable, and not compatible with the reactive attitude. Prasad targets Strawson's incompatibilist arguments, showing that determinism and free agency are incompatible. Of Prasad's seven types of arguments, four target Strawson's four above. Three of these succeed and one fails. The remaining three target Strawson's support of the reactive attitude, and of these, one succeeds, and the others fail. Although Prasad's arguments miss the mark at times, he does succeed in putting forth a legitimate challenge to Strawson's notion that determinism is no inhibitor of the reactive attitude.
In this chapter, we attempt to show that J.P. Moreland's understanding of apologetics is beautifully positioned to counter resistance to a rationally defensible Christianity—resistance arising from the mistaken idea that any rational defense will fail to support or even undermine relationship. We look first at Paul Moser's complaint that since rational apologetics doesn’t prove the God of Christianity, it falls short of delivering what matters most—a personal agent worthy of worship and relationship. We then consider John Wilkinson's charge that the use of reason and argument in evangelistic contexts is relationally futile: since people aren’t looking for arguments, and logic is an arbitrary human invention, we should (on his view) present Christianity to others as an irrational faith story.
Some time ago, Joel Katzav and Brian Ellis debated the compatibility of dispositional essentialism with the principle of least action. Surprisingly, very little has been said on the matter since, even by the most naturalistically inclined metaphysicians. Here, we revisit the Katzav–Ellis arguments of 2004–05. We outline the two problems for the dispositionalist identified by Katzav in his 2004 paper, and claim they are not as problematic for the dispositional essentialist as it first seems – but not for the reasons espoused by Ellis.
In our paper we investigate a difficulty that arises when one tries to reconcile essentialist thinking with classification practice in the biological sciences. The article outlines some varieties of essentialism, with particular attention to the version defended by Brian Ellis. We underline a basic difference: Ellis thinks that essentialism is not a viable position in biology due to its incompatibility with biological typology, while other essentialists think that these two elements can be reconciled. However, both parties share a metaphysical starting point, and both lack an explicit account of methodological procedures. Methodological inquiry involves less demanding assumptions than metaphysical inquiry, and therefore it is justified to analyse the abovementioned discrepancy between Ellis and the other essentialists in this context. We do so by a bottom-up investigation which focuses on the practice of taxonomists in a particular field of biology. A case study helps us to discover four characteristics of biological typology practice: impossibility of algorithmization, relativity, subjectivity, and conventionality. These features prove the non-realistic, and therefore anti-essentialistic, character of biological classification. We conclude that any essentialism tied to the notion of biological kind cannot be regarded as justified by the scientific enterprise of creating typologies.
The Neo-Moorean Deduction (I have a hand, so I am not a brain-in-a-vat) and the Zebra Deduction (the creature is a zebra, so it isn’t a cleverly disguised mule) are notorious. Crispin Wright, Martin Davies, Fred Dretske, and Brian McLaughlin, among others, argue that these deductions are instances of transmission failure. That is, they argue that these deductions cannot transmit justification to their conclusions. I contend, however, that the notoriety of these deductions is undeserved. My strategy is to clarify, attack, defend, and apply. I clarify what transmission and transmission failure really are, thereby exposing two questionable but quotidian assumptions. I attack existing views of transmission failure, especially those of Crispin Wright. I defend a permissive view of transmission failure, one which holds that deductions of a certain kind fail to transmit only because of premise circularity. Finally, I apply this account to the Neo-Moorean and Zebra Deductions and show that, given my permissive view, these deductions transmit in an intuitively acceptable way—at least if either a certain type of circularity is benign or a certain view of perceptual justification is false.
The field of textbooks in philosophy of mind is a crowded one. I shall consider six recent texts for their pedagogical usefulness. All have been published within the last five years, though two are new editions of previously published books. The first three are authored monographs: by K. T. Maslin, Barbara Montero, and André Kukla and Joel Walmsley. I then review three anthologies, each with two editors: William Lycan and Jesse Prinz, Brie Gertler and Lawrence Shapiro, and Brian McLaughlin and Jonathan Cohen. These six texts constitute a diverse bunch. Within each of the two groups (monographs and anthologies), each individual text differs significantly from the other two in its approach, scope, and thus suitability for various levels of teaching.
Deontological internalism is the family of views where justification is a positive deontological appraisal of someone's epistemic agency: S is justified, that is, when S is blameless, praiseworthy, or responsible in believing that p. Brian Weatherson discusses very briefly how a plausible principle of ampliative transmission reveals a worry for versions of deontological internalism formulated in terms of epistemic blame. Weatherson denies, however, that similar principles reveal similar worries for other versions. I disagree. In this article, I argue that plausible principles of ampliative transmission reveal a worry for deontological internalism in general.
Vision is organized around material objects; they are most of what we see. But we also see beams of light, depictions, shadows, reflections, etc. These things look like material objects in many ways, but it is still visually obvious that they are not material objects. This chapter articulates some principles that allow us to understand how we see these ‘ephemera’. H.P. Grice’s definition of seeing is standard in many discussions; here I clarify and augment it with a criterion drawn from Fred Dretske. This enables me to re-analyse certain ephemera that have received counter-intuitive treatments in the work of Kendall Walton (photographs), Brian O’Shaughnessy (light), and Roy Sorensen (occlusions).
In "The Compatibility of Naturalism and Scientific Realism" (Dec. 2003), Brian Holtz offers two objections to my argument in "The Incompatibility of Naturalism and Scientific Realism" (in Naturalism: A Critical Appraisal, edited by William Lane Craig and J. P. Moreland, Routledge, 2000). His responses are: (1) my argument can be deflected by adopting a pragmatic or empiricist "definition" of "truth", and (2) the extra-spatiotemporal cause of the simplicity of the laws need not be God, or any other personal being.
*This work is no longer under development* Two major themes in the literature on indicative conditionals are (i) that the content of indicative conditionals typically depends on what is known (for example, Nolan; Weatherson; Gillies), and (ii) that conditionals are intimately related to conditional probabilities (for example, Stalnaker; McGee; Adams). In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true (Lewis; see Nolan for criticism). This aspect has recently been brought to the fore, and defended, by Tim Williamson, who uses it to characterize alethic necessity by exploiting such equivalences as: □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are epistemically impossible (that is, incompatible with what is known) being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature. This allows one to characterize an epistemic modal □ by the equivalence □A ⇔ (¬A → A). For simplicity, in what follows we write □A as KA and think of it as expressing that subject S knows that A. (This idea was suggested to me in conversation by John Hawthorne; I do not know of it being explored in print.) The plausibility of this characterization will depend on the exact sense of ‘epistemically possible’ in play—if it is compatibility with what a single subject knows, then KA can be read ‘the relevant subject knows that A’. If it is more delicately formulated, we might be able to read K as the epistemic modal ‘must’. The connection to probability has received much attention. Stalnaker suggested, as a way of articulating the ‘Ramsey Test’, the following very general schema for indicative conditionals relative to some probability function P: P(A → B) = P(B | A).
The work of Richard H. Popkin both introduced the concept of skeptical fideism and served to impressively document its importance in the philosophies of a diverse range of thinkers, including Montaigne, Pascal, Huet, and Bayle. Popkin’s landmark History of Scepticism, however, begins its coverage with the Renaissance. In this paper I explore the roots of skeptical fideism in ancient Greek and Roman philosophy, with special attention to Cicero’s De Natura Deorum, the oldest surviving text to clearly develop a skeptical fideist perspective.
Portuguese translation of the book Ceticismo e naturalismo: algumas variedades [Skepticism and Naturalism: Some Varieties], Strawson, P. F. São Leopoldo, RS: Editora da Unisinos, 2008, 114 pp. Series: Ideias. ISBN: 9788574313214. Chapter 1 – Skepticism, naturalism and transcendental arguments: 1. Introductory notes; 2. Traditional skepticism; 3. Hume: Reason and Nature; 4. Hume and Wittgenstein; 5. “Only connect”: The role of transcendental arguments; 6. Three quotations; 7. Historicism: and the past.
We live in a world of crowds and corporations, artworks and artifacts, legislatures and languages, money and markets. These are all social objects — they are made, at least in part, by people and by communities. But what exactly are these things? How are they made, and what is the role of people in making them? In The Ant Trap, Brian Epstein rewrites our understanding of the nature of the social world and the foundations of the social sciences. Epstein explains and challenges the three prevailing traditions about how the social world is made. One tradition takes the social world to be built out of people, much as traffic is built out of cars. A second tradition also takes people to be the building blocks of the social world, but focuses on thoughts and attitudes we have toward one another. And a third tradition takes the social world to be a collective projection onto the physical world. Epstein shows that these share critical flaws. Most fundamentally, all three traditions overestimate the role of people in building the social world: they are overly anthropocentric. Epstein starts from scratch, bringing the resources of contemporary metaphysics to bear. In the place of traditional theories, he introduces a model based on a new distinction between the grounds and the anchors of social facts. Epstein illustrates the model with a study of the nature of law, and shows how to interpret the prevailing traditions about the social world. Then he turns to social groups, and to what it means for a group to take an action or have an intention. Contrary to the overwhelming consensus, these often depend on more than the actions and intentions of group members.
This is a transcript of a conversation between P F Strawson and Gareth Evans in 1973, filmed for The Open University. Under the title 'Truth', Strawson and Evans discuss the question as to whether the distinction between genuinely fact-stating uses of language and other uses can be grounded on a theory of truth, especially a 'thin' notion of truth in the tradition of F P Ramsey.
Brian C. Ribeiro’s _Sextus, Montaigne, Hume: Pyrrhonizers_ invites us to view the Pyrrhonist tradition as involving all those who share a commitment to the activity of Pyrrhonizing and develops fresh, provocative readings of Sextus, Montaigne, and Hume as radical Pyrrhonizing skeptics.
Historically, laws and policies to criminalize drug use or possession were rooted in explicit racism, and they continue to wreak havoc on certain racialized communities. We are a group of bioethicists, drug experts, legal scholars, criminal justice researchers, sociologists, psychologists, and other allied professionals who have come together in support of a policy proposal that is evidence-based and ethically recommended. We call for the immediate decriminalization of all so-called recreational drugs and, ultimately, for their timely and appropriate legal regulation. We also call for criminal convictions for nonviolent offenses pertaining to the use or possession of small quantities of such drugs to be expunged, and for those currently serving time for these offenses to be released. In effect, we call for an end to the “war on drugs.”
Quine insisted that the satisfaction of an open modalised formula by an object depends on how that object is described. Kripke's ‘objectual’ interpretation of quantified modal logic, whereby variables are rigid, is commonly thought to avoid these Quinean worries. Yet there remain residual Quinean worries for epistemic modality. Theorists have recently been toying with assignment-shifting treatments of epistemic contexts. On such views an epistemic operator ends up binding all the variables in its scope. One might worry that this yields the undesirable result that any attempt to ‘quantify in’ to an epistemic environment is blocked. If quantifying into the relevant constructions is vacuous, then such views would seem hopelessly misguided and empirically inadequate. But a famous alternative to Kripke's semantics, namely Lewis' counterpart semantics, also faces this worry since it also treats the boxes and diamonds as assignment-shifting devices. As I'll demonstrate, the mere fact that a variable is bound is no obstacle to binding it. This provides a helpful lesson for those modelling de re epistemic contexts with assignment sensitivity, and perhaps leads the way toward the proper treatment of binding in both metaphysical and epistemic contexts: Kripke for metaphysical modality, Lewis for epistemic modality.
Intuitively, Gettier cases are instances of justified true beliefs that are not cases of knowledge. Should we therefore conclude that knowledge is not justified true belief? Only if we have reason to trust intuition here. But intuitions are unreliable in a wide range of cases. And it can be argued that the Gettier intuitions have a greater resemblance to unreliable intuitions than to reliable intuitions. What’s distinctive about the faulty intuitions, I argue, is that respecting them would mean abandoning a simple, systematic and largely successful theory in favour of a complicated, disjunctive and idiosyncratic theory. So maybe respecting the Gettier intuitions was the wrong reaction; we should instead have been explaining why we are all so easily misled by these kinds of cases.
I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple—there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort of motivation, what Michael Smith calls “moral fetishism”.
In his Principles of Philosophy, Descartes says, “Finally, it is so manifest that we possess a free will, capable of giving or withholding its assent, that this truth must be reckoned among the first and most common notions which are born with us.”
I consider the problem of how to derive what an agent believes from their credence function and utility function. I argue that the best solution to this problem is pragmatic, i.e. it is sensitive to the kinds of choices actually facing the agent. I further argue that this explains why our notion of justified belief appears to be pragmatic, as is argued e.g. by Fantl and McGrath. The notion of epistemic justification is not really a pragmatic notion, but it is being applied to a pragmatically defined concept, i.e. belief.
This paper explores an emerging sub-field of both empirical bioethics and experimental philosophy, which has been called “experimental philosophical bioethics” (bioxphi). As an empirical discipline, bioxphi adopts the methods of experimental moral psychology and cognitive science; it does so to make sense of the eliciting factors and underlying cognitive processes that shape people’s moral judgments, particularly about real-world matters of bioethical concern. Yet, as a normative discipline situated within the broader field of bioethics, it also aims to contribute to substantive ethical questions about what should be done in a given context. What are some of the ways in which this aim has been pursued? In this paper, we employ a case study approach to examine and critically evaluate four strategies from the recent literature by which scholars in bioxphi have leveraged empirical data in the service of normative arguments.
Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this is impossible, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
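The impossibility results the abstract mentions (due to Kleinberg et al. and Chouldechova) can be made concrete with a toy calculation. The numbers below are hypothetical, chosen only for illustration: two groups receive perfectly calibrated risk scores, but because their base rates differ, thresholding the scores yields unequal false positive rates.

```python
# Toy illustration: a perfectly calibrated score plus unequal base
# rates forces unequal false positive rates across groups.
# (Hypothetical numbers, for illustration only.)

# Each group is a list of (score, fraction_of_group) buckets.
# Calibration: within a bucket with score s, P(y = 1) = s.
group_a = [(0.1, 0.5), (0.3, 0.5)]   # base rate 0.2
group_b = [(0.3, 0.5), (0.7, 0.5)]   # base rate 0.5

def base_rate(group):
    # Overall probability of a positive outcome in the group.
    return sum(s * w for s, w in group)

def false_positive_rate(group, threshold=0.5):
    # P(predicted positive | y = 0), predicting positive iff score >= threshold.
    fp = sum((1 - s) * w for s, w in group if s >= threshold)
    negatives = sum((1 - s) * w for s, w in group)
    return fp / negatives

print(base_rate(group_a), base_rate(group_b))  # unequal base rates: 0.2 vs 0.5
print(false_positive_rate(group_a), false_positive_rate(group_b))  # 0.0 vs 0.3
```

Both groups' scores are calibrated by construction, yet the false positive rates diverge (0.0 vs 0.3), which is exactly the shape of trade-off the impossibility results formalize.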
In this paper, we show that presentism -- the view that the way things are is the way things presently are -- is not undermined by the objection from being-supervenience. This objection claims, roughly, that presentism has trouble accounting for the truth-value of past-tense claims. Our demonstration amounts to the articulation and defence of a novel version of presentism. This is brute past presentism, according to which the truth-value of past-tense claims is determined by the past understood as a fundamental aspect of reality different from things and how things are.
I advocate Time-Slice Rationality, the thesis that the relationship between two time-slices of the same person is not importantly different, for purposes of rational evaluation, from the relationship between time-slices of distinct persons. The locus of rationality, so to speak, is the time-slice rather than the temporally extended agent. This claim is motivated by consideration of puzzle cases for personal identity over time and by a very moderate form of internalism about rationality. Time-Slice Rationality conflicts with two proposed principles of rationality, Conditionalization and Reflection. Conditionalization is a diachronic norm saying how your current degrees of belief should fit with your old ones, while Reflection is a norm enjoining you to defer to the degrees of belief that you expect to have in the future. But they are independently problematic and should be replaced by improved, time-slice-centric principles. Conditionalization should be replaced by a synchronic norm saying what degrees of belief you ought to have given your current evidence and Reflection should be replaced by a norm which instructs you to defer to the degrees of belief of agents you take to be experts. These replacement principles do all the work that the old principles were supposed to do while avoiding their problems. In this way, Time-Slice Rationality puts the theory of rationality on firmer foundations and yields better norms than alternative, non-time-slice-centric approaches.
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of an individual is a belief about where or when that temporal part is located). We analyze the Sleeping Beauty problem, the duplication version of the Sleeping Beauty problem, and various related problems.
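The inaccuracy measure defined above is the Brier score, and the key fact driving the method is that expected inaccuracy p·(1−c)² + (1−p)·c² is minimized by setting one's credence c equal to the probability p of the proposition. A minimal numerical sketch of this fact (the choice of p = 1/3, one candidate Sleeping Beauty answer, is purely illustrative):

```python
# Expected inaccuracy (Brier score) of credence c in a proposition
# that is true with probability p:
#   p * (1 - c)**2 + (1 - p) * (0 - c)**2
# Elementary calculus shows this is minimized exactly at c = p.

def expected_inaccuracy(c, p):
    return p * (1 - c) ** 2 + (1 - p) * c ** 2

p = 1 / 3  # e.g. a candidate credence in "the coin landed heads"
candidates = [i / 100 for i in range(101)]  # grid of credences 0.00 .. 1.00
best = min(candidates, key=lambda c: expected_inaccuracy(c, p))
print(best)  # the grid point closest to p = 1/3, i.e. 0.33
```

The abstract's method applies this minimization to credences in self-locating propositions, where the relevant probabilities must themselves be argued for; the sketch only shows the underlying optimization.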
Intelligent activity requires the use of various intellectual skills. While these skills are connected to knowledge, they should not be identified with knowledge. There are realistic examples where the skills in question come apart from knowledge. That is, there are realistic cases of knowledge without skill, and of skill without knowledge. Whether a person is intelligent depends, in part, on whether they have these skills. Whether a particular action is intelligent depends, in part, on whether it was produced by an exercise of skill. These claims promote a picture of intelligence that is in tension with a strongly intellectualist picture, though they are not in tension with a number of prominent claims recently made by intellectualists.
This paper presents a systematic approach for analyzing and explaining the nature of social groups. I argue against prominent views that attempt to unify all social groups or to divide them into simple typologies. Instead I argue that social groups are enormously diverse, but show how we can investigate their natures nonetheless. I analyze social groups from a bottom-up perspective, constructing profiles of the metaphysical features of groups of specific kinds. We can characterize any given kind of social group with four complementary profiles: its “construction” profile, its “extra essentials” profile, its “anchor” profile, and its “accident” profile. Together these provide a framework for understanding the nature of groups, help classify and categorize groups, and shed light on group agency.
According to moral intuitionism, at least some moral seeming states are justification-conferring. The primary defense of this view currently comes from advocates of the standard account, who take the justification-conferring power of a moral seeming to be determined by its phenomenological credentials alone. However, the standard account is vulnerable to a problem. In brief, it implies that moral knowledge is seriously undermined by those commonplace moral disagreements in which both agents have equally good phenomenological credentials supporting their disputed moral beliefs. Yet it is implausible to think that commonplace disagreement seriously undermines moral knowledge, and thus implausible to think that the standard account of moral intuitionism is true.
Environmental studies is a highly interdisciplinary field of inquiry, involving philosophers, ecologists, biologists, sociologists, activists, historians and professionals in public and private environmental organizations. It comes as no surprise, then, that the follow-up to Nelson and Callicott’s original anthology The Great Wilderness Debate (1998) features essays from authors in a broad array of disciplines. While there is considerable overlap between the two volumes, this new version offers forty-one essays, five of which are new additions, organized into four sections. What constitutes wilderness? Is wilderness real or socially constructed? What kinds of values are served—recreational, aesthetic, scientific, or others—by protecting wild areas? While many commentators trace these questions back to an exchange in the 1990s between two environmental ethicists, J. Baird Callicott and Holmes Rolston III, the debate over the wilderness idea actually has older roots. At least in the U.S. context, it travels back in time to the earliest part of the twentieth century, when the American public, politicians and ecologists were pressed to justify why wilderness areas should be set aside in a new National Park system. Since then, the fundamental question fuelling the ‘Great Wilderness Debate’ is whether what is being preserved is actually wilderness. Is there such a thing or place as wilderness, that is, a quintessentially non-human or wild setting untainted by human influence? If so, why do we believe such areas deserve protection?
Rational agents have consistent beliefs. Bayesianism is a theory of consistency for partial belief states. Rational agents also respond appropriately to experience. Dogmatism is a theory of how to respond appropriately to experience. Hence, Dogmatism and Bayesianism are theories of two very different aspects of rationality. It's surprising, then, that in recent years it has become common to claim that Dogmatism and Bayesianism are jointly inconsistent: how can two independently consistent theories with distinct subject matter be jointly inconsistent? In this essay I argue that Bayesianism and Dogmatism are inconsistent only with the addition of a specific hypothesis about how the appropriate responses to perceptual experience are to be incorporated into the formal models of the Bayesian. That hypothesis isn't essential either to Bayesianism or to Dogmatism, and so Bayesianism and Dogmatism are jointly consistent. That leaves the matter of how experiences and credences are related, a...
In previous work I’ve defended an interest-relative theory of belief. This paper continues the defence. It has four aims:
1. To offer a new kind of reason for being unsatisfied with the simple Lockean reduction of belief to credence.
2. To defend the legitimacy of appealing to credences in a theory of belief.
3. To illustrate the importance of theoretical, as well as practical, interests in an interest-relative account of belief.
4. To revise my account to cover propositions that are practically and theoretically irrelevant to the agent.
Accuracy‐first epistemology is an approach to formal epistemology which takes accuracy to be a measure of epistemic utility and attempts to vindicate norms of epistemic rationality by showing how conformity with them is beneficial. If accuracy‐first epistemology can actually vindicate any epistemic norms, it must adopt a plausible account of epistemic value. Any such account must avoid the epistemic version of Derek Parfit's “repugnant conclusion.” I argue that the only plausible way of doing so is to say that accurate credences in certain propositions have no, or almost no, epistemic value. I prove that this is incompatible with standard accuracy‐first arguments for probabilism, and argue that there is no way for accuracy‐first epistemology to show that all credences of all agents should be coherent.
Conciliatory theories of disagreement face a revenge problem; they cannot be coherently believed by one who thinks they have peers who are not conciliationists. I argue that this is a deep problem for conciliationism.
Almost entirely ignored in the linguistic theorising on names and descriptions is a hybrid form of expression which, like definite descriptions, begins with 'the' but which, like proper names, is capitalised and seems to lack descriptive content. These are expressions such as the following: 'the Holy Roman Empire', 'the Mississippi River', or 'the Space Needle'. Such capitalised descriptions are ubiquitous in natural language, but to which linguistic categories do they belong? Are they simply proper names? Or are they definite descriptions with unique orthography? Or are they something else entirely? This paper assesses two obvious assimilation strategies: (i) assimilation to proper names and (ii) assimilation to definite descriptions. It is argued that both of these strategies face major difficulties. The primary goal is to lay the groundwork for a linguistic analysis of capitalised descriptions. Yet the hope is that clearing the ground on capitalised descriptions may reveal useful insights for the ongoing research into the semantics and syntax of their lower-case or 'the'-less relatives.
I set out and defend a view on indicative conditionals that I call “indexical relativism”. The core of the view is that which proposition is expressed by an utterance of a conditional is a function of the speaker’s context and the assessor’s context. This implies a kind of relativism, namely that a single utterance may be correctly assessed as true by one assessor and false by another.
Many writers have held that in his later work, David Lewis adopted a theory of predicate meaning such that the meaning of a predicate is the most natural property that is (mostly) consistent with the way the predicate is used. That orthodox interpretation is shared by both supporters and critics of Lewis's theory of meaning, but it has recently been strongly criticised by Wolfgang Schwarz. In this paper, I accept many of Schwarz's criticisms of the orthodox interpretation, and add some more. But I also argue that the orthodox interpretation has a grain of truth in it, and seeing that helps us appreciate the strength of Lewis's late theory of meaning.
Certain puzzling cases have been discussed in the literature recently which appear to support the thought that knowledge can be obtained by way of deduction from a falsehood; moreover, these cases put pressure, prima facie, on the thesis of counter closure for knowledge. We argue that the cases do not involve knowledge from falsehood; despite appearances, the false beliefs in the cases in question are causally, and therefore epistemologically, incidental, and knowledge is achieved despite falsehood. We also show that the principle of counter closure, and the concomitant denial of knowledge from falsehood, is well motivated by considerations in epistemological theory--in particular, by the view that knowledge is first in the epistemological order of things.
Keith DeRose has argued that the two main problems facing subject-sensitive invariantism come from the appropriateness of certain third-person denials of knowledge and the inappropriateness of now you know it, now you don't claims. I argue that proponents of SSI can adequately address both problems. First, I argue that the debate between contextualism and SSI has failed to account for an important pragmatic feature of third-person denials of knowledge. Appealing to these pragmatic features, I show that straightforward third-person denials are inappropriate in the relevant cases. And while there are certain denials that are appropriate, they pose no problems for SSI. Next, I offer an explanation, compatible with SSI, of the oddity of now you know it, now you don't claims. To conclude, I discuss the intuitiveness of purism, whose rejection is the source of many problems for SSI. I propose to explain away the intuitiveness of purism as a side-effect of the narrow focus of previous epistemological inquiries.