Conspiracy theories and conspiracy theorists have been accused of a great many sins, but are the conspiracy theories conspiracy theorists believe epistemically problematic? Well, according to some recent work, yes, they are. Yet a number of other philosophers like Brian L. Keeley, Charles Pigden, Kurtis Hagen, Lee Basham, and the like have argued ‘No!’ I will argue that there are features of certain conspiracy theories which license suspicion of such theories. I will also argue that these features only license a limited suspicion of these conspiracy theories, and thus we need to be careful about generalising from such suspicions to a view of the warrant of conspiracy theories more generally. To understand why, we need to get to the bottom of what exactly makes us suspicious of certain conspiracy theories, and how being suspicious of a conspiracy theory does not always tell us anything about how likely the theory in question is to be false.
It is suggested that the impetus to generate models is probably the most fundamental point of connection between mysticism and psychology. In their concern with the relation between ‘unseen’ realms and the ‘seen’, mystical maps parallel cognitive models of the relation between ‘unconscious’ and ‘conscious’ processes. The map or model constitutes an explanation employing terms current within the respective canon. The case of language mysticism is examined to illustrate the premise that cognitive models may benefit from an understanding of the kinds of experiences gained, and explanatory concepts advanced, within mystical traditions. Language mysticism is of particular interest on account of the central role thought to be played by language in relation to self and the individual's construction of reality. The discussion focuses on traditions of language mysticism within Judaism, in which emphasis is placed on the deconstruction of language into primary elements and the overarching significance of the divine Name. Analysis of the detailed techniques used suggests ways in which multiple associations to any given word/concept were consciously explored in an altered state. It appears that these mystics were consciously engaging with what are normally preconscious cognitive processes, whereby schematic associations to sensory images or thoughts are activated. The testimony from their writings implies that these mystics experienced distortions of the sense of self, which may suggest that, in the normal state, ‘I’ is constructed in relation to the preconscious system of associations. Moreover, an important feature of Hebrew language mysticism is its emphasis on embodiment -- specific associations were deemed to exist between the letters and each structure of the body. Implications, first, for the relationship between language and self, and, second, for the role of embodiment in relation to self are discussed. The continual emphasis on the Name of God throughout the linguistic practices may have provided a means for effectively replacing the cognitive indexing function hypothesized here to be normally played by ‘I’ with a more transpersonal cognitive index, especially in relation to memory.
Historically, laws and policies to criminalize drug use or possession were rooted in explicit racism, and they continue to wreak havoc on certain racialized communities. We are a group of bioethicists, drug experts, legal scholars, criminal justice researchers, sociologists, psychologists, and other allied professionals who have come together in support of a policy proposal that is evidence-based and ethically recommended. We call for the immediate decriminalization of all so-called recreational drugs and, ultimately, for their timely and appropriate legal regulation. We also call for criminal convictions for nonviolent offenses pertaining to the use or possession of small quantities of such drugs to be expunged, and for those currently serving time for these offenses to be released. In effect, we call for an end to the “war on drugs.”
In a recent study, we found a negative association between psychopathy and violence against genetic relatives. We interpreted this result as a form of nepotism and argued that it failed to support the hypothesis that psychopathy is a mental disorder, suggesting instead that it supports the hypothesis that psychopathy is an evolved life history strategy. This interpretation and subsequent arguments have been challenged in a number of ways. Here, we identify several misunderstandings regarding the harmful dysfunction definition of mental disorder as it applies to psychopathy and regarding the meaning of nepotism. Furthermore, we examine the evidence provided by our critics that psychopathy is associated with other disorders, and we offer a comment on their alternative model of psychopathy. We conclude that there remains little evidence that psychopathy is the product of dysfunctional mechanisms.
The INBIOSA project brings together a group of experts across many disciplines who believe that science requires a revolutionary transformative step in order to address many of the vexing challenges presented by the world. It is INBIOSA’s purpose to enable the focused collaboration of an interdisciplinary community of original thinkers. This paper sets out the case for support for this effort. The focus of the transformative research program proposal is biology-centric. We admit that biology to date has been more fact-oriented and less theoretical than physics. However, the key leverageable idea is that careful extension of the science of living systems can be more effectively applied to some of our most vexing modern problems than the prevailing scheme, derived from abstractions in physics. While these have some universal application and demonstrate computational advantages, they are not theoretically mandated for the living. A new set of mathematical abstractions derived from biology can now be similarly extended. This is made possible by leveraging new formal tools to understand abstraction and enable computability. [The latter has a much expanded meaning in our context from the one known and used in computer science and biology today, that is "by rote algorithmic means", since it is not known if a living system is computable in this sense (Mossio et al., 2009).] Two major challenges constitute the effort. The first challenge is to design an original general system of abstractions within the biological domain. The initial issue is descriptive, leading to the explanatory. There has not yet been a serious formal examination of the abstractions of the biological domain. What is used today is an amalgam; much is inherited from physics (via the bridging abstractions of chemistry) and there are many new abstractions from advances in mathematics (incentivized by the need for more capable computational analyses). Interspersed are abstractions, concepts and underlying assumptions “native” to biology and distinct from the mechanical language of physics and computation as we know them. A pressing agenda should be to single out the most concrete and at the same time the most fundamental process-units in biology and to recruit them into the descriptive domain. Therefore, the first challenge is to build a coherent formal system of abstractions and operations that is truly native to living systems. Nothing will be thrown away, but many common methods will be philosophically recast, just as in physics relativity subsumed and reinterpreted Newtonian mechanics.

This step is required because we need a comprehensible, formal system to apply in many domains. Emphasis should be placed on the distinction between multi-perspective analysis and synthesis and on what could be the basic terms or tools needed. The second challenge is relatively simple: the actual application of this set of biology-centric ways and means to cross-disciplinary problems. In its early stages, this will seem to be a “new science”. This White Paper sets out the case for continuing support of Information and Communication Technology (ICT) for transformative research in biology and information processing centered on paradigm changes in the epistemological, ontological, mathematical and computational bases of the science of living systems. Today, curiously, living systems cannot be said to be anything more than dissipative structures organized internally by genetic information. There is not anything substantially different from abiotic systems other than the empirical nature of their robustness. We believe that there are other new and unique properties and patterns comprehensible at this bio-logical level. The report lays out a fundamental set of approaches to articulate these properties and patterns, and is composed as follows.

Sections 1 through 4 (preamble, introduction, motivation and major biomathematical problems) are incipient. Section 5 describes the issues affecting Integral Biomathics, and Section 6 -- the aspects of the Grand Challenge we face with this project. Section 7 contemplates the effort to formalize a General Theory of Living Systems (GTLS) from what we have today. The goal is to have a formal system, equivalent to that which exists in the physics community. Here we define how to perceive the role of time in biology. Section 8 describes the initial efforts to apply this general theory of living systems in many domains, with special emphasis on cross-disciplinary problems and multiple domains spanning both “hard” and “soft” sciences. The expected result is a coherent collection of integrated mathematical techniques. Section 9 discusses the first two test cases, project proposals, of our approach. They are designed to demonstrate the ability of our approach to address “wicked problems” which span across physics, chemistry, biology, societies and societal dynamics. The solutions require integrated measurable results at multiple levels, known as “grand challenges” to existing methods. Finally, Section 10 makes an appeal for action, advocating the necessity for further long-term support of the INBIOSA program.

The report concludes with a preliminary, non-exclusive list of challenging research themes to address, as well as required administrative actions. The efforts described in the ten sections of this White Paper will proceed concurrently. Collectively, they describe a program that can be managed and measured as it progresses.
Objective: Compassion has been associated with eudaimonia and prosocial behavior, and has been regarded as a virtue, both historically and cross-culturally. However, the psychological study of compassion has been limited to laboratory settings and/or standard survey assessments. Here, we use an experience sampling method (ESM) to compare naturalistic assessments of compassion with standard assessments, and to examine compassion, its variability, and associations with eudaimonia and prosocial behavior.

Methods: Participants took a survey which included standard assessments of compassion and eudaimonia. Then, over four days, they were repeatedly asked about their level of compassion, eudaimonia, and situational factors within the moments of daily life. Finally, prosocial behavior was tested using the Dual Gamble Task and an opportunity to donate task winnings.

Results: Analyses revealed within-person associations between ESM compassion and eudaimonia. ESM compassion also predicted eudaimonia at the next ESM time point. While not impervious to situational factors, considerable consistency was observed in ESM compassion in comparison with eudaimonia. Further, ESM compassion along with eudaimonia predicted donating behavior. Standard assessments did not.

Conclusion: Consistent with virtue theory, some individuals’ reports displayed a probabilistic tendency toward compassion, and ESM compassion predicted ESM eudaimonia and prosocial behavior toward those in need.
The law presents itself as a body of meaning, open to discovery, interpretation, application, criticism, development and change. But what sort of meaning does the law possess? Legal theory provides three sorts of answers. The first portrays the law as a mode of communication through which law-makers convey certain standards or norms to the larger community. The law's meaning is that imparted by its authors. On this view, law is a vehicle, conveying a message from a speaker to an intended audience. The second theory portrays the law as a mode of interpretation, whereby judges, officials, and ordinary citizens make decisions about how the law applies in various practical contexts. The law's meaning is that furnished by its interpreters. According to this theory, law is a receptacle into which decision-makers pour meaning. The third viewpoint argues that these theories, while not altogether wrong, are incomplete because they downplay or ignore the autonomous meaning that the law itself possesses. This theory suggests that the law is basically a mode of participation, whereby legislators, judges, officials, and ordinary people attune themselves to an autonomous field of legal meaning. The law's meaning is grounded in a body of social practice which is independent of both the law's authors and its interpreters and which is infused with basic values and principles that transcend the practice. On this view, law is the emblem of meaning that lies beyond it.

Elements of all three theories are present in H.L.A. Hart's influential work, The Concept of Law, which attempts to fuse them into a single, all-encompassing theory. Nevertheless, as we will argue here, the attempt is not successful. Any true reconciliation of the communication and interpretation theories can only take place within the framework of a fully developed participation theory. In the early stages of his work, Hart lays the foundation for such a theory. However, his failure to elaborate it in a thoroughgoing way renders the work incomplete and ultimately unbalanced. As we will see, there is something to be learned from this failure.
A key source of support for the view that challenging people’s beliefs about free will may undermine moral behavior is two classic studies by Vohs and Schooler (2008). These authors reported that exposure to certain prompts suggesting that free will is an illusion increased cheating behavior. In the present paper, we report several attempts to replicate this influential and widely cited work. Over a series of five studies (sample sizes of N = 162, N = 283, N = 268, N = 804, N = 982) (four preregistered) we tested the relationship between (1) anti-free-will prompts and free will beliefs and (2) free will beliefs and immoral behavior. Our primary task was to closely replicate the findings from Vohs and Schooler (2008) using the same or highly similar manipulations and measurements as the ones used in their original studies. Our efforts were largely unsuccessful. We suggest that manipulating free will beliefs in a robust way is more difficult than has been implied by prior work, and that the proposed link with immoral behavior may not be as consistent as previous work suggests.
Edited proceedings of an interdisciplinary symposium on consciousness held at the University of Cambridge in January 1978. Includes a foreword by Freeman Dyson. Chapter authors: G. Vesey, R.L. Gregory, H.C. Longuet-Higgins, N.K. Humphrey, H.B. Barlow, D.M. MacKay, B.D. Josephson, M. Roth, V.S. Ramachandran, S. Padfield, and (editorial summary only) E. Noakes. A scanned pdf is available from this web site (philpapers.org), while alternative versions more suitable for copying text are available from https://www.repository.cam.ac.uk/handle/1810/245189.

Page numbering convention for the pdf version viewed in a pdf viewer is as follows: 'go to page n' accesses the pair of scanned pages 2n and 2n+1. Applicable licence: CC Attribution-NonCommercial-ShareAlike 2.0.
The idea for this special issue came from a paper published as an updated and abridged version of an older memorial lecture given by Brian D. Josephson and Michael Conrad at the Gujarat Vidyapith University in Ahmedabad, India on March 2, 1984. The title of this paper was “Uniting Eastern Philosophy and Western Science” (1992). We thought that this topic deserves to be revisited after 25 years to demonstrate to the scientific community what new insights and achievements were attained in this fairly broad field during this period.
In this paper I embrace what Brian Keeley calls in “Of Conspiracy Theories” the absurdist horn of the dilemma for philosophers who criticize such theories. I thus defend the view that there is indeed something deeply epistemically wrong with conspiracy theorizing. My complaint is that conspiracy theories apply intentional explanations to situations that give rise to special problems concerning the elimination of competing intentional explanations.
We live in a world of crowds and corporations, artworks and artifacts, legislatures and languages, money and markets. These are all social objects — they are made, at least in part, by people and by communities. But what exactly are these things? How are they made, and what is the role of people in making them? In The Ant Trap, Brian Epstein rewrites our understanding of the nature of the social world and the foundations of the social sciences. Epstein explains and challenges the three prevailing traditions about how the social world is made. One tradition takes the social world to be built out of people, much as traffic is built out of cars. A second tradition also takes people to be the building blocks of the social world, but focuses on thoughts and attitudes we have toward one another. And a third tradition takes the social world to be a collective projection onto the physical world. Epstein shows that these share critical flaws. Most fundamentally, all three traditions overestimate the role of people in building the social world: they are overly anthropocentric. Epstein starts from scratch, bringing the resources of contemporary metaphysics to bear. In the place of traditional theories, he introduces a model based on a new distinction between the grounds and the anchors of social facts. Epstein illustrates the model with a study of the nature of law, and shows how to interpret the prevailing traditions about the social world. Then he turns to social groups, and to what it means for a group to take an action or have an intention. Contrary to the overwhelming consensus, these often depend on more than the actions and intentions of group members.
In this article we examine obsessive compulsive disorder. We examine and reject two existing models of this disorder: the Dysfunctional Belief Model and the Inference-Based Approach. Instead, we propose that the main distinctive characteristic of OCD is a hyperactive sub-personal signal of being in error, experienced by the individual as uncertainty about his or her intentional actions. This signalling interacts with the anxiety sensitivities of the individual to trigger conscious checking processes, including speculations about possible harms. We examine the implications of this model for the individual's capacity to control his or her thoughts.
This paper presents a systematic approach for analyzing and explaining the nature of social groups. I argue against prominent views that attempt to unify all social groups or to divide them into simple typologies. Instead I argue that social groups are enormously diverse, but show how we can investigate their natures nonetheless. I analyze social groups from a bottom-up perspective, constructing profiles of the metaphysical features of groups of specific kinds. We can characterize any given kind of social group with four complementary profiles: its “construction” profile, its “extra essentials” profile, its “anchor” profile, and its “accident” profile. Together these provide a framework for understanding the nature of groups, help classify and categorize groups, and shed light on group agency.
Intuitively, Gettier cases are instances of justified true beliefs that are not cases of knowledge. Should we therefore conclude that knowledge is not justified true belief? Only if we have reason to trust intuition here. But intuitions are unreliable in a wide range of cases. And it can be argued that the Gettier intuitions have a greater resemblance to unreliable intuitions than to reliable intuitions. What's distinctive about the faulty intuitions, I argue, is that respecting them would mean abandoning a simple, systematic and largely successful theory in favour of a complicated, disjunctive and idiosyncratic theory. So maybe respecting the Gettier intuitions was the wrong reaction; we should instead have been explaining why we are all so easily misled by these kinds of cases.
Quine insisted that the satisfaction of an open modalised formula by an object depends on how that object is described. Kripke's ‘objectual’ interpretation of quantified modal logic, whereby variables are rigid, is commonly thought to avoid these Quinean worries. Yet there remain residual Quinean worries for epistemic modality. Theorists have recently been toying with assignment-shifting treatments of epistemic contexts. On such views an epistemic operator ends up binding all the variables in its scope. One might worry that this yields the undesirable result that any attempt to ‘quantify in’ to an epistemic environment is blocked. If quantifying into the relevant constructions is vacuous, then such views would seem hopelessly misguided and empirically inadequate. But a famous alternative to Kripke's semantics, namely Lewis' counterpart semantics, also faces this worry since it also treats the boxes and diamonds as assignment-shifting devices. As I'll demonstrate, the mere fact that a variable is bound is no obstacle to binding it. This provides a helpful lesson for those modelling de re epistemic contexts with assignment sensitivity, and perhaps leads the way toward the proper treatment of binding in both metaphysical and epistemic contexts: Kripke for metaphysical modality, Lewis for epistemic modality.
I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple—there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort of motivation, what Michael Smith calls “moral fetishism”.
I consider the problem of how to derive what an agent believes from their credence function and utility function. I argue the best solution to this problem is pragmatic, i.e. it is sensitive to the kinds of choices actually facing the agent. I further argue that this explains why our notion of justified belief appears to be pragmatic, as is argued e.g. by Fantl and McGrath. The notion of epistemic justification is not really a pragmatic notion, but it is being applied to a pragmatically defined concept, i.e. belief.
In his Principles of Philosophy, Descartes says, “Finally, it is so manifest that we possess a free will, capable of giving or withholding its assent, that this truth must be reckoned among the first and most common notions which are born with us.”
Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this is impossible, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
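For readers who want the statistical criteria concrete: the two families of criteria this abstract discusses -- error-rate parity across groups and calibration -- can be computed directly from predictions and outcomes. Below is a minimal sketch in Python; the function names and data are illustrative, not drawn from the paper.

```python
from collections import defaultdict

def false_positive_rate(preds, outcomes):
    """Among actual negatives (outcome 0), the share predicted positive."""
    flagged = [p for p, y in zip(preds, outcomes) if y == 0]
    return sum(flagged) / len(flagged) if flagged else 0.0

def calibration_table(scores, outcomes):
    """For each predicted score, the observed rate of positive outcomes.

    A predictor is calibrated when each score s maps to an observed
    positive rate of roughly s itself.
    """
    buckets = defaultdict(list)
    for s, y in zip(scores, outcomes):
        buckets[s].append(y)
    return {s: sum(ys) / len(ys) for s, ys in buckets.items()}

# Checking false-positive-rate parity across two hypothetical groups:
group_a_fpr = false_positive_rate([1, 0, 0, 1], [0, 0, 1, 1])
group_b_fpr = false_positive_rate([0, 1, 0, 1], [0, 0, 1, 1])
```

In this toy data the two groups happen to have equal false-positive rates; the impossibility results the abstract mentions show that parity criteria of this kind and calibration cannot in general all hold at once when base rates differ between groups.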
I advocate Time-Slice Rationality, the thesis that the relationship between two time-slices of the same person is not importantly different, for purposes of rational evaluation, from the relationship between time-slices of distinct persons. The locus of rationality, so to speak, is the time-slice rather than the temporally extended agent. This claim is motivated by consideration of puzzle cases for personal identity over time and by a very moderate form of internalism about rationality. Time-Slice Rationality conflicts with two proposed principles of rationality, Conditionalization and Reflection. Conditionalization is a diachronic norm saying how your current degrees of belief should fit with your old ones, while Reflection is a norm enjoining you to defer to the degrees of belief that you expect to have in the future. But they are independently problematic and should be replaced by improved, time-slice-centric principles. Conditionalization should be replaced by a synchronic norm saying what degrees of belief you ought to have given your current evidence and Reflection should be replaced by a norm which instructs you to defer to the degrees of belief of agents you take to be experts. These replacement principles do all the work that the old principles were supposed to do while avoiding their problems. In this way, Time-Slice Rationality puts the theory of rationality on firmer foundations and yields better norms than alternative, non-time-slice-centric approaches.
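For readers unfamiliar with the diachronic norm at issue: Conditionalization says that on learning evidence E, your new credence in any proposition should equal your old credence conditional on E. A minimal sketch over a finite set of worlds (the model and names are illustrative, not from the paper):

```python
def conditionalize(credences, evidence):
    """Bayesian conditionalization over a finite set of worlds.

    credences: dict mapping each world to its prior probability (summing to 1).
    evidence: the set of worlds compatible with what was learned.
    """
    prior_of_evidence = sum(p for w, p in credences.items() if w in evidence)
    if prior_of_evidence == 0:
        raise ValueError("cannot conditionalize on evidence with prior zero")
    return {w: (p / prior_of_evidence if w in evidence else 0.0)
            for w, p in credences.items()}

# Learning "not sun" zeroes that world and renormalizes over the rest.
posterior = conditionalize({"rain": 0.2, "clouds": 0.3, "sun": 0.5},
                           {"rain", "clouds"})
```

The synchronic replacement the abstract proposes would instead fix degrees of belief directly by one's current evidence, without essential reference to one's earlier credal state.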
According to moral intuitionism, at least some moral seeming states are justification-conferring. The primary defense of this view currently comes from advocates of the standard account, who take the justification-conferring power of a moral seeming to be determined by its phenomenological credentials alone. However, the standard account is vulnerable to a problem. In brief, the standard account implies that moral knowledge is seriously undermined by those commonplace moral disagreements in which both agents have equally good phenomenological credentials supporting their disputed moral beliefs. However, it is implausible to think that commonplace disagreement seriously undermines moral knowledge, and thus it is implausible to think that the standard account of moral intuitionism is true.
In this paper, we show that presentism -- the view that the way things are is the way things presently are -- is not undermined by the objection from being-supervenience. This objection claims, roughly, that presentism has trouble accounting for the truth-value of past-tense claims. Our demonstration amounts to the articulation and defence of a novel version of presentism. This is brute past presentism, according to which the truth-value of past-tense claims is determined by the past understood as a fundamental aspect of reality different from things and how things are.
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
Certain puzzling cases have been discussed in the literature recently which appear to support the thought that knowledge can be obtained by way of deduction from a falsehood; moreover, these cases put pressure, prima facie, on the thesis of counter closure for knowledge. We argue that the cases do not involve knowledge from falsehood; despite appearances, the false beliefs in the cases in question are causally, and therefore epistemologically, incidental, and knowledge is achieved despite falsehood. We also show that the principle of counter closure, and the concomitant denial of knowledge from falsehood, is well motivated by considerations in epistemological theory--in particular, by the view that knowledge is first in the epistemological order of things.
Recently, four different papers have suggested that the supervaluational solution to the Problem of the Many is flawed. Stephen Schiffer (1998, 2000a, 2000b) has argued that the theory cannot account for reports of speech involving vague singular terms. Vann McGee and Brian McLaughlin (2000) say that the theory cannot yet account for vague singular beliefs. Neil McKinnon (2002) has argued that we cannot provide a plausible theory of when precisifications are acceptable, which the supervaluational theory needs. And Roy Sorensen (2000) argues that supervaluationism is inconsistent with a directly referential theory of names. McGee and McLaughlin see the problem they raise as a cause for further research, but the other authors all take the problems they raise to provide sufficient reasons to jettison supervaluationism. I will argue that none of these problems provide such a reason, though the arguments are valuable critiques. In many cases, we must make some adjustments to the supervaluational theory to meet the posed challenges. The goal of this paper is to make those adjustments, and meet the challenges.
Intelligent activity requires the use of various intellectual skills. While these skills are connected to knowledge, they should not be identified with knowledge. There are realistic examples where the skills in question come apart from knowledge. That is, there are realistic cases of knowledge without skill, and of skill without knowledge. Whether a person is intelligent depends, in part, on whether they have these skills. Whether a particular action is intelligent depends, in part, on whether it was produced by an exercise of skill. These claims promote a picture of intelligence that is in tension with a strongly intellectualist picture, though they are not in tension with a number of prominent claims recently made by intellectualists.
One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of an individual is a belief about where or when that temporal part is located). We analyze the Sleeping Beauty problem, the duplication version of the Sleeping Beauty problem, and various related problems.
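The inaccuracy measure defined here is the quadratic (Brier) score, and the minimization claim can be checked numerically: expected inaccuracy is minimized by setting one's credence equal to the probability of the proposition. A minimal illustrative sketch (not the paper's own code):

```python
def expected_inaccuracy(credence, prob_true):
    """Expected squared distance between a credence and the truth value (1 or 0)."""
    return (prob_true * (1 - credence) ** 2
            + (1 - prob_true) * (0 - credence) ** 2)

# Scan candidate credences on a grid; the minimum sits at credence = prob_true.
best = min((c / 100 for c in range(101)),
           key=lambda c: expected_inaccuracy(c, prob_true=0.3))
```

The abstract applies the same idea to self-locating credences: in the Sleeping Beauty problem, one asks which credences over the centred possibilities minimize this expected score.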
The thesis of methodological individualism in social science is commonly divided into two different claims—explanatory individualism and ontological individualism. Ontological individualism is the thesis that facts about individuals exhaustively determine social facts. Initially taken to be a claim about the identity of groups with sets of individuals or their properties, ontological individualism has more recently been understood as a global supervenience claim. While explanatory individualism has remained controversial, ontological individualism thus understood is almost universally accepted. In this paper I argue that ontological individualism is false. Only if the thesis is weakened to the point that it is equivalent to physicalism can it be true, but then it fails to be a thesis about the determination of social facts by facts about individual persons. Even when individualistic facts are expanded to include people’s local environments and practices, I shall argue, those still underdetermine the social facts that obtain. If true, this has implications for explanation as well as ontology. I first consider arguments against the local supervenience of social facts on facts about individuals, correcting some flaws in existing arguments and affirming that local supervenience fails for a broad set of social properties. I subsequently apply a similar approach to defeat a particularly weak form of global supervenience, and consider potential responses. Finally, I explore why it is that people have taken ontological individualism to be true.
Rational agents have consistent beliefs. Bayesianism is a theory of consistency for partial belief states. Rational agents also respond appropriately to experience. Dogmatism is a theory of how to respond appropriately to experience. Hence, Dogmatism and Bayesianism are theories of two very different aspects of rationality. It's surprising, then, that in recent years it has become common to claim that Dogmatism and Bayesianism are jointly inconsistent: how can two independently consistent theories with distinct subject matter be jointly inconsistent? In this essay I argue that Bayesianism and Dogmatism are inconsistent only with the addition of a specific hypothesis about how the appropriate responses to perceptual experience are to be incorporated into the formal models of the Bayesian. That hypothesis isn't essential either to Bayesianism or to Dogmatism, and so Bayesianism and Dogmatism are jointly consistent. That leaves the matter of how experiences and credences are related, a...
In previous work I’ve defended an interest-relative theory of belief. This paper continues the defence. It has four aims. 1. To offer a new kind of reason for being unsatisfied with the simple Lockean reduction of belief to credence. 2. To defend the legitimacy of appealing to credences in a theory of belief. 3. To illustrate the importance of theoretical, as well as practical, interests in an interest-relative account of belief. 4. To revise my account to cover propositions that are practically and theoretically irrelevant to the agent.
Conciliatory theories of disagreement face a revenge problem; they cannot be coherently believed by one who thinks they have peers who are not conciliationists. I argue that this is a deep problem for conciliationism.
‘Love hurts’, as the saying goes, and a certain amount of pain and difficulty in intimate relationships is unavoidable. Sometimes it may even be beneficial, since adversity can lead to personal growth, self-discovery, and a range of other components of a life well-lived. But other times, love can be downright dangerous. It may bind a spouse to her domestic abuser, draw an unscrupulous adult toward sexual involvement with a child, put someone under the insidious spell of a cult leader, and even inspire jealousy-fueled homicide. How might these perilous devotions be diminished? The ancients thought that treatments such as phlebotomy (bloodletting) or exercise could ‘cure’ an individual of love. But modern neuroscience and emerging developments in psychopharmacology open up a range of possible interventions that might actually work. These developments raise profound moral questions about the potential uses, and misuses, of such anti-love biotechnology. In this article, we describe a number of prospective love-diminishing interventions, and offer a preliminary ethical framework for dealing with them responsibly should they arise.
In his insightful and challenging paper, Jonathan Schaffer argues against a distinction I make in The Ant Trap (Epstein 2015) and related articles. I argue that in addition to the widely discussed “grounding” relation, there is a different kind of metaphysical determination I name “anchoring.” Grounding and anchoring are distinct, and both need to be a part of full explanations of how facts are metaphysically determined. Schaffer argues instead that anchoring is a species of grounding. The crux of his argument comes in the last sections of his paper, in his discussion of “exportation,” the relations strategy, and the definitions strategy. I am inclined to agree that Schaffer’s interesting strategies offer the best choices for the philosopher who wants to insist that anchoring is a species of grounding. But both, I will argue, are fatally flawed. I do not take the separation of anchors from grounds lightly, but find the evidence in its favor overwhelming. And once the distinction is made, I find anchoring to be a powerful practical tool in metaphysics. Philosophy and Phenomenological Research, Volume 99, Issue 3, Page 768-781, November 2019.
Almost entirely ignored in the linguistic theorising on names and descriptions is a hybrid form of expression which, like definite descriptions, begins with 'the' but which, like proper names, is capitalised and seems to lack descriptive content. These are expressions such as the following: 'the Holy Roman Empire', 'the Mississippi River', or 'the Space Needle'. Such capitalised descriptions are ubiquitous in natural language, but to which linguistic categories do they belong? Are they simply proper names? Or are they definite descriptions with unique orthography? Or are they something else entirely? This paper assesses two obvious assimilation strategies: (i) assimilation to proper names and (ii) assimilation to definite descriptions. It is argued that both of these strategies face major difficulties. The primary goal is to lay the groundwork for a linguistic analysis of capitalised descriptions. Yet, the hope is that clearing the ground on capitalised descriptions may reveal useful insights for the on-going research into the semantics and syntax of their lower-case or 'the'-less relatives.
In recent years, theorists have debated how we introduce new social objects and kinds into the world. Searle, for instance, proposes that they are introduced by collective acceptance of a constitutive rule; Millikan and Elder that they are the products of reproduction processes; Thomasson that they result from creator intentions and subsequent intentional reproduction; and so on. In this chapter, I argue against the idea that there is a single generic method or set of requirements for doing so. Instead, there is a variety of what I call “anchoring schemas,” or methods by which new social kinds are generated. Not only are social kinds a diverse lot, but the metaphysical explanation for their being the kinds they are is diverse as well. I explain the idea of anchoring and present examples of social kinds that are similar to one another but that are anchored in different ways. I also respond to Millikan’s argument that there is only one kind of “glue” that is “sticky enough” for holding together kinds. I argue that no anchoring schema will work in all environments. It is a contingent matter which schemas are successful for anchoring new social kinds, and an anchoring schema need only be “sticky enough” for practical purposes in a given environment.
I set out and defend a view on indicative conditionals that I call “indexical relativism”. The core of the view is that which proposition is expressed by an utterance of a conditional is a function of the speaker’s context and the assessor’s context. This implies a kind of relativism, namely that a single utterance may be correctly assessed as true by one assessor and false by another.
Many writers have held that in his later work, David Lewis adopted a theory of predicate meaning such that the meaning of a predicate is the most natural property that is (mostly) consistent with the way the predicate is used. That orthodox interpretation is shared by both supporters and critics of Lewis's theory of meaning, but it has recently been strongly criticised by Wolfgang Schwarz. In this paper, I accept many of Schwarz's criticisms of the orthodox interpretation, and add some more. But I also argue that the orthodox interpretation has a grain of truth in it, and seeing that helps us appreciate the strength of Lewis's late theory of meaning.
Kaplan (1989) famously claimed that monsters--operators that shift the context--do not exist in English and "could not be added to it". Several recent theorists have pointed out a range of data that seem to refute Kaplan's claim, but others (most explicitly Stalnaker 2014) have offered a principled argument that monsters are impossible. This paper interprets and resolves the dispute. Contra appearances, this is no dry, technical matter: it cuts to the heart of a deep disagreement about the fundamental structure of a semantic theory. We argue that: (i) the interesting notion of a monster is not an operator that shifts some formal parameter, but rather an operator that shifts parameters that play a certain theoretical role; (ii) one cannot determine whether a given semantic theory allows monsters simply by looking at the formal semantics; (iii) theories which forbid shifting the formal "context" parameter are perfectly compatible with the existence of monsters (in the interesting sense). We explain and defend these claims by contrasting two kinds of semantic theory--Kaplan's (1989) and Lewis's (1980).
Humans typically display hindsight bias. They are more confident that the evidence available beforehand made some outcome probable when they know the outcome occurred than when they don't. There is broad consensus that hindsight bias is irrational, but this consensus is wrong. Hindsight bias is generally rationally permissible and sometimes rationally required. The fact that a given outcome occurred provides both evidence about what the total evidence available ex ante was, and also evidence about what that evidence supports. Even if you in fact evaluate the ex ante evidence correctly, you should not be certain of this. Then, learning the outcome provides evidence that if you erred, you are more likely to have erred low rather than high in estimating the degree to which the ex ante evidence supported the hypothesis that that outcome would occur.
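The abstract's central move, that learning an outcome occurred is itself evidence about how strongly the ex ante evidence supported it, can be illustrated with a toy Bayesian model. The specific numbers and variable names below are hypothetical assumptions for illustration, not taken from the paper:

```python
# Toy Bayesian model of rational "hindsight bias": learning that an outcome
# occurred shifts one's estimate of how strongly the ex ante evidence
# supported that outcome.

# Suppose you are unsure whether the ex ante evidence made the outcome
# 40% or 80% likely, and you initially treat both readings as equally likely.
priors = {0.4: 0.5, 0.8: 0.5}

# Observing that the outcome did occur confirms each reading in proportion
# to the probability it assigned to the outcome (Bayes' theorem).
unnormalized = {d: p * d for d, p in priors.items()}
total = sum(unnormalized.values())
posteriors = {d: w / total for d, w in unnormalized.items()}

estimate_before = sum(d * p for d, p in priors.items())
estimate_after = sum(d * p for d, p in posteriors.items())
print(round(estimate_before, 4), round(estimate_after, 4))  # 0.6 0.6667
```

Before observing the outcome, the expected degree of ex ante support is 0.6; afterwards it rises to roughly 0.67. On this toy model, greater confidence in hindsight that the evidence favoured the outcome is just correct updating, which is the effect the abstract defends as rational.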
This paper explores an emerging sub-field of both empirical bioethics and experimental philosophy, which has been called “experimental philosophical bioethics” (bioxphi). As an empirical discipline, bioxphi adopts the methods of experimental moral psychology and cognitive science; it does so to make sense of the eliciting factors and underlying cognitive processes that shape people’s moral judgments, particularly about real-world matters of bioethical concern. Yet, as a normative discipline situated within the broader field of bioethics, it also aims to contribute to substantive ethical questions about what should be done in a given context. What are some of the ways in which this aim has been pursued? In this paper, we employ a case study approach to examine and critically evaluate four strategies from the recent literature by which scholars in bioxphi have leveraged empirical data in the service of normative arguments.
Keith DeRose has argued that the two main problems facing subject-sensitive invariantism (SSI) come from the appropriateness of certain third-person denials of knowledge and the inappropriateness of ‘now you know it, now you don't’ claims. I argue that proponents of SSI can adequately address both problems. First, I argue that the debate between contextualism and SSI has failed to account for an important pragmatic feature of third-person denials of knowledge. Appealing to these pragmatic features, I show that straightforward third-person denials are inappropriate in the relevant cases. And while there are certain denials that are appropriate, they pose no problems for SSI. Next, I offer an explanation, compatible with SSI, of the oddity of ‘now you know it, now you don't’ claims. To conclude, I discuss the intuitiveness of purism, whose rejection is the source of many problems for SSI. I propose to explain away the intuitiveness of purism as a side-effect of the narrow focus of previous epistemological inquiries.
Accuracy‐first epistemology is an approach to formal epistemology which takes accuracy to be a measure of epistemic utility and attempts to vindicate norms of epistemic rationality by showing how conformity with them is beneficial. If accuracy‐first epistemology can actually vindicate any epistemic norms, it must adopt a plausible account of epistemic value. Any such account must avoid the epistemic version of Derek Parfit's “repugnant conclusion.” I argue that the only plausible way of doing so is to say that accurate credences in certain propositions have no, or almost no, epistemic value. I prove that this is incompatible with standard accuracy‐first arguments for probabilism, and argue that there is no way for accuracy‐first epistemology to show that all credences of all agents should be coherent.
Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. While such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to “medicalization” of human love and heartache—for some, a source of serious concern. In this essay, we argue that the “medicalization of love” need not necessarily be problematic, on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help direct the course of love’s “medicalization”—should it happen to occur—more toward the “good” side than the “bad”.