With the advent of the digital age and new mediums of communication, it is becoming increasingly important for those interested in the interpretation of religious text to look beyond traditional ideas of text and textuality to find the sacred in unlikely places. Paul Ricoeur’s phenomenological reorientation of classical hermeneutics from romanticized notions of authorial intent and psychological divinations to a serious engagement with the “science of the text” is a hermeneutical tool that opens up an important dialogue between the interpreter, the world of the text, and the contemporary world in front of the text. This article examines three significant insights that Paul Ricoeur contributes to our expanding understanding of text. First under scrutiny will be Ricoeur’s de-regionalization of classic hermeneutics culminating in his understanding of Dasein (Being) as “being-in-the-world,” allowing meaning to transcend the physical boundaries of the text. Next, Ricoeur’s three-fold understanding of traditionality/Traditions/tradition as the “chain of interpretations” through which religious language transcends the temporal boundary of historicity will be explored. The final section will focus on Ricoeur’s understanding of the productive imagination and metaphoric truth as the under-appreciated yet key insight around which Ricoeur’s philosophical investigation into the metaphoric transfer from text to life revolves.
This article presents Paul Ricœur’s hermeneutic of the productive imagination as a methodological tool for understanding the innovative social function of texts that, in exceeding their semantic meaning, iconically augment reality. Through the reasoning of Rastafari elder Mortimo Planno’s unpublished text, Rastafarian: The Earth’s Most Strangest Man, and the religious and biblical signification from the music of his most famous postulate, Bob Marley, this article applies Paul Ricœur’s schema of the religious productive imagination to conceptualize the metaphoric transfer from text to life of verbal and iconic images of Rastafari’s hermeneutic of word, sound and power. This transformation is accomplished through what Ricœur terms the phenomenology of the iconic augmentation of reality. Understanding this semantic innovation is critical to understanding the capacity of the religious imagination to transform reality as a proclamation of hope in the midst of despair.
Either there is no such thing as moral luck, or everyone is profoundly mistaken about its nature and a radical rethinking of moral luck is needed. The argument to be developed is not complicated, and relies almost entirely on premises that should seem obviously correct to anyone who follows the moral luck literature. The conclusion, however, is surprising and disturbing. The classic cases of moral luck always involve an agent who lacks control over an event whose occurrence affects her praiseworthiness or blameworthiness. Close examination of what it is to have control or to lack it reveals the logical space for counterexamples that do not fit the pattern constitutive of moral luck, and so unravel the whole.
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What then could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people’s understanding of norms.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
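For reference, the kind of Indifference Principle at issue can be given the following generic schematic form; this is a standard textbook rendering offered only for orientation, not a quotation of White’s own formulation:

\[
\text{If } p_1, \dots, p_n \text{ are mutually exclusive, jointly exhaustive, and evidence } E \text{ favors none of them over the others, then } Cr(p_i \mid E) = \tfrac{1}{n} \text{ for each } i.
\]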
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
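For orientation, the headline disagreement between the first two accounts concerns the credence Beauty should assign to the coin’s landing heads upon waking; these values are standard in the literature, and the third account defended in the paper is not captured by either:

\[
Cr_{\mathrm{Elga}}(\mathrm{Heads} \mid \mathrm{awake}) = \tfrac{1}{3}, \qquad Cr_{\mathrm{Lewis}}(\mathrm{Heads} \mid \mathrm{awake}) = \tfrac{1}{2}.
\]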
Nietzsche offers a positive epistemology, and those who interpret him as a skeptic or a mere pragmatist are mistaken. Instead he supports what he calls perspectivism. This is a familiar take on Nietzsche, as perspectivism has been analyzed by many previous interpreters. The present paper presents a sketch of the textually best supported and logically most consistent treatment of perspectivism as a first-order epistemic theory. What’s original in the present paper is an argument that Nietzsche also offers a second-order methodological perspectivism aimed at enhancing understanding, an epistemic state distinct from knowledge. Just as Descartes considers and rejects radical skepticism while at the same time adopting methodological skepticism, one could consistently reject perspectivism as a theory of knowledge while accepting it as contributing to our understanding. It is argued that Nietzsche’s perspectivism is in fact two-tiered: knowledge is perspectival because truth itself is, and in addition there is a methodological perspectivism in which distinct ways of knowing are utilized to produce understanding. A review of the manner in which understanding is conceptualized in contemporary epistemology and philosophy of science serves to illuminate how Nietzsche was tackling these ideas.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
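As a point of reference, deference principles are often given a generic schema along the following lines, where the deferred-to function might be an expert’s credences, the chances, or one’s own future credences; this is a standard schema, not the generalized principle developed in the paper:

\[
Cr\big(A \mid Cr_{\mathrm{defer}}(A) = x\big) = x.
\]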
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
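For reference, the Principal Principle is standardly rendered along the following lines, where $ch_t$ is the chance function at time $t$ and $E$ is any evidence admissible at $t$; this is the familiar textbook statement, not the formulation the paper ultimately favors:

\[
Cr\big(A \mid ch_t(A) = x \wedge E\big) = x.
\]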
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
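The second norm mentioned, that we should act as expected utility maximizers, is standardly expressed as the requirement to choose an act $a$ that maximizes

\[
EU(a) = \sum_{s \in S} P(s)\, U(a, s),
\]

where $S$ is the set of states, $P$ the agent’s probabilistic degrees of belief, and $U$ her utility function; this textbook statement is included only for orientation.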
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
In Modal Logic as Metaphysics, Timothy Williamson claims that the possibilism-actualism (P-A) distinction is badly muddled. In its place, he introduces a necessitism-contingentism (N-C) distinction that he claims is free of the confusions that purportedly plague the P-A distinction. In this paper I argue first that the P-A distinction, properly understood, is historically well-grounded and entirely coherent. I then look at the two arguments Williamson levels at the P-A distinction and find them wanting and show, moreover, that, when the N-C distinction is broadened (as per Williamson himself) so as to enable necessitists to fend off contingentist objections, the P-A distinction can be faithfully reconstructed in terms of the N-C distinction. However, Williamson’s critique does point to a genuine shortcoming in the common formulation of the P-A distinction. I propose a new definition of the distinction in terms of essential properties that avoids this shortcoming.
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be retained.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints have been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
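One common schematic way of stating an ur-prior rule of the kind surveyed here is the following, where $Cr_{\mathrm{ur}}$ is the subject’s ur-prior and $E_t$ her total evidence at $t$; this is a generic rendering rather than any one author’s official formulation:

\[
Cr_t(\cdot) = Cr_{\mathrm{ur}}(\cdot \mid E_t).
\]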
Should economics study the psychological basis of agents’ choice behaviour? I show how this question is multifaceted and profoundly ambiguous. There is no sharp distinction between ‘mentalist’ answ...
I argue that the best interpretation of the general theory of relativity has need of a causal entity, and causal structure that is not reducible to light cone structure. I suggest that this causal interpretation of GTR helps defeat a key premise in one of the most popular arguments for causal reductionism, viz., the argument from physics.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot, have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
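For reference, the rule in its textbook form, together with Jeffrey Conditionalization (the purported generalization mentioned at the end of the abstract, where the $E_i$ form a partition):

\[
Cr_{\mathrm{new}}(H) = Cr_{\mathrm{old}}(H \mid E) = \frac{Cr_{\mathrm{old}}(H \wedge E)}{Cr_{\mathrm{old}}(E)}, \qquad Cr_{\mathrm{new}}(H) = \sum_i Cr_{\mathrm{old}}(H \mid E_i)\, Cr_{\mathrm{new}}(E_i).
\]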
Selection against embryos that are predisposed to develop disabilities is one of the less controversial uses of embryo selection technologies (ESTs). Many bio-conservatives argue that while the use of ESTs to select for non-disease-related traits, such as height and eye-colour, should be banned, their use to avoid disease and disability should be permitted. Nevertheless, there remains significant opposition, particularly from the disability rights movement, to the use of ESTs to select against disability. In this article we examine whether and why the state could be justified in restricting the use of ESTs to select against disability. We first outline the challenge posed by proponents of ‘liberal eugenics’. Liberal eugenicists challenge those who defend restrictions on the use of ESTs to show why the use of these technologies would create a harm of the type and magnitude required to justify coercive measures. We argue that this challenge could be met by adverting to the risk of harms to future persons that would result from a loss of certain forms of cognitive diversity. We suggest that this risk establishes a pro tanto case for restricting selection against some disabilities, including dyslexia and Asperger's syndrome.
This paper explores the level of obligation called for by Milton Friedman’s classic essay “The Social Responsibility of Business is to Increase Profits.” Several scholars have argued that Friedman asserts that businesses have no or minimal social duties beyond compliance with the law. This paper argues that this reading of Friedman does not give adequate weight to some claims that he makes and to their logical extensions. Throughout his article, Friedman emphasizes the values of freedom, respect for law, and duty. The principle that a business professional should not infringe upon the liberty of other members of society can be used by business ethicists to ground a vigorous line of ethical analysis. Any practice that has a negative externality requiring another party to take a significant loss without consent or compensation can be seen as unethical. With Friedman’s framework, we can see how ethics can be seen as arising from the nature of business practice itself. Business involves an ethics in which we consider, work with, and respect strangers who are outside of traditional in-groups.
In this essay, I argue that a proper understanding of the historicity of love requires an appreciation of the irreplaceability of the beloved. I do this through a consideration of ideas that were first put forward by Robert Kraut in “Love De Re” (1986). I also evaluate Amelie Rorty's criticisms of Kraut's thesis in “The Historicity of Psychological Attitudes: Love is Not Love Which Alters Not When It Alteration Finds” (1986). I argue that Rorty fundamentally misunderstands Kraut's Kripkean analogy, and I go on to criticize her claim that concern over the proper object of love should be best understood as a concern over constancy. This leads me to an elaboration of the distinct senses in which love can be seen as historical. I end with a further defense of the irreplaceability of the beloved and a discussion of the relevance of recent debates over the importance of personal identity for an adequate account of the historical dimension of love.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
The iterative conception of set is typically considered to provide the intuitive underpinnings for ZFCU (ZFC + Urelements). It is an easy theorem of ZFCU that all sets have a definite cardinality. But the iterative conception seems to be entirely consistent with the existence of “wide” sets, sets (of, in particular, urelements) that are larger than any cardinal. This paper diagnoses the source of the apparent disconnect here and proposes modifications of the Replacement and Powerset axioms so as to allow for the existence of wide sets. Drawing upon Cantor’s notion of the absolute infinite, the paper argues that the modifications are warranted and preserve a robust iterative conception of set. The resulting theory is proved consistent relative to ZFC + “there exists an inaccessible cardinal number.”
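For reference, the two axioms the paper proposes to modify, in their standard unmodified ZFC forms (Replacement as a schema, one instance per formula $\varphi$); the paper’s modified versions are not reproduced here:

\[
\text{Replacement: } \forall x \in a\, \exists! y\, \varphi(x,y) \rightarrow \exists b\, \forall x \in a\, \exists y \in b\, \varphi(x,y), \qquad \text{Powerset: } \forall a\, \exists b\, \forall x\, (x \subseteq a \rightarrow x \in b).
\]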
Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing; it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing; rather, blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
Nearly all defences of the agent-causal theory of free will portray the theory as a distinctively libertarian one — a theory that only libertarians have reason to accept. According to what I call ‘the standard argument for the agent-causal theory of free will’, the reason to embrace agent-causal libertarianism is that libertarians can solve the problem of enhanced control only if they furnish agents with the agent-causal power. In this way it is assumed that there is only reason to accept the agent-causal theory if there is reason to accept libertarianism. I aim to refute this claim. I will argue that the reasons we have for endorsing the agent-causal theory of free will are nonpartisan. The real reason for going agent-causal has nothing to do with determinism or indeterminism, but rather with avoiding reductionism about agency and the self. As we will see, if there is reason for libertarians to accept the agent-causal theory, there is just as much reason for compatibilists to accept it. It is in this sense that I contend that if anyone should be an agent-causalist, then everyone should be an agent-causalist.
In this paper, we argue that Plotinus denies deliberative forethought about the physical cosmos to the demiurge on the basis of certain basic and widely shared Platonic and Aristotelian assumptions about the character of divine thought. We then discuss how Plotinus can nonetheless maintain that the cosmos is «providentially» ordered.
Recognizing that truth is socially constructed or that knowledge and power are related is hardly a novelty in the social sciences. In the twenty-first century, however, there appears to be a renewed concern regarding people’s relationship with the truth and the propensity for certain actors to undermine it. Organizations are highly implicated in this, given their central roles in knowledge management and production and their attempts to learn, although the entanglement of these epistemological issues with business ethics has not been engaged as explicitly as it might be. Drawing on work from a virtue epistemology perspective, this paper outlines the idea of a set of epistemic vices permeating organizations, along with examples of unethical epistemic conduct by organizational actors. While existing organizational research has examined various epistemic virtues that make people and organizations effective and responsible epistemic agents, much less is known about the epistemic vices that make them ineffective and irresponsible ones. Accordingly, this paper introduces vice epistemology, a nascent but growing subfield of virtue epistemology which, to the best of our knowledge, has yet to be explicitly developed in terms of business ethics. The paper concludes by outlining a business ethics research agenda on epistemic vice, with implications for responding to epistemic vices and their illegitimacy in practice.
This article investigates the semantics of sentences that express numerical averages, focusing initially on cases such as 'The average American has 2.3 children'. Such sentences have been used both by linguists and philosophers to argue for a disjuncture between semantics and ontology. For example, Noam Chomsky and Norbert Hornstein have used them to provide evidence against the hypothesis that natural language semantics includes a reference relation holding between words and objects in the world, whereas metaphysicians such as Joseph Melia and Stephen Yablo have used them to provide evidence that apparent singular reference need not be taken as ontologically committing. We develop a fully general and independently justified compositional semantics in which such constructions are assigned truth conditions that are not ontologically problematic, and show that our analysis is superior to all extant rivals. Our analysis provides evidence that a good semantics yields a sensible ontology. It also reveals that natural language contains genuine singular terms that refer to numbers.
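A rough gloss on the truth conditions such a semantics aims to deliver for the example sentence, offered only as a schematic illustration and not as the paper’s compositional derivation, where $A$ is the set of Americans and $\mathrm{children}(x)$ is the number of $x$’s children:

\[
\text{`The average American has 2.3 children' is true iff } \frac{\sum_{x \in A} \mathrm{children}(x)}{|A|} = 2.3.
\]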
I distinguish several doctrines that economic methodologists have found attractive, all of which have a positivist flavour. One of these is the doctrine that preference assignments in economics are just shorthand descriptions of agents' choice behaviour. Although most of these doctrines are problematic, the latter doctrine about preference assignments is a respectable one, I argue. It doesn't entail any of the problematic doctrines, and indeed it is warranted independently of them.
I discuss what Aquinas’ doctrine of divine simplicity is, and what he takes to be its implications. I also discuss the extent to which Aquinas succeeds in motivating and defending those implications.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a “structural power” – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
Some explanations are relatively abstract: they abstract away from the idiosyncratic or messy details of the case in hand. The received wisdom in philosophy is that this is a virtue for any explanation to possess. I argue that the apparent consensus on this point is illusory. When philosophers make this claim, they differ on which of four alternative varieties of abstractness they have in mind. What’s more, for each variety of abstractness there are several alternative reasons to think that the variety of abstractness in question is a virtue. I identify the most promising reasons, and dismiss some others. The paper concludes by relating this discussion to the idea that explanations in biology, psychology and social science cannot be replaced by relatively micro explanations without loss of understanding.
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
In his paper, Jakob Hohwy outlines a theory of the brain as an organ for prediction-error minimization (PEM), which he claims has the potential to profoundly alter our understanding of mind and cognition. One manner in which our understanding of the mind is altered, according to PEM, stems from the neurocentric conception of the mind that falls out of the framework, which portrays the mind as “inferentially-secluded” from its environment. This in turn leads Hohwy to reject certain theses of embodied cognition. Focusing on this aspect of Hohwy’s argument, we first outline the key components of the PEM framework such as the “evidentiary boundary,” before looking at why this leads Hohwy to reject certain theses of embodied cognition. We will argue that although Hohwy may be correct to reject specific theses of embodied cognition, others are in fact implied by the PEM framework and may contribute to its development. We present the metaphor of the “body as a laboratory” in order to highlight wha...
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
Kevin Elliott and others separate two common arguments for the legitimacy of societal values in scientific reasoning as the gap and the error arguments. This article poses two questions: How are these two arguments related, and what can we learn from their interrelation? I contend that we can better understand the error argument as nested within the gap argument, because the error argument is a limited case of the gap argument with narrower features. Furthermore, this nestedness provides philosophers with conceptual tools for analyzing more robustly how values pervade science.
This review is a critical evaluation of the main points of Steven D. Hales’ significant book Relativism and the Foundations of Philosophy. To that end, I will first summarize his major line of argument, pointing to the richness and significance of the book. After that, I will argue that Hales’ account of intuition is subject to the challenge posed by some recent work on the topic, and that it postulates a concept of knowledge that opposes Gettier’s without arguing why this is so. I will then show that, with the exception of rational intuition, none of the methods adopted by Hales is adequate for acquiring beliefs about philosophical propositions. Next, I will argue that his method of wide reflective equilibrium is committed to foundationalism and conservatism, and that all his criticism of skepticism shows is that skepticism is true. I will also try to show that his form of perspectival relativism is committed to an infinite regress and is incompatible with his foundationalism. It is powerless against some forms of skepticism, while sharing the same source with others. It is not progressive, and not perspectival enough with respect to Goldman’s view, naturalists’ views, and its alternatives. And if it is perspectival enough, then it refutes itself.
A community, for ecologists, is a unit for discussing collections of organisms. It refers to collections of populations, which consist (by definition) of individuals of a single species. This is straightforward. But communities are unusual kinds of objects, if they are objects at all. They are collections consisting of other diverse, scattered, partly-autonomous, dynamic entities (that is, animals, plants, and other organisms). They often lack obvious boundaries or stable memberships, as their constituent populations not only change but also move in and out of areas, and in and out of relationships with other populations. Familiar objects have identifiable boundaries, for example, and if communities do not, maybe they are not objects. Maybe they do not exist at all. The question this possibility suggests, of what criteria there might be for identifying communities, and for determining whether such communities exist at all, has long been discussed by ecologists. This essay addresses this question as it has recently been taken up by philosophers of science, by examining answers to it which appeared a century ago and which have framed the continuing discussion.
A number of claims are closely connected with, though logically distinct from, animalism. One is that organisms cease to exist when they die. Two others concern the relation of the brain, or the brainstem, to animal life. One of these holds that the brainstem is necessary for life – more precisely, that (say) my cat's brainstem is necessary for my cat's life to continue. The other is that it is sufficient for life – more precisely, that so long as (say) my cat's brainstem continues to function, so too does my cat. I argue against these claims.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
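The scoring rule arguments mentioned measure the accuracy of a credence function $c$ at a world $w$ with rules such as the Brier score, given below in its standard form, where $w(X) = 1$ if $X$ is true at $w$ and $0$ otherwise, and lower scores indicate greater accuracy:

\[
\mathfrak{B}(c, w) = \sum_{X \in \mathcal{F}} \big(c(X) - w(X)\big)^2.
\]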
Can one pay attention to objects without being conscious of them? Some years ago there was evidence that had been taken to show that the answer is 'yes'. That evidence was inconclusive, but there is recent work that makes the case more compellingly: it now seems that it is indeed possible to pay attention to objects of which one is not conscious. This is bad news for theories in which the connection between attention and consciousness is taken to be an essential one. It is good news for theories (including mine) in which the connection between attention and agency is taken to be essential.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
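Schematically, the indifference approach discussed assigns probabilities via the uniform Liouville measure over the region of phase space compatible with the system's macrostate; with $\Gamma_M$ the set of microstates compatible with macrostate $M$ and $\mu_L$ the Liouville measure, this amounts to:

\[
P(A \mid M) = \frac{\mu_L(A \cap \Gamma_M)}{\mu_L(\Gamma_M)}.
\]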
I am going to argue for a robust realism about magnitudes, as irreducible elements in our ontology. This realistic attitude, I will argue, gives a better metaphysics than the alternatives. It suggests some new options in the philosophy of science. It also provides the materials for a better account of the mind’s relation to the world, in particular its perceptual relations.
This essay begins with a consideration of one way in which animals and persons may be valued as “irreplaceable.” Drawing on both Plato and Pascal, I consider reasons for skepticism regarding the legitimacy of this sort of attachment. While I do not offer a complete defense against such skepticism, I do show that worries here may be overblown due to the conflation of distinct metaphysical and normative concerns. I then go on to clarify what sort of value is at issue in cases of irreplaceable attachment. I characterize “unique value” as the kind of value attributed to a thing when we take that thing to be (theoretically, not just practically) irreplaceable. I then consider the relationship between this sort of value and intrinsic value. After considering the positions of Gowans, Moore, Korsgaard, Frankfurt, and others, I conclude that unique value is best understood not as a variety of intrinsic value but rather as one kind of final value that is grounded in the extrinsic properties of the object.
Evolutionary developmental biology represents a paradigm shift in the understanding of the ontogenesis and evolutionary progression of the denizens of the natural world. Given the empirical successes of the evo-devo framework, and its now widespread acceptance, a timely and important task for the philosophy of biology is to critically discern the ontological commitments of that framework and assess whether and to what extent our current metaphysical models are able to accommodate them. In this paper, I argue that one particular model is a natural fit: an ontology of dispositional properties coherently and adequately captures the crucial causal-cum-explanatory role that the fundamental elements of evo-devo play within that framework.