Major Research Paper Abstract
A Part of This World: Deleuze & The Logic of Creation.
Is there a particular danger in following Deleuze's philosophy to its end result? According to Peter Hallward and Alain Badiou, Deleuze's philosophy has some rather severe conclusions. Deleuze has been known as a vitalist thinker of life and affirmation. Hallward and Badiou seek to challenge this accepted view and the norms of Deleuzian scholarship that sustain it, arguing that, initially, Deleuze calls for the evacuation of political action in order to remain firm in the realm of pure contemplation. I intend to investigate and defend Deleuze's philosophy against critics like Badiou and Hallward, and to show that Deleuze's philosophy is not only creative and vital but also highly revolutionary and 'a part of this world.' I will look at several works in Deleuze's corpus, as well as at Deleuzian scholars who defend Deleuze's position.
Hallward sees Deleuze as a theophantic thinker of the One, for whom, as for Spinoza, an individual mode must align itself with the intellectual love of God so that creativity and expressivity may be mediated through it. Thus, according to Hallward, the major theme of Deleuze's philosophy is creativity: a subject or creature must tap into this vital spark of creation, which is also a form of creatural confinement. Hallward states that this creative act can only occur in the realm of the virtual, by lines of flight leading 'out of this world'. The subject is then re-introduced to an extra-worldly existence of contemplation and remains further removed from decisions and lived experience. Deleuze, according to Hallward, falls prey to a cosmological pantheism.
Badiou has similar concerns. Deleuze's philosophy is too systematic and abstract. The entirety of Deleuze's work is surrounded by a metaphysics of the One, and its repercussions lead to an overt asceticism. Badiou notes that Deleuze wants us all to surrender thought to a renewed concept of the One. Through this surrender to the One, the multiple is lost and incorporated into the realm of simulacra. Everything in this Deleuzian world is 'always-already' in the infinite and inhuman totality of the One. According to Badiou, this entire process is articulated in the power of inorganic life that operates through all of us. Like Hallward, Badiou sees Deleuze demolishing the subject, who is stuck between machinery and exteriority. Subjects are forced to transcend and go beyond their limits, slowly collapsing into an infinite virtuality. Badiou believes this is a powerful metaphor for a philosophy of death. Thus the conditions of Deleuzian thought are contingent upon asceticism, making a Deleuzian world a sort of 'crowned anarchy'. Badiou sees Deleuze's ascetic mission as intimately linked with a philosophy of death, and, like Hallward, he urges us to pay careful attention to the outcome of such an aristocratic philosophy. Death, according to Badiou, symbolizes Deleuzian thought, not only making it dangerous but also rendering it an ineffective position. Badiou also points out that Deleuze's conceptual sources are not only limited but also repeated time and time again through a monotonous selection of concepts. Is this a fair critique and representation of Deleuzian thought?
Eugene Holland states that both Hallward and Badiou have misrepresented Deleuze. Deleuze does invoke the creation of a new earth, but one in which we can all fully believe. The only world Deleuze wants to get out of is the world of habits, conformity, and power; of forces that block creative being. According to Holland, Hallward presents us with a Deleuze who inhibits engagement with the world. However, Deleuze's creative enterprise insists on forming concepts that can change and transform our world.
So the question arises: where does the problem of misrepresentation begin? It begins with both Badiou and Hallward having an erroneous account of the actual/virtual distinction in Deleuze's philosophy. According to Protevi, Hallward posits a dualism between the actual and the virtual, denying the role of the intensive. Hallward sees only a relationship between the intensive and the virtual, ignoring the fact that the intensive has its own ontological register that mediates between the virtual and the actual. Protevi notes that if one could not accept the intensive as an ontological register of its own and had to place it with one or the other, one would have to accept an interrelationship between the actual and the intensive. Hallward instead places it in the realm of the virtual, which leads to his major claim that Deleuze's philosophy leads us out of the world. Protevi states that intensive processes happen in our world; they are a part of this world. Hallward completely empties all creativity from the actual, thus depending on the virtual and its slippery slope. Both Hallward and Badiou have missed the point altogether. We live in an intensive/actual world, and the main point of Deleuze's politics has to do with experimentation and social interaction, and with the transformation and intervention of the concept. As Daniel W. Smith states, unlike Badiou, Deleuze is not searching for an axiomatic approach to the world, one that is prone to reductionism, but rather for problematic, inventive, and creative methods to transform a society.
What does it mean to take “one more step, a single step” … towards universality? What does it mean to be forced to think, and what kind of thought would we need in order to make the logic of the world shift? For Badiou, philosophy must be reckless or it is simply nothing at all. Thought must force a shift in the laws of a world. This recklessness is the violence of thought; it is the unknown form of a discipline, opening a new terrain to make that 'one more step' possible. It is the moment when we are pushed to think beyond our own desires; it comes in the form of militant participation and brutal contingency. Above all, it comes down to a single choice: one must 'become' a subject to truth and stay loyal to the event. The following essay will investigate that “one more step” and its relationship to radical choice, the subject, and truth, and ask whether we need a violent thought to push us into committed action in order for there to be eternal truths.
Quentin Meillassoux's 'Spectral Dilemma' offers philosophy an answer to an age-old problem, one that Pascal had intimated in the wager. Is it better to believe in God for life or to abstain from belief and declare atheism? The paradox of theism and atheism has divided philosophy for centuries by limiting the possibilities for real thought. For Meillassoux, there is more at stake than just the limitations of thought. Both atheism and theism have exhausted all the conditions of human life. In order to answer this paradox, Meillassoux must combine the religious insight that the dead must be resurrected with the atheist conviction that God does not exist (Harman, Quentin Meillassoux: Philosophy in the Making, p. 87). This insight grounds Meillassoux's position in what he calls a Divine Ethics. The concept of the Divine carries both atheism and theism to their ultimate consequences to unveil the truth that "God does not exist, and also that it is quite necessary to believe in God" (Harman, p. 236). The Divine links both assertions in an absolute ethics which Meillassoux calls Divinology. This leads us to our next question: what is the Spectral Dilemma? To answer it, we cannot rely solely on a statement of the dilemma itself, for that alone is not a sufficient account of what Meillassoux is trying to solve. We must proceed to understand the conditions of a Divine Ethics, and how the Divinology can best represent life for both the living and the dead. What is a spectre? According to Meillassoux, it is a dead person who has not been properly mourned (Meillassoux, 'Spectral Dilemma', p. 261). This person's death haunts us. The haunting lies in the fact that we cannot mourn their loss, for as time passes our bond with the dearly departed proves inadequate for our own lives. This haunting leads us to utter despair. The sheer horror of their death is a burden that lies heavy on our backs; and not just on our own backs, or our families' backs, but on those of all the people who have crossed their paths in history (Meillassoux, p. 262). These terrible deaths are called Essential Spectres, and they include odious and premature deaths, the deaths of children and of parents, and the deaths of all those who know that their own destiny will at some point be the same (Meillassoux, p. 262). But it is precisely all death, in its inconclusive finality, that haunts us, not just natural deaths or even violent, gruelling deaths and casualties of war. These essential spectres are the dead who refuse to pass over, even though they are gone. The concept may seem absurd to any reader, but Meillassoux nonetheless insists that these essential spectres still cry out to us, that they still exist with us (Meillassoux, p. 262). Meillassoux claims that the completion of mourning must occur through Essential Mourning, described as the accomplishment of a living relationship with the dead, as opposed to the maintenance of a morbid bond with them by those who survive their deaths (Meillassoux, p. 263). According to Meillassoux, essential mourning grants us the possibility of forming a bond again with the dearly departed (Meillassoux, p. 263). This bond actively animates their memory in our lives again; the accomplishment consists in living again with those essential spectres … "[R]ather than relating to them with the memory of their morbid death" (Meillassoux, p. 263).
In order to fully understand this possibility, we must ask whether God exists. Is there a merciful spirit which transcends all of humanity? Is this God working in the world? If so, then why do essential spectres exist? How is it possible that these terrible deaths occur in the world and are allowed by such a God? If God allows these deaths to occur, then perhaps God is not all-powerful; perhaps this so-called transcendent principle is absent from the world (Meillassoux, p. 263). Meillassoux states that neither the religious nor the atheist option allows essential mourning to take place; both positions lead to despair when confronting death. What is needed, and what the rest of this paper will follow, is a Divine Inexistence. We need to assert the existence of a Virtual God that is both inexistent and possible, but also contingent and unmasterable (Harman, p. 89). The Divine Inexistence is the central concept of Meillassoux's Divinology, but it is also the answer to the spectral dilemma: a new ethics is both possible and needed. This Divine Ethics will save both essential spectres and philosophy. My essential claim is that the article 'Spectral Dilemma', as published in the journal Collapse, is insufficient to fully answer the problem of essential spectres. The juxtaposition of atheism and theism is not enough to explain philosophically the significance of the Divine Inexistence, and the article is not long enough to explain how ontology, contingent metaphysics, and ethics relate to the fundamental problem. The Divine Inexistence must be fully articulated with the entirety of Meillassoux's Divinology. I will attempt to present the entirety of Meillassoux's position for my reader while at the same time offering a comprehensive answer to the spectral dilemma.
Is there a particular danger in following Deleuze's philosophy to its end result? According to Peter Hallward, Deleuze's philosophy has some rather severe conclusions. Deleuze is portrayed by Hallward as a theological and spiritual thinker of life. Hallward seeks to challenge the accepted view of Deleuze, arguing that the accepted norms of Deleuzian scholarship should be questioned and that, initially, Deleuze calls for the evacuation of political action in order to remain firm in the realm of pure contemplation. This article investigates and defends Deleuze's philosophy against the critical and theological account offered by Hallward, arguing that Deleuze's philosophy is not only creative and vital but also highly revolutionary and 'a part' of the given world. It then examines Hallward's distortion of the actual/virtual distinction in Deleuze, which arises because Hallward is unable to come to grips with the concept of life in Deleuze's philosophy. We live in an intensive and dynamic world, and the main points of Deleuze's philosophy concern the transformation of that world. Deleuze is not seeking to escape the world, but rather to develop inventive and creative methods to transform society.
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What then could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people’s understanding of norms.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
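For orientation, the chance–credence relation at issue is usually glossed along the lines of Lewis's Principal Principle; the following is a standard textbook paraphrase rather than the formulation the paper itself defends. Where Cr is a reasonable initial credence function, ⟨ch_t(A) = x⟩ is the proposition that the objective chance at time t of A is x, and E is any evidence admissible at t:
\[ Cr\big(A \mid \langle ch_t(A) = x \rangle \wedge E\big) \;=\; x. \]
Which propositions count as admissible at t, and how the time index should be handled, are precisely the points on which the paper argues the recent literature has gone wrong.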
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
In Modal Logic as Metaphysics, Timothy Williamson claims that the possibilism-actualism (P-A) distinction is badly muddled. In its place, he introduces a necessitism-contingentism (N-C) distinction that he claims is free of the confusions that purportedly plague the P-A distinction. In this paper I argue first that the P-A distinction, properly understood, is historically well-grounded and entirely coherent. I then look at the two arguments Williamson levels at the P-A distinction and find them wanting and show, moreover, that, when the N-C distinction is broadened (as per Williamson himself) so as to enable necessitists to fend off contingentist objections, the P-A distinction can be faithfully reconstructed in terms of the N-C distinction. However, Williamson’s critique does point to a genuine shortcoming in the common formulation of the P-A distinction. I propose a new definition of the distinction in terms of essential properties that avoids this shortcoming.
I argue that the best interpretation of the general theory of relativity has need of a causal entity and a causal structure that is not reducible to light cone structure. I suggest that this causal interpretation of GTR helps defeat a key premise in one of the most popular arguments for causal reductionism, viz., the argument from physics.
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, is especially promising.
Should economics study the psychological basis of agents’ choice behaviour? I show how this question is multifaceted and profoundly ambiguous. There is no sharp distinction between ‘mentalist’ answers...
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
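For orientation, here is the standard textbook statement of Conditionalization against which the paper's questions are posed; it is a minimal sketch, not the particular version the paper ends up endorsing. Upon receiving total evidence E (with Cr_old(E) > 0), a subject's new credence in any hypothesis H should equal her old credence in H conditional on E:
\[ Cr_{\mathrm{new}}(H) \;=\; Cr_{\mathrm{old}}(H \mid E) \;=\; \frac{Cr_{\mathrm{old}}(H \wedge E)}{Cr_{\mathrm{old}}(E)}. \]
The issues the paper raises (when E is received, sequential versus interval updates, narrow versus wide scope) concern how this schematic statement should be filled in.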
Selection against embryos that are predisposed to develop disabilities is one of the less controversial uses of embryo selection technologies (ESTs). Many bio-conservatives argue that while the use of ESTs to select for non-disease-related traits, such as height and eye-colour, should be banned, their use to avoid disease and disability should be permitted. Nevertheless, there remains significant opposition, particularly from the disability rights movement, to the use of ESTs to select against disability. In this article we examine whether and why the state could be justified in restricting the use of ESTs to select against disability. We first outline the challenge posed by proponents of ‘liberal eugenics’. Liberal eugenicists challenge those who defend restrictions on the use of ESTs to show why the use of these technologies would create a harm of the type and magnitude required to justify coercive measures. We argue that this challenge could be met by adverting to the risk of harms to future persons that would result from a loss of certain forms of cognitive diversity. We suggest that this risk establishes a pro tanto case for restricting selection against some disabilities, including dyslexia and Asperger's syndrome.
This paper explores the level of obligation called for by Milton Friedman’s classic essay “The Social Responsibility of Business is to Increase Profits.” Several scholars have argued that Friedman asserts that businesses have no or minimal social duties beyond compliance with the law. This paper argues that this reading of Friedman does not give adequate weight to some claims that he makes and to their logical extensions. Throughout his article, Friedman emphasizes the values of freedom, respect for law, and duty. The principle that a business professional should not infringe upon the liberty of other members of society can be used by business ethicists to ground a vigorous line of ethical analysis. Any practice that has a negative externality requiring another party to take a significant loss without consent or compensation can be seen as unethical. With Friedman’s framework, we can see ethics as arising from the nature of business practice itself. Business involves an ethics in which we consider, work with, and respect strangers who are outside of traditional in-groups.
In this essay, I argue that a proper understanding of the historicity of love requires an appreciation of the irreplaceability of the beloved. I do this through a consideration of ideas that were first put forward by Robert Kraut in “Love De Re” (1986). I also evaluate Amelie Rorty's criticisms of Kraut's thesis in “The Historicity of Psychological Attitudes: Love is Not Love Which Alters Not When It Alteration Finds” (1986). I argue that Rorty fundamentally misunderstands Kraut's Kripkean analogy, and I go on to criticize her claim that concern over the proper object of love should be best understood as a concern over constancy. This leads me to an elaboration of the distinct senses in which love can be seen as historical. I end with a further defense of the irreplaceability of the beloved and a discussion of the relevance of recent debates over the importance of personal identity for an adequate account of the historical dimension of love.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
The iterative conception of set is typically considered to provide the intuitive underpinnings for ZFCU (ZFC+Urelements). It is an easy theorem of ZFCU that all sets have a definite cardinality. But the iterative conception seems to be entirely consistent with the existence of “wide” sets, sets (of, in particular, urelements) that are larger than any cardinal. This paper diagnoses the source of the apparent disconnect here and proposes modifications of the Replacement and Powerset axioms so as to allow for the existence of wide sets. Drawing upon Cantor’s notion of the absolute infinite, the paper argues that the modifications are warranted and preserve a robust iterative conception of set. The resulting theory is proved consistent relative to ZFC + “there exists an inaccessible cardinal number.”
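For orientation, the two axioms the paper proposes to modify are standardly stated as follows (these are the usual ZFC forms, parameters suppressed; the modified versions that accommodate wide sets are not reproduced here). Replacement is an axiom schema: for each formula φ(x, y) defining a functional relation on the members of a set A, the image of A under that relation is a set. Powerset asserts that every set has a set of all its subsets:
\[ \forall A \, \big[\, \forall x \in A \; \exists! y \; \varphi(x,y) \;\rightarrow\; \exists B \; \forall y \, \big( y \in B \leftrightarrow \exists x \in A \; \varphi(x,y) \big) \big] \]
\[ \forall x \, \exists P \, \forall z \, \big( z \in P \leftrightarrow z \subseteq x \big) \]
The paper's proposed modifications of these two axioms are what make room for wide sets within the iterative conception.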
Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing; it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing, but rather that blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
Nearly all defences of the agent-causal theory of free will portray the theory as a distinctively libertarian one — a theory that only libertarians have reason to accept. According to what I call ‘the standard argument for the agent-causal theory of free will’, the reason to embrace agent-causal libertarianism is that libertarians can solve the problem of enhanced control only if they furnish agents with the agent-causal power. In this way it is assumed that there is only reason to accept the agent-causal theory if there is reason to accept libertarianism. I aim to refute this claim. I will argue that the reasons we have for endorsing the agent-causal theory of free will are nonpartisan. The real reason for going agent-causal has nothing to do with determinism or indeterminism, but rather with avoiding reductionism about agency and the self. As we will see, if there is reason for libertarians to accept the agent-causal theory, there is just as much reason for compatibilists to accept it. It is in this sense that I contend that if anyone should be an agent-causalist, then everyone should be an agent-causalist.
In this paper, we argue that Plotinus denies deliberative forethought about the physical cosmos to the demiurge on the basis of certain basic and widely shared Platonic and Aristotelian assumptions about the character of divine thought. We then discuss how Plotinus can nonetheless maintain that the cosmos is «providentially» ordered.
Recognizing that truth is socially constructed or that knowledge and power are related is hardly a novelty in the social sciences. In the twenty-first century, however, there appears to be a renewed concern regarding people’s relationship with the truth and the propensity for certain actors to undermine it. Organizations are highly implicated in this, given their central roles in knowledge management and production and their attempts to learn, although the entanglement of these epistemological issues with business ethics has not been engaged as explicitly as it might be. Drawing on work from a virtue epistemology perspective, this paper outlines the idea of a set of epistemic vices permeating organizations, along with examples of unethical epistemic conduct by organizational actors. While existing organizational research has examined various epistemic virtues that make people and organizations effective and responsible epistemic agents, much less is known about the epistemic vices that make them ineffective and irresponsible ones. Accordingly, this paper introduces vice epistemology, a nascent but growing subfield of virtue epistemology which, to the best of our knowledge, has yet to be explicitly developed in terms of business ethics. The paper concludes by outlining a business ethics research agenda on epistemic vice, with implications for responding to epistemic vices and their illegitimacy in practice.
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
In his paper, Jakob Hohwy outlines a theory of the brain as an organ for prediction-error minimization (PEM), which he claims has the potential to profoundly alter our understanding of mind and cognition. One manner in which our understanding of the mind is altered, according to PEM, stems from the neurocentric conception of the mind that falls out of the framework, which portrays the mind as “inferentially-secluded” from its environment. This in turn leads Hohwy to reject certain theses of embodied cognition. Focusing on this aspect of Hohwy’s argument, we first outline the key components of the PEM framework, such as the “evidentiary boundary,” before looking at why this leads Hohwy to reject certain theses of embodied cognition. We will argue that although Hohwy may be correct to reject specific theses of embodied cognition, others are in fact implied by the PEM framework and may contribute to its development. We present the metaphor of the “body as a laboratory” in order to highlight wha...
This article investigates the semantics of sentences that express numerical averages, focusing initially on cases such as 'The average American has 2.3 children'. Such sentences have been used both by linguists and philosophers to argue for a disjuncture between semantics and ontology. For example, Noam Chomsky and Norbert Hornstein have used them to provide evidence against the hypothesis that natural language semantics includes a reference relation holding between words and objects in the world, whereas metaphysicians such as Joseph Melia and Stephen Yablo have used them to provide evidence that apparent singular reference need not be taken as ontologically committing. We develop a fully general and independently justified compositional semantics in which such constructions are assigned truth conditions that are not ontologically problematic, and show that our analysis is superior to all extant rivals. Our analysis provides evidence that a good semantics yields a sensible ontology. It also reveals that natural language contains genuine singular terms that refer to numbers.
I distinguish several doctrines that economic methodologists have found attractive, all of which have a positivist flavour. One of these is the doctrine that preference assignments in economics are just shorthand descriptions of agents' choice behaviour. Although most of these doctrines are problematic, the latter doctrine about preference assignments is a respectable one, I argue. It doesn't entail any of the problematic doctrines, and indeed it is warranted independently of them.
I discuss what Aquinas’ doctrine of divine simplicity is, and what he takes to be its implications. I also discuss the extent to which Aquinas succeeds in motivating and defending those implications.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a ‘structural power’ – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.
Some explanations are relatively abstract: they abstract away from the idiosyncratic or messy details of the case in hand. The received wisdom in philosophy is that this is a virtue for any explanation to possess. I argue that the apparent consensus on this point is illusory. When philosophers make this claim, they differ on which of four alternative varieties of abstractness they have in mind. What’s more, for each variety of abstractness there are several alternative reasons to think that the variety of abstractness in question is a virtue. I identify the most promising reasons, and dismiss some others. The paper concludes by relating this discussion to the idea that explanations in biology, psychology and social science cannot be replaced by relatively micro explanations without loss of understanding.
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
Kevin Elliott and others separate two common arguments for the legitimacy of societal values in scientific reasoning as the gap and the error arguments. This article poses two questions: How are these two arguments related, and what can we learn from their interrelation? I contend that we can better understand the error argument as nested within the gap because the error is a limited case of the gap with narrower features. Furthermore, this nestedness provides philosophers with conceptual tools for analyzing more robustly how values pervade science.
A cognitivist account of decision-making views choice behaviour as a serial process of deliberation and commitment, which is separate from perception and action. By contrast, recent work in embodied decision-making has argued that this account is incompatible with emerging neurophysiological data. We argue that this account has significant overlap with an embodied account of predictive processing, and that both can offer mutual development for the other. However, more importantly, by demonstrating this close connection we uncover an alternative perspective on the nature of decision-making, and the mechanisms that underlie our choice behaviour. This alternative perspective allows us to respond to a challenge for predictive processing, which claims that the satisfaction of distal goal-states is underspecified. Answering this challenge requires the adoption of an embodied perspective.
A community, for ecologists, is a unit for discussing collections of organisms. It refers to collections of populations, which consist (by definition) of individuals of a single species. This is straightforward. But communities are unusual kinds of objects, if they are objects at all. They are collections consisting of other diverse, scattered, partly-autonomous, dynamic entities (that is, animals, plants, and other organisms). They often lack obvious boundaries or stable memberships, as their constituent populations not only change but also move in and out of areas, and in and out of relationships with other populations. Familiar objects have identifiable boundaries, for example, and if communities do not, maybe they are not objects. Maybe they do not exist at all. The question this possibility suggests, of what criteria there might be for identifying communities, and for determining whether such communities exist at all, has long been discussed by ecologists. This essay addresses this question as it has recently been taken up by philosophers of science, by examining answers to it which appeared a century ago and which have framed the continuing discussion.
A number of claims are closely connected with, though logically distinct from, animalism. One is that organisms cease to exist when they die. Two others concern the relation of the brain, or the brainstem, to animal life. One of these holds that the brainstem is necessary for life – more precisely, that (say) my cat's brainstem is necessary for my cat's life to continue. The other is that it is sufficient for life – more precisely, that so long as (say) my cat's brainstem continues to function, so too does my cat. I argue against these claims.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
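For orientation, the scoring rule most often used as the working example in this literature is the Brier (quadratic) score; it is given here only as background, and nothing in the paper turns on this particular choice of measure. For a credence function c defined over propositions X_1, …, X_n and a world w, writing w(X_i) = 1 if X_i is true at w and 0 otherwise, inaccuracy is measured as
\[ \mathfrak{I}(c, w) \;=\; \sum_{i=1}^{n} \big( c(X_i) - w(X_i) \big)^2 , \]
and accuracy-first arguments then evaluate credence functions by their (expected) inaccuracy so measured.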
Can one pay attention to objects without being conscious of them? Some years ago there was evidence that had been taken to show that the answer is 'yes'. That evidence was inconclusive, but there is recent work that makes the case more compellingly: it now seems that it is indeed possible to pay attention to objects of which one is not conscious. This is bad news for theories in which the connection between attention and consciousness is taken to be an essential one. It is good news for theories (including mine) in which the connection between attention and agency is taken to be essential.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
I am going to argue for a robust realism about magnitudes, as irreducible elements in our ontology. This realistic attitude, I will argue, gives a better metaphysics than the alternatives. It suggests some new options in the philosophy of science. It also provides the materials for a better account of the mind’s relation to the world, in particular its perceptual relations.
This essay begins with a consideration of one way in which animals and persons may be valued as “irreplaceable.” Drawing on both Plato and Pascal, I consider reasons for skepticism regarding the legitimacy of this sort of attachment. While I do not offer a complete defense against such skepticism, I do show that worries here may be overblown due to the conflation of distinct metaphysical and normative concerns. I then go on to clarify what sort of value is at issue in cases of irreplaceable attachment. I characterize “unique value” as the kind of value attributed to a thing when we take that thing to be (theoretically, not just practically) irreplaceable. I then consider the relationship between this sort of value and intrinsic value. After considering the positions of Gowans, Moore, Korsgaard, Frankfurt, and others, I conclude that unique value is best understood not as a variety of intrinsic value but rather as one kind of final value that is grounded in the extrinsic properties of the object.
Evolutionary developmental biology represents a paradigm shift in the understanding of the ontogenesis and evolutionary progression of the denizens of the natural world. Given the empirical successes of the evo-devo framework, and its now widespread acceptance, a timely and important task for the philosophy of biology is to critically discern the ontological commitments of that framework and assess whether and to what extent our current metaphysical models are able to accommodate them. In this paper, I argue that one particular model is a natural fit: an ontology of dispositional properties coherently and adequately captures the crucial causal-cum-explanatory role that the fundamental elements of evo-devo play within that framework.