In The Future of Human Nature, Jürgen Habermas raises the question of whether embryonic genetic diagnosis and genetic modification threaten the foundations of the species ethics that underlies current understandings of morality. While morality, in the normative sense, is based on moral interactions enabling communicative action, justification, and reciprocal respect, the reification involved in the new technologies may prevent individuals from upholding the sense of the undisposability of human life and the inviolability of human beings that is necessary for their own identity as well as for reciprocal relations. Engaging with liberal and Catholic approaches to bioethics, the article clarifies how Habermas’ position offers a radical critique of liberal autonomy while maintaining its postmetaphysical stance. The essay argues that Habermas’ approach may guide the question of the rights of future generations regarding germline gene editing. But it calls for a different turn in the conversation between philosophy and theology, namely one that emphasizes the necessary attention to rights violations and injustices as a common, postmetaphysical starting point for critical theory and critical theology alike.

In 2001, Jürgen Habermas published a short book on questions of biomedicine that took many by surprise.[1] To some of his students, the turn to a substantive position invoking the need to comment on a species ethics, rather than outlining a public moral framework, was seen as a departure from the “path of deontological virtue,”[2] and at the same time a departure from postmetaphysical reason. Habermas’ motivation to address the developments in biomedicine had certainly been sparked by the intense debate in Germany, the European Union, and internationally over human cloning, pre-implantation genetic diagnosis, embryonic stem cell research, and human enhancement. He turned to a strand of critical theory that had been pushed into the background by the younger Frankfurt School in favor of cultural theory and social critique, even though it had been an important element of the school’s initial working programs. The relationship between instrumental reason and critical theory, examined by Max Horkheimer, Theodor W. Adorno, and Herbert Marcuse, among others, and taken up in Habermas’ own Knowledge and Human Interests and Theory of Communicative Action, has become ever more pertinent with the development of the life sciences, human genome analysis, and the genetic engineering of human offspring. Today, some of the fictional scenarios discussed at the end of the last century as “science fiction” have become reality: in 2018, the first “germline gene-edited” children were born in China.[3] Furthermore, the UK’s permission to create so-called “three-parent” children may open a legal and political pathway to the hereditary germline interventions summarized under the name of “gene editing.” In this article, I want to explore Habermas’ “substantial” argument in the hope that philosophy and theology may become allies in the struggle against an ever more reifying lifeworld, which may create a “moral void” that would, at least from today’s perspective, be “unbearable,” and in the struggle to uphold the conditions of human dignity, freedom, and justice. I will contextualize Habermas’ concerns within the broader discourse of bioethics, because only in that context can his concerns be rescued from some common misinterpretations.

[1] Jürgen Habermas, The Future of Human Nature.
[2] Ibid., 125, fn. 58.
[3] To date, no scientific publication of the exact procedure exists, but it is known that the scientist, Jiankui He, circumvented the existing national regulatory framework and may have misled the prospective parents about existing alternatives and the unprecedented nature of his conduct. See Yuanwu Ma, Lianfeng Zhang, and Chuan Qin, "The First Genetically Gene-Edited Babies: It's 'Irresponsible and Too Early'," Animal Models and Experimental Medicine; Matthias Braun and Darian Meacham, "The Trust Game: CRISPR for Human Germline Editing Unsettles Scientists and Society," EMBO Reports 20, no. 2.
Several variants of Lewis's Best System Account of Lawhood have been proposed that avoid its commitment to perfectly natural properties. There has been little discussion of the relative merits of these proposals, and little discussion of how one might extend this strategy to provide natural property-free variants of Lewis's other accounts, such as his accounts of duplication, intrinsicality, causation, counterfactuals, and reference. We undertake these projects in this paper. We begin by providing a framework for classifying and assessing the variants of the Best System Account. We then evaluate these proposals, and identify the most promising candidates. We go on to develop a proposal for systematically modifying Lewis's other accounts so that they, too, avoid commitment to perfectly natural properties. We conclude by briefly considering a different route one might take to developing natural property-free versions of Lewis's other accounts, drawing on recent work by Williams.
A number of criticisms of Utilitarianism – such as “nearest and dearest” objections, “demandingness” objections, and “altruistic” objections – arise because Utilitarianism doesn’t permit partially or wholly disregarding the utility of certain subjects. A number of authors, including Sider, Portmore and Vessel, have responded to these objections by suggesting we adopt “dual-maximizing” theories which provide a way to incorporate disregarding. And in response to “altruistic” objections in particular – objections noting that it seems permissible to make utility-decreasing sacrifices – these authors have suggested adopting a dual-maximizing theory that permits disregarding one’s own utility. In this paper I’ll defend two claims. First, I’ll argue that dual-maximizing theories are a poor way to incorporate disregarding. Instead, I’ll suggest that “variable-disregarding” theories provide a more attractive way to incorporate disregarding. Second, I’ll argue that the right way to handle these “altruistic” objections isn’t to permit disregarding one’s own utility, it’s to permit disregarding the utility of those who consent. Together, these two claims entail that the best way to modify Utilitarianism to handle “altruistic” objections is to adopt a variable-disregarding view that disregards the utility of those who consent.
An adequate account of laws should satisfy at least five desiderata: it should provide a unified account of laws and chances, it should yield plausible relations between laws and chances, it should vindicate numerical chance assignments, it should accommodate dynamical and non-dynamical chances, and it should accommodate a plausible range of nomic possibilities. No extant account of laws satisfies these desiderata. This paper presents a non-Humean account of laws, the Nomic Likelihood Account, that does.
Clark and Shackel have recently argued that previous attempts to resolve the two-envelope paradox fail, and that we must look to symmetries of the relevant expected-value calculations for a solution. Clark and Shackel also argue for a novel solution to the peeking case, a variant of the two-envelope scenario in which you are allowed to look in your envelope before deciding whether or not to swap. Whatever the merits of these solutions, they go beyond accepted decision theory, even contradicting it in the peeking case. Thus if we are to take their solutions seriously, we must understand Clark and Shackel to be proposing a revision of standard decision theory. Understood as such, we will argue, their proposal is both implausible and unnecessary.
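For readers new to the paradox, the troublesome expected-value reasoning runs as follows (a standard presentation, not Clark and Shackel's own notation). If your envelope contains x, the other envelope seems equally likely to contain 2x or x/2, so

E[\text{other}] = \tfrac{1}{2}(2x) + \tfrac{1}{2}\left(\tfrac{x}{2}\right) = \tfrac{5}{4}x > x,

which appears to recommend swapping; by symmetry, the same reasoning recommends swapping back, hence the paradox.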
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
In this paper we apply the popular Best System Account of laws to typical eternal worlds – both classical eternal worlds and eternal worlds of the kind posited by popular contemporary cosmological theories. We show that, according to the Best System Account, such worlds will have no laws that meaningfully constrain boundary conditions. It’s generally thought that lawful constraints on boundary conditions are required to avoid skeptical arguments. Thus the lack of such laws given the Best System Account may seem like a severe problem for the view. We show, however, that at eternal worlds, lawful constraints on boundary conditions do little to help fend off skeptical worries. So with respect to handling these skeptical worries, the proponent of the Best System Account is no worse off than their competitors.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’ account.
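For context, the standard verdicts the first two accounts deliver (stated in the usual fractions, not either author's own notation): upon waking,

Cr(\text{Heads}) = \tfrac{1}{3} \ \text{(Elga)} \qquad \text{vs.} \qquad Cr(\text{Heads}) = \tfrac{1}{2} \ \text{(Lewis)},

and upon then learning that it is Monday, the two accounts assign Heads credence 1/2 and 2/3, respectively.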
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
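As a rough sketch of the kind of rule at issue (schematic notation of my own, not necessarily any particular author's): where P_u is the subject's ur-prior — an initial probability function not tied to any earlier credal state — and E_t is her total evidence at t, an ur-prior rule requires

Cr_t(\cdot) = P_u(\cdot \mid E_t),

whereas Conditionalization instead updates the subject's previous credence function on her newly acquired evidence.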
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
This paper examines two mistakes regarding David Lewis’ Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
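For reference, Lewis's canonical formulation of the Principal Principle: where Cr is a rational initial credence function, x is the chance at time t that proposition A holds, and E is any evidence admissible at t,

Cr(A \mid ch_t(A) = x \,\&\, E) = x.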
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
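Schematically, such principles take roughly the following form (a generic template for illustration, not the paper's own final principle): for a deferred-to probability function P and any proposition A,

Cr(A \mid P(A) = x) = x.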
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
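For reference, the standard statements of the two rules mentioned here: Conditionalization sets the new credence in H, after learning E with certainty, to

Cr_{new}(H) = Cr_{old}(H \mid E),

while Jeffrey Conditionalization covers uncertain learning across a partition \{E_i\}:

Cr_{new}(H) = \sum_i Cr_{old}(H \mid E_i)\, Cr_{new}(E_i).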
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
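A standard example of the scoring rules these arguments employ is the Brier score, which measures the inaccuracy of a credence function c at a world w by

I(c, w) = \sum_{A} \big(c(A) - v_w(A)\big)^2,

where v_w(A) is 1 if A is true at w and 0 otherwise; accuracy-dominance arguments then show that any credence function violating the probability axioms is Brier-dominated by some probabilistic one.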
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
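A classic example of an act without a finite expected value is the St. Petersburg game, which pays 2^n units of utility if the first heads appears on the n-th toss of a fair coin:

E = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty.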
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014). I argue that while Arntzenius’s proposal has many attractive features, it runs into a number of problems which Direct Difference Taking avoids.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
One popular approach to statistical mechanics understands statistical mechanical probabilities as measures of rational indifference. Naive formulations of this “indifference approach” face reversibility worries – while they yield the right prescriptions regarding future events, they yield the wrong prescriptions regarding past events. This paper begins by showing how the indifference approach can overcome the standard reversibility worries by appealing to the Past Hypothesis. But, the paper argues, positing a Past Hypothesis doesn't free the indifference approach from all reversibility worries. For while appealing to the Past Hypothesis allows it to escape one kind of reversibility worry, it makes it susceptible to another – the Meta-Reversibility Objection. And there is no easy way for the indifference approach to escape the Meta-Reversibility Objection. As a result, reversibility worries pose a steep challenge to the viability of the indifference approach.
Book Review. The author asserts that scientific inquiry can tell us what we should and should not value. He says that the proper meaning of "morality" is that which leads to human flourishing, and that careful observation of what in fact fulfils people is not a matter of philosophical or religious debate but rather a matter of scientific inquiry. But he fails to make the move from concern for one's own well-being to concern for the well-being of conscious creatures generally.
Evidential Uniqueness is the thesis that, for any batch of evidence, there’s a unique doxastic state that a subject with that evidence should have. Among the most common objections to views that violate Evidential Uniqueness are arbitrariness objections – objections to the effect that views that don’t satisfy Evidential Uniqueness lead to unacceptable arbitrariness. The goal of this paper is to examine a variety of arbitrariness objections that have appeared in the literature, and to assess the extent to which these objections bolster the case for Evidential Uniqueness. After examining a number of different arbitrariness objections, I’ll conclude that, by and large, these objections do little to bolster the case for Evidential Uniqueness.
This is a review of Toby Handfield's book "A Philosophical Guide to Chance"; the review focuses on Handfield's Debunking Argument against realist accounts of chance.
In this collection of essays, Strawson investigates wide-ranging topics pertaining to the nature of the self: What do we mean by the term ‘self’? In what sense do selves exist? To what extent is continuity over time essential to selfhood? Must one be able to make a story of one’s life in order to be a coherent self? Must one be self-conscious in order to be conscious at all? And more. The fourteen essays here are not necessarily meant to be read in order. They do not offer a sustained argument, but rather a number of themes that appear in different places, like threads in a tapestry.
Book review. The author incisively defends moral anti-realism. He advises that one should act only on one's considered desires, not on moral absolutes. But he fails to give guidance about what is important or advisable to desire.
For aggregative theories of moral value, it is a challenge to rank worlds that each contain infinitely many valuable events. And, although there are several existing proposals for doing so, few provide a cardinal measure of each world's value. This raises the even greater challenge of ranking lotteries over such worlds—without a cardinal value for each world, we cannot apply expected value theory. How then can we compare such lotteries? To date, we have just one method for doing so (proposed separately by Arntzenius, Bostrom, and Meacham), which is to compare the prospects for value at each individual location, and to then represent and compare lotteries by their expected values at each of those locations. But, as I show here, this approach violates several key principles of decision theory and generates some implausible verdicts. I propose an alternative—one which delivers plausible rankings of lotteries, which is implied by a plausible collection of axioms, and which can be applied alongside almost any ranking of infinite worlds.
White, Christensen, and Feldman have recently endorsed uniqueness, the thesis that given the same total evidence, two rational subjects cannot hold different views. Kelly, Schoenfield, and Meacham argue that White and others have at best only supported the weaker, merely intrapersonal view that, given the total evidence, there are no two views which a single rational agent could take. Here, we give a new argument for uniqueness, an argument with deliberate focus on the interpersonal element of the thesis. Our argument is that the best explanation of the value of promoting rationality is an explanation that entails uniqueness.
There has been much discussion on the two-envelope paradox. Clark and Shackel (2000) have proposed a solution to the paradox, which has been refuted by Meacham and Weisberg (2003). Surprisingly, however, the literature still contains no axiomatic justification for the claim that one should be indifferent between the two envelopes before opening one of them. According to Meacham and Weisberg, "decision theory does not rank swapping against sticking [before opening any envelope]" (p. 686). To fill this gap in the literature, we present a simple axiomatic justification for indifference, avoiding any expectation reasoning, which is often considered problematic in infinite cases. Although the two-envelope paradox assumes an expectation-maximizing agent, we show that analogous paradoxes arise for agents using different decision principles such as maximin and maximax, and that our justification for indifference before opening applies here too.