Sometimes it’s not certain which of several mutually exclusive moral views is correct. Like almost everyone, I think that there’s some sense in which what one should do depends on which of these theories is correct, plus the way the world is non-morally. But I also think there’s an important sense in which what one should do depends upon the probabilities of each of these views being correct. Call this second claim “moral uncertaintism”. In this paper, I want to address an argument against moral uncertaintism offered in the pages of this journal by Brian Weatherson, and seconded elsewhere by Brian Hedden, the crucial premises of which are: (1) that acting on moral uncertaintist norms necessarily involves motivation by reasons or rightness as such, and (2) that such motivation is bad. I will argue that (1) and (2) are false, and that at any rate, the quality of an agent’s motivation is not pertinent to the truth or falsity of moral uncertaintism in the way that Weatherson’s and Hedden’s arguments require.
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what would possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
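As background for the result summarized above, the standard definition of first-order stochastic dominance can be stated in one line; this is a generic gloss in made-up notation, not the paper's own formalism or its deontological value assumptions. Writing V_A for the uncertain moral value of option A:

\[ A \succ_{SD} B \iff \Pr(V_A \geq x) \geq \Pr(V_B \geq x) \text{ for every value level } x, \text{ with strict inequality for at least one } x. \]

On the paper's account, as described above, the morally safe option bears this relation to the risky one exactly when the probability that the risky option violates a constraint exceeds a definable threshold (.5 in the simplest case).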
In this article, I present a new interpretation of the pro-life view on the status of early human embryos. In my understanding, this position is based not on presumptions about the ontological status of embryos and their developmental capabilities but on the specific criteria of rational decisions under uncertainty and on a cautious response to the ambiguous status of embryos. This view, which uses the decision theory model of moral reasoning, promises to reconcile the uncertainty about the ontological status of embryos with the certainty about normative obligations. I will demonstrate that my interpretation of the pro-life view, although seeming to be stronger than the standard one, has limited scope and cannot be used to limit destructive research on human embryos.
In this essay, we explore an issue of moral uncertainty: what we are permitted to do when we are unsure about which moral principles are correct. We develop a novel approach to this issue that incorporates important insights from previous work on moral uncertainty, while avoiding some of the difficulties that beset existing alternative approaches. Our approach is based on evaluating and choosing between option sets rather than particular conduct options. We show how our approach is particularly well-suited to address this issue of moral uncertainty with respect to agents that have credence in moral theories that are not fully consequentialist.
Given the deep disagreement surrounding population axiology, one should remain uncertain about which theory is best. However, this uncertainty need not leave one neutral about which acts are better or worse. We show that as the number of lives at stake grows, the Expected Moral Value approach to axiological uncertainty systematically pushes one towards choosing the option preferred by the Total and Critical Level views, even if one’s credence in those theories is low.
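A minimal illustration of the swamping mechanism described above, stated in generic notation and with made-up numbers rather than the authors' own model: under the Expected Moral Value approach, an option A is evaluated by its credence-weighted value across the rival axiologies,

\[ \mathrm{EMV}(A) = \sum_i c_i \, V_i(A), \]

where c_i is one's credence in axiology i and V_i(A) is the value that axiology assigns to A. If the Total view assigns a value roughly proportional to the number n of additional lives (say k·n for some k > 0) while rival views assign values that do not grow with n, then even a credence as low as c = 0.05 contributes a term 0.05·k·n that eventually dominates the sum as n increases, which is the systematic push toward the Total and Critical Level verdicts noted in the abstract.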
Moral dilemmas can arise from uncertainty, including uncertainty of the real values involved. One interesting example of this is that of experimentation on human embryos and foetuses. If these have a moral status similar to that of human persons, then there will be severe constraints on what may be done to them. If embryos have a moral status similar to that of other small clusters of cells, then constraints will be motivated largely by consideration for the persons into whom the embryos may develop. If the truth lies somewhere between these two extremes, the embryo having neither the full moral weight of persons nor a completely negligible moral weight, then different kinds of constraints will be appropriate. On the face of it, in order to know what kinds of experiments, if any, we are morally justified in performing on embryos, we have to know what the moral weight of the embryo is. But then an impasse threatens, for it seems implausible that we can settle with certainty the exact moral status of the human embryo. It is the purpose of this paper to show that moral uncertainty need not make rational moral justification impossible. I develop a framework which distinguishes between what is morally right/wrong and what is morally justified/unjustified, and applies standard decision-theoretic tools to the case of moral uncertainties. (This was the first published account of what has subsequently become known as Expected Moral Value Theory. An earlier version of the paper, "A decision theoretic argument against human embryo experimentation", was published in M. Fricke (ed.), Essays in honor of Bob Durrant. (University of Otago Press, 1986) 111-27.)
Several philosophers have recently argued that decision-theoretic frameworks for rational choice under risk fail to provide prescriptions for choice in cases of moral uncertainty. They conclude that there are no rational norms that are “sensitive” to a decision-maker's moral uncertainty. But in this paper, I argue that one sometimes has a rational obligation to take one's moral uncertainty into account in the course of moral deliberation. I first provide positive motivation for the view that one's moral beliefs can affect what it's rational for one to choose. I then address the problem of value comparison, according to which one cannot determine the expected moral value of one's actions. I argue that we should not infer from the problem of value comparison that there are no rational norms governing choice under moral uncertainty.
This paper develops a philosophical account of moral disruption. According to Robert Baker, moral disruption is a process in which technological innovations undermine established moral norms without clearly leading to a new set of norms. Here I analyze this process in terms of moral uncertainty, formulating a philosophical account with two variants. On the harm account, such uncertainty is always harmful because it blocks our knowledge of our own and others’ moral obligations. On the qualified harm account, there is no harm in cases where moral uncertainty is related to innovation that is “for the best” in historical perspective or where uncertainty is the expression of a deliberative virtue. The two accounts are compared by applying them to Baker’s historical case of the introduction of mechanical ventilation and organ transplantation technologies, as well as the present-day case of mass data practices in the health domain.
In this paper we introduce the nascent literature on Moral Uncertainty Theory and explore its application to the criminal law. Moral Uncertainty Theory seeks to address the question of what we ought to do when we are uncertain about what to do because we are torn between rival moral theories. For instance, we may have some credence in one theory that tells us to do A but also in another that tells us to do B. We examine how we might decide whether or not to criminalize some conduct when we are unsure as to whether or not the conduct is morally permitted, and whether or not it is permissible to criminalize the conduct. We also look at how we might make sentencing decisions under moral uncertainty. We argue that Moral Uncertainty Theory can be an illuminating way to address these questions, but find that doing so is a lot more complicated than applying Moral Uncertainty Theory to individual conduct.
This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraception pill, and brain death. The resulting account highlights what we call “differential disruption” and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.
How can someone reconcile the desire to eat meat with a tendency toward vegetarian ideals? How should we reconcile contradictory moral values? How can we aggregate different moral theories? How can individual preferences be fairly aggregated to represent a will, norm, or social decision? Conflict resolution and preference aggregation are tasks that intrigue philosophers, economists, sociologists, decision theorists, and many other scholars, making this a rich interdisciplinary area for research. When trying to solve questions about moral uncertainty, a meta-level understanding of the concept of normativity can help us to develop strategies to deal with norms themselves. Second-order normativity, or norms about norms, is a hierarchical way to think about how to combine many different normative structures and preferences into a single coherent decision. That is what metanormativity is all about, a way to answer: what should we do when we don’t know what to do? In this study, we will review a decision-making strategy for dealing with moral uncertainty, Maximization of Expected Choice-Worthiness. This strategy, proposed by William MacAskill, allows for the aggregation and intertheoretic comparison of different normative structures, cardinal theories, and ordinal theories. We will exemplify the metanormative methods proposed by MacAskill using, as an example, a series of vegetarian dilemmas. Given the similarity of this metanormative strategy to expected utility theory, we will also show that it is possible to integrate both models to address decision-making problems in situations of empirical and moral uncertainty. We believe that this kind of ethical-mathematical formalism can be useful in helping to develop strategies to better aggregate moral preferences and resolve conflicts.
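Because the abstract names Maximization of Expected Choice-Worthiness but not its form, here is the standard generic statement of the rule, offered as a hedged gloss rather than as the study's own formalization: given credences C(T_i) in rival normative theories T_i, each assigning a choice-worthiness CW_i(A) to option A, one chooses

\[ A^{*} = \arg\max_{A} \sum_i C(T_i) \, CW_i(A). \]

The structural parallel to expected utility theory mentioned above is visible directly in this expression: substituting states of the world for theories and utilities for choice-worthiness values yields the familiar expected-utility rule.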
Investigation of neural and cognitive processes underlying individual variation in moral preferences is underway, with notable similarities emerging between moral- and risk-based decision-making. Here we specifically assessed moral distributive justice preferences and non-moral financial gambling preferences in the same individuals, and report an association between these seemingly disparate forms of decision-making. Moreover, we find this association between distributive justice and risky decision-making exists primarily when the latter is assessed with the Iowa Gambling Task. These findings are consistent with neuroimaging studies of brain function during moral and risky decision-making. This research also constitutes the first replication of a novel experimental measure of distributive justice decision-making, for which individual variation in performance was found. Further examination of decision-making processes across different contexts may lead to an improved understanding of the factors affecting moral behaviour.
I analyze recent discussions about making moral decisions under normative uncertainty. I discuss whether this kind of uncertainty should have practical consequences for decisions and whether there are reliable methods of reasoning that deal with the possibility that we are wrong about some moral issues. I defend a limited use of the decision theory model of reasoning in cases of normative uncertainty.
Until very recently, normative theorizing in ethics was frequently conducted without even mentioning uncertainty. Just a few years ago, Sven Ove Hansson described this state of affairs with the slogan: “Ethics still lives in a Newtonian world.” In the new Oxford Handbook of Philosophy and Probability, David McCarthy writes that “mainstream moral philosophy has not been much concerned with probability,” understanding probability as “the best-known tool for thinking about uncertainty.” This special predilection for certainty in ethics was surprising, since most decisions and evaluations are made, both by individuals and by policy-makers, through the fog of a widely understood uncertainty that includes risk, ignorance, and indeterminacy. The main task of this special issue and international essay prize competition is therefore to encourage philosophers to rethink the standard paradigm in ethics by redirecting discussions of ethical questions to problems involving different kinds of uncertainty, in which an individual or a policy-maker does not have access to, or knowledge of, the relevant facts.
How should an agent decide what to do when she is uncertain not just about morally relevant empirical matters, like the consequences of some course of action, but about the basic principles of morality itself? This question has only recently been taken up in a systematic way by philosophers. Advocates of moral hedging claim that an agent should weigh the reasons put forward by each moral theory in which she has positive credence, considering both the likelihood that that theory is true and the strength of the reasons it posits. The view that it is sometimes rational to hedge for one's moral uncertainties, however, has recently come under attack both from those who believe that an agent should always be guided by the dictates of the single moral theory she deems most probable and from those who believe that an agent's moral beliefs are simply irrelevant to what she ought to do. Among the many objections to hedging that have been pressed in the recent literature is the worry that there is no non-arbitrary way of making the intertheoretic comparisons of moral value necessary to aggregate the value assignments of rival moral theories into a single ranking of an agent's options. This dissertation has two principal objectives: First, I argue that, contra these recent objections, an agent's moral beliefs and uncertainties are relevant to what she rationally ought to do, and more particularly, that agents are at least sometimes rationally required to hedge for their moral uncertainties. My principal argument for these claims appeals to the enkratic conception of rationality, according to which the requirements of practical rationality derive from an agent's beliefs about the objective, desire-independent value or choiceworthiness of her options. Second, I outline a new general theory of rational choice under moral uncertainty. Central to this theory is the idea of content-based aggregation, that the principles according to which an agent should compare and aggregate rival moral theories are grounded in the content of those theories themselves, including not only their value assignments but also the metaethical and other non-surface-level propositions that underlie, justify, or explain those value assignments.
Is normative uncertainty like factual uncertainty? Should it have the same effects on our actions? Some have thought not. Those who defend an asymmetry between normative and factual uncertainty typically do so as part of the claim that our moral beliefs in general are irrelevant to both the moral value and the moral worth of our actions. Here I use the consideration of Jackson cases to challenge this view, arguing that we can explain away the apparent asymmetries between normative and factual uncertainty by considering the particular features of the cases in greater detail. Such consideration shows that, in fact, normative and factual uncertainty are equally relevant to moral assessment.
Modern health data practices come with many practical uncertainties. In this paper, I argue that data subjects’ trust in the institutions and organizations that control their data, and their ability to know their own moral obligations in relation to their data, are undermined by significant uncertainties regarding the what, how, and who of mass data collection and analysis. I conclude by considering how proposals for managing situations of high uncertainty might be applied to this problem. These emphasize increasing organizational flexibility, knowledge, and capacity, and reducing hazard.
I outline four conditions on permissible promise-making: the promise must be for a morally permissible end, must not be deceptive, must be in good faith, and must involve a realistic assessment of oneself. I then address whether promises that you are uncertain you can keep can meet these four criteria, with a focus on campaign promises as an illustrative example. I argue that uncertain promises can meet the first two criteria, but that whether they can meet the second two depends on the source of the promisor's uncertainty. External uncertainty stemming from outside factors is unproblematic, but internal uncertainty stemming from the promisor's doubts about her own strength leads to promises that are in bad faith or unrealistic. I conclude that campaign promises are often subject to internal uncertainty and are therefore morally impermissible to make, all else being equal.
What should a person do when, through no fault of her own, she ends up believing a false moral theory? Some suggest that she should act against what the false theory recommends; others argue that she should follow her rationally held moral beliefs. While the former view better accords with intuitions about cases, the latter one seems to enjoy a critical advantage: It seems better able to render moral requirements ‘followable’ or ‘action-guiding.’ But this tempting thought proves difficult to justify. Indeed, whether it can be justified turns out to depend importantly on the rational status of epistemic akrasia. Furthermore, it can be argued, from premises all parties to the moral ignorance debate should accept, that rational epistemic akrasia is possible. If the argument proves successful, it follows that a person should sometimes act against her rationally held moral convictions.
In this paper I present an argument in favour of a parental duty to use preimplantation genetic diagnosis (PGD). I argue that if embryos created in vitro were able to decide for themselves in a rational manner, they would sometimes choose PGD as a method of selection. Couples, therefore, should respect their hypothetical choices on a principle similar to that of patient autonomy. My thesis shows that no matter which moral doctrine couples subscribe to, they ought to conduct the PGD procedure in situations when it is impossible to implant all of the created embryos and there is a significant risk of giving birth to a child with a serious condition.
Moral reasoning is as fallible as reasoning in any other cognitive domain, but we often behave as if it were not. I argue for a form of epistemically based moral humility, in which we downgrade our moral beliefs in the face of moral disagreement. My argument combines work in metaethics and moral intuitionism with recent developments in epistemology. I argue against any demands for deep self-sufficiency in moral reasoning. Instead, I argue that we need to take into account significant socially sourced information, especially as a check for failures in our own moral intuitions and reasoning. First, I argue for an epistemically plausible version of moral intuitionism, based on recent work in epistemic entitlement and epistemic warrant. Second, I argue that getting clear on the epistemic basis shows the defeasibility of moral judgment. Third, I argue that the existence of moral disagreement is a reason to reduce our certainty in moral judgment. Fourth, I argue that this effect is not a violation of norms of autonomy for moral judgment.
Non-Consequentialist moral theories posit the existence of moral constraints: prohibitions on performing particular kinds of wrongful acts, regardless of the good those acts could produce. Many believe that such theories cannot give satisfactory verdicts about what we morally ought to do when there is some probability that we will violate a moral constraint. In this article, I defend Non-Consequentialist theories from this critique. Using a general choice-theoretic framework, I identify various types of Non-Consequentialism that have otherwise been conflated in the debate. I then prove a number of formal possibility and impossibility results establishing which types of Non-Consequentialism can -- and which cannot -- give us adequate guidance through a risky world.
The topic of this thesis is axiological uncertainty – the question of how you should evaluate your options if you are uncertain about which axiology is true. As an answer, I defend Expected Value Maximisation (EVM), the view that one option is better than another if and only if it has the greater expected value across axiologies. More precisely, I explore the axiomatic foundations of this view. I employ results from state-dependent utility theory, extend them in various ways and interpret them accordingly, and thus provide axiomatisations of EVM as a theory of axiological uncertainty.
This paper offers a general model of substantive moral principles as a kind of hedged moral principle that can (but doesn't have to) tolerate exceptions. I argue that the kind of principles I defend provide an account of what would make an exception to them permissible. I also argue that these principles are nonetheless robustly explanatory with respect to a variety of moral facts; that they make sense of error, uncertainty, and disagreement concerning moral principles and their implications; and that one can grasp these principles without having to grasp any particular list of their permissibly exceptional instances. I conclude by pointing out various advantages that this model of principles has over several of its rivals. The bottom line is that we should find nothing peculiarly odd or problematic about the idea of exception-tolerating and yet robustly explanatory moral principles.
This article gives two arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. First is an inductive argument: most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing, so ours probably is too. Second is a disjunctive argument: there are a large number of distinct ways in which our practices could turn out to be horribly wrong, so even if no particular hypothesized moral mistake strikes us as very likely, the disjunction of all such mistakes should receive significant credence. The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly.
In this paper, I argue that the fetishism objection to moral hedging fails. The objection rests on a reasons-responsiveness account of moral worth, according to which an action has moral worth only if the agent is responsive to moral reasons. However, by adopting a plausible theory of non-ideal moral reasons, one can endorse a reasons-responsiveness account of moral worth while maintaining that moral hedging is sometimes an appropriate response to moral uncertainty. Thus, the theory of moral worth upon which the fetishism objection relies does not, in fact, support that objection.
The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretic way to cure past sufferings exists. If we assume that there is no stable substrate of personal identity and thus a copy equals the original, then by creating many copies of the next observer-moment of a person in pain in which she stops suffering, we could create indexical uncertainty about her future location and thus effectively steal her consciousness from her initial location and immediately relieve her sufferings. However, to accomplish this for people who have already died, we need to perform this operation for all possible people, thus requiring enormous amounts of computation. Such computation could be performed by a future benevolent AI of Galactic scale. Many such AIs could cooperate acausally by distributing parts of the work between them via quantum randomness. To ensure their success, they need to outnumber all possible evil AIs by orders of magnitude, and thus they need to convert most of the available matter into computronium in all universes where they exist and cooperate acausally across the whole multiverse. Another option for curing past suffering is the use of wormhole time-travel to send a nanobot into the past which will, after a period of secret replication, collect data about people and secretly upload them when their suffering becomes unbearable.
In the growing literature on decision-making under moral uncertainty, a number of skeptics have argued that there is an insuperable barrier to rational "hedging" for the risk of moral error, namely the apparent incomparability of moral reasons given by rival theories like Kantianism and utilitarianism. Various general theories of intertheoretic value comparison have been proposed to meet this objection, but each suffers from apparently fatal flaws. In this paper, I propose a more modest approach that aims to identify classes of moral theories that share common principles strong enough to establish bases for intertheoretic comparison. I show that, contra the claims of skeptics, there are often rationally perspicuous grounds for precise, quantitative value comparisons within such classes. In light of this fact, I argue, the existence of some apparent incomparabilities between widely divergent moral theories cannot serve as a general argument against hedging for one's moral uncertainties.
Robert Adams argues that often our moral commitment outstrips what we are epistemically entitled to believe; in these cases, the virtuous agent's doxastic states are instances of “moral faith”. I argue against Adams’ views on the need for moral faith; at least in some cases, our moral “intuitions” provide us with certain moral knowledge. The appearance that there can be no certainty here is the result of dubious views about second-order or indirect doubts. Nonetheless, discussing the phenomena that lead Adams to postulate moral faith brings to light the nature of the epistemic warrant underlying various kinds of moral commitments.
The paper discusses the notion of reasoning with comparative moral judgements (i.e. judgements of the form “act a is morally superior to act b”) from the point of view of several meta-ethical positions. Using a simple formal result, it is argued that only a version of moral cognitivism that is committed to the claim that moral beliefs come in degrees can give a normatively plausible account of such reasoning. Some implications of accepting such a version of moral cognitivism are discussed.
Some, but not all, of the mistakes a person makes when acting in apparently necessary self-defense are reasonable: we take them not to violate the rights of the apparent aggressor. I argue that this is explained by duties grounded in agents' entitlements to a fair distribution of the risk of suffering unjust harm. I suggest that the content of these duties is filled in by a social signaling norm, and offer some moral constraints on the form such a norm can take.
People engage in pure moral inquiry whenever they inquire into the moral features of some act, agent, or state of affairs without inquiring into the non-moral features of that act, agent, or state of affairs. This chapter argues that ordinary people act rationally when they engage in pure moral inquiry, and so any adequate view in metaethics ought to be able to explain this fact. The Puzzle of Pure Moral Motivation is to provide such an explanation. This chapter argues that each of the standard views in metaethics has trouble providing such an explanation. Discussion of why reveals that a metaethical view can provide such an explanation only if it meets two constraints: it allows ordinary moral inquirers to know the essences of moral properties, and the essence of each moral property makes it rational to care for its own sake whether that property is instantiated.
How should you decide what to do when you're uncertain about basic normative principles (e.g., Kantianism vs. utilitarianism)? A natural suggestion is to follow some "second-order" norm: e.g., "comply with the first-order norm you regard as most probable" or "maximize expected choiceworthiness". But what if you're uncertain about second-order norms too -- must you then invoke some third-order norm? If so, it seems that any norm-guided response to normative uncertainty is doomed to a vicious regress. In this paper, I aim to rescue second-order norms from this threat of regress. I first elaborate and defend the suggestion some philosophers have entertained that the regress problem forces us to accept normative externalism, the view that at least one norm is incumbent on agents regardless of their beliefs or evidence concerning that norm. But, I then argue, we need not accept externalism about first-order (e.g., moral) norms, thus closing off any question of what an agent should do in light of her normative beliefs. Rather, it is more plausible to ascribe external force to a single, second-order rational norm: the enkratic principle, correctly formulated. This modest form of externalism, I argue, is both intrinsically well-motivated and sufficient to head off the threat of regress.
I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple: there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort of motivation, what Michael Smith calls “moral fetishism”.
Divine command theories come in several different forms, but at their core all of these theories claim that certain moral statuses exist in virtue of the fact that God has commanded them to exist. Several authors argue that this core version of the DCT is vulnerable to an epistemological objection. According to this objection, DCT is deficient because certain groups of moral agents lack epistemic access to God’s commands. But there is confusion as to the precise nature and significance of this objection, and there are critiques of its key premises. In this article, I try to clear up this confusion and address these critiques. I do so in three ways. First, I offer a simplified general version of the objection. Second, I address the leading criticisms of the premises of this objection, focusing in particular on the role of moral risk/uncertainty in our understanding of God’s commands. And third, I outline four possible interpretations of the argument, each with a differing degree of significance for the proponent of the DCT.
Decision-making under normative uncertainty requires an agent to aggregate the assessments of options given by rival normative theories into a single assessment that tells her what to do in light of her uncertainty. But what if the assessments of rival theories differ not just in their content but in their structure -- e.g., some are merely ordinal while others are cardinal? This paper describes and evaluates three general approaches to this "problem of structural diversity": structural enrichment, structural depletion, and multi-stage aggregation. All three approaches have notable drawbacks, but I tentatively defend multi-stage aggregation as the least bad of the three.
In this text, I discuss the part of an online discussion, held in November 2012 on the website of the Polish Bioethics Society (Polskie Towarzystwo Bioetyczne), that concerned uncertainty about the moral status of human embryos. In the course of the PTB discussion of the Position of the Bioethics Committee at the Presidium of the Polish Academy of Sciences (PAN) on preimplantation genetic diagnosis (PGD), the following argument appeared: since the dispute over the moral status of the embryo is unresolvable, we should come out against the moral permissibility of performing PGD on embryos, and also against the legal permissibility of this kind of diagnostics. I discuss the theses of the Position and of the dissenting opinions that provoked this part of the discussion, and then concentrate on the following problems: (I) the admissibility of using the tools of decision theory in ethical debates; (II) the decision-making procedure of advisory bodies in situations of moral uncertainty; (III) the relevance of the findings of the biological sciences in determining the moral status of the embryo; (IV) the possibility of compromise in a situation of value pluralism; (V) the relevance of the moral status of embryos to the assessment of the moral and legal permissibility of PGD.
Cases of reasonable, mistaken belief figure prominently in discussions of the knowledge norm of assertion and practical reason as putative counterexamples to these norms. These cases are supposed to show that the knowledge norm is too demanding and that some weaker norm ought to be put in its place. These cases don't show what they're intended to. When you assert something false or treat some falsehood as if it's a reason for action, you might deserve an excuse. You often don't deserve even that.
I will argue that physicians have an ethical obligation to justify their conscientious objection, and that on the most reliable interpretation of the Polish legal framework, conscientious objection is permissible only when the justification shows the genuineness of the judgment of conscience, that it is not based on false beliefs, and that it arises from a moral norm of high rank. I will demonstrate that the dogma accepted in the Polish doctrine, that the reasons that lie behind conscientious objection in medicine cannot be evaluated or controlled by anyone, is based either on a mistaken interpretation of the Constitution or on an unreliable concept of conscience. I will refer to the legal regulations concerning military refusals, which require objectors to reveal and justify their views. Finally, I will demonstrate why conscientious objection under uncertainty does not deserve acceptance, because it is based on a specific version of the precautionary principle.
Automated decision making for sentencing is the use of a software algorithm to analyse a convicted offender’s case and deliver a sentence. This chapter reviews the moral arguments for and against employing automated decision making for sentencing and finds that its use is in principle morally permissible. Specifically, it argues that well-designed automated decision making for sentencing will better approximate the just sentence than human sentencers. Moreover, it dismisses common concerns about transparency, privacy and bias as unpersuasive or inapplicable. The chapter also notes that moral disagreement about theories of just sentencing is plausibly resolved by applying the principle of maximising expected moral choiceworthiness, and that automated decision making is better suited to the resulting ensemble model. Finally, the chapter considers the challenge posed by penal populism. The dispiriting conclusion is that although it is in theory morally desirable to use automated decision-making for criminal sentencing, it may well be the case that we ought not to try.
According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the evidence calls for. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In the first essay I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid without compromising the initial evidentialist motivation. The second variation replaces descriptive claims with moral claims as the objects of credence. I consider three standard arguments for probabilism with respect to descriptive uncertainty—representation theorem arguments, Dutch book arguments, and accuracy arguments—in order to examine whether such arguments can also be used to establish probabilism with respect to moral uncertainty. In the second essay, I argue that by and large they can, with some caveats. First, I don’t examine whether these arguments can be given sound non-cognitivist readings, and any conclusions therefore only hold conditional on cognitivism. Second, decision-theoretic representation theorems are found to be less convincing in the moral case, because there they implausibly commit us to thinking that intertheoretic comparisons of value are always possible. Third and finally, certain considerations may lead one to think that imprecise probabilism provides a more plausible model of moral epistemology. The third variation considers whether, in addition to conditionalization, agents may also change their minds by becoming aware of propositions they had not previously entertained, and therefore not previously assigned any probability. More specifically, I argue that if we wish to make room for reflective equilibrium in a probabilistic moral epistemology, we must allow for awareness growth. In the third essay, I sketch the outline of such a Bayesian account of reflective equilibrium. Given that this account gives a central place to awareness growth, and that the rationality constraints on belief change by awareness growth are much weaker than those on belief change by conditionalization, it follows that the rationality constraints on the credences of agents who are seeking reflective equilibrium are correspondingly weaker.
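For orientation, the conditionalization rule that the thesis takes as part of the traditional Bayesian picture can be stated in one line; this is the textbook rule, not anything specific to the thesis itself. On learning evidence E with P(E) > 0, the agent's new credence in any hypothesis H is

\[ P_{\mathrm{new}}(H) = P(H \mid E) = \frac{P(H \wedge E)}{P(E)}. \]

The imprecise Bayesianism of the first essay replaces the single function P with a set of probability functions, each standardly updated by this same rule, while the awareness growth discussed in the third essay concerns propositions to which no prior value P(H) was assigned at all, so the rule above cannot govern how they are taken up.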
This book examines the moral luck paradox, relating it to Kantian, consequentialist and virtue-based approaches to ethics. It also applies the paradox to areas in medical ethics, including allocation of scarce medical resources, informed consent to treatment, withholding life-sustaining treatment, psychiatry, reproductive ethics, genetic testing and medical research. If risk and luck are taken seriously, it might seem to follow that we cannot develop any definite moral standards, that we are doomed to moral relativism. However, Dickenson offers strong counter-arguments to this view that enable us to think in terms of universal standards.
This article analyses the moral status of racial profiling from a consequentialist perspective and argues that, contrary to what proponents of racial profiling might assume, there is a prima facie case against racial profiling on consequentialist grounds. To do so, it establishes general definitions of police practices and profiling, sketches out the costs and benefits involved in racial profiling in particular, and presents three challenges. The foundation challenge suggests that the shifting of burdens onto marginalized minorities may, even when profiling itself is justified, serve to prolong unjustified police practices. The valuation challenge argues that although both costs and benefits are difficult to establish, the benefits of racial profiling are afflicted with greater uncertainty than the costs, and must be comparatively discounted. Finally, the application challenge argues that using racial profiling in practice will be complicated by both cognitive and psychological biases, which together reduce the effectiveness of profiling while still incurring its costs. Jointly, it is concluded, these challenges establish a prima facie case against racial profiling, so that the real challenge consists in helping officers practice the art of the police and not see that which it is useless that they should see.
The aim of this investigation is to answer the question of why it is prima facie morally wrong to cause or contribute to the extinction of species. The first potential answer investigated in the book is that other species are instrumentally valuable for human beings. The results of this part of the investigation are that many species are instrumentally valuable for human beings but that not all species are equally valuable in all cases. The instrumental values of different species also have to compete with other human values. Sometimes these other values probably outweigh the value of the continued existence of the species. In general the degree of uncertainty is very high, and the precautionary principle is recommended to deal with these uncertainties. We also found that we have a duty to consider the interests of future generations of human beings and that these duties, in general, speak in favour of preservation. Anthropocentric instrumentalism therefore provides us with rather strong reasons to consider many cases of human-caused extinction as prima facie morally wrong. Even so, anthropocentric instrumentalism does not fully account for the moral intuition we set out to investigate. The next potential answer that is investigated in the book is that species have a moral standing in their own right. The result of this part of the investigation is that this idea is highly unlikely, in particular because species cannot have any interests to consider. Another potential answer is that species have intrinsic value in some other meaning that does not imply moral standing. We concluded that it is possible for a species to be subjectively valued as an end and that many species have properties that make them highly suitable for being valued as ends by human beings. Finally, we found that our contributions to the extinction of species in most cases frustrate the interests of many non-human sentient beings. This is true if the species in question is made up of sentient individuals, and it is also true when the species in question is made up of non-sentient individuals that have instrumental value for sentient individuals of other species. There are exceptions to this rule, but all in all it seems that the inclusion of non-human sentient individuals together with us humans as moral objects, in most cases, tips the scale drastically in favour of preservation. The main result of the investigation is that there is not one but several explanations of why it is prima facie morally wrong to contribute to the extinction of species – and all of them are about duties to respect the interests of individual sentient animals, including human beings.