This book examines the moral luck paradox, relating it to Kantian, consequentialist and virtue-based approaches to ethics. It also applies the paradox to areas in medical ethics, including allocation of scarce medical resources, informed consent to treatment, withholding life-sustaining treatment, psychiatry, reproductive ethics, genetic testing and medical research. If risk and luck are taken seriously, it might seem to follow that we cannot develop any definite moral standards, that we are doomed to moral relativism. However, Dickenson offers strong counter-arguments to this view that enable us to think in terms of universal standards.
Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this paper, I argue that non-epistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity problem, where greater inductive risk demands a higher level of transparency regarding the inferences the model makes.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342 - The path to more general artificial intelligence, Ted Goertzel, pages 343-354 - Limitations and risks of machine ethics, Miles Brundage, pages 355-372 - Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389 - GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403 - Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416 - Bounding the impact of AGI, András Kornai, pages 417-438 - Ethics of brain emulations, Anders Sandberg, pages 439-457.
Almost all philosophers agree that a necessary condition on lying is that one says what one believes to be false. But philosophers haven’t considered the possibility that the true requirement on lying concerns, rather, one’s degree-of-belief. Liars impose a risk on their audience. The greater the liar’s confidence that what she asserts is false, the greater the risk she’ll think she’s imposing on the dupe, and, therefore, the greater her blameworthiness. From this, I arrive at a dilemma: either the belief requirement is wrong, or lying isn’t interesting. I suggest an alternative necessary condition for lying on a degree-of-belief framework.
The short abstract: Epistemic utility theory + permissivism about attitudes to epistemic risk => permissivism about rational credences. The longer abstract: I argue that epistemic rationality is permissive. More specifically, I argue for two claims. First, a radical version of interpersonal permissivism about rational credence: for many bodies of evidence, there is a wide range of credal states for which there is some individual who might rationally adopt that state in response to that evidence. Second, a slightly less radical version of intrapersonal permissivism about rational credence: for many bodies of evidence and for many individuals, there is a narrower but still wide range of credal states that the individual might rationally adopt in response to that evidence. My argument proceeds from two premises: (1) epistemic utility theory; and (2) permissivism about attitudes to epistemic risk. Epistemic utility theory says this: What it is epistemically rational for you to believe is what it would be rational for you to choose if you got to pick your beliefs and, when picking them, you cared only for their purely epistemic value. So, to say which credences it is epistemically rational for you to have, we must say how you should measure purely epistemic value, and which decision rule it is appropriate for you to use when you face the hypothetical choice between the possible credences you might adopt. Permissivism about attitudes to epistemic risk says that rationality permits many different attitudes to epistemic risk. These attitudes can show up in epistemic utility theory in two ways: in the way that you measure epistemic value; and in the decision rule that you use to pick your credences. I explore what happens if we encode our attitudes to epistemic risk in our epistemic decision rule. The result is the interpersonal and intrapersonal permissivism described above: different attitudes to epistemic risk lead to different choices of priors; given most bodies of evidence you might acquire, different priors lead to different posteriors; and even once we fix your attitudes to epistemic risk, if they are at all risk-inclined, there is a range of different priors and therefore different posteriors they permit. The essay ends by considering a range of objections to the sort of permissivism for which I’ve argued.
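The interaction between epistemic value and risk-sensitive decision rules can be illustrated numerically. The sketch below is not drawn from the paper itself: it assumes the Brier score as the measure of epistemic (in)accuracy and a Hurwicz-style rule (a weighted mix of worst-case and best-case inaccuracy) as one possible way of encoding an attitude to epistemic risk, and shows that different risk weights license different prior credences in a single proposition.

```python
# Minimal sketch (not the paper's own formalism): how a risk attitude encoded in
# an epistemic decision rule can permit different prior credences.
# Assumptions: Brier inaccuracy as epistemic disvalue; a Hurwicz-style rule
# mixing worst-case and best-case inaccuracy with weight `caution`.

def brier_inaccuracy(credence, truth_value):
    """Squared distance of the credence from the truth value (1 = true, 0 = false)."""
    return (truth_value - credence) ** 2

def hurwicz_score(credence, caution):
    """caution near 1: weight the worst case (risk-averse); near 0: the best case (risk-inclined)."""
    worst = max(brier_inaccuracy(credence, 1), brier_inaccuracy(credence, 0))
    best = min(brier_inaccuracy(credence, 1), brier_inaccuracy(credence, 0))
    return caution * worst + (1 - caution) * best

def rational_priors(caution, grid=101):
    """Credences on a grid that minimise the Hurwicz score for a given risk attitude."""
    candidates = [i / (grid - 1) for i in range(grid)]
    scores = {c: round(hurwicz_score(c, caution), 4) for c in candidates}
    minimum = min(scores.values())
    return [c for c, s in scores.items() if s == minimum]

print(rational_priors(0.8))  # risk-averse weight: only the middling credence 0.5
print(rational_priors(0.1))  # risk-inclined weight: the more opinionated pair 0.1 and 0.9
```

Under these assumptions, more risk-inclined weights rationalise more opinionated priors, which is the shape of the permissivist result the abstract describes.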
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. 1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence - 2 Steve Omohundro, Autonomous Technology and the Greater Human Good - 3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future - 4 Ted Goertzel, The Path to More General Artificial Intelligence - 5 Miles Brundage, Limitations and Risks of Machine Ethics - 6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents - 7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement - 8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence - 9 András Kornai, Bounding the Impact of AGI - 10 Anders Sandberg, Ethics and Impact of Brain Emulations - 11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff - 12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI.
The co-evolutionary concept of a three-modal stable evolutionary strategy of Homo sapiens is developed. The concept is based on the principle of evolutionary complementarity of anthropogenesis: the value of evolutionary risk and the evolutionary path of human evolution are defined simultaneously by descriptive (evolutionary efficiency) and creative-teleological (evolutionary correctness) parameters, which cannot be instrumentally reduced to one another. The resulting values of both parameters define the vectors of biological, social, cultural and techno-rationalistic human evolution through a two-gear mechanism: genetic and cultural co-evolution and techno-humanitarian balance. An explanatory model and methodology for evaluating the creative-teleological component of the evolutionary risk of the NBIC technological complex are proposed. An integral part of the model is evolutionary semantics (a time-varying semantic code governing the compliance of the biological, socio-cultural and techno-rationalist adaptive modules of the human stable evolutionary strategy).
The author argues for a theory of responsibility for outcomes of imposed risk, based on whether it was permissible to impose the risk. When one tries to apply this persuasive model of responsibility for outcomes of risk imposition to procreation, which is a risk imposing act, one finds that it doesn’t match one’s intuitions about responsibility for outcomes of procreative risk. This mismatch exposes a justificatory gap for procreativity, namely, that procreation cannot avail itself of the shared vulnerability to risks and their constraints—to the balance one is forced to strike between one’s interest in being free to impose risks on others and one’s interest in being safe from harm resulting from the risk imposed by others—which serves to justify risk imposition, generally. Whereas most risk impositions involve trade-offs of liberty and security among people who share the vulnerabilities associated with the taking, imposing, or being constrained from imposing risks, procreation involves the introduction of people into that position of vulnerability in the first place. Thus, when one procreates, one imposes risks in the absence of the shared vulnerability that usually serves as a justification for risk imposition. Procreative risks may not be wrongfully imposed, but they aren’t permissibly imposed in a manner fully comparable to other permissibly imposed risks. This makes procreation a unique form of risk imposition, with unique implications for its justification and for one’s responsibility for its outcomes. This insight can help explain several puzzling procreative asymmetries.
When an abortion is performed, someone dies. Are we killing an innocent human person? Widespread disagreement exists. However, it’s not necessary to establish personhood in order to establish the wrongness of abortion: a substantial chance of personhood is enough. We defend The Don’t Risk Homicide Argument: abortions are wrong after 10 weeks gestation because they substantially and unjustifiably risk homicide, the unjust killing of an innocent person. Why 10 weeks? Because the cumulative evidence establishes a substantial chance (a more than 1 in 5 chance) that preborn humans are persons around this stage of development. We submit evidence from our bad track record, widespread disagreement about personhood (after 10 weeks gestation), problems with theories of personhood, the similarity between preborn humans and newborn babies, gestational age miscalculations, and the common intuitive responses of women to their pregnancies and miscarriages. Our argument is cogent because it bypasses the stalemate over preborn personhood and rests on common ground rather than contentious metaphysics. It also strongly suggests that society must do more to protect preborn humans. We discuss its practical implications for fetal pain relief, social policy, and abortion law.
This article argues that Lara Buchak’s risk-weighted expected utility theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory’s claim to capturing a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
The Argument from Inductive Risk (AIR) is taken to show that values are inevitably involved in making judgements or forming beliefs. After reviewing this conclusion, I pose cases which are prima facie counterexamples: the unreflective application of conventions, use of black-boxed instruments, reliance on opaque algorithms, and unskilled observation reports. These cases are counterexamples to the AIR posed in ethical terms as a matter of personal values. Nevertheless, it need not be understood in those terms. The values which load a theory choice may be those of institutions or past actors. This means that the challenge of responsibly handling inductive risk is not merely an ethical issue, but is also social, political, and historical.
In addition to protecting agents’ autonomy, consent plays a crucial social role: it enables agents to secure partners in valuable interactions that would otherwise be prohibitively morally risky. To do this, consent must be observable: agents must be able to track the facts about whether they have received a consent-based permission. I argue that this morally justifies a consent-practice on which communicating that one consents is sufficient for consent, but also generates robust constraints on what sorts of behaviors can be taken as consent-communicating.
A natural view in distributive ethics is that everyone's interests matter, but the interests of the relatively worse off matter more than the interests of the relatively better off. I provide a new argument for this view. The argument takes as its starting point the proposal, due to Harsanyi and Rawls, that facts about distributive ethics are discerned from individual preferences in the "original position." I draw on recent work in decision theory, along with an intuitive principle about risk-taking, to derive the view.
Perhaps the topic of acceptable risk never had a sexier and more succinct introduction than the one Edward Norton, playing an automobile company executive, gave it in Fight Club: “Take the number of vehicles in the field (A), multiply it by the probable rate of failure (B), and multiply the result by the average out of court settlement (C). A*B*C=X. If X is less than the cost of the recall, we don’t do one.” Of course, this dystopic scene also gets to the heart of the issue in another way: acceptable risk deals with mathematical calculations about the value of life, injury, and emotional wreckage, making calculation a difficult matter ethically, politically, and economically. This entry will explore the history of this idea, focusing on its development alongside statistics into its wide importance today.
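The recall rule quoted above is simply an expected-cost comparison; a toy calculation, with invented figures used purely for illustration, makes the arithmetic explicit.

```python
# Toy illustration of the quoted recall rule; all figures are invented.
vehicles_in_field = 1_000_000      # A: number of vehicles in the field
probable_failure_rate = 0.0002     # B: probability that a vehicle fails
avg_settlement = 250_000           # C: average out-of-court settlement, in dollars
recall_cost = 80_000_000           # cost of issuing a recall, in dollars

expected_settlement_cost = vehicles_in_field * probable_failure_rate * avg_settlement  # X = A*B*C

# The rule in the quote: recall only if expected settlements exceed the recall cost.
do_recall = expected_settlement_cost >= recall_cost
print(expected_settlement_cost, do_recall)  # expected cost 50,000,000 < recall cost, so no recall
```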
The way that diseases such as high blood pressure (hypertension), high cholesterol, and diabetes are defined is closely tied to ideas about modifiable risk. In particular, the threshold for diagnosing each of these conditions is set at the level where future risk of disease can be reduced by lowering the relevant parameter (of blood pressure, low-density lipoprotein, or blood glucose, respectively). In this article, I make the case that these criteria, and those for diagnosing and treating other “risk-based diseases,” reflect an unfortunate trend towards reclassifying risk as disease. I closely examine stage 1 hypertension and high cholesterol and argue that many patients diagnosed with these “diseases” do not actually have a pathological condition. In addition, though, I argue that the fact that they are risk factors, rather than diseases, does not diminish the importance of treating them, since there is good evidence that such treatment can reduce morbidity and mortality. For both philosophical and ethical reasons, however, the conditions should not be labeled as pathological. The tendency to reclassify risk factors as diseases is an important trend to examine and critique.
I have claimed that risk-weighted expected utility maximizers are rational, and that their preferences cannot be captured by expected utility theory. Richard Pettigrew and Rachael Briggs have recently challenged these claims. Both authors argue that only EU-maximizers are rational. In addition, Pettigrew argues that the preferences of REU-maximizers can indeed be captured by EU theory, and Briggs argues that REU-maximizers lose a valuable tool for simplifying their decision problems. I hold that their arguments do not succeed and that my original claims still stand. However, their arguments do highlight some costs of REU theory.
A moderately risk averse person may turn down a 50/50 gamble that either results in her winning $200 or losing $100. Such behaviour seems rational if, for instance, the pain of losing $100 is felt more strongly than the joy of winning $200. The aim of this paper is to examine an influential argument that some have interpreted as showing that such moderate risk aversion is irrational. After presenting an axiomatic argument that I take to be the strongest case for the claim that moderate risk aversion is irrational, I show that it essentially depends on an assumption that those who think that risk aversion can be rational should be skeptical of. Hence, I conclude that risk aversion need not be irrational.
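To make the opening example concrete: under expected utility theory, whether the 50/50 win-$200/lose-$100 gamble is turned down depends on the curvature of the utility function and on background wealth. A quick sketch, assuming a logarithmic utility chosen only for illustration:

```python
import math

def expected_utility_of_gamble(wealth, win=200, loss=100, p_win=0.5):
    """EU of a 50/50 gamble, under an assumed logarithmic utility of total wealth."""
    return p_win * math.log(wealth + win) + (1 - p_win) * math.log(wealth - loss)

for wealth in (150, 1000):
    take = expected_utility_of_gamble(wealth) > math.log(wealth)
    print(wealth, "accept" if take else "decline")
# With log utility the gamble is declined at low wealth (150) but accepted at high wealth (1000),
# so one and the same concave utility function can make turning the gamble down look rational.
```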
Violence risk assessment tools are increasingly used within criminal justice and forensic psychiatry; however, there is little relevant, reliable and unbiased data regarding their predictive accuracy. We argue that such data are needed to (i) prevent excessive reliance on risk assessment scores, (ii) allow matching of different risk assessment tools to different contexts of application, (iii) protect against problematic forms of discrimination and stigmatisation, and (iv) ensure that contentious demographic variables are not prematurely removed from risk assessment tools.
Catastrophic risk raises questions that are not only of practical importance, but also of great philosophical interest, such as how to define catastrophe and what distinguishes catastrophic outcomes from non-catastrophic ones. Catastrophic risk also raises questions about how to rationally respond to such risks. How to rationally respond arguably partly depends on the severity of the uncertainty, for instance, whether quantitative probabilistic information is available, or whether only comparative likelihood information is available, or neither type of information. Finally, catastrophic risk raises important ethical questions about what to do when catastrophe avoidance conflicts with equity promotion.
I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple—there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort of motivation, what Michael Smith calls “moral fetishism”.
A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. Neither can probabilistic risk analysis. This paper will argue that the approach that is referred to as engineering safety could be applied to reducing the risk from black swan extinction events. It will also propose a conceptual sketch of how such a strategy may be implemented: isolated, self-sufficient, and continuously manned underground refuges. Some characteristics of such refuges are also described, in particular the psychosocial aspects. Furthermore, it is argued that this implementation of the engineering-safety strategy of safety barriers would be effective and plausible and could reduce the risk of an extinction event in a wide range of possible scenarios. Considering the staggering opportunity cost of an existential catastrophe, such strategies ought to be explored more vigorously.
Shaming behavior on social media has been a cause of concern in recent public discourse. Supporters of online shaming argue that it is an important tool in helping to make social media and online communities safer and more welcoming to traditionally marginalized groups. Objections to shaming often sound like high-minded calls for civility, but I argue that shaming behavior poses serious risks. Here I identify moral and political risks of online shaming. In particular, shaming threatens to undermine our commitment to the co-deliberative practices of morality. As a result, online shaming can undermine the very goals it is supposed to accomplish.
We investigate risk attitudes when the underlying domain of payoffs is finite and the payoffs are, in general, not numerical. In such cases, the traditional notions of absolute risk attitudes, which are designed for convex domains of numerical payoffs, are not applicable. We introduce comparative notions of weak and strong risk attitudes that remain applicable. We examine how they are characterized within the rank-dependent utility model, thus including expected utility as a special case. In particular, we characterize strong comparative risk aversion under rank-dependent utility. This is our main result. From this and other findings, we draw two novel conclusions. First, under expected utility, weak and strong comparative risk aversion are characterized by the same condition over finite domains. By contrast, such is not the case under non-expected utility. Second, under expected utility, weak (respectively: strong) comparative risk aversion is characterized by the same condition when the utility functions have finite range and when they have convex range (alternatively, when the payoffs are numerical and their domain is finite or convex, respectively). By contrast, such is not the case under non-expected utility. Thus, considering comparative risk aversion over finite domains leads to a better understanding of the divide between expected and non-expected utility and, more generally, of the structural properties of the main models of decision-making under risk.
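For readers unfamiliar with the rank-dependent utility (RDU) model mentioned here, the following sketch computes RDU for a lottery over a small finite set of non-numerical payoffs; the payoffs, utilities, and probability-weighting function are invented for illustration, and expected utility is recovered when the weighting function is the identity.

```python
# Illustrative rank-dependent utility (RDU) over a finite, non-numerical payoff domain.
# Payoffs, utilities, and the weighting function are invented for the example.

def rdu(lottery, utility, weight):
    """lottery: dict payoff -> probability; utility: dict payoff -> number;
    weight: probability-weighting function w with w(0)=0, w(1)=1, nondecreasing."""
    ranked = sorted(lottery, key=lambda x: utility[x], reverse=True)  # best to worst
    total, cumulative = 0.0, 0.0
    for payoff in ranked:
        # Decision weight: marginal weighted probability of doing at least this well.
        decision_weight = weight(cumulative + lottery[payoff]) - weight(cumulative)
        total += decision_weight * utility[payoff]
        cumulative += lottery[payoff]
    return total

lottery = {"full recovery": 0.5, "partial recovery": 0.3, "no recovery": 0.2}
utility = {"full recovery": 10, "partial recovery": 6, "no recovery": 0}

print(rdu(lottery, utility, lambda p: p))       # identity weights: plain expected utility (about 6.8)
print(rdu(lottery, utility, lambda p: p ** 2))  # convex weighting: pessimistic, lower value (4.84)
```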
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, before it starts self-improvement, during its takeoff, when it uses various instruments to escape its initial confinement, or after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or could feature flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
It is widely held that the influence of risk on rational decisions is not entirely explained by the shape of an agent’s utility curve. Buchak (Erkenntnis, 2013, Risk and rationality, Oxford University Press, Oxford, in press) presents an axiomatic decision theory, risk-weighted expected utility theory (REU), in which decision weights are the agent’s subjective probabilities modified by his risk-function r. REU is briefly described, and the global applicability of r is discussed. Rabin’s (Econometrica 68:1281–1292, 2000) calibration theorem strongly suggests that plausible levels of risk aversion cannot be fully explained by concave utility functions; this provides motivation for REU and other theories. But applied to the synchronic preferences of an individual agent, Rabin’s result is not as problematic as it may first appear. Theories that treat outcomes as gains and losses (e.g. prospect theory and cumulative prospect theory) account for risk sensitivity in a way not available to REU. Reference points that mark the difference between gains and losses are subject to framing, many instances of which cannot be regarded as rational. However, rational decision theory may recognize the difference between gains and losses, without endorsing all ways of fixing the point of reference. In any event, REU is a very interesting theory.
Uncertainty, insufficient information or information of poor quality, limited cognitive capacity and time, along with value conflicts and ethical considerations, are all aspects that make risk management and risk communication difficult. This paper provides a review of different risk concepts and describes how these influence risk management, communication and planning in relation to forest ecosystem services. Based on the review and results of empirical studies, we suggest that personal assessment of risk is decisive in the management of forest ecosystem services. The results are used together with a review of different principles of the distribution of risk to propose an approach to risk communication that is effective as well as ethically sound. Knowledge of heuristics and mutual information on both beliefs and desires are important in the proposed risk communication approach. Such knowledge provides an opportunity for relevant information exchange, so that gaps in personal knowledge maps can be filled in and effective risk communication can be promoted.
The epistemology of risk examines how risks bear on epistemic properties. A common framework for examining the epistemology of risk holds that strength of evidential support is best modelled as numerical probability given the available evidence. In this essay I develop and motivate a rival ‘relevant alternatives’ framework for theorising about the epistemology of risk. I describe three loci for thinking about the epistemology of risk. The first locus concerns consequences of relying on a belief for action, where those consequences are significant if the belief is false. The second locus concerns whether beliefs themselves—regardless of action—can be risky, costly, or harmful. The third locus concerns epistemic risks we confront as social epistemic agents. I aim to motivate the relevant alternatives framework as a fruitful approach to the epistemology of risk. I first articulate a ‘relevant alternatives’ model of the relationship between stakes, evidence, and action. I then employ the relevant alternatives framework to undermine the motivation for moral encroachment. Finally, I argue the relevant alternatives framework illuminates epistemic phenomena such as gaslighting, conspiracy theories, and crying wolf, and I draw on the framework to diagnose the undue skepticism endemic to rape accusations.
The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be joined as premises and the argument for the existential risk of AI turns out invalid. If the interpretation is incorrect and both premises use the same notion of intelligence, then at least one of the premises is false and the orthogonality thesis remains itself orthogonal to the argument to existential risk from AI. In either case, the standard argument for existential risk from AI is not sound.—Having said that, there remains a risk of instrumental AI to cause very significant damage if designed or used badly, though this is not due to superintelligence or a singularity.
In this paper we aim to demonstrate the enormous ethical complexity that is prevalent in child obesity cases. This complexity, we argue, favors a cautious approach. Against those perhaps inclined to blame neglectful parents, we argue that laying the blame for child obesity at the feet of parents is simplistic once the broader context is taken into account. We also show that parents not only enjoy important relational prerogatives worth defending, but that children, too, are beneficiaries of that relationship in ways difficult to match elsewhere. Finally, against the backdrop of growing public concern and pressure to intervene earlier in the life cycle, we examine the perhaps unintended stigmatizing effects that labeling and intervention can have and consider a number of risks and potential harms occasioned by state interventions in these cases.
If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically evaluating such proposals.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
Critical race theorists and standpoint epistemologists argue that agents who are members of dominant social groups are often in a state of ignorance about the extent of their social dominance, where this ignorance is explained by these agents' membership in a socially dominant group (e.g., Mills 2007). To illustrate this claim bluntly, it is argued: 1) that many white men do not know the extent of their social dominance, 2) that they remain ignorant as to the extent of their dominant social position even where this information is freely attainable, and 3) that this ignorance is due in part to the fact that they are white men. We argue that on Buchak's (2010, 2013) model of risk averse instrumental rationality, ignorance of one's privileges can be rational. This argument yields a new account of elite-group ignorance, why it may occur, and how it might be alleviated.
One type of argument to sceptical paradox proceeds by making a case that a certain kind of metaphysically “heavyweight” or “cornerstone” proposition is beyond all possible evidence and hence may not be known or justifiably believed. Crispin Wright has argued that we can concede that our acceptance of these propositions is evidentially risky and still remain rationally entitled to those of our ordinary knowledge claims that are seemingly threatened by that concession. A problem for Wright’s proposal is the so-called Leaching worry: if we are merely rationally entitled to accept the cornerstones without evidence, how can we achieve evidence-based knowledge of the multitude of quotidian propositions that we think we know, which require the cornerstones to be true? This paper presents a rigorous, novel explication of this worry within a Bayesian framework, and offers the Entitlement theorist two distinct responses.
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a Representation Theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
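The claim that an outcome's weight depends on its position in the gamble can be made concrete. The sketch below follows the commonly cited statement of risk-weighted expected utility (outcomes ordered from worst to best, with the risk function r applied to the probability of doing at least that well); the particular gamble, utilities, and risk function are invented for illustration.

```python
# Illustration of risk-weighted expected utility (REU) with an invented example.
# Outcomes are ordered from worst to best; r is applied to the probability of
# receiving an outcome at least that good. r(p) = p recovers expected utility.

def reu(outcomes, r):
    """outcomes: list of (utility, probability) pairs; r: risk function on [0, 1]."""
    ordered = sorted(outcomes)                  # worst to best by utility
    utilities = [u for u, _ in ordered]
    probs = [p for _, p in ordered]
    value = utilities[0]                        # the worst outcome's utility is guaranteed
    for j in range(1, len(ordered)):
        prob_at_least_this_good = sum(probs[j:])
        value += r(prob_at_least_this_good) * (utilities[j] - utilities[j - 1])
    return value

gamble = [(0, 0.5), (100, 0.5)]                 # 50/50 between utility 0 and utility 100
print(reu(gamble, lambda p: p))                 # identity risk function: 50.0 (expected utility)
print(reu(gamble, lambda p: p ** 2))            # convex r (risk-avoidant): 25.0
```

Under these assumptions, a convex risk function discounts the improvement that is only probable, which is how position in the gamble, and not just probability, shapes the decision weight.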
Trust is a core feature of the physician-patient relationship, and risk is central to trust. Patients take risks when they trust their providers to care for them effectively and appropriately. Not all patients take these risks: some medical relationships are marked by mistrust and suspicion. Empirical evidence suggests that some patients and families of color in the United States may be more likely to mistrust their providers and to be suspicious of specific medical practices and institutions. Given both historical and ongoing oppression and injustice in American medical institutions, such mistrust can be apt. Yet it can also frustrate patient care, leading to family and provider distress. In this paper, I propose one way that providers might work to reestablish trust by taking risks in signaling their own trustworthiness. This interpersonal step is not meant to replace efforts to remedy systemic injustice, but is an immediate measure for addressing mistrust in occurrent cases.
Risk communication has been generally categorized as a warning act, which is performed in order to prevent or minimize risk. On the other hand, risk analysis has also underscored the role played by information in reducing uncertainty about risk. In both approaches the safety aspects related to the protection of the right to health are in focus. However, there seem to be cases where a risk cannot possibly be avoided or uncertainty reduced; this is, for instance, the case for the declaration of side effects associated with pharmaceutical products, or when a decision about drug approval or withdrawal must be made on the available evidence. In these cases, risk communication seems to accomplish tasks other than preventing risk or reducing uncertainty. The present paper analyzes the legal instruments which have been developed in order to control and manage the risks related to drugs – such as the notion of “development risk” or “residual risk” – and relates them to different kinds of uncertainty. These are conceptualized as epistemic, ecological, metric, ethical, and stochastic, depending on their nature. By referring to this taxonomy, different functions of pharmaceutical risk communication are identified and connected with the legal tools of uncertainty management. The purpose is to distinguish the different functions of risk communication and make explicit their different legal nature and implications.
In this paper, I examine the decision-theoretic status of risk attitudes. I start by providing evidence showing that the risk attitude concepts do not play a major role in the axiomatic analysis of the classic models of decision-making under risk. This can be interpreted as reflecting the neutrality of these models between the possible risk attitudes. My central claim, however, is that such neutrality needs to be qualified and the axiomatic relevance of risk attitudes needs to be re-evaluated accordingly. Specifically, I highlight the importance of the conditional variation and the strengthening of risk attitudes, and I explain why they establish the axiomatic significance of the risk attitude concepts. I also present several questions for future research regarding the strengthening of risk attitudes.
The primary responsibility of the US Food and Drug Administration is to protect public health by ensuring the safety of the food supply. To that end, it sometimes conducts risk assessments of novel food products, such as genetically modified (GM) food. The FDA describes its regulatory review of GM food as a purely scientific activity, untainted by any normative considerations. This paper provides evidence that the regulatory agency is not justified in making that claim. It is argued that the FDA’s policy stance on GM food is shaped by neoliberal considerations. The agency’s review of a genetically engineered animal, the AquAdvantage salmon, is used as a case study to track the influence of neoliberalism on its regulatory review protocol. After that, an epistemic argument justifying public engagement in the risk assessment of new GM food is outlined. Because risk evaluations involve normative judgments, in a democracy layperson representatives of informal epistemic communities that could be affected by a new GM food should have the opportunity to decide the ethical, political or other normative questions that arise during the regulatory review of that entity.
Population axiology concerns how to evaluate populations in terms of their moral goodness, that is, how to order populations by the relations “is better than” and “is as good as”. The task has been to find an adequate theory about the moral value of states of affairs where the number of people, the quality of their lives, and their identities may vary. So far, this field has largely ignored issues about uncertainty, and the conditions that have been discussed mostly pertain to the ranking of risk-free outcomes. Most public policy choices, however, are decisions under uncertainty, including policy choices that affect the size of a population. Here, we shall address the question of how to rank population prospects—that is, alternatives that contain uncertainty as to which population they will bring about—by the relations “is better than” and “is as good as”. We start by illustrating how well-known population axiologies can be extended to population prospect axiologies. And we show that new problems arise when extending population axiologies to prospects. In particular, traditional population axiologies lead to prospect-versions of the problems that they are praised for avoiding in the risk-free settings. Finally, we identify an intuitive adequacy condition that, we contend, should be satisfied by any population prospect axiology, and show how, given this condition, the impossibility theorems in population axiology can be extended to (non-trivial) impossibility theorems for population prospect axiology.
Why can testimony alone be enough for findings of liability? Why can’t statistical evidence alone suffice? These questions underpin the “Proof Paradox” (Redmayne 2008, Enoch et al. 2012). Many epistemologists have attempted to explain this paradox from a purely epistemic perspective. I call it the “Epistemic Project”. In this paper, I take a step back from this recent trend. Stemming from considerations about the nature and role of standards of proof, I define three requirements that any successful account in line with the Epistemic Project should meet. I then consider three recent epistemic accounts on which the standard is met when the evidence rules out modal risk (Pritchard 2018), normic risk (Ebert et al. 2020), or relevant alternatives (Gardiner 2019, 2020). I argue that none of these accounts meets all the requirements. Finally, I offer reasons to be pessimistic about the prospects of having a successful epistemic explanation of the paradox. I suggest the discussion on the proof paradox would benefit from undergoing a ‘value-turn’.
This paper argues for a view described as risk-limited indulgent permissivism. This term may be new to the epistemology of disagreement literature, but the general position denoted has many examples. The paper argues for the need for an epistemology for domains of controversial views (morals, philosophy, politics, and religion), and for the advantages of endorsing a risk-limited indulgent permissivism across these domains. It takes a double-edged approach in articulating the advantages of an interpersonal belief permissivism that is nonetheless risk-limited: advantages are apparent both in comparison with impermissivist epistemologies of disagreement, which make little allowance for the many distinct features of these domains, and in comparison with defenses of permissivism which confuse it with dogmatism, potentially making a virtue of the latter. In an appropriately critical form of interpersonal belief permissivism, the close connections between epistemic risk-taking and our doxastic responsibilities become focal concerns.
An axiom of medical research ethics is that a protocol is moral only if it has a “favorable risk-benefit ratio”. This axiom is usually interpreted in the following way: a medical research protocol is moral only if it has a positive expected value -- that is, if it is likely to do more good (to both subjects and society) than harm. I argue that, thus interpreted, the axiom has two problems. First, it is unusable, because it requires us to know more about the potential outcomes of research than we ever could. Second, it is false, because it conflicts with the so-called “soft paternalist” principles of liberal democracy. In place of this flawed rule I propose a new way of making risk-benefit assessments, one that does comport with the principles of liberalism. I argue that a protocol is moral only if it would be entered into by competent subjects who are informed about the protocol. The new rule thus eschews all pseudo-utilitarian calculation about the protocol’s likely harms and benefits.
Many of us believe (1) Saving a life is more important than averting any number of headaches. But what about risky cases? Surely: (2) In a single choice, if the risk of death is low enough, and the number of headaches at stake high enough, one should avert the headaches rather than avert the risk of death. And yet, if we will face enough iterations of cases like that in (2), in the long run some of those small risks of serious harms will surely eventuate. And yet: (3) Isn't it still permissible for us to run these repeated risks, despite that knowledge? After all, if it were not, then many of the risky activities that we standardly think permissible would in fact be impermissible. Nobody has yet offered a principle that can accommodate all of 1-3. In this paper, I show that we can accommodate all of these judgements, by taking into account both ex ante and ex post perspectives. In doing so, I clear aside an important obstacle to a viable deontological decision theory.
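The "in the long run" claim behind (2) and (3) is simple probability arithmetic: even if each individual risk of death is tiny, the chance that at least one eventuates grows quickly with repetition. A quick illustration, with an invented per-choice risk:

```python
# Probability that at least one of n independent small risks eventuates:
# 1 - (1 - p)^n, with an invented per-choice risk p used purely for illustration.
p = 1e-6                                   # assumed risk of death in a single choice
for n in (1, 10_000, 1_000_000, 5_000_000):
    print(n, round(1 - (1 - p) ** n, 6))
# Across millions of iterations, the once-negligible per-choice risk becomes very
# likely to eventuate at least once, which is the tension the paper addresses.
```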
This essay answers two questions that continue to drive debate in moral and legal philosophy; namely, ‘Is a risk of harm a wrong?’ and ‘Is a risk of harm a harm?’. The essay’s central claim is that to risk harm can be both to wrong and to harm. This stands in contrast to the respective positions of Heidi Hurd and Stephen Perry, whose views represent prominent extremes in this debate about risks. The essay shows that there is at least one category of risks – intentional impositions of risk on unconsenting agents – which can be both wrongful and harmful. The wrongfulness of these risks can be established when, on the balance of reasons, one ought not to impose them. The harmfulness of these risks can be established when the risks are shown to set back legitimate interests. In those cases where risks constitute a denial of the moral status of agents, risks set back agents’ interest in dignity. In these ways, the essay shows that there are instances when a risk can constitute both a wrong and a harm.
Deontological theories face difficulties in accounting for situations involving risk; the most natural ways of extending deontological principles to such situations have unpalatable consequences. In extending ethical principles to decision under risk, theorists often assume the risk must be incorporated into the theory by means of a function from the product of probability assignments to certain values. Deontologists should reject this assumption; essentially different actions are available to the agent when she cannot know that a certain act is in her power, so we cannot simply understand her choice situation as a “risk-weighted” version of choice under certainty.
The co-evolutionary concept of a three-modal stable evolutionary strategy of Homo sapiens is developed. The concept is based on the principle of evolutionary complementarity of anthropogenesis: the value of evolutionary risk and the evolutionary path of human evolution are defined simultaneously by descriptive (evolutionary efficiency) and creative-teleological (evolutionary correctness) parameters, which cannot be instrumentally reduced to one another. The resulting values of both parameters define the trends of biological, social, cultural and techno-rationalistic human evolution through a two-gear mechanism: gene-cultural co-evolution and techno-humanitarian balance. The resultant of each can be estimated by the ratio of socio-psychological predispositions toward humanization and dehumanization in mentality. An explanatory model and methodology for evaluating the creative-teleological component of the evolutionary risk of the NBIC technological complex are proposed. An integral part of the model is evolutionary semantics (a time-varying semantic code governing the compliance of the biological, socio-cultural and techno-rationalist adaptive modules of the human stable evolutionary strategy).
Crispin Wright maintains that the architecture of perceptual justification is such that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this principle doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us to acquire justification for these beliefs. In this paper I show that Wright’s responses to Moruzzi are ineffective and that Moruzzi’s argument is validated by probabilistic reasoning. I also suggest that Wright cannot answer Moruzzi’s challenge without weakening the support available for his conception of the architecture of perceptual justification.
I argue that riskier killings of innocent people are, other things equal, objectively worse than less risky killings. I ground these views in considerations of disrespect and security. Killing someone more riskily shows greater disrespect for him by more grievously undervaluing his standing and interests, and more seriously undermines his security by exposing a disposition to harm him across all counterfactual scenarios in which the probability of killing an innocent person is that high or less. I argue that the salient probabilities are the agent’s sincere, sane, subjective probabilities, and that this thesis is relevant whether your risk-taking pertains to the probability of killing a person or to the probability that the person you kill is not liable to be killed. I then defend the view’s relevance to intentional killing; show how it differs from an account of blameworthiness; and explain its significance for all-things-considered justification and justification under uncertainty.