Contents: 85 found (showing 1 — 50)
Material to categorize
  1. Probability, Normalcy, and the Right against Risk Imposition.Martin Smith - 2024 - Journal of Ethics and Social Philosophy 27 (3).
    Many philosophers accept that, as well as having a right that others not harm us, we also have a right that others not subject us to a risk of harm. And yet, when we attempt to spell out precisely what this ‘right against risk imposition’ involves, we encounter a series of notorious puzzles. Existing attempts to deal with these puzzles have tended to focus on the nature of rights – but I propose an approach that focusses instead on the nature (...)
  2. Suspension of judgment, non-additivity, and additivity of possibilities.Aldo Filomeno - forthcoming - Acta Analytica:1-22.
    In situations where we ignore everything but the space of possibilities, we ought to suspend judgment—that is, remain agnostic—about which of these possibilities is the case. This means that we cannot sum our degrees of belief in different possibilities, something that has been formalized as an axiom of non-additivity. Consistent with this way of representing our ignorance, I defend a doxastic norm that recommends that we should nevertheless follow a certain additivity of possibilities: even if we cannot sum degrees of (...)
  3. How a pure risk of harm can itself be a harm: A reply to Rowe.H. Orri Stefánsson - 2024 - Analysis 84 (1):112-116.
    Rowe has recently argued that pure risk of harm cannot itself be a harm. I respond to Rowe and argue that given an appropriate understanding of objective probabilities, pure objective risk of harm can itself be a harm.
  4. (1 other version)Climate Change and Decision Theory.Andrea S. Asker & H. Orri Stefánsson - 2023 - In Gianfranco Pellegrino & Marcello Di Paola (eds.), Handbook of Philosophy of Climate Change. Springer Nature. pp. 267-286.
    Many people are worried about the harmful effects of climate change but nevertheless enjoy some activities that contribute to the emission of greenhouse gas (driving, flying, eating meat, etc.), the main cause of climate change. How should such people make choices between engaging in and refraining from enjoyable greenhouse-gas-emitting activities? In this chapter, we look at the answer provided by decision theory. Some scholars think that the right answer is given by interactive decision theory, or game theory; and moreover think (...)
  5. In defence of Pigou-Dalton for chances.H. Orri Stefánsson - 2023 - Utilitas 35 (4):292-311.
    I defend a weak version of the Pigou-Dalton principle for chances. The principle says that it is better to increase the survival chance of a person who is more likely to die rather than a person who is less likely to die, assuming that the two people do not differ in any other morally relevant respect. The principle justifies plausible moral judgements that standard ex post views, such as prioritarianism and rank-dependent egalitarianism, cannot accommodate. However, the principle can be justified (...)
    2 citations
  6. An objection to the modal account of risk.Martin Smith - 2023 - Synthese 201 (5):1-9.
    In a recent paper in this journal, Duncan Pritchard responds to an objection to the modal account of risk pressed by Ebert, Smith and Durbach (2020). In this paper, I expand upon the objection and argue that it still stands. I go on to consider a more general question raised by this exchange – whether risk is ‘objective’, or whether it is something that varies from one perspective to another.
    1 citation
  7. (1 other version)Climate Change and Decision Theory.Andrea S. Asker & H. Orri Stefánsson - 2023 - In Gianfranco Pellegrino & Marcello Di Paola (eds.), Handbook of the Philosophy of Climate Change. Springer.
    Many people are worried about the harmful effects of climate change but nevertheless enjoy some activities that contribute to the emission of greenhouse gas (driving, flying, eating meat, etc.), the main cause of climate change. How should such people make choices between engaging in and refraining from enjoyable greenhouse-gas-emitting activities? In this chapter we look at the answer provided by decision theory. Some scholars think that the right answer is given by interactive decision theory, or game theory; and moreover think (...)
  8. Ignore risk; Maximize expected moral value.Michael Zhao - 2021 - Noûs 57 (1):144-161.
    Many philosophers assume that, when making moral decisions under uncertainty, we should choose the option that has the greatest expected moral value, regardless of how risky it is. But their arguments for maximizing expected moral value do not support it over rival, risk-averse approaches. In this paper, I present a novel argument for maximizing expected value: when we think about larger series of decisions that each decision is a part of, all but the most risk-averse agents would prefer that we (...)
    8 citations
  9. Risk, Overdiagnosis and Ethical Justifications.Wendy A. Rogers, Vikki A. Entwistle & Stacy M. Carter - 2019 - Health Care Analysis 27 (4):231-248.
    Many healthcare practices expose people to risks of harmful outcomes. However, the major theories of moral philosophy struggle to assess whether, when and why it is ethically justifiable to expose individuals to risks, as opposed to actually harming them. Sven Ove Hansson has proposed an approach to the ethical assessment of risk imposition that encourages attention to factors including questions of justice in the distribution of advantage and risk, people’s acceptance or otherwise of risks, and the scope individuals have to (...)
    3 citations
Existential Risk
  1. Meaningful Lives and Meaningful Futures.Michal Masny - forthcoming - Journal of Ethics and Social Philosophy.
    What moral reasons, if any, do we have to prevent the extinction of humanity? In “Unfinished Business,” Jonathan Knutzen argues that certain further developments in culture would make our history more ‘collectively meaningful,’ and that premature extinction would be bad because it would close off that possibility. Here, I critically examine this proposal. I argue that if collective meaningfulness is analogous to individual meaningfulness, then our meaning-based reasons to prevent the extinction of humanity are substantially different from the reasons discussed (...)
  2. “Emergent Abilities,” AI, and Biosecurity: Conceptual Ambiguity, Stability, and Policy.Alex John London - 2024 - Disincentivizing Bioweapons: Theory and Policy Approaches.
    Recent claims that artificial intelligence (AI) systems demonstrate “emergent abilities” have fueled excitement but also fear grounded in the prospect that such systems may enable a wider range of parties to make unprecedented advances in areas that include the development of chemical or biological weapons. Ambiguity surrounding the term “emergent abilities” has added avoidable uncertainty to a topic that has the potential to destabilize the strategic landscape, including the perception of key parties about the viability of nonproliferation efforts. To avert (...)
  3. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk.Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  4. Why prevent human extinction?James Fanciullo - 2024 - Philosophy and Phenomenological Research 109 (2):650-662.
    Many of us think human extinction would be a very bad thing, and that we have moral reasons to prevent it. But there is disagreement over what would make extinction so bad, and thus over what grounds these moral reasons. Recently, several theorists have argued that our reasons to prevent extinction stem not just from the value of the welfare of future lives, but also from certain additional values relating to the existence of humanity itself (for example, humanity’s “final” value, (...)
  5. Deep Uncertainty and Incommensurability: General Cautions about Precaution.Rush T. Stewart - forthcoming - Philosophy of Science.
    The precautionary principle is invoked in a number of important personal and policy decision contexts. Peterson shows that certain ways of making the principle precise are inconsistent with other criteria of decision-making. Some object that the results do not apply to cases of deep uncertainty or value incommensurability which are alleged to be in the principle’s wheelhouse. First, I show that Peterson’s impossibility results can be generalized considerably to cover cases of both deep uncertainty and incommensurability. Second, I contrast an (...)
  6. Mistakes in the moral mathematics of existential risk.David Thorstad - 2024 - Ethics 135 (1):122-150.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (...)
  7. Rethinking the Redlines Against AI Existential Risks.Yi Zeng, Xin Guan, Enmeng Lu & Jinyu Fan - manuscript
    The ongoing evolution of advanced AI systems will have profound, enduring, and significant impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially causing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear redlines to prevent AI-induced existential risks by constraining and regulating advanced AI and their related AI actors. This paper explores different concepts of AI existential risk, connects (...)
  8. Nuclear Fine-Tuning and the Illusion of Teleology.Ember Reed - 2022 - Sound Ideas.
    Recent existential-risk thinkers have noted that the analysis of the fine-tuning argument for God’s existence, and the analysis of certain forms of existential risk, employ similar types of reasoning. This paper argues that insofar as the “many worlds objection” undermines the inference to God’s existence from universal fine-tuning, a similar many worlds objection undermines the inference that the historic risk of global nuclear catastrophe has been low from the fact that no such catastrophe has occurred in our world. A (...)
  9. Existential Risk and Equal Political Liberty.J. Joseph Porter & Adam F. Gibbons - forthcoming - Asian Journal of Philosophy 3.
    Rawls famously argues that the parties in the original position would agree upon the two principles of justice. Among other things, these principles guarantee equal political liberty—that is, democracy—as a requirement of justice. We argue on the contrary that the parties have reason to reject this requirement. As we show, by Rawls’ own lights, the parties would be greatly concerned to mitigate existential risk. But it is doubtful whether democracy always minimizes such risk. Indeed, no one currently knows which political (...)
    1 citation
  10. Should longtermists recommend hastening extinction rather than delaying it?Richard Pettigrew - 2024 - The Monist 107 (2):130-145.
    Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have greater expected goodness (...)
    2 citations
  11. Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being.S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2):157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral concern in (...)
  12. Risk, Non-Identity, and Extinction.Kacper Kowalczyk & Nikhil Venkatesh - 2024 - The Monist 107 (2):146–156.
    This paper examines a recent argument in favour of strong precautionary action—possibly including working to hasten human extinction—on the basis of a decision-theoretic view that accommodates the risk-attitudes of all affected while giving more weight to the more risk-averse attitudes. First, we dispute the need to take into account other people’s attitudes towards risk at all. Second, we argue that a version of the non-identity problem undermines the case for doing so in the context of future people. Lastly, we suggest (...)
    1 citation
  13. Extinction Risks from AI: Invisible to Science?Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)
  14. Hope in an Illiberal Age? [REVIEW]Mark R. Reiff - 2024 - Ethics, Policy and Environment 2024 (1):1-9.
    In this commentary on Darrel Moellendorf’s Mobilizing Hope: Climate Change & Global Poverty (Oxford: Oxford University Press, 2022), I discuss his use of the precautionary principle, whether his hope for climate-friendly ‘green growth’ is realistic given the tendency for inequality to accelerate as it gets higher, and what I call his assumption of a liberal baseline. That is, I worry that the audience to whom the book is addressed are those who already accept the environmental and economic values to which (...)
  15. High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation.David Thorstad - 2023 - Philosophy and Public Affairs 51 (4):373-412.
    Philosophy & Public Affairs, Volume 51, Issue 4, Page 373-412, Fall 2023.
    5 citations
  16. Existential risk pessimism and the time of perils.David Thorstad - manuscript
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human (...)
    2 citations
  17. Book Review "Thomas Moynihan: X-Risk: How Humanity Discovered its Own Extinction". [REVIEW]Kritika Maheshwari - 2023 - Intergenerational Justice Review 8 (2):61-62.
  18. Diving to extinction: Water birds at risk.Minh-Hoang Nguyen - 2023 - Sm3D Portal.
    Our Earth’s climate is changing. Any species living in the Earth’s ecosystem needs to adapt to the new living conditions in order to thrive; otherwise, extinction will be its outcome. In the race for adaptation, waterbirds (Aequorlitornithes), such as penguins, cormorants, and alcids, seem to be at a disadvantage.
    2 citations
  19. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    6 citations
  20. Protecting Future Generations by Enhancing Current Generations.Parker Crutchfield - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge.
    It is plausible that current generations owe something to future generations. One possibility is that we have a duty to not harm them. Another possibility is that we have a duty to protect them. In either case, however, to satisfy these duties and protect future generations from environmental or political degradation, we need to engage in widespread collective action. But, as we are, we have a limited ability to do so, in part because we lack the self-discipline necessary for successful collective (...)
  21. (1 other version)The epistemic challenge to longtermism.Christian Tarsney - 2023 - Synthese 201 (6):1-37.
    Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict — perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that (...)
    9 citations
  22. (1 other version)Economic inequality and the long-term future.Andreas T. Schmidt & Daan Juijn - 2023 - Politics, Philosophy and Economics (1):67-99.
    Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments around future generations (...)
  23. Longtermism and social risk-taking.H. Orri Stefánsson - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    A social planner who evaluates risky public policies in light of the other risks with which their society will be faced should judge favourably some such policies even though they would deem them too risky when considered in isolation. I suggest that a longtermist would—or at least should—evaluate risky policies in light of their prediction about future risks; hence, longtermism supports social risk-taking. I consider two formal versions of this argument, discuss the conditions needed for the argument to be valid, (...)
  24. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
    5 citations
  25. Nuclear war as a predictable surprise.Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done − yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected cost (...)
  26. Unfinished Business.Jonathan Knutzen - 2023 - Philosophers' Imprint 23 (1): 4, 1-15.
    According to an intriguing though somewhat enigmatic line of thought first proposed by Jonathan Bennett, if humanity went extinct any time soon this would be unfortunate because important business would be left unfinished. This line of thought remains largely unexplored. I offer an interpretation of the idea that captures its intuitive appeal, is consistent with plausible constraints, and makes it non-redundant to other views in the literature. The resulting view contrasts with a welfare-promotion perspective, according to which extinction would be (...)
    5 citations
  27. The Worst Case: Planetary Defense against a Doomsday Impactor.Joel Marks - 2022 - Space Policy 61.
    Current planetary defense policy prioritizes a probability assessment of risk of Earth impact by an asteroid or a comet in the planning of detection and mitigation strategies and in setting the levels of urgency and budgeting to operationalize them. The result has been a focus on asteroids of Tunguska size, which could destroy a city or a region, since this is the most likely sort of object we would need to defend against. However, a complete risk assessment would consider not (...)
  28. Human Extinction from a Thomist Perspective.Stefan Riedener - 2021 - In Stefan Riedener, Dominic Roser & Markus Huppenbauer (eds.), Effective Altruism and Religion: Synergies, Tensions, Dialogue. Baden-Baden, Germany: Nomos. pp. 187-210.
    “Existential risks” are risks that threaten the destruction of humanity’s long-term potential: risks of nuclear wars, pandemics, supervolcano eruptions, and so on. On standard utilitarianism, it seems, the reduction of such risks should be a key global priority today. Many effective altruists agree with this verdict. But how should the importance of these risks be assessed on a Christian moral theory? In this paper, I begin to answer this question – taking Thomas Aquinas as a reference, and the risks of (...)
    1 citation
  29. Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
    8 citations
  30. If now isn't the most influential time ever, when is? [REVIEW]Kritika Maheshwari - 2020 - The Philosopher 108:94-101.
  31. How does Artificial Intelligence Pose an Existential Risk?Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
    1 citation
  32. COVID-19 Pandemic as an Indicator of Existential Evolutionary Risk of Anthropocene (Anthropological Origin and Global Political Mechanisms).Valentin Cheshko & Nina Konnova - 2021 - In MOChashin O. Kristal (ed.), Bioethics: from theory to practice. pp. 29-44.
    The coronavirus pandemic, like its predecessors - AIDS, Ebola, etc., is evidence of the evolutionary instability of the socio-cultural and ecological niche created by mankind, as the main factor in the evolutionary success of our biological species and the civilization created by it. At least, this applies to the modern global civilization, which is called technogenic or technological, although it exists in several varieties. As we hope to show, the current crisis has less ontological as well as epistemological roots; its (...)
  33. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People.S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm: SIPRI. pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, avoiding (...)
    1 citation
  34. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    12 citations
  35. Space Colonization and Existential Risk.Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the first claim, my primary goal in this article is to challenge the second: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
    7 citations
  36. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk.David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to the complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility, which can function as a (...)
  37. Existential risks: New Zealand needs a method to agree on a value framework and how to quantify future lives at risk.Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a case for (...)
  38. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup option, (...)
  39. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by (...)
  40. Human Extinction, Narrative Ending, and Meaning of Life.Brooke Alan Trisel - 2016 - Journal of Philosophy of Life 6 (1):1-22.
    Some people think that the inevitability of human extinction renders life meaningless. Joshua Seachris has argued that naturalism can be conceptualized as a meta-narrative and that it narrates across important questions of human life, including what is the meaning of life and how life will end. How a narrative ends is important, Seachris argues. In the absence of God, and with knowledge that human extinction is a certainty, is there any way that humanity could be meaningful and have a good (...)
  41. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)