Contents (37 entries)
  1. Existential risk pessimism and the time of perils. David Thorstad - manuscript
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human (...)
    2 citations
  2. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk. David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function as a (...)
  3. Existential Risk and Equal Political Liberty. J. Joseph Porter & Adam F. Gibbons - forthcoming - Asian Journal of Philosophy 3.
    Rawls famously argues that the parties in the original position would agree upon the two principles of justice. Among other things, these principles guarantee equal political liberty—that is, democracy—as a requirement of justice. We argue on the contrary that the parties have reason to reject this requirement. As we show, by Rawls’ own lights, the parties would be greatly concerned to mitigate existential risk. But it is doubtful whether democracy always minimizes such risk. Indeed, no one currently knows which political (...)
    1 citation
  4. How Much Should Governments Pay to Prevent Catastrophes? Longtermism’s Limited Role. Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
    5 citations
  5. Longtermism and social risk-taking. H. Orri Stefánsson - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    A social planner who evaluates risky public policies in light of the other risks with which their society will be faced should judge favourably some such policies even though they would deem them too risky when considered in isolation. I suggest that a longtermist would—or at least should—evaluate risky policies in light of their prediction about future risks; hence, longtermism supports social risk-taking. I consider two formal versions of this argument, discuss the conditions needed for the argument to be valid, (...)
  6. Deep Uncertainty and Incommensurability: General Cautions about Precaution. Rush T. Stewart - forthcoming - Philosophy of Science.
    The precautionary principle is invoked in a number of important personal and policy decision contexts. Peterson shows that certain ways of making the principle precise are inconsistent with other criteria of decision-making. Some object that the results do not apply to cases of deep uncertainty or value incommensurability which are alleged to be in the principle’s wheelhouse. First, I show that Peterson’s impossibility results can be generalized considerably to cover cases of both deep uncertainty and incommensurability. Second, I contrast an (...)
  7. Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being. S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2):157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral concern in (...)
  8. Why prevent human extinction? James Fanciullo - 2024 - Philosophy and Phenomenological Research 109 (2):650-662.
    Many of us think human extinction would be a very bad thing, and that we have moral reasons to prevent it. But there is disagreement over what would make extinction so bad, and thus over what grounds these moral reasons. Recently, several theorists have argued that our reasons to prevent extinction stem not just from the value of the welfare of future lives, but also from certain additional values relating to the existence of humanity itself (for example, humanity’s “final” value, (...)
  9. Risk, Non-Identity, and Extinction. Kacper Kowalczyk & Nikhil Venkatesh - 2024 - The Monist 107 (2):146–156.
    This paper examines a recent argument in favour of strong precautionary action—possibly including working to hasten human extinction—on the basis of a decision-theoretic view that accommodates the risk-attitudes of all affected while giving more weight to the more risk-averse attitudes. First, we dispute the need to take into account other people’s attitudes towards risk at all. Second, we argue that a version of the non-identity problem undermines the case for doing so in the context of future people. Lastly, we suggest (...)
    1 citation
  10. “Emergent Abilities,” AI, and Biosecurity: Conceptual Ambiguity, Stability, and Policy. Alex John London - 2024 - Disincentivizing Bioweapons: Theory and Policy Approaches.
    Recent claims that artificial intelligence (AI) systems demonstrate “emergent abilities” have fueled excitement but also fear grounded in the prospect that such systems may enable a wider range of parties to make unprecedented advances in areas that include the development of chemical or biological weapons. Ambiguity surrounding the term “emergent abilities” has added avoidable uncertainty to a topic that has the potential to destabilize the strategic landscape, including the perception of key parties about the viability of nonproliferation efforts. To avert (...)
  11. Should longtermists recommend hastening extinction rather than delaying it? Richard Pettigrew - 2024 - The Monist 107 (2):130-145.
    Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have greater expected goodness (...)
    2 citations
  12. Hope in an Illiberal Age? [REVIEW] Mark R. Reiff - 2024 - Ethics, Policy and Environment 2024 (1):1-9.
    In this commentary on Darrel Moellendorf’s Mobilizing Hope: Climate Change & Global Poverty (Oxford: Oxford University Press, 2022), I discuss his use of the precautionary principle, whether his hope for climate-friendly ‘green growth’ is realistic given the tendency of inequality to accelerate as it rises, and what I call his assumption of a liberal baseline. That is, I worry that the audience to whom the book is addressed consists of those who already accept the environmental and economic values to which (...)
  13. Mistakes in the moral mathematics of existential risk. David Thorstad - 2024 - Ethics 135 (1):122-150.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (...)
  14. Protecting Future Generations by Enhancing Current Generations. Parker Crutchfield - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge.
    It is plausible that current generations owe something to future generations. One possibility is that we have a duty not to harm them. Another possibility is that we have a duty to protect them. In either case, to satisfy these duties and protect future generations from environmental or political degradation, we need to engage in widespread collective action. But, as we are, we have a limited ability to do so, in part because we lack the self-discipline necessary for successful collective (...)
  15. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    6 citations
  16. Unfinished Business. Jonathan Knutzen - 2023 - Philosophers’ Imprint 23 (1): 4, 1-15.
    According to an intriguing though somewhat enigmatic line of thought first proposed by Jonathan Bennett, if humanity went extinct any time soon this would be unfortunate because important business would be left unfinished. This line of thought remains largely unexplored. I offer an interpretation of the idea that captures its intuitive appeal, is consistent with plausible constraints, and makes it non-redundant to other views in the literature. The resulting view contrasts with a welfare-promotion perspective, according to which extinction would be (...)
    3 citations
  17. Book Review "Thomas Moynihan: X-Risk: How Humanity Discovered its Own Extinction". [REVIEW] Kritika Maheshwari - 2023 - Intergenerational Justice Review 8 (2):61-62.
  18. Diving to extinction: Water birds at risk. Minh-Hoang Nguyen - 2023 - Sm3D Portal.
    Our Earth’s climate is changing. Any species living in the Earth’s ecosystem needs to adapt to the new living conditions in order to thrive. Otherwise, extinction will be its outcome. In the race for adaptation, waterbirds (Aequorlitornithes), such as penguins, cormorants, and alcids, seem to be at a disadvantage.
    2 citations
  19. Economic inequality and the long-term future. Andreas T. Schmidt & Daan Juijn - 2023 - Politics, Philosophy and Economics (1):67-99.
    Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments around future generations (...)
  20. The epistemic challenge to longtermism. Christian Tarsney - 2023 - Synthese 201 (6):1-37.
    Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict — perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that (...)
    9 citations
  21. High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation. David Thorstad - 2023 - Philosophy and Public Affairs 51 (4):373-412.
    5 citations
  22. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
    1 citation
  23. The Worst Case: Planetary Defense against a Doomsday Impactor. Joel Marks - 2022 - Space Policy 61.
    Current planetary defense policy prioritizes a probability assessment of risk of Earth impact by an asteroid or a comet in the planning of detection and mitigation strategies and in setting the levels of urgency and budgeting to operationalize them. The result has been a focus on asteroids of Tunguska size, which could destroy a city or a region, since this is the most likely sort of object we would need to defend against. However, a complete risk assessment would consider not (...)
  24. Nuclear Fine-Tuning and the Illusion of Teleology. Ember Reed - 2022 - Sound Ideas.
    Recent existential-risk thinkers have noted that the analysis of the fine-tuning argument for God’s existence, and the analysis of certain forms of existential risk, employ similar types of reasoning. This paper argues that insofar as the “many worlds objection” undermines the inference to God’s existence from universal fine-tuning, a similar many worlds objection undermines the inference that the historic risk of global nuclear catastrophe has been low from the fact that no such catastrophe has occurred in our world. A (...)
  25. Nuclear war as a predictable surprise. Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done – yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected cost (...)
  26. COVID-19 Pandemic as an Indicator of Existential Evolutionary Risk of the Anthropocene (Anthropological Origin and Global Political Mechanisms). Valentin Cheshko & Nina Konnova - 2021 - In MOChashin O. Kristal (ed.), Bioethics: from theory to practice. pp. 29-44.
    The coronavirus pandemic, like its predecessors (AIDS, Ebola, etc.), is evidence of the evolutionary instability of the socio-cultural and ecological niche created by mankind, which is the main factor in the evolutionary success of our biological species and the civilization it has created. At least, this applies to the modern global civilization, which is called technogenic or technological, although it exists in several varieties. As we hope to show, the current crisis has not so much ontological as epistemological roots; its (...)
  27. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
    7 citations
  28. Human Extinction from a Thomist Perspective. Stefan Riedener - 2021 - In Stefan Riedener, Dominic Roser & Markus Huppenbauer (eds.), Effective Altruism and Religion: Synergies, Tensions, Dialogue. Baden-Baden, Germany: Nomos. pp. 187-210.
    “Existential risks” are risks that threaten the destruction of humanity’s long-term potential: risks of nuclear wars, pandemics, supervolcano eruptions, and so on. On standard utilitarianism, it seems, the reduction of such risks should be a key global priority today. Many effective altruists agree with this verdict. But how should the importance of these risks be assessed on a Christian moral theory? In this paper, I begin to answer this question – taking Thomas Aquinas as a reference, and the risks of (...)
    1 citation
  29. If now isn’t the most influential time ever, when is? [REVIEW] Kritika Maheshwari - 2020 - The Philosopher 108:94-101.
  30. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm: SIPRI. pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, avoiding (...)
    1 citation
  31. Space Colonization and Existential Risk. Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because (1) doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and (2) there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on (1), my primary goal in this article is to challenge (2): there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
    7 citations
  32. Existential risks: New Zealand needs a method to agree on a value framework and how to quantify future lives at risk. Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a case for (...)
  33. Global Catastrophic and Existential Risks Communication Scale. Alexey Turchin & David Denkenberger - 2018 - Futures.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by (...)
    1 citation
  34. Human Extinction, Narrative Ending, and Meaning of Life. Brooke Alan Trisel - 2016 - Journal of Philosophy of Life 6 (1):1-22.
    Some people think that the inevitability of human extinction renders life meaningless. Joshua Seachris has argued that naturalism can be conceptualized as a meta-narrative and that it narrates across important questions of human life, including what is the meaning of life and how life will end. How a narrative ends is important, Seachris argues. In the absence of God, and with knowledge that human extinction is a certainty, is there any way that humanity could be meaningful and have a good (...)
    6 citations
  35. Existential Risks: Exploring a Robust Risk Reduction Strategy. Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. (...)
    6 citations
  36. Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    12 citations
  37. Human extinction and the value of our efforts. Brooke Alan Trisel - 2004 - Philosophical Forum 35 (3):371–391.
    Some people feel distressed reflecting on human extinction. Some people even claim that our efforts and lives would be empty and pointless if humanity becomes extinct, even if this will not occur for millions of years. In this essay, I will attempt to demonstrate that this claim is false. The desire for long-lastingness or quasi-immortality is often unwittingly adopted as a standard for judging whether our efforts are significant. If we accomplish our goals and then later in life conclude that (...)
    15 citations