Related

Contents (47 found)
  1. Existential risk pessimism and the time of perils. David Thorstad - manuscript
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human (...)
  2. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk. Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  3. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk. David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function as a (...)
  4. Extension and Replacement. Michal Masny - forthcoming - Philosophical Studies.
    Many people believe that it is better to extend the length of a happy life than to create a new happy life, even if the total welfare is the same in both cases. Despite the popularity of this view, one would be hard-pressed to find a fully compelling justification for it in the literature. This paper develops a novel account of why and when extension is better than replacement that applies not just to persons but also to non-human animals and (...)
  5. Effective Altruism, Disaster Prevention, and the Possibility of Hell: A Dilemma for Secular Longtermists (12th edition). Eric Sampson - forthcoming - Oxford Studies in Philosophy of Religion.
    Longtermist Effective Altruists (EAs) aim to mitigate the risk of existential catastrophes. In this paper, I have three goals. First, I identify a catastrophic risk that EAs have completely ignored. I call it religious catastrophe: the threat that (as Christians and Muslims have warned for centuries) billions of people stand in danger of going to hell for all eternity. Second, I argue that, even by secular EA lights, religious catastrophe is at least as bad and at least as probable (...)
  6. Deep Uncertainty and Incommensurability: General Cautions about Precaution. Rush T. Stewart - forthcoming - Philosophy of Science.
    The precautionary principle is invoked in a number of important personal and policy decision contexts. Peterson shows that certain ways of making the principle precise are inconsistent with other criteria of decision-making. Some object that the results do not apply to cases of deep uncertainty or value incommensurability which are alleged to be in the principle’s wheelhouse. First, I show that Peterson’s impossibility results can be generalized considerably to cover cases of both deep uncertainty and incommensurability. Second, I contrast an (...)
  7. Capitalism and the Very Long Term. Nikhil Venkatesh - forthcoming - Moral Philosophy and Politics.
    Capitalism is defined as the economic structure in which decisions over production are largely made by or on behalf of individuals in virtue of their private property ownership, subject to the incentives and constraints of market competition. In this paper, I will argue that considerations of long-term welfare, such as those developed by Greaves and MacAskill (2021), support anticapitalism in a weak sense (reducing the extent to which the economy is capitalistic) and perhaps support anticapitalism in a stronger sense (establishing (...)
  8. Meaningful Lives and Meaningful Futures. Michal Masny - 2025 - Journal of Ethics and Social Philosophy 30 (1).
    What moral reasons, if any, do we have to prevent the extinction of humanity? In “Unfinished Business,” Jonathan Knutzen argues that certain further developments in culture would make our history more “collectively meaningful” and that premature extinction would be bad because it would close off that possibility. Here, I critically examine this proposal. I argue that if collective meaningfulness is analogous to individual meaningfulness, then our meaning-based reasons to prevent the extinction of humanity are substantially different from the reasons discussed (...)
  9. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - 2025 - In Jacob Barrett, Hilary Greaves & David Thorstad, Essays on Longtermism: Present Action for the Distant Future. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
  10. Longtermism and social risk-taking. H. Orri Stefánsson - 2025 - In Jacob Barrett, Hilary Greaves & David Thorstad, Essays on Longtermism: Present Action for the Distant Future. Oxford University Press.
    A social planner who evaluates risky public policies in light of the other risks with which their society will be faced should judge favourably some such policies even though they would deem them too risky when considered in isolation. I suggest that a longtermist would—or at least should—evaluate risky policies in light of their prediction about future risks; hence, longtermism supports social risk-taking. I consider two formal versions of this argument, discuss the conditions needed for the argument to be valid, (...)
  11. Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being. S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2):157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral concern in (...)
  12. Hegemonía de campos catastrofistas: tensiones de la colapsología emergente con los societal collapse studies. Sergio Chaparro-Arenas - 2024 - Dissertation, National University of Colombia
    Adopting a revitalization of the Marxist tradition in Social Studies of Science and Technology (STS) and in contemporary philosophy, my work addresses the emergence of collapsology in France and Belgium, in Europe, and worldwide, and its constitutive tensions with the hegemonic field of societal collapse studies. The thesis closely tracks collapsology and its founding prophets: the agronomist and Doctor of Biology Pablo Servigne, and the environmental business administrator and Master's (...)
  13. Why prevent human extinction? James Fanciullo - 2024 - Philosophy and Phenomenological Research 109 (2):650-662.
    Many of us think human extinction would be a very bad thing, and that we have moral reasons to prevent it. But there is disagreement over what would make extinction so bad, and thus over what grounds these moral reasons. Recently, several theorists have argued that our reasons to prevent extinction stem not just from the value of the welfare of future lives, but also from certain additional values relating to the existence of humanity itself (for example, humanity’s “final” value, (...)
  14. Risk, Non-Identity, and Extinction. Kacper Kowalczyk & Nikhil Venkatesh - 2024 - The Monist 107 (2):146–156.
    This paper examines a recent argument in favour of strong precautionary action—possibly including working to hasten human extinction—on the basis of a decision-theoretic view that accommodates the risk-attitudes of all affected while giving more weight to the more risk-averse attitudes. First, we dispute the need to take into account other people’s attitudes towards risk at all. Second, we argue that a version of the non-identity problem undermines the case for doing so in the context of future people. Lastly, we suggest (...)
  15. “Emergent Abilities,” AI, and Biosecurity: Conceptual Ambiguity, Stability, and Policy. Alex John London - 2024 - Disincentivizing Bioweapons: Theory and Policy Approaches.
    Recent claims that artificial intelligence (AI) systems demonstrate “emergent abilities” have fueled excitement but also fear grounded in the prospect that such systems may enable a wider range of parties to make unprecedented advances in areas that include the development of chemical or biological weapons. Ambiguity surrounding the term “emergent abilities” has added avoidable uncertainty to a topic that has the potential to destabilize the strategic landscape, including the perception of key parties about the viability of nonproliferation efforts. To avert (...)
  16. Should longtermists recommend hastening extinction rather than delaying it? Richard Pettigrew - 2024 - The Monist 107 (2):130-145.
    Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have greater expected goodness (...)
  17. Existential risk and equal political liberty. J. Joseph Porter & Adam F. Gibbons - 2024 - Asian Journal of Philosophy 3 (2):1-26.
    Rawls famously argues that the parties in the original position would agree upon the two principles of justice. Among other things, these principles guarantee equal political liberty—that is, democracy—as a requirement of justice. We argue on the contrary that the parties have reason to reject this requirement. As we show, by Rawls’ own lights, the parties would be greatly concerned to mitigate existential risk. But it is doubtful whether democracy always minimizes such risk. Indeed, no one currently knows which political (...)
  18. Hope in an Illiberal Age? [REVIEW] Mark R. Reiff - 2024 - Ethics, Policy and Environment 2024 (1):1-9.
    In this commentary on Darrel Moellendorf’s Mobilizing Hope: Climate Change & Global Poverty (Oxford: Oxford University Press, 2022), I discuss his use of the precautionary principle, whether his hope for climate-friendly ‘green growth’ is realistic given the tendency for inequality to accelerate as it gets higher, and what I call his assumption of a liberal baseline. That is, I worry that the audience to whom the book is addressed are those who already accept the environmental and economic values to which (...)
  19. Mistakes in the moral mathematics of existential risk. David Thorstad - 2024 - Ethics 135 (1):122-150.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (...)
  20. Protecting Future Generations by Enhancing Current Generations. Parker Crutchfield - 2023 - In Fabrice Jotterand & Marcello Ienca, The Routledge Handbook of the Ethics of Human Enhancement. Routledge.
    It is plausible that current generations owe something to future generations. One possibility is that we have a duty to not harm them. Another possibility is that we have a duty to protect them. In either case, however, satisfying our duties to protect future generations from environmental or political degradation requires widespread collective action. But, as we are, we have a limited ability to do so, in part because we lack the self-discipline necessary for successful collective (...)
  21. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
  22. Unfinished Business. Jonathan Knutzen - 2023 - Philosophers' Imprint 23 (1): 4, 1-15.
    According to an intriguing though somewhat enigmatic line of thought first proposed by Jonathan Bennett, if humanity went extinct any time soon this would be unfortunate because important business would be left unfinished. This line of thought remains largely unexplored. I offer an interpretation of the idea that captures its intuitive appeal, is consistent with plausible constraints, and makes it non-redundant to other views in the literature. The resulting view contrasts with a welfare-promotion perspective, according to which extinction would be (...)
  23. Book Review: "Thomas Moynihan: X-Risk: How Humanity Discovered its Own Extinction". [REVIEW] Kritika Maheshwari - 2023 - Intergenerational Justice Review 8 (2):61-62.
  24. Diving to extinction: Water birds at risk. Minh-Hoang Nguyen - 2023 - Sm3D Portal.
    Our Earth’s climate is changing. Any species living in the Earth’s ecosystem needs to strive to adapt to the new living conditions; otherwise, extinction will be its outcome. In the race for adaptation, waterbirds (Aequorlitornithes), such as penguins, cormorants, and alcids, seem to be at a disadvantage.
  25. (1 other version) Economic inequality and the long-term future. Andreas T. Schmidt & Daan Juijn - 2023 - Politics, Philosophy and Economics (1):67-99.
    Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments around future generations (...)
  26. (1 other version) The epistemic challenge to longtermism. Christian Tarsney - 2023 - Synthese 201 (6):1-37.
    Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict — perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that (...)
  27. High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation. David Thorstad - 2023 - Philosophy and Public Affairs 51 (4):373-412.
  28. The Worst Case: Planetary Defense against a Doomsday Impactor. Joel Marks - 2022 - Space Policy 61.
    Current planetary defense policy prioritizes a probability assessment of risk of Earth impact by an asteroid or a comet in the planning of detection and mitigation strategies and in setting the levels of urgency and budgeting to operationalize them. The result has been a focus on asteroids of Tunguska size, which could destroy a city or a region, since this is the most likely sort of object we would need to defend against. However, a complete risk assessment would consider not (...)
  29. Nuclear Fine-Tuning and the Illusion of Teleology. Ember Reed - 2022 - Sound Ideas.
    Recent existential-risk thinkers have noted that the analysis of the fine-tuning argument for God’s existence and the analysis of certain forms of existential risk employ similar types of reasoning. This paper argues that insofar as the “many worlds objection” undermines the inference to God’s existence from universal fine-tuning, a similar many worlds objection undermines the inference that the historic risk of global nuclear catastrophe has been low from the fact that no such catastrophe has occurred in our world. A (...)
  30. Nuclear war as a predictable surprise. Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done, yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected cost (...)
  31. COVID-19 Pandemic as an Indicator of Existential Evolutionary Risk of Anthropocene (Anthropological Origin and Global Political Mechanisms). Valentin Cheshko & Nina Konnova - 2021 - In MOChashin O. Kristal, Bioethics: from theory to practice. pp. 29-44.
    The coronavirus pandemic, like its predecessors - AIDS, Ebola, etc. - is evidence of the evolutionary instability of the socio-cultural and ecological niche created by mankind, which has been the main factor in the evolutionary success of our biological species and of the civilization it created. At least, this applies to the modern global civilization, which is called technogenic or technological, although it exists in several varieties. As we hope to show, the current crisis has ontological as well as epistemological roots; its (...)
  32. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  33. Human Extinction from a Thomist Perspective. Stefan Riedener - 2021 - In Stefan Riedener, Dominic Roser & Markus Huppenbauer, Effective Altruism and Religion: Synergies, Tensions, Dialogue. Baden-Baden, Germany: Nomos. pp. 187-210.
    “Existential risks” are risks that threaten the destruction of humanity’s long-term potential: risks of nuclear wars, pandemics, supervolcano eruptions, and so on. On standard utilitarianism, it seems, the reduction of such risks should be a key global priority today. Many effective altruists agree with this verdict. But how should the importance of these risks be assessed on a Christian moral theory? In this paper, I begin to answer this question – taking Thomas Aquinas as a reference, and the risks of (...)
  34. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2021 - In Carissa Véliz, The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  35. If now isn't the most influential time ever, when is? [REVIEW] Kritika Maheshwari - 2020 - The Philosopher 108:94-101.
  36. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin, The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm: SIPRI. pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities introduce new weaknesses into the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, avoiding (...)
  37. Space Colonization and Existential Risk. Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the first claim, my primary goal in this article is to challenge the second: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
  38. Existential risks: New Zealand needs a method to agree on a value framework and how to quantify future lives at risk. Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a case for (...)
  39. Approaches to the Prevention of Global Catastrophic Risks. Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup option, (...)
  40. Global Catastrophic and Existential Risks Communication Scale. Alexey Turchin & David Denkenberger - 2018 - Futures.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by (...)
  41. Human Extinction, Narrative Ending, and Meaning of Life. Brooke Alan Trisel - 2016 - Journal of Philosophy of Life 6 (1):1-22.
    Some people think that the inevitability of human extinction renders life meaningless. Joshua Seachris has argued that naturalism can be conceptualized as a meta-narrative and that it narrates across important questions of human life, including what is the meaning of life and how life will end. How a narrative ends is important, Seachris argues. In the absence of God, and with knowledge that human extinction is a certainty, is there any way that humanity could be meaningful and have a good (...)
  42. Existential Risks: Exploring a Robust Risk Reduction Strategy. Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. (...)
  43. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
  44. Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
  45. Human extinction and the value of our efforts. Brooke Alan Trisel - 2004 - Philosophical Forum 35 (3):371–391.
    Some people feel distressed reflecting on human extinction. Some people even claim that our efforts and lives would be empty and pointless if humanity becomes extinct, even if this will not occur for millions of years. In this essay, I will attempt to demonstrate that this claim is false. The desire for long-lastingness or quasi-immortality is often unwittingly adopted as a standard for judging whether our efforts are significant. If we accomplish our goals and then later in life conclude that (...)
  46. Extinction Risks from AI: Invisible to Science? Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)
  47. Rethinking the Redlines Against AI Existential Risks. Yi Zeng, Xin Guan, Enmeng Lu & Jinyu Fan - manuscript
    The ongoing evolution of advanced AI systems will have profound, enduring, and significant impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially causing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear redlines to prevent AI-induced existential risks by constraining and regulating advanced AI and their related AI actors. This paper explores different concepts of AI existential risk, connects (...)