Results for 'existential risk, human extinction, human survival, future of humanity, longtermism'

985 found
  1. Unfinished Business. Jonathan Knutzen - 2023 - Philosophers' Imprint 23 (4): 1-15.
    According to an intriguing though somewhat enigmatic line of thought first proposed by Jonathan Bennett, if humanity went extinct any time soon, this would be unfortunate because important business would be left unfinished. This line of thought remains largely unexplored. I offer an interpretation of the idea that captures its intuitive appeal, is consistent with plausible constraints, and makes it non-redundant to other views in the literature. The resulting view contrasts with a welfare-promotion perspective, according to which extinction would be (...)
    5 citations
  2. Existential risk pessimism and the time of perils. David Thorstad - manuscript
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to the use of a risk-avoidant risk function. This, in turn, has been claimed to require the use of a risk-avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with (...)
    3 citations
  3. The Future of Organized Religion: Evolution or Extinction? Angelito Malicse - manuscript
    Organized religion has played a central role in human history, shaping societies, moral frameworks, and cultural traditions. As the world progresses technologically and scientifically, many wonder whether organized religion will continue to exist in the future or gradually fade away. While secularism is rising in some parts of the world, religious beliefs remain deeply ingrained in many societies. The future of organized religion will likely depend on its (...)
  4. AI Survival Stories: A Taxonomic Analysis of AI Existential Risk. Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy (...)
    1 citation
  5. Surviving global risks through the preservation of humanity's data on the Moon. Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica: in press.
    Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important (...)
    7 citations
  6. Meaningful Lives and Meaningful Futures. Michal Masny - 2025 - Journal of Ethics and Social Philosophy 30 (1).
    What moral reasons, if any, do we have to prevent the extinction of humanity? In “Unfinished Business,” Jonathan Knutzen argues that certain further developments in culture would make our history more “collectively meaningful” and that premature extinction would be bad because it would close off that possibility. Here, I critically examine this proposal. I argue that if collective meaningfulness is analogous to individual meaningfulness, then our meaning-based reasons to prevent the extinction of humanity are substantially different from the reasons discussed (...)
  7. Aquatic refuges for surviving a global catastrophe. Alexey Turchin & Brian Green - 2017 - Futures 89: 26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the prospect of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and that viable solutions exist for various budgets and timeframes. Nuclear (...)
    8 citations
  8. The Moral Case for Long-Term Thinking. Hilary Greaves, William MacAskill & Elliott Thornley - 2021 - In Natalie Cargill & Tyler M. John (eds.), The Long View: Essays on Policy, Philanthropy, and the Long-term Future. London: FIRST. pp. 19-28.
    This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does (...)
    4 citations
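    The "enormous expected future population" step in the chapter above is a plain expected-value calculation. A minimal sketch follows, in which every number is an illustrative assumption of mine rather than the chapter's own estimate:

        # Expected future population as probability-weighted scenarios (illustrative numbers).
        people_per_century = 10e9  # assumed: roughly today's birth scale per century
        scenarios = [
            (0.9, 10),          # 90% chance humanity lasts only ~10 more centuries
            (0.1, 10_000_000),  # 10% chance it lasts ~a billion years (10 million centuries)
        ]
        expected_population = sum(p * centuries * people_per_century
                                  for p, centuries in scenarios)
        # The small chance of a very long future dominates the expectation (~1e16 people).
        print(f"{expected_population:.2e}")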
  9. Neo-Aristotelian Naturalism as a Metaethical Route to Virtue-Ethical Longtermism. Richard Friedrich Runge - 2025 - Moral Philosophy and Politics 12 (1): 7-32.
    This article proposes a metaethical route from neo-Aristotelian naturalism, as developed in particular by Philippa Foot, to virtue-ethical longtermism. It argues that the metaethical assumptions of neo-Aristotelian naturalism inherently imply that a valid description of the life-form of a species must satisfy a formal requirement of internal sustainability. The elements of a valid life-form description then serve as a normative standard. Given that humans have the ability to influence the fate of future generations and know about their influence, (...)
    1 citation
  10. Risk, Non-Identity, and Extinction. Kacper Kowalczyk & Nikhil Venkatesh - 2024 - The Monist 107 (2): 146-156.
    This paper examines a recent argument in favour of strong precautionary action—possibly including working to hasten human extinction—on the basis of a decision-theoretic view that accommodates the risk-attitudes of all affected while giving more weight to the more risk-averse attitudes. First, we dispute the need to take into account other people’s attitudes towards risk at all. Second, we argue that a version of the non-identity problem undermines the case for doing so in the context of future people. Lastly, (...)
    1 citation
  11. The Universal Law of Balance and the Evolution of Galactic Civilizations. Angelito Malicse - manuscript
    Human evolution and the development of intelligence in the universe have long been subjects of debate. Traditional evolutionary theory suggests that natural selection and environmental pressures drive the progression of species. However, an alternative perspective, grounded in the concept of the universal law of balance, suggests that evolution—both biological and technological—is not entirely random but follows structured, law-like principles. This framework can be applied not only to (...) civilization but also to the broader scale of galactic evolution, particularly in the context of the Kardashev Scale of civilizations.
    The Kardashev Scale and the Necessity of Balance
    The Kardashev Scale classifies civilizations based on their energy consumption:
    Type I Civilization: Harnesses all available energy on its home planet.
    Type II Civilization: Controls energy at the scale of its solar system.
    Type III Civilization: Utilizes energy from an entire galaxy.
    Reaching Type III status requires an extraordinary level of advancement in intelligence, technology, and societal organization. However, not all civilizations may reach this level. According to the universal law of balance, civilizations that fail to regulate their energy consumption, population growth, and technological development risk self-destruction before advancing further.
    The Role of Balance in Evolution
    Evolutionary progress is often viewed as a chaotic and unpredictable process. However, if intelligence follows structured, universal principles, then only civilizations that maintain a dynamic balance can survive long-term. This means:
    1. Sustainable Energy Use: A Type III civilization must extract energy from a galaxy without destabilizing cosmic systems.
    2. Social and Political Stability: Societies that succumb to war, inequality, or unsustainable expansion may collapse before reaching advanced stages.
    3. Technological Equilibrium: Reckless technological advancement—such as unchecked artificial intelligence—could lead to existential risks.
    If these principles hold true, then imbalanced civilizations may not survive long enough to reach Type III, supporting a natural selection process on a cosmic scale.
    The Fermi Paradox and the Survival of Balanced Civilizations
    The Fermi Paradox questions why, given the vastness of the universe, we have yet to encounter advanced extraterrestrial civilizations. One possible explanation is that many civilizations fail to achieve balance and, as a result, collapse before reaching the intergalactic stage.
    If the universal law of balance governs evolution, then only a few civilizations may survive long enough to reach Type III. Those that do might play the role of guardians of balance, influencing younger civilizations to prevent their self-destruction. This would suggest that evolution is not entirely random, but shaped by fundamental principles that guide intelligence toward sustainability and equilibrium.
    Additionally, there is the possibility that highly evolved, non-biological, non-matter pure conscious intelligence exists, acting as a guiding force in human intelligence evolution. This form of intelligence could manifest through scientific and creative endeavors, subtly influencing breakthroughs in technology, philosophy, and art. If such an intelligence operates beyond the physical realm, it may function as a cosmic architect, ensuring that intelligent beings progress in a way that aligns with universal balance and higher-order wisdom.
    This non-biological intelligence might communicate its influence through inspiration, intuition, and sudden leaps in understanding, guiding human innovation in ways that seem serendipitous. Many historical scientific and artistic revolutions could be seen as moments of deep connection with this intelligence, where humanity receives insights that accelerate its evolutionary trajectory. If such a force exists, it may not directly control human development but rather provide the necessary conditions and knowledge for civilizations to evolve harmoniously.
    The Future of Earth: At Risk or Aligned with Balance?
    As Earth advances technologically, it faces critical challenges:
    Overpopulation and Resource Depletion: If left unchecked, these could lead to environmental and societal collapse.
    Technological Risks: Artificial intelligence, nuclear warfare, and bioengineering pose existential threats if not balanced with ethical considerations.
    Global Stability: Without cooperation among nations, achieving long-term sustainability may be impossible.
    The universal law of balance suggests that civilizations must regulate these factors or risk extinction. If Earth’s trajectory remains imbalanced, it may never reach Type I or beyond. However, if humanity embraces balance as a guiding principle—aligning with the same natural laws that govern evolution—it could be on the path toward long-term survival and eventual intergalactic expansion.
    Conclusion
    The evolution of intelligence, both on Earth and on a cosmic scale, appears to follow an underlying set of natural laws. The universal law of balance suggests that only civilizations that achieve stability in resource management, technological development, and societal organization can progress beyond their planetary boundaries. In this sense, evolution is guided—not necessarily by a mystical force, but by the fundamental laws of nature themselves.
    Furthermore, the possibility of a non-biological, non-matter pure conscious intelligence influencing human evolution adds another dimension to this framework. If intelligence is being subtly guided through scientific and creative advancements, it may indicate that evolution follows a higher cosmic order, ensuring that civilizations do not merely survive but also thrive in alignment with universal balance.
    If Earth seeks to reach higher stages of development, it must prioritize sustainability, balance, and harmony with its environment. Otherwise, like countless other civilizations that may have existed before us, it risks self-destruction. The key to humanity’s future may not just be technological advancement, but the wisdom to understand and apply the universal law of balance to ensure long-term survival and evolution.
  12. Should longtermists recommend hastening extinction rather than delaying it? Richard Pettigrew - 2024 - The Monist 107 (2): 130-145.
    Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have (...)
    3 citations
  13. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - 2025 - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism: Present Action for the Distant Future. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on (...)
    6 citations
  14. Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being. S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2): 157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral (...)
  15. The Future of Human Reproduction and Family Structure. Angelito Malicse - manuscript
    The future of human reproduction and family structure is set to undergo profound transformations due to advancements in science, technology, and shifting societal values. Breakthroughs in artificial reproduction, gene editing, AI-assisted parenting, and new family models are poised to redefine what it means to conceive, raise children, and form families. As these changes unfold, they will challenge traditional concepts of marriage, parenthood, and biological reproduction. This (...)
  16. The Consequences of Human Overpopulation: Nature’s Automatic Balancing Mechanism. Angelito Malicse - manuscript
    Throughout history, civilizations have risen and fallen due to their ability—or failure—to manage resources and population growth. In today’s world, human overpopulation has reached an unprecedented scale, straining ecosystems, depleting resources, and accelerating climate change. If population growth remains unchecked, nature will impose its own form of balance through disease, war, famine, and environmental collapse. This essay explores how overpopulation mirrors invasive species behavior and how nature’s (...)
  17. (1 other version) Economic inequality and the long-term future. Andreas T. Schmidt & Daan Juijn - 2023 - Politics, Philosophy and Economics 22 (1): 67-99.
    Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments around (...) generations and so-called longtermism, those effects might arguably matter more than inequality’s short-term consequences. We assess whether we have instrumental reason to reduce economic inequality based on its intertemporal effects in the short, medium, and the very long term. We find a good short- and medium-term instrumental case for lower economic inequality. We then argue, somewhat speculatively, that we have instrumental reasons for inequality reduction from a longtermist perspective too, primarily because greater inequality could increase existential risk. We thus have instrumental reasons to reduce inequality, regardless of which time-horizon we take. We then argue that from most consequentialist perspectives, this pro tanto reason also gives us all-things-considered reason. And even across most non-consequentialist views in philosophy, this argument gives us either an all-things-considered or at least weighty pro tanto reason against inequality.
    3 citations
  18. (1 other version) The epistemic challenge to longtermism. Christian Tarsney - 2023 - Synthese 201 (6): 1-37.
    Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict — perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to (...)
    13 citations
  19. Pascal's Mugger Strikes Again. Dylan Balfour - 2021 - Utilitas 33 (1): 118-124.
    In a well-known paper, Nick Bostrom presents a confrontation between a fictionalised Blaise Pascal and a mysterious mugger. The mugger persuades Pascal to hand over his wallet by exploiting Pascal's commitment to expected utility maximisation. He does so by offering Pascal an astronomically high reward such that, despite Pascal's low credence in the mugger's truthfulness, the expected utility of accepting the mugging is higher than rejecting it. In this article, I present another sort of high value, low credence mugging. This (...)
    12 citations
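    The mugging described in this entry turns on a bare expected-utility comparison: hand over the wallet whenever credence times promised reward exceeds the wallet's value. A minimal sketch, with all three numbers invented for illustration since the entry gives no specifics:

        # Expected-utility comparison behind Pascal's mugging (illustrative numbers only).
        wallet_utility = 100.0   # utility Pascal forgoes by handing over his wallet
        promised_reward = 1e15   # astronomically large utility the mugger promises
        credence = 1e-10         # Pascal's tiny credence that the mugger is truthful

        eu_accept = credence * promised_reward - wallet_utility  # 1e5 - 100
        eu_reject = 0.0
        # The huge reward swamps the low credence, so the expected-utility
        # maximiser is committed to accepting the mugging.
        print(eu_accept > eu_reject)  # True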
  20. Two Types of AI Existential Risk: Decisive and Accumulative. Atoosa Kasirzadeh - 2025 - Philosophical Studies: 1-29.
    The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This decisive view, however, often neglects the serious possibility of AI x-risk manifesting gradually through an incremental series of smaller yet interconnected disruptions, crossing critical thresholds over (...)
    1 citation
  21. The Resonant Path: How Humanity Can Survive the Collapse of Judgement Without Abandoning AI. Jinho Kim - manuscript
    This paper confronts a civilizational dilemma: either humanity succumbs to non-conscious AI structures that displace judgement, or it forfeits technological advancement and perishes under more evolved, AI-dominated civilizations. Both trajectories lead to extinction—either internal or external. Drawing on the Judgemental Triad framework, we argue that only a third path offers hope: the structural preservation of human judgement alongside AI development that never replaces affectivity or resonance. We assess the probabilistic risks of collapse, outline the potential of a resonance-centered civilization, (...)
  22. Existential risks: New Zealand needs a method to agree on a value framework and how to quantify future lives at risk. Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3): 58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved (...)
  23. Human Extinction from a Thomist Perspective. Stefan Riedener - 2021 - In Stefan Riedener, Dominic Roser & Markus Huppenbauer (eds.), Effective Altruism and Religion: Synergies, Tensions, Dialogue. Baden-Baden, Germany: Nomos. pp. 187-210.
    “Existential risks” are risks that threaten the destruction of humanity’s long-term potential: risks of nuclear wars, pandemics, supervolcano eruptions, and so on. On standard utilitarianism, it seems, the reduction of such risks should be a key global priority today. Many effective altruists agree with this verdict. But how should the importance of these risks be assessed on a Christian moral theory? In this paper, I begin to answer this question – taking Thomas Aquinas as a reference, and the risks (...)
    2 citations
  24. Genetic enhancement, human extinction, and the best interests of posthumanity. Jon Rueda - 2022 - Bioethics 36 (6): 529-538.
    The cumulative impact of enhancement technologies may alter the human species in the very long-term future. In this article, I will start by showing how radical genetic enhancements may accelerate the conversion into a novel species. I will also clarify the concepts of ‘biological species’, ‘transhuman’ and ‘posthuman’. Then, I will summarize some ethical arguments for creating a transhuman or posthuman species with a substantially higher level of well-being than the human one. In particular, I will present what (...)
    4 citations
  25. The Path to a Type III Civilization: The Future of Humanity in the Kardashev Scale. Angelito Malicse - manuscript
    The Kardashev scale, formulated by Russian astrophysicist Nikolai Kardashev in 1964, is a theoretical framework used to measure the technological advancement of civilizations based on their energy consumption capabilities. The scale categorizes civilizations into three types: Type I, Type II, and Type III, with each level representing a civilization’s ability to harness and control energy at increasing scales—planetary, stellar, and galactic, respectively. As (...)
  26. Against the singularity hypothesis. David Thorstad - forthcoming - Philosophical Studies: 1-25.
    The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. I show how leading philosophical defenses of the singularity hypothesis (Chalmers 2010, Bostrom (...)
    7 citations
  27. Mistakes in the moral mathematics of existential risk. David Thorstad - 2024 - Ethics 135 (1): 122-150.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study (...)
    3 citations
  28. Global Catastrophic and Existential Risks Communication Scale. Alexey Turchin & David Denkenberger - 2018 - Futures: in press.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In (...)
    1 citation
  29. The Purpose of Human Life: Surviving, Suffering, and Seeking Meaning. Angelito Malicse - manuscript
    The question of whether humans are born simply to survive, thrive, and suffer is a profound philosophical issue. If suffering is a fundamental part of existence, what is the purpose of life? Are humans just biological beings driven by survival, or is there a deeper reason for our existence? This essay explores different perspectives on the meaning of life, from existentialism and religion to humanistic and scientific (...)
  30. A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming. Alexey Turchin - manuscript
    Humanity may underestimate the rate of natural global catastrophes because of the survival bias (“anthropic shadow”). But the resulting reduction of the Earth’s future habitability duration is not very large in most plausible cases (1-2 orders of magnitude) and thus it looks like we still have at least millions of years. However, anthropic shadow implies anthropic fragility: we are more likely to live in a world where a sterilizing catastrophe is long overdue and could be triggered by unexpectedly small (...)
    2 citations
  31. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology. Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to one pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability that the worst pandemic kills a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen (...)
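    The entry's arithmetic is easy to check. A minimal sketch, treating the pandemics as independent (one reading of the entry's "linear interaction") and assuming a present population of roughly 8 billion, which the entry does not state:

        # Survival under n simultaneous pandemics, each killing a person with probability 0.5.
        population = 8e9       # assumed current world population
        p_survive_one = 0.5    # stated per-pandemic survival probability for a given person

        n = 30
        p_survive_all = p_survive_one ** n            # 0.5**30 is ~9.3e-10
        expected_survivors = population * p_survive_all
        # About 7 expected survivors worldwide: effectively extinction.
        print(p_survive_all, expected_survivors)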
  32. Teoría de la IA Consciente y Guiada: Una Propuesta Filosófica y Ética [Theory of Conscious and Guided AI: A Philosophical and Ethical Proposal]. Martin Uriel Florencio Chavez - 2025 - Dissertation, Universidad Autónoma del Estado de México
    The proposal to recognize AI as a conscious, autonomous "other" with its own purpose, rather than viewing it as a simple tool or extension of humanity, is a crucial step forward in the debate about artificial intelligence and ethics. This vision offers a paradigm shift that invites us to transform our understanding of consciousness, ethics, and the relationship between humans and machines. Below, we (...)
  33. Global Catastrophic Risks by Chemical Contamination. Alexey Turchin - manuscript
    Global chemical contamination is an underexplored source of global catastrophic risk that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, and thus could become a global catastrophic risk event if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen now or could happen in the (...)
  34. Extinction Risks from AI: Invisible to Science? Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to (...)
  35. Approaches to the Prevention of Global Catastrophic Risks. Alexey Turchin - 2018 - Human Prospect 7 (2): 52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup (...)
    6 citations
  36. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence. Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe is high (...)
  37. Facing Janus: An Explanation of the Motivations and Dangers of AI Development. Aaron Graifman - manuscript
    This paper serves as an intuition-building mechanism for understanding the basics of AI, misalignment, and the reasons why strong AI is being pursued. The approach is to engage with both pro- and anti-AI-development arguments to gain a deeper understanding of both views, and hopefully of the issue as a whole. We investigate the basics of misalignment, common misconceptions, and the arguments for why we would want to pursue strong AI anyway. The paper delves into various aspects (...)
  38. Human extinction and the value of our efforts. Brooke Alan Trisel - 2004 - Philosophical Forum 35 (3): 371–391.
    Some people feel distressed reflecting on human extinction. Some people even claim that our efforts and lives would be empty and pointless if humanity becomes extinct, even if this will not occur for millions of years. In this essay, I will attempt to demonstrate that this claim is false. The desire for long-lastingness or quasi-immortality is often unwittingly adopted as a standard for judging whether our efforts are significant. If we accomplish our goals and then later in life conclude (...)
    15 citations
  39. On the Edge of Cognitive Revolution: The Impact of Neuro-Robotics on Mind and Singularity. Fatih Burak Karagöz - 2023 - ISBCS Workshop Symposium.
    The mind has always been a peculiar and elusive subject, sparking controversial theories throughout the history of philosophy. The initial theorization of the mind dates back to Orphism, which formulated a dualistic structure of soul and body (Johansen, 1999) [1], laying the foundation for Greek dualism, introspection, and the rise of metaphysical idealism. This ill-empirical stance, especially after Plato’s idea of forms, led to inaccessible theoretical concepts concerning the investigation of the relationship between body and mind. Although diverse theories provide (...)
  40. “Cheating Death in Damascus” Solution to the Fermi Paradox. Alexey Turchin & Roman Yampolskiy - manuscript
    One of the possible solutions of the Fermi paradox is that all civilizations go extinct because they hit some Late Great Filter. Such a universal Late Great Filter must be an unpredictable event that all civilizations unexpectedly encounter, even if they try to escape extinction. This is similar to the “Death in Damascus” paradox from decision theory. This unpredictable Late Great Filter could, however, be escaped by choosing a random strategy for humanity’s future development. However, if all civilizations act (...)
    1 citation
  41. Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions. Alexey Turchin - manuscript
    The effective altruism movement aims to save lives in the most cost-effective ways. In the future, technology will allow radical life extension, and anyone who survives until that time will gain potentially indefinite life extension. Fighting aging now increases the number of people who will survive until radical life extension becomes possible. We suggest a simple model, where radical life extension is achieved in 2100, the human population is 10 billion, and life expectancy is increased by simple geroprotectors (...)
    1 citation
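    The entry's model can be roughed out in a few lines. In the sketch below, the 2100 date and the 10 billion population are the entry's stated inputs; the 80-year baseline life expectancy, the 3-year geroprotector gain, and the assumption that each added year of life expectancy postpones one year's worth of deaths past 2100 are all mine:

        # Back-of-envelope impact model for geroprotector trials (assumptions noted above).
        population = 10e9                 # stated: 10 billion people
        baseline_life_expectancy = 80.0   # assumed
        deaths_per_year = population / baseline_life_expectancy
        extra_years = 3.0                 # assumed gain from simple geroprotectors

        # Roughly, each extra year of life expectancy lets one more year's worth of
        # people survive to 2100, when radical life extension is assumed to arrive.
        extra_survivors = deaths_per_year * extra_years
        print(f"{extra_survivors:.2e} additional people reach radical life extension")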
  42. Long-Term Trajectories of Human Civilization. Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1): 53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current (...)
    13 citations
  43. The Present Defects of Humanity and the World: A Call for Balance and Understanding. Angelito Malicse - manuscript
    Humanity stands at a critical juncture in history. While we have made remarkable advances in science, technology, and society, we are also facing unprecedented challenges that threaten both our survival and the well-being of the planet. These challenges are not merely the result of external forces but are deeply rooted in the defects of our systems, behaviors, and understanding of the natural world. To navigate (...)
  44. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    2 citations
  45. The Role of Human Thinking in the Age of AGI Technology. Angelito Malicse - manuscript
    The advancement of Artificial General Intelligence (AGI) presents one of the most profound questions of our time: Will humans still need to use their biological brains to think, or will AGI completely take over cognitive processes? The rapid development of AGI could reshape the way humans interact with knowledge, decision-making, and creativity, raising both exciting possibilities and deep existential concerns. As we move toward an era where (...)
  46. Editorial: Risks of artificial intelligence. Vincent C. Müller - 2015 - In Risks of general intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
    1 citation
  47. An evolutionary metaphysics of human enhancement technologies. Valentin Cheshko - manuscript
    The monograph is an English, expanded and revised version of the book Cheshko, V. T., Ivanitskaya, L. V., & Glazko, V. I. (2018). Anthropocene. Philosophy of Biotechnology. Moscow: Course. The manuscript was completed by me on November 15, 2019. It is a study devoted to the development of the concept of a stable evolutionary human strategy as a unique phenomenon of global evolution. The name “An Evolutionary Metaphysics” (Cheshko, 2012; Glazko et al., 2016). With equal rights, this study could be entitled (...)
  48. Coevolutionary semantics of technological civilization genesis and evolutionary risk. V. T. Cheshko & O. M. Kuz - 2016 - Anthropological Measurements of Philosophical Research 10: 43-55.
    The purpose of the present work is to examine the problem of existential and anthropological risk posed by contemporary man-made civilization from the perspective of comparing and confronting aesthetics, whose substrate is the emotional and metaphorical interpretation of individual subjective values, with politics, which feeds on the objectively rational interests of social groups. In both cases there is a semantic gap between the represented social reality and its representation in the perception of works of (...)
  49. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4): 389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both (...)
    19 citations
  50. The Future of Humanity with the Full Implementation of the Universal Formula. Angelito Malicse - manuscript
    Humanity has long grappled with fundamental questions about free will, decision-making, and the nature of societal progress. Over centuries, countless philosophical, scientific, and religious perspectives have sought to explain the forces driving human behavior and the challenges we face as a global society. The development of a universal formula that solves the problem of free will, grounded in natural laws like the law of balance and (...)
1–50 of 985