Results for 'existential risks'

998 found
  1. Existential Risks: Exploring a Robust Risk Reduction Strategy. Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with (...)
    3 citations
  2. Space Colonization and Existential Risk. Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the first claim, my primary goal in this article is to challenge the second: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
    1 citation
  3. Existential Risk From AI and Orthogonality: Can We Have It Both Ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio:1-12.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
  4. Global Catastrophic and Existential Risks Communication Scale. Alexey Turchin & David Denkenberger - 2018 - Futures (pagination not yet defined).
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and (...) risks. In this article, inspired by the Torino scale of asteroid danger, we suggest a color-coded scale to communicate the magnitude of global catastrophic and existential risks. The scale is based on the probability intervals of risks in the next century, if they are available. The risks' estimations could be adjusted based on their severities and other factors. The scale covers not only existential risks but also smaller-scale global catastrophic risks. It consists of six color levels, which correspond to previously suggested levels of prevention activity. We estimate artificial intelligence risks as "red", while "orange" risks include nanotechnology, synthetic biology, full-scale nuclear war and a large global agricultural shortfall (caused by regional nuclear war, coincident extreme weather, etc.). The risks of natural pandemic, supervolcanic eruption and global warming are marked as "yellow", and the danger from asteroids is "green". Keywords: global catastrophic risks; existential risks; Torino scale; policy; risk probability.
    1 citation
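    A minimal sketch of how such a color-coded lookup could work in practice, under purely illustrative assumptions: the probability cutoffs, the function name risk_color, and the reduction of the paper's six levels to the four colors named in the abstract are hypothetical and are not taken from the paper.

        # Hypothetical sketch of a Torino-style color scale for global risks.
        # The thresholds below are illustrative assumptions, not the paper's values,
        # and only four of the paper's six color levels are shown.
        def risk_color(p_next_century: float) -> str:
            """Map an estimated probability of catastrophe in the next century to a color."""
            if p_next_century >= 0.10:
                return "red"     # e.g. artificial intelligence risks
            if p_next_century >= 0.01:
                return "orange"  # e.g. synthetic biology, full-scale nuclear war
            if p_next_century >= 0.001:
                return "yellow"  # e.g. natural pandemic, supervolcanic eruption
            return "green"       # e.g. asteroid danger

        print(risk_color(0.02))  # -> "orange"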
  5. Existential Risks: New Zealand Needs a Method to Agree on a Value Framework and How to Quantify Future Lives at Risk. Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and (...)
  6. How Does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - forthcoming - In Carissa Véliz (ed.), Oxford Handbook of Digital Ethics.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI (...)
  7. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk. David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can (...)
  8. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen (...)
    6 citations
  9. Risks of Artificial Intelligence. Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters (...)
    1 citation
  10. Approaches to the Prevention of Global Catastrophic Risks. Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is (...)
    3 citations
  11. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as (...)
    7 citations
  12. Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    10 citations
  13. COVID-19 Pandemic as an Indicator of Existential Evolutionary Risk of Anthropocene (Anthropological Origin and Global Political Mechanisms). Valentin Cheshko & Nina Konnova - 2021 - In MOChashin O. Kristal (ed.), Bioethics: from theory to practice. Kyiv, Ukraine, 02000: pp. 29-44.
    The coronavirus pandemic, like its predecessors - AIDS, Ebola, etc., is evidence of the evolutionary instability of the socio-cultural and ecological niche created by mankind, as the main factor in the evolutionary success of our biological species and the civilization created by it. At least, this applies to the modern global civilization, which is called technogenic or technological, although it exists in several varieties. As we hope to show, the current crisis has less ontological as well as epistemological roots; its (...)
  14. Simulation Typology and Termination Risks. Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore which type of simulation humanity most probably lives in (if any) and how this affects simulation termination risks. We first explore the question of what kind of simulation humanity is most likely located in, based on purely theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we (...)
    1 citation
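    For reference, the "classical simulation argument" that the manuscript patches turns on the fraction of observers who are simulated; a standard rendering (following Bostrom's formulation, with approximate notation of my own) is

        f_{\mathrm{sim}} = \frac{f_P \, f_I \, \bar{N}_I}{f_P \, f_I \, \bar{N}_I + 1},

    where f_P is the fraction of human-level civilizations that reach a posthuman stage, f_I the fraction of those that run ancestor simulations, and \bar{N}_I the average number of such simulations. The patch described above changes who the dominant simulators are (alien civilizations rather than our own descendants), not the structure of this fraction.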
  15. AI Risk Denialism. Roman V. Yampolskiy - manuscript
    In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
  16. Editorial: Risks of Artificial Intelligence. Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  17. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence. Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe is high (...)
  18. Global Catastrophic Risks by Chemical Contamination. Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risk that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, and thus contamination could become a global catastrophic risk if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen now or could happen in the future: (...)
  19. Evolutionary Risk of High Hume Technologies. Article 2. The Genesis and Mechanisms of Evolutionary Risk. V. T. Cheshko, L. V. Ivanitskaya & V. I. Glazko - 2015 - Integrative Anthropology (1):4-15.
    Sources of evolutionary risk for the stable adaptive strategy of Homo sapiens are imbalances of: (1) intra-genomic co-evolution (intragenomic conflicts); (2) gene-cultural co-evolution; (3) inter-cultural co-evolution; (4) the techno-humanitarian balance; (5) inter-technological conflicts (technological traps). At least phenomenologically, the components of evolutionary risk are reversible, but in the aggregate they are potentially irreversible and destructive to the biosocial and cultural self-identity of Homo sapiens. When actual evolution becomes the subject of rationalist control and/or manipulation, the magnitude (...)
  20. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI. Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI), connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of (...)
  21. Could Slaughterbots Wipe Out Humanity? Assessment of the Global Catastrophic Risk Posed by Autonomous Weapons. Alexey Turchin - manuscript
    Recently, criticisms of autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI: the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow (...)
    1 citation
  22. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm, Sweden: pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses in the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, (...)
  23. Surviving Global Risks Through the Preservation of Humanity's Data on the Moon. Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica: in press.
    Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that a (...)
  24. UAP and Global Catastrophic Risks. Alexey Turchin - manuscript
    Abstract: After the 2017 NY Times publication, the stigma on scientific discussion of the problem of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity and, especially, the probability of global catastrophic risks? To answer this question, we assume that the Nimitz case in 2004 was real, and we suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, (...)
  25. Configuration of Stable Evolutionary Strategy of Homo Sapiens and Evolutionary Risks of Technological Civilization (the Conceptual Model Essay). Valentin T. Cheshko, Lida V. Ivanitskaya & Yulia V. Kosova - 2014 - Biogeosystem Technique 1 (1):58-68.
    The stable evolutionary strategy of Homo sapiens (SESH) is built in accordance with the modular and hierarchical principle and consists of the same type of self-replicating elements, i.e. it is a system of systems. At the top level of the organization of SESH is the superposition of genetic, social, cultural and techno-rationalistic complexes. The components of this triad differ in the mechanism of cycles of generation - replication - transmission - fixing/elimination of adaptively relevant information. This mechanism is implemented either in accordance (...)
    2 citations
  26. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology. Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to a single pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability of the worst pandemic killing any given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously will kill everyone. Such situations cannot happen (...)
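    A quick back-of-the-envelope check of the arithmetic sketched above, assuming (as a simplification of my own, not necessarily the paper's "linear interaction" model) that each of the 30 pandemics kills any given person independently with probability 0.5:

        # Rough sanity check: 30 simultaneous pandemics, each killing any given
        # person with probability 0.5, assumed independent (an illustrative
        # simplification; the paper's own interaction model may differ).
        n_pandemics = 30
        p_death_each = 0.5
        world_population = 8e9  # assumed, for illustration

        p_survive_all = (1 - p_death_each) ** n_pandemics      # ~9.3e-10
        expected_survivors = world_population * p_survive_all  # ~7 people

        print(f"per-person survival probability: {p_survive_all:.2e}")
        print(f"expected survivors: {expected_survivors:.1f}")

    Under these assumptions only a handful of survivors would be expected worldwide, which is the sense in which a multipandemic approaches an extinction-level risk.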
  27. Configuration of Stable Evolutionary Strategy of Homo Sapiens and Evolutionary Risks of Technological Civilization (the Conceptual Model Essay). Valentin T. Cheshko, Lida V. Ivanitskaya & Yulia V. Kosova - 2014 - Biogeosystem Technique 1 (1):58-68.
    The stable evolutionary strategy of Homo sapiens (SESH) is built in accordance with the modular and hierarchical principle and consists of the same type of self-replicating elements, i.e. it is a system of systems. At the top level of the organization of SESH is the superposition of genetic, social, cultural and techno-rationalistic complexes. The components of this triad differ in the mechanism of cycles of generation - replication - transmission - fixing/elimination of adaptively relevant information. This mechanism is implemented either in accordance (...)
    4 citations
  28. Co-evolutionary biosemantics of evolutionary risk at technogenic civilization: Hiroshima, Chernobyl – Fukushima and further…. Valentin Cheshko & Valery Glazko - 2016 - International Journal of Environmental Problems 3 (1):14-25.
    From Chernobyl to Fukushima, it became clear that technology is a systemic evolutionary factor, and that the consequences of man-made disasters, as the actualization of risk, are related to changes in the elements of social heredity (cultural transmission). The uniqueness of the human phenomenon is a characteristic of the system arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. The distribution of emerging adaptive innovations within each module is in accordance with the two algorithms that are characterized by the dominance (...)
  29. The Prolegomens to Theory of Human Stable Evolutionary Strategy as Ideology of Risk Society at Age of Controlled Evolution Technologies. V. T. Cheshko - 2016 - In Teodor N. Țîrdea (ed.), Strategia supravietuirii din perspectiva bioeticii, filosofiei și medicinei. Culegere de articole științifice. Vol. 22–. pp. 134-139.
    The stable adaptive strategy of Homo sapiens (SESH) is a superposition of three different adaptive data arrays: biological, socio-cultural and technological modules, based on three independent processes of generation and replication of adaptive information – genetic, socio-cultural and symbolic transmission (inheritance). The third component of SESH is focused equally on the adaptive transformation of the environment and of the carrier of SESH. With the advent of High Hume technology, risk has reached the level of existential significance. The existential level of technical risk is, by (...)
  30. Bioeconomics, Biopolitics and Bioethics: Evolutionary Semantics of Evolutionary Risk (Anthropological Essay). V. T. Cheshko - 2016 - Bioeconomics and Ecobiopolitic (1 (2)).
    An attempt at a trans-disciplinary analysis of the evolutionary value of bioethics is realized. Currently, there are High Tech schemes for the management and control of the genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). Biological, socio-cultural and technological factors are included in the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization formed mainly in the 17th–18th centuries and are experiencing ever-increasing and destabilizing risk-taking (...)
  31. Coevolutionary Semantics of Technological Civilization Genesis and Evolutionary Risk (Between the Bioaesthetics and Biopolitics). V. T. Cheshko & O. N. Kuz - 2016 - Anthropological Dimensions of Philosophical Studies (10):43-55.
    The purpose (metatask) of the present work is to attempt a glance at the problem of existential and anthropological risk caused by the contemporary man-made civilization from the perspective of a comparison and confrontation of aesthetics, the substrate of which is the emotional and metaphorical interpretation of individual subjective values, and politics, fed by the objectively rational interests of social groups. In both cases there is some semantic gap present between the represented social reality and its representation in (...)
  32. Coevolutionary Semantics of Technological Civilization Genesis and Evolutionary Risk. V. T. Cheshko & O. M. Kuz - 2016 - Anthropological Measurements of Philosophical Research 10:43-55.
    The purpose of the present work is to attempt a glance at the problem of existential and anthropological risk caused by the contemporary man-made civilization from the perspective of a comparison and confrontation of aesthetics, the substrate of which is the emotional and metaphorical interpretation of individual subjective values, and politics, fed by the objectively rational interests of social groups. In both cases there is some semantic gap present between the represented social reality and its representation in perception of works of (...)
  33. UN75 ↔ Towards Security Council Reform ↔ Metaphysical, Ontological, and Existential Statuses of the Veto Right (1). Vladimir Rogozhin - manuscript
    From year to year, attacks on the veto right in the UN Security Council by some of us, people of planet Earth, Earthlings, intensify. Those who attack it consciously or unconsciously ignore its metaphysical, ontological and existential statuses, established in 1945 by the founders of the United Nations as a result of the multimillion sacrificial struggle of all Humanity against nazism. Perhaps this is due to a misunderstanding of the metaphysics of international relations, the enduring existential significance of the veto for the (...)
  34. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    3 citations
  35. Aquatic Refuges for Surviving a Global Catastrophe. Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are (...)
    2 citations
  36. Islands as Refuges for Surviving Global Catastrophes. Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival (...)
  37. Pascal's Mugger Strikes Again. Dylan Balfour - 2021 - Utilitas 33 (1):118-124.
    In a well-known paper, Nick Bostrom presents a confrontation between a fictionalised Blaise Pascal and a mysterious mugger. The mugger persuades Pascal to hand over his wallet by exploiting Pascal's commitment to expected utility maximisation. He does so by offering Pascal an astronomically high reward such that, despite Pascal's low credence in the mugger's truthfulness, the expected utility of accepting the mugging is higher than rejecting it. In this article, I present another sort of high value, low credence mugging. This (...)
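    The decision-theoretic lever both muggings pull on can be stated in one line (notation mine, not Balfour's): letting w be the utility of keeping the wallet, p Pascal's small credence that the mugger will pay, and V the promised reward, expected utility maximisation recommends handing over the wallet whenever

        p \cdot V > w \quad\Longleftrightarrow\quad V > \frac{w}{p},

    and for any p > 0 a sufficiently large V satisfies this, however implausible the offer.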
  38. Wireheading as a Possible Contributor to Civilizational Decline. Alexey Turchin - manuscript
    Abstract: Advances in new technologies create new ways to stimulate the pleasure center of the human brain via new chemicals, direct application of electricity, electromagnetic fields, “reward hacking” in games and social networks, and in the future, possibly via genetic manipulation, nanorobots and AI systems. This may have two consequences: a) human life may become more interesting, b) humans may stop participating in any external activities, including work, maintenance, reproduction, and even caring for their own health, which could slowly contribute (...)
    1 citation
  39. A Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World. Alexey Turchin - manuscript
    Abstract: Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA (...)
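    For context, the best known of the four forms, Gott's "delta t" argument, fits in one line (this is the standard textbook statement, not Turchin's meta-argument): if our moment of observation is uniformly random within humanity's total lifetime, then with 95% confidence

        \frac{t_{\mathrm{past}}}{39} \;\le\; t_{\mathrm{future}} \;\le\; 39\, t_{\mathrm{past}},

    where t_past is how long humanity has existed so far and t_future is how much longer it will last. The manuscript's question is how much weight such derivations deserve, given the unresolved debate about their validity.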
  40. Long-Term Trajectories of Human Civilization. Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. -/- Design/methodology/approach This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
    4 citations
  41. Robustness to Fundamental Uncertainty in AGI Alignment. G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary (...)
  42. First Human Upload as AI Nanny. Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  43. Assessing the Future Plausibility of Catastrophically Dangerous AI. Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  44. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    2 citations
  45. A Lack of Ideological Diversity is Killing Social Research. Musa al-Gharbi - 2017 - Times Higher Education 2298:27-28.
    The lack of ideological diversity in social research, paired with the lack of engagement with citizens and policymakers who come from other places on the ideological spectrum, poses an existential risk to the continued credibility, utility and even viability of social research. The need for reform is urgent.
    2 citations
  46. Presumptuous Philosopher Proves Panspermia. Alexey Turchin - manuscript
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to a hypothesis which postulates the existence of a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). PP suggests that universes with interstellar panspermia will have orders of magnitude more civilizations than universes without IP, and (...)
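    The observer-weighting move the abstract relies on is essentially the Self-Indication Assumption, which can be written as a one-line Bayesian update (standard form, notation mine):

        P(H_i \mid \text{I exist as an observer}) \;\propto\; P(H_i)\, N_i,

    where N_i is the number of observers hypothesis H_i predicts. If universes with interstellar panspermia contain orders of magnitude more civilizations, they receive correspondingly more posterior weight, which is the engine of the argument above.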
  47. (The Impossibility of) Acting Upon a Story That We Can Believe. Zoltán Simon - 2018 - Rethinking History 22 (1):105-125.
    The historical sensibility of Western modernity is best captured by the phrase “acting upon a story that we can believe.” Whereas the most famous stories of historians facilitated nation-building processes, philosophers of history told the largest possible story to act upon: history itself. When the rise of an overwhelming postwar skepticism about the modern idea of history discredited the entire enterprise, the historical sensibility of “acting upon a story that we can believe” fell apart to its constituents: action, story form, (...)
    3 citations
  48. Levels of Self-Improvement in AI and Their Implications for AI Safety. Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we (...)
  49. Unveiling Thomas Moynihan's Spinal Catastrophism: The Spine Considered as Chronogenetic Media Artifact. [REVIEW] Ekin Erkan - 2019 - Cosmos and History 15 (1):564-571.
    A review of Thomas Moynihan's Spinal Catastrophism: A Secret History (2019).
  50. Global Solutions Vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data Cogn. Comput. 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
1 — 50 / 998