Results for 'Alexey Alyushin'

60 found
  1. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    11 citations
  2. Technologies of artificial sensations. Alexey S. Bakhirev - manuscript
    Technologies based on emergence will make it possible to reproduce sensations on non-biological carriers by making devices feel. These technologies will not only fundamentally change the approach to the creation of artificial intelligence, but also create artificial worlds of a totally different level, which, unlike virtual models, will really exist for themselves. This approach differs completely from the methods currently used in digital technologies. Possibly the principles described herein will give rise to many new trends.
  3. Aquatic refuges for surviving a global catastrophe. Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the prospect of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are (...)
    2 citations
  4. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach initiated by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  5. Digital Immortality: Theory and Protocol for Indirect Mind Uploading. Alexey Turchin - manuscript
    Future superintelligent AI will be able to reconstruct a model of the personality of a person who lived in the past based on informational traces. This could be regarded as some form of immortality if this AI also solves the problem of personal identity in a copy-friendly way. A person who is currently alive could invest now in passive self-recording and active self-description to facilitate such reconstruction. In this article, we analyze informational-theoretical relationships between the human mind, its traces, and (...)
    2 citations
  6. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt. Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that, as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
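    The iterative procedure this abstract describes (ask the model questions in persona, collect the person's corrections, fold them back into the prompt) can be sketched in a few lines. This is a minimal illustration only, not the authors' protocol: the ask_model function is a hypothetical stand-in for whatever LLM interface is used, and the prompt layout is invented for the example.

      # Minimal sketch of a sideloading feedback loop: a book-size persona prompt
      # is iteratively refined using corrections from the living person.
      # ask_model is a hypothetical placeholder; plug in a real LLM call.

      def ask_model(persona_prompt: str, question: str) -> str:
          raise NotImplementedError("replace with an actual LLM call")

      def sideloading_round(persona_prompt: str, questions: list[str], get_feedback) -> str:
          """Ask questions in persona, collect corrections, append them to the prompt."""
          corrections = []
          for q in questions:
              answer = ask_model(persona_prompt, q)
              note = get_feedback(q, answer)  # the person marks what the model got wrong
              if note:
                  corrections.append(f"Q: {q}\nModel answer: {answer}\nCorrection: {note}")
          return persona_prompt + "\n\n# Corrections from feedback\n" + "\n\n".join(corrections)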
  7. Approaches to the Prevention of Global Catastrophic Risks. Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup option, (...)
    3 citations
  8. Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
    2 citations
  9. Менеджмент наукового пошуку: стратегія і тактика наукових досліджень [Management of Scientific Inquiry: Strategy and Tactics of Scientific Research]. Alexey Dzhusov, Oleksandr Krupskyi, Yuliya Stasiuk & Olena Pryz - 2009 - Dnipro, Ukraine.
    The monograph is devoted to theoretical studies of the management of science. It considers the essence of managing scientific inquiry as a distinct kind of activity, examines ways of solving the fundamental and applied problems that arise in the course of conducting scientific research and implementing its results, and defines the categories of the culture of scientific inquiry. Particular attention is paid to the protection of intellectual property, as well as to the presentation and commercialization of research results. The monograph will be useful for master's students, doctoral students, young researchers, and everyone engaged in scientific research.
  10. Simulation Typology and Termination Risks. Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, based on pure theoretical reasoning, what kind of simulation humanity is most likely located in. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  11. Wireheading as a Possible Contributor to Civilizational Decline. Alexey Turchin - manuscript
    Abstract: Advances in new technologies create new ways to stimulate the pleasure center of the human brain via new chemicals, direct application of electricity, electromagnetic fields, “reward hacking” in games and social networks, and in the future, possibly via genetic manipulation, nanorobots and AI systems. This may have two consequences: a) human life may become more interesting, b) humans may stop participating in any external activities, including work, maintenance, reproduction, and even caring for their own health, which could slowly contribute (...)
    2 citations
  12. Global Catastrophic and Existential Risks Communication Scale. Alexey Turchin & David Denkenberger - 2018 - Futures.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by (...)
    1 citation
  13. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons. Alexey Turchin - manuscript
    Recently, criticism of autonomous weapons was presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow (...)
    1 citation
  14. Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions. Alexey Turchin - manuscript
    The effective altruism movement aims to save lives in the most cost-effective ways. In the future, technology will allow radical life extension, and anyone who survives until that time will gain potentially indefinite life extension. Fighting aging now increases the number of people who will survive until radical life extension becomes possible. We suggest a simple model, where radical life extension is achieved in 2100, the human population is 10 billion, and life expectancy is increased by simple geroprotectors like metformin (...)
    1 citation
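    The kind of estimate this model produces can be illustrated with a back-of-the-envelope calculation. Only the 2100 date and the 10 billion population come from the abstract; the assumed mean lifespan and the size of the life-expectancy gain from geroprotectors are placeholder numbers for illustration, not the paper's figures.

      # Rough sketch of the "fighting aging as effective altruism" arithmetic:
      # anyone still alive when radical life extension arrives (assumed 2100) gains
      # indefinite lifespan, so adding years of life expectancy now lets more people
      # reach that date alive.

      POPULATION = 10_000_000_000   # 10 billion, as in the abstract
      LIFESPAN = 80.0               # assumed mean lifespan (placeholder)
      GAIN_YEARS = 1.0              # assumed gain from simple geroprotectors (placeholder)

      # Crude steady-state approximation: roughly POPULATION / LIFESPAN people die each
      # year, so shifting life expectancy by GAIN_YEARS lets about that many extra
      # people per year of gain survive to the arrival of radical life extension.
      extra_survivors = POPULATION / LIFESPAN * GAIN_YEARS
      print(f"~{extra_survivors:,.0f} extra people alive at radical life extension")
      # ~125,000,000 with these placeholder numbers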
  15. Classification of Approaches to Technological Resurrection. Alexey Turchin & Maxim Chernyakov - manuscript
    Abstract. Death seems to be a permanent event, but there is no actual proof of its irreversibility. Here we list all known ways to resurrect the dead that do not contradict our current scientific understanding of the world. While no method is currently possible, many of those listed here may become feasible with future technological development, and it may even be possible to act now to increase their probability. The most well-known such approach to technological resurrection is cryonics. Another method (...)
    1 citation
  16. Types of Boltzmann Brains. Alexey Turchin & Roman Yampolskiy - manuscript
    Abstract. Boltzmann brains (BBs) are minds which randomly appear as a result of thermodynamic or quantum fluctuations. In this article, we explore the question of whether we are BBs and what the observational consequences would be if so. To address this problem, a typology of BBs is created, and the evidence is compared with the Simulation Argument. Based on this comparison, we conclude that while the existence of a “normal” BB is either unlikely or irrelevant, BBs with some ordering may have observable consequences. (...)
    1 citation
  17. Models and Logic of Subjective Reality. Subjective Worlds. Alexey Bakhirev - manuscript
  18. AI Alignment Problem: “Human Values” don’t Actually Exist. Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  19. Augustine’s Paradigm ’ab exterioribus ad interiora, ab inferioribus ad superiora’ in the Western and Eastern Christian Mysticism. Alexey Fokin - 2015 - European Journal for Philosophy of Religion 7 (2):81-107.
    I argue that St. Augustine of Hippo was the first in the history of Christian spirituality who expressed a key tendency of Christian mysticism, which implies a gradual intellectual ascent of the human soul to God, consisting of the three main stages: external, internal, and supernal. In this ascent a Christian mystic proceeds from the knowledge of external beings to self-knowledge, and from his inner self to direct mystical contemplation of God. Similar doctrines may be found in the writings of (...)
  20. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI. Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  21. You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection. Alexey Turchin - manuscript
    Abstract: In the future, it will be possible to create advanced simulations of ancestors in computers. Superintelligent AI could make these simulations very similar to the real past by creating a simulation of all of humanity. Such a simulation would use all available data about the past, including internet archives, DNA samples, advanced nanotech-based archeology, human memories, as well as text, photos and videos. This means that currently living people will be recreated in such a simulation, and in some sense, (...)
    1 citation
  22. Assessing the future plausibility of catastrophically dangerous AI. Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  23. Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence. Alexey Turchin, David Denkenberger, Alice Zhila, Sergey Markov & Mikhail Batin - 2017 - Informatica 41:401.
    In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently living (...)
  24. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  25. Surviving global risks through the preservation of humanity's data on the Moon. Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica (in press).
    Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that a potential (...)
  26. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology. Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to one pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability that the worst pandemic will kill a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously will kill everyone. Such situations cannot happen naturally, (...)
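    The '30 simultaneous pandemics' figure in this abstract follows directly from the stated assumptions and is easy to check. The short calculation below only restates the abstract's numbers (per-pandemic kill probability 0.5, independent pandemics); the world population figure is an assumption added for scale.

      # If each of N independent pandemics kills a given person with probability 0.5,
      # the chance of surviving all of them is 0.5**N.

      P_KILL = 0.5
      N_PANDEMICS = 30
      POPULATION = 8_000_000_000  # assumed current world population, for scale

      p_survive_all = (1 - P_KILL) ** N_PANDEMICS      # about 9.3e-10
      expected_survivors = POPULATION * p_survive_all  # about 7 people

      print(f"P(survive all {N_PANDEMICS} pandemics): {p_survive_all:.2e}")
      print(f"Expected survivors out of {POPULATION:,}: {expected_survivors:.1f}")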
  27. Levels of Self-Improvement in AI and their Implications for AI Safety. Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  28. Immortality, Infinity and the limitations of God. Alexey Prokofyev - manuscript
    I tried to describe Infinity as a major natural conundrum known to man. The booklet also contains answers to some eternal questions, such as the meaning of life, faith, etc. I am especially proud of my Morality section.
  29. Философия материи [The Philosophy of Matter]. Alexey Tomilov - manuscript
    The philosophy of matter is a unified philosophical theory of matter and consciousness (physicalism-materialism) that is derived deductively from a single principle, "the universal inner property: color" (the argument from intrinsic properties), as a result of which matter receives a positive definition. In the course of this derivation, a figurative model of matter is constructed, ready for mathematical formalization; some possible consequences for physical views are shown; the hard problem of consciousness is solved and the general structure of consciousness is described. This work does not give exact answers to all questions, but it forces us to take a fresh look at what we call matter.
  30. The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies. Alexey Turchin & Justin Shovelain - manuscript
    Abstract. This article presents a model of how the probability of global catastrophic risks changes in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by the means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least double exponential growth (...)
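    A minimal numerical sketch of the kind of model this abstract describes is shown below. The growth rates and starting values are illustrative assumptions, not the paper's parameters: the number of capable agents and the per-agent annual probability of causing a catastrophe are both taken to grow exponentially, so the total hazard grows exponentially and the chance of surviving a given year falls double-exponentially.

      # Toy model: N(t) agents with access to a dangerous technology and per-agent
      # annual catastrophe probability p(t) both grow exponentially, so the total
      # hazard N(t)*p(t) grows exponentially and annual survival decays as exp(-exp(...)).

      import math

      N0, a = 10, 0.3     # initial number of capable agents and its growth rate (assumed)
      p0, b = 1e-6, 0.2   # initial per-agent annual risk and its growth rate (assumed)

      for t in range(0, 51, 10):
          hazard = N0 * math.exp(a * t) * p0 * math.exp(b * t)  # expected events per year
          annual_survival = math.exp(-hazard)                   # Poisson approximation
          print(f"year {t:2d}: hazard {hazard:.2e}, annual survival {annual_survival:.6f}")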
  31. No Theory for Old Man. Evolution led to an Equal Contribution of Various Aging Mechanisms. Alexey Turchin - manuscript
    Does a single mechanism of aging exist? Most scientists have their own pet theories about what aging is, but the lack of a generally accepted theory is mind-blowing. Here we suggest an explanation: evolution works against a unitary mechanism of aging because it equalizes the ‘warranty periods’ of different resilience systems. Therefore, we need life-extension methods that go beyond fighting specific aging mechanisms, such as using a combination of geroprotectors or repair-fixing bionanorobots controlled by AI.
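    The 'equalized warranty periods' point in this abstract can be seen in a toy reliability model: if several independent resilience systems all tend to fail around the same age, repairing any single one buys little extra lifespan. The subsystem names, failure ages and spread below are invented for illustration and are not from the paper.

      # Toy series-system model of aging: the organism dies when the first resilience
      # system fails. With equalized failure ages, fixing one mechanism helps little,
      # because another system fails soon afterwards.

      import random

      FAILURE_AGES = {"cardiovascular": 85, "immune": 83, "neural": 86, "repair": 84}
      SPREAD = 8  # assumed standard deviation of each failure age, in years

      def mean_lifespan(boost=None, trials=20_000):
          boost = boost or {}
          total = 0.0
          for _ in range(trials):
              ages = [random.gauss(mean + boost.get(name, 0.0), SPREAD)
                      for name, mean in FAILURE_AGES.items()]
              total += min(ages)  # death occurs at the first subsystem failure
          return total / trials

      print(f"baseline mean lifespan:      {mean_lifespan():.1f}")
      print(f"cardiovascular fixed (+30y): {mean_lifespan({'cardiovascular': 30}):.1f}")
      # A single fixed mechanism adds only a couple of years; only interventions that
      # shift all systems at once (geroprotector combinations, repair nanorobots) help much.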
  32. Glitch in the Matrix: Urban Legend or Evidence of the Simulation? Alexey Turchin & Roman Yampolskiy - manuscript
    Abstract: In the last decade, an urban legend about “glitches in the matrix” has become popular. As it is typical for urban legends, there is no evidence for most such stories, and the phenomenon could be explained as resulting from hoaxes, creepypasta, coincidence, and different forms of cognitive bias. In addition, the folk understanding of probability does not bear much resemblance to actual probability distributions, resulting in the illusion of improbable events, like the “birthday paradox”. Moreover, many such stories, even (...)
  33. New Principle for Encoding Information to Create Subjective Reality in Artificial Neural Networks. Alexey Bakhirev - manuscript
    The paper analyzes two types of information, ordinary and subjective, and considers the difference between the concepts of intelligence and a perceiving mind. It also describes some logical and functional features of consciousness. A technical approach is proposed for obtaining subjective information by changing the signal’s temporal degree of freedom to a spatial one in order to obtain the "observer" function in the system and information signals appearing in relation to it, that (...)
  34. The Main Mind Paradox. Why There Is No Point in Backing Up Brain and Personality. Alexey Bakhirev - manuscript
    Attempts to reproduce animateness using devices generate a paradox that provides a new view of life and death, one that differs from both religious and atheistic visions.
  35. How to Survive the End of the Universe. Alexey Turchin - manuscript
    The problem of surviving the end of the observable universe may seem very remote, but there are several reasons it may be important now: a) we may soon need to define the final goals of runaway space colonization and of superintelligent AI, b) the possibility of the solution will prove the plausibility of indefinite life extension, and c) the understanding of risks of the universe’s end will help us to escape dangers like artificial false vacuum decay. A possible solution depends (...)
  36. UAP and Global Catastrophic Risks. Alexey Turchin - manuscript
    Abstract: After the 2017 NY Times publication, the stigma attached to scientific discussion of the problem of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity, and especially the probability of global catastrophic risks? To answer this question, we assume that the 2004 Nimitz case was real, and we suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, malfunction, (...)
  37. Multilevel Strategy for Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality. Alexey Turchin - manuscript
    Abstract: The field of life extension is full of ideas, but they are unstructured. Here we suggest a comprehensive strategy for reaching personal immortality based on the idea of multilevel defense, where the next life-preserving plan is implemented if the previous one fails, but all plans need to be prepared simultaneously in advance. The first plan, plan A, is surviving until the creation of advanced AI by fighting aging and other causes of death and extending one’s life. Plan B is cryonics, (...)
  38. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence. Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe has a high probability. (...)
  39. “Cheating Death in Damascus” Solution to the Fermi Paradox. Alexey Turchin & Roman Yampolskiy - manuscript
    One of the possible solutions of the Fermi paradox is that all civilizations go extinct because they hit some Late Great Filter. Such a universal Late Great Filter must be an unpredictable event that all civilizations unexpectedly encounter, even if they try to escape extinction. This is similar to the “Death in Damascus” paradox from decision theory. However, this unpredictable Late Great Filter could be escaped by choosing a random strategy for humanity’s future development. However, if all civilizations act randomly, (...)
  40. Islands as refuges for surviving global catastrophes. Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose: Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during global catastrophes. (...)
  41. A Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World. Alexey Turchin - manuscript
    Abstract: Four main forms of the Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and the Universal DA. All four forms use different probabilistic logic to predict that the end of human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday Argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA (...)
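    For orientation, the simplest of the four forms mentioned here, Gott's delta-t argument, can be stated and computed in a few lines. The function below implements only the standard Gott formula, not anything specific to this paper, and the 200,000-year figure for the past duration of Homo sapiens is an illustrative assumption.

      # Gott's "delta t" Doomsday Argument: observed at a random moment of its lifetime,
      # a phenomenon's future duration lies, with confidence c, between
      # t_past*(1-c)/(1+c) and t_past*(1+c)/(1-c).

      def gott_interval(t_past: float, confidence: float = 0.95) -> tuple[float, float]:
          c = confidence
          return t_past * (1 - c) / (1 + c), t_past * (1 + c) / (1 - c)

      low, high = gott_interval(200_000)  # assumed past duration of Homo sapiens, in years
      print(f"95% interval for humanity's future duration: {low:,.0f} to {high:,.0f} years")
      # roughly 5,100 to 7,800,000 years under this assumption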
  42. First human upload as AI Nanny. Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but its arrival is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  43. World Order in the Past, Present, and Future. Leonid Grinin, Alexey Andreev & Ilya Illin - 2016 - Social Evolution and History 15 (1):58-84.
    The present article analyzes the world order in the past, present and future, as well as the main factors, foundations and ideas underlying the maintenance and change of the international and global order. The first two sections investigate the evolution of the world order from ancient times up to the late twentieth century. The third section analyzes the origin and decline of the world order based on American hegemony. The authors reveal the contradictions of the current unipolar (...)
    4 citations
  44. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values. Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. The views of 21 authors were reviewed, and some of them have several theories. A (...)
  45. Global Catastrophic Risks by Chemical Contamination. Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risks that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, which could become a global catastrophic risk if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen now or could happen in the future: autocatalytic (...)
  46. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence. Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  47. The Future of Nuclear War. Alexey Turchin - manuscript
    In this article, I present a view on the future of nuclear war which takes into account the expected technological progress as well as global political changes. There are three main directions in which technological progress in nuclear weapons may happen: a) many gigaton-scale weapons; b) cheaper nuclear bombs based on reactor-grade plutonium, laser isotope separation, or hypothetical pure fusion designs. Also, advanced nanotechnology will provide the ability to quickly build large nuclear (...)
  48. Catching Treacherous Turn: A Model of the Multilevel AI Boxing. Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. Among several possibilities, the confinement of AI in a box is considered a low-quality solution for AI safety. However, some treacherous AIs could be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field of (...)
  49. Back to the Future: Curing Past Sufferings and S-Risks via Indexical Uncertainty. Alexey Turchin - manuscript
    The long unbearable sufferings of the past and the agonies experienced in some future timelines, in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of (...)
  50. Presumptuous Philosopher Proves Panspermia. Alexey Turchin - manuscript
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to a hypothesis that postulates the existence of a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). There are two types of anthropic reasoning: SIA and SSA. SIA implies that my existence is an argument that larger (...)
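    The SIA reasoning this abstract relies on can be made concrete with a small Bayes calculation. The sketch below shows the generic SIA update (posterior proportional to prior times the number of observers the hypothesis predicts); the observer counts for the two hypotheses are invented round numbers, not figures from the paper.

      # Self-Indication Assumption (SIA): weight each hypothesis by the number of
      # observers it predicts, then renormalize. Hypotheses implying many more
      # observers (e.g. panspermia seeding many inhabited worlds) gain credence.

      def sia_posterior(priors, observers):
          weights = {h: priors[h] * observers[h] for h in priors}
          total = sum(weights.values())
          return {h: w / total for h, w in weights.items()}

      priors = {"no_panspermia": 0.5, "interstellar_panspermia": 0.5}
      observers = {"no_panspermia": 1e9, "interstellar_panspermia": 1e12}  # assumed counts

      print(sia_posterior(priors, observers))
      # {'no_panspermia': ~0.001, 'interstellar_panspermia': ~0.999}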
Showing results 1–50 of 60