Results for 'Alexey V. Antonov'

951 found
  1. Long-Term Trajectories of Human Civilization.Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
    11 citations
  2. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: “Risks of general artificial intelligence”, Vincent C. Müller, pages 297-301; “Autonomous technology and the greater human good”, Steve Omohundro, pages 303-315; “The errors, insights and lessons of famous AI predictions – and what they mean for the future”, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; (...)
    3 citations
  3. (1 other version)Nidus Idearum. Scilogs, XIII: Structure / NeutroStructure / AntiStructure.Florentin Smarandache - 2024 - BiblioPublishing.
    In this thirteenth book of scilogs, one may find topics on Neutrosophy, Plithogeny, Physics, Mathematics, and Philosophy: email messages to research colleagues, or replies, notes, comments, remarks about authors, articles, or books, spontaneous ideas, and so on. It presents new types of soft sets and new types of topologies. Exchanging ideas with Mohammad Abobala, Ishfaq Ahmad, Ibrahim M. Almanjahie, Fatimah Alshahrani, Nizar Altounji, Muhammad Aslam, Said Broumi, Victor Christianto, R. Diksh, Feng Liu, Frank Julian Gelli, Erick Gonzalez Caballero, (...)
  4. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt.Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that, as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
  5. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    11 citations
  6. Back to the Future: Curing Past Sufferings and S-Risks via Indexical Uncertainty.Alexey Turchin - manuscript
    The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of (...)
  7. Presumptuous Philosopher Proves Panspermia.Alexey Turchin - manuscript
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to hypotheses which postulate the existence of a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). There are two types of anthropic reasoning: SIA and SSA. SIA implies that my existence is an argument that larger (...)
  8. Менеджмент наукового пошуку: стратегія і тактика наукових досліджень [Management of Scientific Inquiry: The Strategy and Tactics of Scientific Research].Alexey Dzhusov, Oleksandr Krupskyi, Yuliya Stasiuk & Olena Pryz - 2009 - Dnipro, Ukraine.
    This monograph is devoted to theoretical studies of the management of science. It examines the essence of managing scientific inquiry as a distinct type of activity, explores ways of solving the fundamental and applied problems that arise in conducting scientific research and implementing its results, and defines the categories of a culture of scientific inquiry. Particular attention is paid to the protection of intellectual property, as well as to the presentation and commercialization of the results of scientific work. The monograph will be useful to master's students, doctoral students, young researchers, and everyone engaged in scientific research.
  9. The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies.Alexey Turchin & Justin Shovelain - manuscript
    Abstract. This article presents a model of the change in the probability of global catastrophic risks in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least double exponential growth (...)
  10. No Theory for Old Man. Evolution led to an Equal Contribution of Various Aging Mechanisms.Alexey Turchin - manuscript
    Does a single mechanism of aging exist? Most scientists have their own pet theories about what aging is, but the lack of a generally accepted theory is mind-blowing. Here we suggest an explanation: evolution works against a unitary mechanism of aging because it equalizes the ‘warranty periods’ of different resilience systems. Therefore, we need life-extension methods that go beyond fighting specific aging mechanisms, such as using a combination of geroprotectors or repair-fixing bionanorobots controlled by AI.
  11. Technologies of artificial sensations.Alexey S. Bakhirev - manuscript
    Technologies based on emergence will make it possible to reproduce sensations on non-biological carriers by making devices feel. These technologies will fundamentally change not only the approach to the creation of artificial intelligence, but will also create artificial worlds of a totally different level, which, unlike virtual models, will really exist for themselves. This approach differs completely from the methods currently used in digital technologies. Possibly the principles described herein will give rise to many new trends.
  12. Философия материи [The Philosophy of Matter].Alexey Tomilov - manuscript
    The philosophy of matter is a unified philosophical theory of matter and consciousness (physicalism-materialism) which is deduced from a single principle, the "universal internal property - color" (the internal properties argument), as a result of which matter receives a positive definition. In the course of this deduction, a figurative model of matter is constructed, ready for mathematical formalization; some possible consequences for physical views are shown; the hard problem of consciousness is solved, and the general structure of consciousness is described. This work does not give exact answers to all questions, but it forces us to take a fresh look at what we call matter.
  13. MODELS AND LOGIC OF SUBJECTIVE REALITY. SUBJECTIVE WORLDS.Alexey Bakhirev - manuscript
  14. UAP and Global Catastrophic Risks.Alexey Turchin - manuscript
    Abstract: After a 2017 NY Times publication, the stigma around scientific discussion of the problem of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity and, especially, the probability of global catastrophic risks? To answer this question, we assume that the Nimitz case in 2004 was real, and we suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, malfunction, (...)
  15. Aquatic refuges for surviving a global catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are (...)
    2 citations
  16. Types of Boltzmann Brains.Alexey Turchin & Roman Yampolskiy - manuscript
    Abstract. Boltzmann brains (BBs) are minds which randomly appear as a result of thermodynamic or quantum fluctuations. In this article, the question of whether we are BBs, and the observational consequences if so, is explored. To address this problem, a typology of BBs is created, and the evidence is compared with the Simulation Argument. Based on this comparison, we conclude that while the existence of a “normal” BB is either unlikely or irrelevant, BBs with some ordering may have observable consequences. (...)
    1 citation
  17. Király V. István - Death and History.István Király V. - 2016 - Budapesti Könyvszemle (2):79-83.
    A review of István Király V.'s book Death and History.
  18. Augustine’s Paradigm ’ab exterioribus ad interiora, ab inferioribus ad superiora’ in the Western and Eastern Christian Mysticism.Alexey Fokin - 2015 - European Journal for Philosophy of Religion 7 (2):81-107.
    I argue that St. Augustine of Hippo was the first in the history of Christian spirituality to express a key tendency of Christian mysticism, which implies a gradual intellectual ascent of the human soul to God consisting of three main stages: external, internal, and supernal. In this ascent a Christian mystic proceeds from the knowledge of external beings to self-knowledge, and from his inner self to direct mystical contemplation of God. Similar doctrines may be found in the writings of (...)
  19. The Future of Nuclear War.Alexey Turchin - manuscript
    In this article, I present a view of the future of nuclear war that takes into account expected technological progress as well as global political changes. There are three main directions in which technological progress in nuclear weapons may happen: a) many gigaton-scale weapons; b) cheaper nuclear bombs, based on reactor-grade plutonium, laser isotope separation, or hypothetical pure fusion designs. Also, advanced nanotechnology will provide the ability to quickly build large nuclear (...)
  20. Glitch in the Matrix: Urban Legend or Evidence of the Simulation?Alexey Turchin & Roman Yampolskiy - manuscript
    Abstract: In the last decade, an urban legend about “glitches in the matrix” has become popular. As is typical of urban legends, there is no evidence for most such stories, and the phenomenon can be explained as resulting from hoaxes, creepypasta, coincidence, and different forms of cognitive bias. In addition, the folk understanding of probability does not bear much resemblance to actual probability distributions, resulting in the illusion of improbable events, like the “birthday paradox”. Moreover, many such stories, even (...)
  21. How to Survive the End of the Universe.Alexey Turchin - manuscript
    The problem of surviving the end of the observable universe may seem very remote, but there are several reasons it may be important now: a) we may need to define soon the final goals of runaway space colonization and of superintelligent AI, b) the possibility of the solution will prove the plausibility of indefinite life extension, and c) the understanding of risks of the universe’s end will help us to escape dangers like artificial false vacuum decay. A possible solution depends (...)
  22. Multilevel Strategy for Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality.Alexey Turchin - manuscript
    Abstract: The field of life extension is full of ideas, but they are unstructured. Here we suggest a comprehensive strategy for reaching personal immortality based on the idea of multilevel defense, where the next life-preserving plan is implemented if the previous one fails, but all plans need to be prepared simultaneously in advance. The first plan, Plan A, is surviving until the creation of advanced AI via fighting aging and other causes of death and extending one’s life. Plan B is cryonics, (...)
  23. “Cheating Death in Damascus” Solution to the Fermi Paradox.Alexey Turchin & Roman Yampolskiy - manuscript
    One of the possible solutions of the Fermi paradox is that all civilizations go extinct because they hit some Late Great Filter. Such a universal Late Great Filter must be an unpredictable event that all civilizations unexpectedly encounter, even if they try to escape extinction. This is similar to the “Death in Damascus” paradox from decision theory. This unpredictable Late Great Filter could, however, be escaped by choosing a random strategy for humanity’s future development. Yet if all civilizations act randomly, (...)
  24. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach pioneered by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  25. A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming.Alexey Turchin - manuscript
    Humanity may underestimate the rate of natural global catastrophes because of survival bias (the “anthropic shadow”). But the resulting reduction of the Earth’s future habitability duration is not very large in most plausible cases (1-2 orders of magnitude), and thus it looks like we still have at least millions of years. However, the anthropic shadow implies anthropic fragility: we are more likely to live in a world where a sterilizing catastrophe is long overdue and could be triggered by an unexpectedly small human (...)
  26. Catching Treacherous Turn: A Model of the Multilevel AI Boxing.Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises, but no satisfactory solution has been found. Among several possibilities, the confinement of AI in a box is considered a low-quality solution for AI safety. However, some treacherous AIs can be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement, aggregating all known ideas in the field of (...)
  27. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup option, (...)
    3 citations
  28. Constantin TONU: István KIRÁLY V., Death and History, Lambert Academic Publishing, Saarbrücken, ISBN: 978-3-659-80237-9, 172 pages, 2015.V. Istvan Kiraly & Constantin Tonu - 2016 - Metacritic Journal for Comparative Studies and Theory 2 (1).
    A review of István Király V.'s book Death and History.
  29. Active Imagination as an Alternative to Lucid Dreaming: Theory and Experimental Results.Alexey Turchin - manuscript
    Lucid dreaming (LD) is a fun and interesting activity, but most participants have difficulties attaining lucidity, retaining it during the dream, concentrating on the needed task, and remembering the results. This motivates the search for new ways to enhance lucid dreaming via different induction techniques, including chemicals and electric brain stimulation. However, results are still unstable. An alternative approach is to reach lucid dreaming-like states via altered states of consciousness not related to dreaming. Several methods such as (...)
  30. Global Solutions vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
    2 citations
  31. Immortality, Infinity and the limitations of God.Alexey Prokofyev - manuscript
    I tried to describe Infinity as a major natural conundrum known to man. The booklet also contains answers to some eternal questions, such as the meaning of life, faith, etc. I am especially proud of my Morality section.
  32. Digital Immortality: Theory and Protocol for Indirect Mind Uploading.Alexey Turchin - manuscript
    Future superintelligent AI will be able to reconstruct a model of the personality of a person who lived in the past based on informational traces. This could be regarded as some form of immortality if this AI also solves the problem of personal identity in a copy-friendly way. A person who is currently alive could invest now in passive self-recording and active self-description to facilitate such reconstruction. In this article, we analyze informational-theoretical relationships between the human mind, its traces, and (...)
    2 citations
  33. NEW PRINCIPLE FOR ENCODING INFORMATION TO CREATE SUBJECTIVE REALITY IN ARTIFICIAL NEURAL NETWORKS.Alexey Bakhirev - manuscript
    The paper outlines an analysis of two types of information, ordinary and subjective, and considers the difference between the concepts of intelligence and the perceiving mind. It also describes some logical and functional features of consciousness. An approach is proposed for technically obtaining subjective information by changing the signal’s temporal degree of freedom to a spatial one, in order to obtain the "observer" function in the system and information signals appearing in relation to it, that (...)
  34. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore what kind of simulation humanity is most likely located in, based on pure theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  35. Wireheading as a Possible Contributor to Civilizational Decline.Alexey Turchin - manuscript
    Abstract: Advances in new technologies create new ways to stimulate the pleasure center of the human brain via new chemicals, direct application of electricity, electromagnetic fields, “reward hacking” in games and social networks, and in the future, possibly via genetic manipulation, nanorobots and AI systems. This may have two consequences: a) human life may become more interesting, b) humans may stop participating in any external activities, including work, maintenance, reproduction, and even caring for their own health, which could slowly contribute (...)
    2 citations
  36. THE MAIN MIND PARADOX. WHY THERE IS NO POINT IN BACKING UP BRAIN AND PERSONALITY.Alexey Bakhirev - manuscript
    Attempts to reproduce animateness using appliances generate a paradox that provides a new view of life and death, one that differs from both religious and atheistic visions.
  37. Classification of Approaches to Technological Resurrection.Alexey Turchin & Maxim Chernyakov - manuscript
    Abstract. Death seems to be a permanent event, but there is no actual proof of its irreversibility. Here we list all known ways to resurrect the dead that do not contradict our current scientific understanding of the world. While no method is currently possible, many of those listed here may become feasible with future technological development, and it may even be possible to act now to increase their probability. The most well-known such approach to technological resurrection is cryonics. Another method (...)
    1 citation
  38. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  39. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures (in press).
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by (...)
    1 citation
  40. You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection.Alexey Turchin - manuscript
    Abstract: In the future, it will be possible to create advanced simulations of ancestors in computers. Superintelligent AI could make these simulations very similar to the real past by creating a simulation of all of humanity. Such a simulation would use all available data about the past, including internet archives, DNA samples, advanced nanotech-based archeology, human memories, as well as text, photos and videos. This means that currently living people will be recreated in such a simulation, and in some sense, (...)
    1 citation
  41. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently, criticism of autonomous weapons was presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI: the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow (...)
    1 citation
  42. Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions.Alexey Turchin - manuscript
    The effective altruism movement aims to save lives in the most cost-effective ways. In the future, technology will allow radical life extension, and anyone who survives until that time will gain potentially indefinite life extension. Fighting aging now increases the number of people who will survive until radical life extension becomes possible. We suggest a simple model, where radical life extension is achieved in 2100, the human population is 10 billion, and life expectancy is increased by simple geroprotectors like metformin (...)
    1 citation
  43. Problema soznaniya v svete mezhdistsiplinarnykh issledovaniy: materialy respublikanskoy nauchnoy konferentsii [The Problem of Consciousness in the Light of Interdisciplinary Research: Proceedings of a Republican Scientific Conference].V. V. Luzgin, R. M. Nugaev & N. M. Solodukho (eds.) - 1997 - Kazan: Izd-vo Kazanskogo gos. tekhn. universiteta im. A.N. Tupoleva.
  44. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission which includes a description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  45. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century; but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  46. Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence.Alexey Turchin, David Denkenberger, Alice Zhila, Sergey Markov & Mikhail Batin - 2017 - Informatica 41:401.
    In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently living (...)
  47. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence.Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extraterrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe has a high probability. (...)
  48. Islands as refuges for surviving global catastrophes.Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during global catastrophes. (...)
  49. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  50. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict will happen in the middle of the 21st century, but for maximum safety we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
1 — 50 / 951