Results for 'anthropogenic existential risk'

1000+ found
  1. Space Colonization and Existential Risk.Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the former claim, my primary goal in this article is to challenge the latter: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
    1 citation
  2. (The Impossibility of) Acting Upon a Story That We Can Believe.Zoltán Simon - 2018 - Rethinking History 22 (1):105-125.
    The historical sensibility of Western modernity is best captured by the phrase “acting upon a story that we can believe.” Whereas the most famous stories of historians facilitated nation-building processes, philosophers of history told the largest possible story to act upon: history itself. When the rise of an overwhelming postwar skepticism about the modern idea of history discredited the entire enterprise, the historical sensibility of “acting upon a story that we can believe” fell apart to its constituents: action, story form, (...)
    3 citations
  3. Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind (...)
    3 citations
  4. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk.David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function (...)
  5. Does Anthropogenic Climate Change Violate Human Rights?Derek Bell - 2011 - Critical Review of International Social and Political Philosophy 14 (2):99-124.
    Early discussions of 'climate justice' have been dominated by economists rather than political philosophers. More recently, analytical liberal political philosophers have joined the debate. However, the philosophical discussion of climate justice remains in its early stages. This paper considers one promising approach based on human rights, which has been advocated recently by several theorists, including Simon Caney, Henry Shue and Tim Hayward. A basic argument supporting the claim that anthropogenic climate change violates human rights is presented. Four objections to (...)
    18 citations
  6. Existential Risks: New Zealand Needs a Method to Agree on a Value Framework and How to Quantify Future Lives at Risk.Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a (...)
  7. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are (...)
    8 citations
  8. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures (forthcoming).
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In (...)
    1 citation
  9. EVOLUTIONARY RISK OF HIGH HUME TECHNOLOGIES. Article 2. THE GENESIS AND MECHANISMS OF EVOLUTIONARY RISK.V. T. Cheshko, L. V. Ivanitskaya & V. I. Glazko - 2015 - Integrative Anthropology (1):4-15.
    Sources of evolutionary risk for the stable adaptive strategy of Homo sapiens are an imbalance of: (1) intra-genomic co-evolution (intragenomic conflicts); (2) gene-cultural co-evolution; (3) inter-cultural co-evolution; (4) the techno-humanitarian balance; (5) inter-technological conflicts (technological traps). At least phenomenologically, the components of the evolutionary risk are reversible, but in the aggregate they are potentially irreversibly destructive to the biosocial and cultural self-identity of Homo sapiens. When the actual evolution is the subject of rationalist control and/or manipulation, (...)
  10. Could Slaughterbots Wipe Out Humanity? Assessment of the Global Catastrophic Risk Posed by Autonomous Weapons.Alexey Turchin - manuscript
    Recently criticisms against autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to (...)
    1 citation
  11. Presumptuous Philosopher Proves Panspermia.Alexey Turchin - manuscript
    The presumptuous philosopher (PP) thought experiment lends more credence to the hypothesis which postulates the existence of a larger number of observers than other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real observer selection effects where it could apply, and one is the possibility of interstellar panspermia (IP)—meaning that the universes where interstellar panspermia is possible will have a billion times more civilizations than universes without IP, and thus we are likely (...)
  12. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology.Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to one pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability of the worst pandemic to kill a person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously will kill everyone. Such situations cannot happen naturally, (...)
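The arithmetic in this abstract can be sketched numerically. A minimal illustration, assuming (as the abstract does) that each pandemic kills any given person with probability 0.5 and that the pandemics act independently; the world-population figure is an assumption for scale:

```python
# Per-person survival probability under n simultaneous, independently acting
# pandemics, each killing a given person with probability p.
def survival_probability(p: float, n: int) -> float:
    return (1.0 - p) ** n

world_population = 8e9  # assumed for illustration
p_kill = 0.5            # per-pandemic kill probability from the abstract
n = 30                  # number of simultaneous pandemics from the abstract

survive = survival_probability(p_kill, n)  # 0.5**30, roughly 9.3e-10
expected_survivors = world_population * survive
print(f"per-person survival: {survive:.2e}")
print(f"expected survivors: {expected_survivors:.1f}")
```

Under these assumptions the expected number of survivors worldwide is below ten, which is why the abstract treats 30 simultaneous strong pandemics as effectively killing everyone.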
  13. Co-evolutionary biosemantics of evolutionary risk at technogenic civilization: Hiroshima, Chernobyl – Fukushima and further….Valentin Cheshko & Valery Glazko - 2016 - International Journal of Environmental Problems 3 (1):14-25.
    From Chernobyl to Fukushima, it became clear that technology is a systemic evolutionary factor, and that the consequences of man-made disasters represent the actualization of risk related to changes in the elements of social heredity (cultural transmission). The uniqueness of the human phenomenon is a characteristic of the system arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. The distribution of emerging adaptive innovations within each module takes place in accordance with two algorithms that are characterized by the (...)
  14. The Prolegomena to the Theory of Human Stable Evolutionary Strategy as Ideology of Risk Society at the Age of Controlled Evolution Technologies.V. T. Cheshko - 2016 - In Teodor N. Țîrdea (ed.), Strategia supravietuirii din perspectiva bioeticii, filosofiei și medicinei. Culegere de articole științifice. Vol. 22. pp. 134-139.
    The stable adaptive strategy of Homo sapiens (SESH) is a superposition of three different adaptive data arrays: biological, socio-cultural and technological modules, based on three independent processes of generation and replication of adaptive information: genetic, socio-cultural and symbolic transmission (inheritance). The third component of SESH is focused equally on the adaptive transformation of the environment and of the carrier of SESH itself. With the advent of High Hume technology, risk has reached the existential significance level. The existential level of technical risk (...)
  15. Bioeconomics, Biopolitics and Bioethics: Evolutionary Semantics of Evolutionary Risk (Anthropological Essay).V. T. Cheshko - 2016 - Bioeconomics and Ecobiopolitic (1 (2)).
    An attempt at a trans-disciplinary analysis of the evolutionary value of bioethics is realized. Currently, there are High Tech schemes for the management and control of the genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). The biological, socio-cultural and technological factors are included in the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization formed mainly in the 17th–18th centuries and are experiencing ever-increasing and destabilizing (...) pressure from scientific theories and technological realities. The sequence of diagnostic signs of a new era once again splits into the technological and natural sciences on the one hand, and the humanitarian and anthropological sciences on the other. The natural-science series corresponds to a system of technological risks to be solved using algorithms of established safety procedures; the socio-humanitarian series presents anthropological risk. The phenomenon of global bioethics is regarded as a systemic socio-cultural adaptation to technology-driven human evolution. A conceptual model for the meta-structure of the stable evolutionary strategy of Homo sapiens (SESH) is proposed. In accordance with this model, SESH is composed of genetic, socio-cultural and techno-rationalist modules, with global bioethics as a tool to minimize existential evolutionary risk. The existence of objectively descriptive and value-teleological evolutionary trajectory parameters of humanity in the modern technological and civilizational context (1), and the genesis of global bioethics as a systemic social adaptation to ensure self-identity (2), are postulated.
  16. COEVOLUTIONARY SEMANTICS OF TECHNOLOGICAL CIVILIZATION GENESIS AND EVOLUTIONARY RISK (BETWEEN THE BIOAESTHETICS AND BIOPOLITICS).V. T. Cheshko & O. N. Kuz - 2016 - Anthropological Dimensions of Philosophical Studies (10):43-55.
    Purpose (metatask) of the present work is to attempt to give a glance at the problem of existential and anthropological risk caused by the contemporary man-made civilization from the perspective of comparison and confrontation of aesthetics, the substrate of which is emotional and metaphorical interpretation of individual subjective values and politics feeding by objectively rational interests of social groups. In both cases there is some semantic gap present between the represented social reality and its representation (...)
  17. Coevolutionary Semantics of Technological Civilization Genesis and Evolutionary Risk.V. T. Cheshko & O. M. Kuz - 2016 - Anthropological Measurements of Philosophical Research 10:43-55.
    Purpose of the present work is to attempt to give a glance at the problem of existential and anthropological risk caused by the contemporary man-made civilization from the perspective of comparison and confrontation of aesthetics, the substrate of which is emotional and metaphorical interpretation of individual subjective values and politics feeding by objectively rational interests of social groups. In both cases there is some semantic gap present between the represented social reality and its representation in perception of works (...)
  18. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks) , where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI (...)
    4 citations
  19. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the (...)
    2 citations
  20. Aquatic Refuges for Surviving a Global Catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines (...)
    1 citation
  21. Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated (...)
    1 citation
  22. Pascal's Mugger Strikes Again.Dylan Balfour - forthcoming - Utilitas:1-7.
    In a well-known paper, Nick Bostrom presents a confrontation between a fictionalised Blaise Pascal and a mysterious mugger. The mugger persuades Pascal to hand over his wallet by exploiting Pascal's commitment to expected utility maximisation. He does so by offering Pascal an astronomically high reward such that, despite Pascal's low credence in the mugger's truthfulness, the expected utility of accepting the mugging is higher than rejecting it. In this article, I present another sort of high value, low credence mugging. This (...)
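The decision-theoretic core of the mugging described in this abstract can be sketched in a few lines. A minimal illustration; the credence, reward and wallet values below are assumptions for the example, not figures from the article:

```python
# Expected utility of handing over the wallet: a tiny probability of an
# astronomical reward, versus the sure loss of the wallet's value.
def expected_utility_of_paying(credence: float, reward: float, wallet: float) -> float:
    return credence * reward - wallet

# Even a one-in-a-trillion credence in the mugger's truthfulness is swamped
# by a sufficiently large promised reward.
eu = expected_utility_of_paying(credence=1e-12, reward=1e15, wallet=100.0)
print(eu > 0)  # the expected-utility maximiser hands over the wallet
```

The point of the thought experiment is that no matter how low the credence, the mugger can always name a reward large enough to make the product dominate, which is what puts pressure on naive expected-utility maximisation.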
  23. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    4 citations
  24. Long-Term Trajectories of Human Civilization.Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. -/- Design/methodology/approach This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
    3 citations
  25. Robustness to Fundamental Uncertainty in AGI Alignment.G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary (...)
  26. Assessing the Future Plausibility of Catastrophically Dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century; but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real (...)
  27. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such a prediction is the analysis of the convergent drives of any future AI, as begun by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI's need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    2 citations
  28. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup (...)
    2 citations
  29. A Lack of Ideological Diversity is Killing Social Research.Musa al-Gharbi - 2017 - Times Higher Education 2298:27-28.
    The lack of ideological diversity in social research, paired with the lack of engagement with citizens and policymakers who come from other places on the ideological spectrum, poses an existential risk to the continued credibility, utility and even viability of social research. The need for reform is urgent.
    1 citation
  30. Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
  31. Unveiling Thomas Moynihan's Spinal Catastrophism: The Spine Considered as Chronogenetic Media Artifact. [REVIEW]Ekin Erkan - 2019 - Cosmos and History 15 (1):564-571.
    A review of Thomas Moynihan's Spinal Catastrophism: A Secret History (2019).
  32. Global Solutions Vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  33. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission which includes the description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  34. Non-Additive Axiologies in Large Worlds.Christian Tarsney & Teruji Thomas - manuscript
    Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction is practically important: additive axiologies support 'arguments from astronomical scale' which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a (...)
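The distinction this abstract draws can be made concrete with a toy comparison of one additively separable axiology (total utilitarianism) and one non-additive axiology (average utilitarianism). The welfare numbers are illustrative assumptions, not taken from the paper:

```python
# Additively separable: a world's value is the sum of individual welfares.
def total_value(welfares):
    return sum(welfares)

# Non-additive: a world's value is the average individual welfare.
def average_value(welfares):
    return sum(welfares) / len(welfares)

small_world = [10, 10]     # two well-off people
large_world = [1] * 1000   # many people with barely positive welfare

print(total_value(small_world), total_value(large_world))      # totalism favors the large world
print(average_value(small_world), average_value(large_world))  # averagism favors the small one
```

Because totalism scales with the number of value-bearers while averagism does not, only the additive view straightforwardly supports "arguments from astronomical scale" of the kind the abstract mentions.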
  35. Surviving Global Risks Through the Preservation of Humanity's Data on the Moon.Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica (in press).
    Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation of an order of magnitude of 100 million years, a little-explored topic thus far. It is important that a potential (...)
  36. Islands as Refuges for Surviving Global Catastrophes.Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during global catastrophes. (...)
  37. Configuration of Stable Evolutionary Strategy of Homo Sapiens and Evolutionary Risks of Technological Civilization (the Conceptual Model Essay).Valentin T. Cheshko, Lida V. Ivanitskaya & Yulia V. Kosova - 2014 - Biogeosystem Technique 1 (1):58-68.
    The stable evolutionary strategy of Homo sapiens (SESH) is built in accordance with the modular and hierarchical principle and consists of the same type of self-replicating elements, i.e. it is a system of systems. At the top level of the organization of SESH is the superposition of genetic, social, cultural and techno-rationalistic complexes. The components of this triad differ in the mechanism of cycles of generation - replication - transmission - fixing/elimination of adaptively relevant information. This mechanism is implemented either in accordance (...)
    2 citations
  38. Near the Omega point: Anthropological-epistemological essay on the COVID-19 pandemic.Valentin Cheshko - 2020 - Practical Philosophy 76 (2):53-62.
    The prerequisites of this study have three interwoven sources: natural-scientific, philosophical and socio-political ones. They are trends in the way of being of a modern, technogenic civilization. The COVID-19 pandemic caused significant damage to the image of the omnipotent techno-science that has developed in the mentality of this sociocultural type. Our goal was to study the co-evolutionary nature of this phenomenon as a natural consequence of the nature of the evolutionary strategy of our biological species. Technological civilization (...)
  40. Bioethics: Reincarnation of Natural Philosophy in Modern Science.Valentin Teodorovich Cheshko, Valery I. Glazko & Yulia V. Kosova - 2017 - Biogeosystem Technique 4 (2):111-121.
    The theory of evolution of complex systems that comprise humans, and the algorithm for constructing it, are the synthesis of evolutionary epistemology, philosophical anthropology and a concrete scientific empirical basis in modern (transdisciplinary) science. «Trans-disciplinary» in this context is interpreted as a completely new epistemological situation, which is fraught with the initiation of a civilizational crisis. The philosophy and ideology of technogenic civilization is based on the possibility of unambiguous demarcation of public value and descriptive scientific discourses (1), and the object and (...)
  41. When is Scientific Dissent Epistemically Inappropriate?Boaz Miller - forthcoming - Philosophy of Science.
    Normatively inappropriate scientific dissent prevents warranted closure of scientific controversies and confuses the public about the state of policy-relevant science, such as anthropogenic climate change. Against recent criticism by de Melo-Martín and Intemann of the viability of any conception of normatively inappropriate dissent, I identify three conditions for normatively inappropriate dissent: its generation process is politically illegitimate; it imposes an unjust distribution of inductive risks; and it adopts evidential thresholds outside an accepted range. I supplement these conditions with an (...)
  42. Global Catastrophic Risks by Chemical Contamination.Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risk that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at accumulating toxic exposure, which could become a global catastrophic event if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may be occurring now or could occur in the future: (...)
  43. First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create a safe self-improving superintelligence, yet its arrival is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  44. Bio-power and bio-policy: Anthropological and socio-political dimensions of techno-humanitarian balance.V. Cheshko & O. Kuss - 2016 - Hyleya 107 (4):267-272.
    The sociobiological and socio-political aspects of human existence have become the subject of techno-rationalistic control and manipulation. The investigation of the mutual complementarity of anthropological and ontological paradigms under these circumstances is the main purpose of the present publication. The comparative conceptual analysis of bio-power and bio-politics in the mentality of modern technological civilization is the main method of the research. The methodological and philosophical analogy between biological and social engineering allows combining them in their nature and social implications (...)
  45. Robustness to Fundamental Uncertainty in AGI Alignment.G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary (...)
  46. Anti-Realism, Easy Ontology, and Issues of Reference.Iñaki Xavier Larrauri Pertierra - manuscript
    In order to re-contextualize the otherwise ontologically privileged meaning of metaphysical debates into a more insubstantial form, metaphysical deflationism runs the risk of having to adopt potentially unwanted anti-realist tendencies. This tension between deflationism and anti-realism can be expressed as follows: in order to claim truthfully that something exists, how can deflationism avoid the anti-realist feature of construing such claims singularly in an analytical fashion? One may choose to adopt a Yablovian fallibilism about existential claims, but other approaches (...)
  47. Jaspers, Husserl, Kant: Boundary Situations as a "Turning Point".Gladys L. Portuondo - manuscript
    Abstract: The article summarizes some comments (as discussed in my book La existencia en busca de la razón. Apuntes sobre la filosofía de Karl Jaspers (Existence in Search of Reason: Notes on Karl Jaspers' Philosophy), Editorial Académica Española, LAP LAMBERT Academic Publishing GmbH & Co. KG, Germany, 2012) about the meaning of boundary situations in the philosophy of Karl Jaspers as a turning point with respect to Husserl's phenomenology and Kant's transcendental philosophy. For Jaspers, the meaning of the boundary situations as a structure (...)
  48. Risk Aversion and the Long Run.Johanna Thoma - 2018 - Ethics 129 (2):230-253.
    This article argues that Lara Buchak’s risk-weighted expected utility theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory’s claim to capturing a steady choice disposition in the (...)
  49. Is risk aversion irrational? Examining the “fallacy” of large numbers.H. Orri Stefánsson - 2020 - Synthese 197 (10):4425-4437.
    A moderately risk averse person may turn down a 50/50 gamble that either results in her winning $200 or losing $100. Such behaviour seems rational if, for instance, the pain of losing $100 is felt more strongly than the joy of winning $200. The aim of this paper is to examine an influential argument that some have interpreted as showing that such moderate risk aversion is irrational. After presenting an axiomatic argument that I take to be the strongest (...)
  50. Existential Nihilism: The Only Really Serious Problem in Philosophy.Walter Veit - 2018 - Journal of Camus Studies 2018:211-232.
    Since Friedrich Nietzsche, philosophers have grappled with the question of how to respond to nihilism. Nihilism, often used as a derogatory term for a 'life-denying', destructive and perhaps above all depressive philosophy, is what drove existentialists to write about the right response to a meaningless universe devoid of purpose. This latter diagnosis is what I shall refer to as existential nihilism: the denial of meaning and purpose, a view held not only by existentialists but also by a long line of (...)
1–50 of 1000+