Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the former claim, my primary goal in this article is to challenge the latter: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not to colonize Mars. I close with some thoughts on what it would take to show that we do have an obligation to colonize Mars, and on related issues concerning the relationship between the way we discount our preferences over time and projects with long time horizons, like space colonization.
The historical sensibility of Western modernity is best captured by the phrase “acting upon a story that we can believe.” Whereas the most famous stories of historians facilitated nation-building processes, philosophers of history told the largest possible story to act upon: history itself. When the rise of an overwhelming postwar skepticism about the modern idea of history discredited the entire enterprise, the historical sensibility of “acting upon a story that we can believe” fell apart into its constituents: action, story form, and belief in a feasible future outcome. Its constituent parts nevertheless still hold, either separately or in paired arrangements. First, believable stories are still told, but without an equally believable future outcome to facilitate. Second, in the shape of what I call the prospect of unprecedented change, there still is a feasible vision of a future (in prospects of technology and the Anthropocene), but it defies story form. And third, it is even possible to act upon that feasible future, but such action aims at avoiding worst-case scenarios instead of facilitating best outcomes. These are, I believe, features of an emerging postwar historical sensibility that the theory and philosophy of history has yet to understand.
A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. Neither can probabilistic risk analysis. This paper will argue that the approach that is referred to as engineering safety could be applied to reducing the risk from black swan extinction events. It will also propose a conceptual sketch of how such a strategy may be implemented: isolated, self-sufficient, and continuously manned underground refuges. Some characteristics of such refuges are also described, in particular the psychosocial aspects. Furthermore, it is argued that this implementation of the engineering safety strategy of safety barriers would be effective and plausible and could reduce the risk of an extinction event in a wide range of possible scenarios. Considering the staggering opportunity cost of an existential catastrophe, such strategies ought to be explored more vigorously.
The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function as a source of catastrophic or even existential risk. The paper first reviews a hypothesis by Bostrom about inevitable technological risks, named the vulnerable world hypothesis. This paper next hypothesizes that fragility may not only be a possible risk, but could be inevitable, and would therefore be a subclass or example of Bostrom's vulnerable worlds. After introducing the titular fragile world hypothesis, the paper details the conditions under which it would be correct, and presents arguments for why the conditions may in fact apply. Finally, the assumptions and potential mitigations of the new hypothesis are contrasted with those Bostrom suggests.
Early discussions of 'climate justice' have been dominated by economists rather than political philosophers. More recently, analytical liberal political philosophers have joined the debate. However, the philosophical discussion of climate justice remains in its early stages. This paper considers one promising approach based on human rights, which has been advocated recently by several theorists, including Simon Caney, Henry Shue and Tim Hayward. A basic argument supporting the claim that anthropogenic climate change violates human rights is presented. Four objections to this argument are examined: the 'future persons' objection; the 'risk' objection; the 'collective causation' objection; and the 'demandingness' objection. This critical examination leads to a more detailed specification and defence of the claim that anthropogenic climate change violates human rights.
Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a case for further engagement with the New Zealand public to determine societal values towards future lives and their protection.
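To give a rough sense of the kind of calculation referred to above, the sketch below compares the number of currently living people with an estimate of future lives at stake under stated assumptions; every figure is an illustrative assumption, not a value taken from the article.

```python
# Rough illustration of why future people can vastly outnumber the living.
# All figures are assumptions for demonstration, not the article's estimates.

current_population = 5_000_000         # assumed New Zealand population
years_of_future_civilisation = 10_000  # assumed survival horizon
average_lifespan_years = 80            # assumed average lifespan

future_lives = current_population * years_of_future_civilisation / average_lifespan_years
print(f"future lives at stake ~ {future_lives:,.0f}")  # ~ 625,000,000
# Even on a modest horizon, expected future lives exceed the current
# population by more than two orders of magnitude.
```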
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to their internal design.
Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by the Torino scale of asteroid danger, we suggest a color-coded scale to communicate the magnitude of global catastrophic and existential risks. The scale is based on the probability intervals of risks in the next century, if they are available. The risks' estimations could be adjusted based on their severities and other factors. The scale covers not only existential risks, but also smaller global catastrophic risks. It consists of six color levels, which correspond to previously suggested levels of prevention activity. We estimate artificial intelligence risks as “red”, while “orange” risks include nanotechnology, synthetic biology, full-scale nuclear war and a large global agricultural shortfall (caused by regional nuclear war, coincident extreme weather, etc.). The risks of natural pandemic, supervolcanic eruption and global warming are marked as “yellow” and the danger from asteroids is “green”. Keywords: global catastrophic risks; existential risks; Torino scale; policy; risk probability.
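To illustrate the kind of instrument described above, here is a minimal sketch of a color-coding function; the probability thresholds below are illustrative assumptions, not the intervals proposed in the article, whose scale also weighs severity and other factors.

```python
# Minimal sketch of a color-coded risk scale of the kind described above.
# The thresholds are assumptions for demonstration only; the article's scale
# uses six levels and also adjusts for severity and other factors.

def color_code(probability_next_century: float) -> str:
    """Map an estimated probability of a global catastrophe occurring in the
    next century to a color level (hypothetical thresholds)."""
    if probability_next_century >= 0.10:
        return "red"       # highest prevention priority
    if probability_next_century >= 0.01:
        return "orange"
    if probability_next_century >= 0.001:
        return "yellow"
    return "green"

# Example: a risk judged to have a ~5% chance this century
print(color_code(0.05))  # -> "orange"
```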
Sources of evolutionary risk for the stable adaptive strategy of Homo sapiens are an imbalance of: (1) intra-genomic co-evolution (intragenomic conflicts); (2) gene-cultural co-evolution; (3) inter-cultural co-evolution; (4) the techno-humanitarian balance; (5) inter-technological conflicts (technological traps). At least phenomenologically, the components of evolutionary risk are reversible, but in the aggregate they are potentially irreversible and destructive for the biosocial and cultural self-identity of Homo sapiens. When actual evolution becomes the subject of rationalist control and/or manipulation, the magnitude of the 4th and 5th components of evolutionary risk reaches a level of existential significance.
Recently, criticisms of autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow AI and manufacturing. 2) The potential for very cheap manufacture of drones, with prices below 1 USD each. 3) Anti-drone defense capabilities lagging offensive development. 4) A global military posture encouraging development of drone swarms as a strategic offensive weapon, able to kill civilians. We conclude that while it is unlikely that drone swarms alone will become an existential risk, lethal autonomous weapons could contribute to civilizational collapse in the case of a new world war.
The presumptuous philosopher (PP) thought experiment lends more credence to the hypothesis which postulates the existence of a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real observer selection effects where it could apply, and one is the possibility of interstellar panspermia (IP)—meaning that the universes where interstellar panspermia is possible will have a billion times more civilizations than universes without IP, and thus we are likely to be in such a universe. This is a strong counterargument against a variant of the Rare Earth hypothesis based on the difficulty of abiogenesis: even if abiogenesis is difficult, IP will disseminate life over billions of planets, meaning we are likely to find ourselves in a universe where IP has happened. This implies that there should be many planets with life in our galaxy, and renders the Fermi paradox even sharper. This means that either the Great Filter is ahead of us and there are high risks of anthropogenic extinction, or that there are many alien civilizations hiding nearby—itself a global catastrophic risk.
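The anthropic update driving this argument can be made explicit with a short calculation; the observer ratio is the abstract's stated figure, while the prior is an assumed value chosen for illustration.

```python
# Self-indication-style update sketched from the abstract's figures:
# universes where interstellar panspermia (IP) is possible host ~1e9 times
# more civilizations. The prior below is an assumed value, not the article's.

prior_prob_ip = 1e-6                          # assumed prior that IP is possible
prior_odds_ip = prior_prob_ip / (1 - prior_prob_ip)
observer_ratio = 1e9                          # abstract's stated ratio

posterior_odds = prior_odds_ip * observer_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability of an IP universe ~ {posterior_prob:.4f}")  # ~ 0.9990
# Even a very skeptical prior is swamped by the observer-count ratio.
```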
Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to one pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability of the worst pandemic killing a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen naturally, but because biotechnology is developing analogously to Moore's law, it may become possible in the near future (10-50 years from now), because of biohackers, CRISPR, bioprinters, AI-assisted DNA programming, and weapons of “knowledge-enabled mass destruction” published on the Internet. It could also happen in the case of a large-scale biological war, or if a rogue country released its entire biological arsenal simultaneously. We will also address other scenarios and risk-increasing factors, as well as mitigation and adaptation strategies.
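The "30 simultaneous pandemics" figure can be checked with simple arithmetic under the abstract's own assumption of independent, equally lethal pandemics; the population figure below is an added assumption.

```python
# Worked version of the "30 simultaneous pandemics" arithmetic, under the
# abstract's stated assumptions (each pandemic kills a given person with
# probability 0.5, pandemics act independently). Population is an assumption.

p_survive_one = 0.5
n_pandemics = 30
population = 8e9                       # assumed current world population

p_survive_all = p_survive_one ** n_pandemics
expected_survivors = population * p_survive_all
print(f"P(an individual survives all 30) ~ {p_survive_all:.2e}")   # ~ 9.31e-10
print(f"expected survivors ~ {expected_survivors:.1f}")            # ~ 7.5
# Under these idealized assumptions, ~30 overlapping severe pandemics push
# expected survivors below the size of a viable population.
```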
From Chernobyl to Fukushima, it has become clear that technology is a systemic evolutionary factor, and that the consequences of man-made disasters represent the actualization of risk related to changes in the elements of social heredity (cultural transmission). The uniqueness of the human phenomenon is a systemic characteristic arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. The distribution of emerging adaptive innovations within each module follows two algorithms, characterized by the dominance of vertical (transgenerational) and horizontal (infection, contagion) adaptive streams of information, respectively. Evolutionary risk results from an imbalance of the autonomous adaptive systems that constitute an essential attribute of the adaptive strategy of Homo. Technological civilization has an inherent predisposition to overcome its dependence on biological and physical components. This feature serves as an enhancer of the evolutionary risk generated in conjunction with scientific and technological development. We can assume an intention of the Western mentality to assign high priority (positive or negative) to technological modifications of the micro-social environment, and of the post-Soviet (East Slavic) mentality to modification of the macro-social system.
The stable adaptive strategy of Homo sapiens (SESH) is a superposition of three different adaptive data arrays: biological, socio-cultural and technological modules, based on three independent processes of generation and replication of adaptive information – genetic, socio-cultural and symbolic transmission (inheritance). The third component of SESH is focused equally on the adaptive transformation of the environment and on the carrier of SESH itself. With the advent of High Hume technology, risk has reached the level of existential significance. The existential level of technical risk is, by definition, an evolutionary risk, as it can lead to the disappearance of humanity as a species. The emergence of bioethics has to be considered as a form of modern (transdisciplinary) scientific concept and sociocultural adaptation that regulates human identity in the global-evolutionary transformation and performs the function of self-preservation.
An attempt at a trans-disciplinary analysis of the evolutionary value of bioethics is realized. Currently, there are High Tech schemes for management and control of the genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). The biological, socio-cultural and technological factors are included in the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization formed mainly in the 17th–18th centuries and are experiencing ever-increasing and destabilizing risk-taking pressure from scientific theories and technological realities. The sequence of diagnostic signs of a new era once again splits into technological and natural sciences, on the one hand, and humanitarian and anthropological sciences, on the other. The natural sciences series corresponds to a system of technological risks that can be solved using algorithms of established safety procedures. The socio-humanitarian series presents anthropological risk. The phenomenon of global bioethics is regarded as a systemic socio-cultural adaptation for technology-driven human evolution. A conceptual model for the meta-structure of the stable evolutionary strategy of Homo sapiens (SESH) is proposed. In accordance with this model, SESH is composed of genetic, socio-cultural and techno-rationalist modules, with global bioethics as a tool to minimize existential evolutionary risk. The existence of objectively descriptive and value-teleological evolutionary trajectory parameters of humanity in the modern technological and civilizational context (1), and the genesis of global bioethics as a systemic social adaptation to ensure self-identity (2), are postulated.
Purpose (metatask) of the present work is to attempt to give a glance at the problem of existential and anthropological risk caused by the contemporary man-made civilization from the perspective of comparison and confrontation of aesthetics, the substrate of which is emotional and metaphorical interpretation of individual subjective values, and politics, fed by the objectively rational interests of social groups. In both cases there is some semantic gap present between the represented social reality and its representation in the perception of works of art and in political doctrines as well. The methodology of the research is evolutionary anthropological comparativistics. The originality of the conducted analysis comes to the following: as the antithesis to biological and social reductionism in the interpretation of the phenomenon of bio-power, a co-evolutionary semantic model is proposed, in accordance with which the described semantic gap is of a substantial nature, related to the complex modular organization of a consistent and adaptive human strategy consisting of three associated but independently functional modules (genetic, cultural and techno-rational). The evolutionary trajectory of all anthropogenesis components, including civilizational, cultural and social-political evolution, is identified by the proportion between two macro variables – evolutionary effectiveness and evolutionary stability (sameness), i.e. preservation, in the context of consequential transformations, of some invariants of the organization of Homo sapiens species specificity. It should be noted that inasmuch as, in respect to humans, some modules of the evolutionary (adaptive) strategy assume self-reflection attributes, it would be more correct to speak of evolutionary correctness, i.e. correspondence to some perfection. As a result, the future of human nature depends not only on the rationalist principles of the ethics of the Homo species (the archaism of Jurgen Habermas), but also on the holistic and emotionally aesthetic image of «Self». In conclusion it should be noted that there is a causal link between the development of High Hume (NBIC) technologies and the totality of the trend in the anthropological phenomenon of bio-power that permeates all available human existence in modern civilization. As a result, there is a transformation of contemporary social (man-made) risk into evolutionary civilizational risk.
Purpose of the present work is to attempt to give a glance at the problem of existential and anthropological risk caused by the contemporary man-made civilization from the perspective of comparison and confrontation of aesthetics, the substrate of which is emotional and metaphorical interpretation of individual subjective values, and politics, fed by the objectively rational interests of social groups. In both cases there is some semantic gap present between the represented social reality and its representation in the perception of works of art and in political doctrines as well. The methodology of the research is evolutionary anthropological comparativistics. The originality of the conducted analysis comes to the following: as the antithesis to biological and social reductionism in the interpretation of the phenomenon of bio-power, a co-evolutionary semantic model is proposed, in accordance with which the described semantic gap is of a substantial nature, related to the complex modular organization of a consistent and adaptive human strategy consisting of three associated but independently functional modules. The evolutionary trajectory of all anthropogenesis components, including civilizational, cultural and social-political evolution, is identified by the proportion between two macro variables – evolutionary effectiveness and evolutionary stability, i.e. preservation, in the context of consequential transformations, of some invariants of the organization of Homo sapiens species specificity. It should be noted that inasmuch as, in respect to humans, some modules of the evolutionary strategy assume self-reflection attributes, it would be more correct to speak of evolutionary correctness, i.e. correspondence to some perfection. As a result, the future of human nature depends not only on the rationalist principles of the ethics of the Homo species, but also on the holistic and emotionally aesthetic image of «Self». In conclusion it should be noted that there is a causal link between the development of High Hume technologies and the totality of the trend in the anthropological phenomenon of bio-power that permeates all available human existence in modern civilization. As a result, there is a transformation of contemporary social risk into evolutionary civilizational risk.
Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that just as superintelligent AI both contributes to existential risk and can help prevent it, superintelligent AI can both pose a suffering risk and help avoid it. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may also be a class of safeguards for AI that helps specifically against s-risks.
An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers' position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the prospect of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are surface independent, and could provide energy, oxygen, fresh water and perhaps even food for their inhabitants for years. They are able to withstand close nuclear explosions and radiation. They are able to maintain isolation from biological attacks and most known weapons. They already exist and need only small adaptation to be used as refuges. But building refuges is only “Plan B” of existential risk preparation; it is better to eliminate such risks than try to survive them.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals. 1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence - 2 Steve Omohundro, Autonomous Technology and the Greater Human Good - 3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future - 4 Ted Goertzel, The Path to More General Artificial Intelligence - 5 Miles Brundage, Limitations and Risks of Machine Ethics - 6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents - 7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement - 8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence - 9 András Kornai, Bounding the Impact of AGI - 10 Anders Sandberg, Ethics and Impact of Brain Emulations - 11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff - 12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI.
In a well-known paper, Nick Bostrom presents a confrontation between a fictionalised Blaise Pascal and a mysterious mugger. The mugger persuades Pascal to hand over his wallet by exploiting Pascal's commitment to expected utility maximisation. He does so by offering Pascal an astronomically high reward such that, despite Pascal's low credence in the mugger's truthfulness, the expected utility of accepting the mugging is higher than rejecting it. In this article, I present another sort of high value, low credence mugging. This time, the mugger utilises research on existential risk and the long-term potential of humanity to exploit Pascal's expected-utility-maximising descendant. This mugging is more insidious than Bostrom's original as it relies on plausible facts about the long-term future, as well as realistic credences about how our everyday actions could, albeit with infinitesimally low likelihood, affect the future of humanity.
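The expected-utility comparison that drives the mugging can be made concrete; all numbers in the sketch below are assumptions chosen for illustration, not figures from Bostrom's paper or from this article.

```python
# Illustrative Pascal's-mugging arithmetic: a tiny credence in an astronomically
# large payoff can still dominate the expected-utility calculation.
# All numbers are assumptions chosen for demonstration.

credence_mugger_truthful = 1e-12        # assumed, "infinitesimally low"
promised_utility = 1e20                 # assumed astronomically high reward
wallet_utility = 100.0                  # assumed utility of keeping the wallet

eu_accept = credence_mugger_truthful * promised_utility   # = 1e8
eu_reject = wallet_utility                                 # = 100
print(eu_accept > eu_reject)  # True: the expected-utility maximiser is "mugged"
```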
A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI's intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, before it starts self-improvement, during its takeoff, when it uses various instruments to escape its initial confinement, or after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or could feature flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos. Findings: Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation and astronomical trajectories appear possible. Originality/value: Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.
The AGI alignment problem has a bimodal distribution of outcomes, with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions to reduce the risk of false positives. Herein we explore in detail two relevant points of uncertainty that AGI alignment research hinges on---meta-ethical uncertainty and uncertainty about mental phenomena---and show how to reduce false positives in response to them.
In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI capable of acting completely independently in the real world, or an AI capable of starting unlimited self-improvement. In this article, I present arguments that place the earliest timing of dangerous AI in the coming 10–20 years, using several partly independent sources of information: 1. Polls, which show around a 10 percent probability of an early creation of artificial general intelligence in the next 10-15 years. 2. The fact that artificial neural network (ANN) performance and other characteristics, like the number of “neurons”, are doubling every year; extrapolating this tendency suggests that roughly human-level performance will be reached in less than a decade. 3. The acceleration of the hardware performance available for AI research, which outperforms Moore's law thanks to advances in specialized AI hardware, better integration of such hardware in larger computers, cloud computing and larger budgets. 4. Hyperbolic growth extrapolations of big history models.
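The extrapolation in point 2 can be illustrated with a doubling-time calculation; the starting and target scales below are assumed figures, not values given in the article.

```python
import math

# Illustrative doubling-time extrapolation for the claim that annually doubling
# ANN scale reaches roughly human-level within a decade. The start and target
# figures are assumptions for demonstration only.

current_scale = 1e11       # assumed parameter/"neuron" count of large ANNs today
human_scale = 1e14         # assumed human-brain-equivalent scale
doublings_per_year = 1     # the trend stated in the abstract

years_needed = math.log2(human_scale / current_scale) / doublings_per_year
print(f"years to reach the assumed human-level scale ~ {years_needed:.1f}")  # ~ 10.0
```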
Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is the analysis of the convergent drives of any future AI, an approach started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI's need to wage war against its potential rivals by either physical or software means, or to increase its bargaining power. This militarization trend increases global catastrophic risk or even existential risk during AI takeoff, which includes the use of nuclear weapons against rival AIs, blackmail by the threat of creating a global catastrophe, and the consequences of a war between two AIs. As a result, even benevolent AI may evolve into potentially dangerous military AI. The type and intensity of the militarization drive depend on the relative speed of the AI takeoff and the number of potential rivals. We show that the AI militarization drive and the evolution of national defense will merge, as a superintelligence created in the defense environment will have quicker takeoff speeds, but a distorted value system. We conclude with peaceful alternatives.
Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup option, implemented if Plan A fails. In the case of global risks, Plan A is intended to prevent a catastrophe and Plan B to survive it, if it is not avoided. Each plan has similar stages: analysis, planning, funding, low-level realization and high-level realization. Two variables—plans and stages—provide an effective basis for classification of all possible X-risks prevention methods in the form of a two-dimensional map, allowing the choice of optimal paths for human survival. I have created a framework for estimating the utility of various prevention methods based on their probability of success, the chances that they will be realized in time, their opportunity cost, and their risk. I also distinguish between top-down and bottom-up approaches.
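The utility-estimation framework mentioned above can be caricatured with a small scoring function; the combination rule and the example numbers are assumptions for illustration, not the article's actual framework.

```python
# Minimal sketch of a scoring framework of the kind described above.
# The combination rule and example numbers are assumptions; the article's
# framework may weigh and combine these factors differently.

def prevention_method_score(p_success: float,
                            p_ready_in_time: float,
                            opportunity_cost: float,
                            downside_risk: float) -> float:
    """Score a prevention method: chance it works and is ready in time,
    discounted by its opportunity cost and its own downside risk
    (all expressed on a common 0-1 scale for this sketch)."""
    return p_success * p_ready_in_time - opportunity_cost - downside_risk

# Example: compare a hypothetical "Plan A" prevention method with a "Plan B" refuge.
print(prevention_method_score(0.3, 0.6, 0.05, 0.02))   # Plan A candidate: 0.11
print(prevention_method_score(0.7, 0.9, 0.10, 0.01))   # Plan B candidate: 0.52
```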
The lack of ideological diversity in social research, paired with the lack of engagement with citizens and policymakers who come from other places on the ideological spectrum, poses an existential risk to the continued credibility, utility and even viability of social research. The need for reform is urgent.
If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically evaluating such proposals.
There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution or does it ethically and safely. The choice of the best local solution should include understanding of the ways in which it will be scaled up. Human-AI teams or a superintelligent AI Service as suggested by Drexler may be examples of such ethically scalable local solutions, but the final choice depends on some unknown variables such as the speed of AI progress.
Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission which includes a description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, the possibility of AI, the small size of the Seed AI, the low speed of physical interstellar travel and the large distance between civilizations. Needed additions to the SETI protocol are considered: keep the signal's existence, content and source location secret; do not run alien programs; jam dangerous signals; wait until the creation of humanity's own AI. In addition, scenarios in which it may be reasonable to start an alien AI (if it has been downloaded) are explored.
Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction is practically important: additive axiologies support 'arguments from astronomical scale' which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a large future population, while non-additive axiologies need not. We show, however, that when there is a large enough 'background population' unaffected by our choices, a wide range of non-additive axiologies converge in their implications with some additive axiology -- for instance, average utilitarianism converges to critical-level utilitarianism and various egalitarian theories converge to prioritarianism. We further argue that real-world background populations may be large enough to make these limit results practically significant. This means that arguments from astronomical scale, and other arguments in practical ethics that seem to presuppose additive separability, may be truth-preserving in practice whether or not we accept additive separability as a basic axiological principle.
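The convergence claim for average utilitarianism can be illustrated with a short limit calculation; the notation below is introduced for this sketch and is not necessarily the paper's own.

```latex
% Illustrative limit argument (notation assumed for this sketch).
% Background population: size k with average welfare \bar{c}.
% Our choice adds n people at welfare levels w_1, ..., w_n, with k >> n.
\[
  V_{\mathrm{avg}} = \frac{k\bar{c} + \sum_{i=1}^{n} w_i}{k + n},
  \qquad
  \Delta V = V_{\mathrm{avg}} - \bar{c}
           = \frac{\sum_{i=1}^{n} (w_i - \bar{c})}{k + n}
           \approx \frac{1}{k} \sum_{i=1}^{n} \bigl(w_i - \bar{c}\bigr).
\]
% Up to the positive constant 1/k, the right-hand side is the critical-level
% utilitarian value of the addition, with the critical level set to \bar{c}:
% given a large enough background population, average utilitarianism ranks
% options as critical-level utilitarianism does.
```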
Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hope that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that a potential future civilization discover this information as early as possible, so a beacon should accompany the message in order to increase visibility. The message should ideally contain information about how humanity was destroyed, perhaps including a continuous recording until the end. This could help the potential future civilization to survive. The best place for long-term data storage is under the surface of the Moon, with the beacon constructed as a complex geometric figure drawn by small craters or trenches around a central point. There are several cost-effective options for sending the message as opportunistic payloads on different planned landers.
Purpose: Islands have long been discussed as refuges from global catastrophes; this paper evaluates them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during global catastrophes. Methodology: Preliminary horizon scanning based on the application of the research principles established in the previous global catastrophic literature to the existing geographic data. Findings: The large number of islands on Earth, and their diverse conditions, increase the chance that one of them will provide protection from a catastrophe. Additionally, this protection could be increased if an island were used as a base for a nuclear submarine refuge combined with underground bunkers, and/or extremely long-term data storage. The requirements for survival on islands, their vulnerabilities, and ways to mitigate and adapt to risks are explored. Several existing islands, suitable for survival given different types of risk, timings, and budgets, are examined. Islands suitable for different types of refuges, and other island-like options that could also provide protection, are also discussed. Originality/value: The possible use of islands as refuges from social collapse and existential risks has not been previously examined systematically. This article contributes to the expanding research on survival scenarios.
The stable evolutionary strategy of Homo sapiens (SESH) is built in accordance with the modular and hierarchical principle and consists of the same type of self-replicating elements, i.e. it is a system of systems. At the top level of the organization of SESH is the superposition of genetic, social, cultural and techno-rationalistic complexes. The components of this triad differ in the mechanism of the cycle of generation - replication - transmission - fixation/elimination of adaptively relevant information. This mechanism is implemented either in accordance with the Darwin-Weismann modus or according to the Lamarck modus; the difference between them is clear from the title. An integral attribute of a system of systems, including SESH, is the production of evolutionary risks. The sources of evolutionary risk for the stable adaptive strategy of Homo sapiens are the imbalance of (1) intra-genomic co-evolution (intragenomic conflicts); (2) gene-cultural co-evolution; (3) inter-cultural co-evolution; (4) the techno-humanitarian balance; (5) inter-technological conflicts (technological traps). At least phenomenologically, the components of evolutionary risk are reversible, but in the aggregate they are potentially irreversible and destructive for the bio-social and cultural self-identity of Homo sapiens. When actual evolution becomes the subject of rationalist control and/or manipulation, the magnitude of the 4th and 5th components of evolutionary risk reaches the level of existential significance.
Summary. The prerequisites of this study have three interwoven sources: natural-scientific, philosophical and socio-political ones. They are trends in the way of being of a modern, technogenic civilization. The COVID-19 pandemic caused significant damage to the image of omnipotent techno-science that has developed in the mentality of this sociocultural type. Our goal was to study the co-evolutionary nature of this phenomenon as a natural consequence of the nature of the evolutionary strategy of our biological species. Technological civilization as a cultural-civilizational type is one of the evolutionary options of Homo sapiens, the ontological basis of which is a stable evolutionary strategy (SESH). The latter is capable of spontaneous development and consists of biological, sociocultural, and techno-rationalistic modules. Evolutionary risks are inextricably linked with it. At present, humankind has entered an era when its existence should be ensured by the meta-system engineering of its own cultural and ecological niche. By such artificially created elements we mean the design and implementation of ecological systems of various levels of complexity, from the design of elements of such systems (biological individuals, populations and species) to the design of a global ecological system (the biosphere). At the same time, there is a sharp jump in the value of anthropogenic and technological risk due to a sharp increase in the instability of the structure of ecological systems. We are witnessing a third wave of breaking and reconstruction of the existing relations between the elements of such systems, after the Neolithic revolution and the great geographical discoveries. Technological innovations led to sharp violations of normal social ecodynamics, leading to outbreaks of new infections, in particular. The uncontested development of all sectors of controlled evolution technology is the way out of the latest such crisis and the prevention of similar ones in the future. It is the global nature of the organization of modern technological civilization, and the transformation of the biosphere not so much into the noosphere (in Vernadsky's sense) as into the technosphere, that determines the effect of a cascade reaction: any local co-adaptive conflict between elements of a cultural, socio-ecological niche tends to turn into a global and systemic problem. Humanity is constantly forced to make a choice between the stability and the adaptability of the technosphere.
The theory of the evolution of complex systems comprising humans, and the algorithm for constructing it, are a synthesis of evolutionary epistemology, philosophical anthropology and a concrete scientific empirical basis in modern (transdisciplinary) science. «Trans-disciplinary» in this context is interpreted as a completely new epistemological situation, which is fraught with the initiation of a civilizational crisis. The philosophy and ideology of technogenic civilization is based on the possibility of an unambiguous demarcation of public-value and descriptive scientific discourses (1), and of the object and subject of the cognitive process (2). Both of these attributes are no longer valid. For mass, everyday consciousness and the institutional philosophical tradition it is intuitively obvious that, having acquired the ability to control the evolutionary process, Homo sapiens has come close to the borders of its own biological and cultural identity. The spontaneous coevolutionary process of interaction between the «subject» (rational living organisms) and the «object» (the material world) is the teleological trend of the movement towards the complete rationalization of the World as It Is, its merger with the World of Due. The stratification of the global evolutionary process into selective and semantic (teleological) coevolutionary, and therefore ontologically inseparable, components follows. With the entry of anthropogenic civilization into the stage of the information society, the post-academic phase of the historical evolution of scientific rationality began, the attributes of which are a specific methodology of scientific knowledge, scientific ethos and ontology. Bioethics as a phenomenon of intellectual culture represents the natural-philosophical core of modern post-academic (human-dimensional) science, in which the principle of the ethical neutrality of scientific theory is inapplicable, and elements of public-axiological and scientific-descriptive discourses are integrated into a single logical construction. As a result, hermeneutics precedes epistemology not only methodologically but also meaningfully, and natural philosophy is regaining the status of the backbone of the theory of evolution – in an explicit form.
Normatively inappropriate scientific dissent prevents warranted closure of scientific controversies, and confuses the public about the state of policy-relevant science, such as anthropogenic climate change. Against recent criticism by de Melo-Martín and Intemann of the viability of any conception of normatively inappropriate dissent, I identify three conditions for normatively inappropriate dissent: its generation process is politically illegitimate; it imposes an unjust distribution of inductive risks; it adopts evidential thresholds outside an accepted range. I supplement these conditions with an inference-to-the-best-explanation account of knowledge-based consensus and dissent to allow policy makers to reliably identify unreliable scientific dissent.
Abstract: Global chemical contamination is an underexplored source of global catastrophic risk that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, which could become a global catastrophic risk if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen now or could happen in the future: autocatalytic reactions, exposure to multiple subthreshold sources, and long-term unintended consequences, arising from both natural and bioengineered sources. We list several especially dangerous chemicals—dioxin, organic compounds, and toxic heavy metals. We also discuss the features of such dangerous chemicals—molecules that can stay in the biosphere for a long time and affect it over time. We explore several social processes and scenarios where global chemical contamination becomes possible: a large natural catastrophe like a meteorite impact or supervolcano eruption, new ways of predicting the properties of chemicals via machine learning and of manufacturing them via synthetic biology, uncontrolled “capitalistic” economic development with correspondingly large waste production, and quick adoption of many chemicals with unknown long-term properties and unintended side effects. These are all low-probability scenarios, so work on other global catastrophic risks should be prioritized, but chemical risks could exacerbate other types of catastrophe, contributing to social collapse.
Abstract: As there are no visible ways to create safe self-improving superintelligence, but its arrival is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here the ways to create the safest and simplest form of AI which may work as an AI Nanny. Such an AI system would be enough to solve most problems which we expect AI to solve, including control of robotics and acceleration of medical research, but would present less risk, as it would be less different from humans. As AI police, it would work as an operating system for most computers, producing a world surveillance system able to envision and stop any potential terrorists and bad actors in advance. As uploading technology is lagging, and neuromorphic AI is intrinsically dangerous, the most plausible way to a human-based AI Nanny is either a functional model of the human mind or a Narrow-AI-empowered group of people.
The sociobiological and socio-political aspects of human existence have been the subject of techno-rationalistic control and manipulation. The investigation of the mutual complementarity of anthropological and ontological paradigms under these circumstances is the main purpose of the present publication. The comparative conceptual analysis of bio-power and bio-politics in the mentality of modern technological civilization is the main method of the research. The methodological and philosophical analogy between biological and social engineering allows them to be combined, in their nature and social implications, as part of a class of High Hume technologies. As a result of the transformation of the somatic foundations of the human being and of his behavioral stereotypes into objects of control and management, technogenic risk has reached the existential level. The rapidly growing status of biopower and biopolitics in the conceptual field of contemporary political science becomes a phenomenological expression of these processes.
The AGI alignment problem has a bimodal distribution of outcomes, with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions to reduce the risk of false positives. Herein we explore in detail some of the relevant points of uncertainty that AGI alignment research hinges on and consider how to reduce false positives in response to them.
In order to re-contextualize the otherwise ontologically privileged meaning of metaphysical debates into a more insubstantial form, metaphysical deflationism runs the risk of having to adopt potentially unwanted anti-realist tendencies. This tension between deflationism and anti-realism can be expressed as follows: in order to claim truthfully that something exists, how can deflationism avoid the anti-realist feature of construing such claims singularly in an analytical fashion? One may choose to adopt a Yablovian fallibilism about existential claims, but other approaches can be appealed to as well. Amie Thomasson's Easy Ontology is one such approach, whereby the interaction of empirical as well as analytic features within the construction of its ontological theory affords a role for analytical rules without forsaking a link with empirical reality. The development of these analytical and empirical features has resulted in significant critiques regarding, one, potential reference failure of easy-ontological claims, and two, a circularity in how easy ontology explains the relation between its analytical rules and existential claims. This essay offers a response by showcasing how an easy-ontological deflationism can reject these criticisms without succumbing to, one, the anti-realism of pure analyticism, and two, a further critique of wholesale referential indeterminacy in its referring terms.
Abstract: The article summarizes some comments (as discussed in my book La existencia en busca de la razón. Apuntes sobre la filosofía de Karl Jaspers (Existence in Search of Reason: Notes on Karl Jaspers' Philosophy), Editorial Académica Española, LAP LAMBERT Academic Publishing GmbH & Co. KG, Alemania, 2012) about the meaning of the boundary situations in the philosophy of Karl Jaspers, as a turning point with respect to Husserl's phenomenology and Kant's transcendental philosophy. For Jaspers, the meaning of the boundary situations as a structure of Existenz underlines the possibility of risk in the individual's historicity, which breaks the "flow" of reflection and, at the same time, appeals to an opening of ethics, without sacrificing the universality of the categorical imperative.
This article argues that Lara Buchak's risk-weighted expected utility (REU) theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory's claim to capturing a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
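For readers unfamiliar with the theory under discussion, risk-weighted expected utility is standardly presented along the following lines; the notation is a common formulation rather than a quotation from this article.

```latex
% A standard presentation of risk-weighted expected utility (notation assumed).
% Outcomes ordered from worst to best, x_1 <= ... <= x_n, with probabilities
% p_1, ..., p_n, a utility function u, and a risk function r with r(0)=0, r(1)=1.
\[
  \mathrm{REU}(g) = u(x_1) + \sum_{i=2}^{n}
      r\!\Bigl(\sum_{j=i}^{n} p_j\Bigr)\,\bigl[u(x_i) - u(x_{i-1})\bigr].
\]
% When r(p) = p this reduces to ordinary expected utility; a convex r
% down-weights improvements that arrive only with low probability, which is
% how the theory models risk aversion beyond diminishing marginal utility.
```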
A moderately risk-averse person may turn down a 50/50 gamble that either results in her winning $200 or losing $100. Such behaviour seems rational if, for instance, the pain of losing $100 is felt more strongly than the joy of winning $200. The aim of this paper is to examine an influential argument that some have interpreted as showing that such moderate risk aversion is irrational. After presenting an axiomatic argument that I take to be the strongest case for the claim that moderate risk aversion is irrational, I show that it essentially depends on an assumption that those who think that risk aversion can be rational should be skeptical of. Hence, I conclude that risk aversion need not be irrational.
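How a concave utility function can rationalize refusing this gamble is easy to check numerically; the starting wealth and the logarithmic utility below are assumptions for illustration, not choices made in the paper.

```python
import math

# Check that a concave (here logarithmic) utility of total wealth can make
# refusing the 50/50 win-$200/lose-$100 gamble the expected-utility-maximising
# choice. Starting wealth and the utility function are assumptions.

wealth = 150.0                     # assumed starting wealth in dollars
u = math.log                       # assumed concave utility of total wealth

eu_gamble = 0.5 * u(wealth + 200) + 0.5 * u(wealth - 100)
eu_refuse = u(wealth)
print(f"EU(gamble) ~ {eu_gamble:.3f}, EU(refuse) ~ {eu_refuse:.3f}")
# ~4.885 vs ~5.011: refusal maximises expected utility here, even though the
# gamble has a positive expected monetary value of +$50.
```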
Since Friedrich Nietzsche, philosophers have grappled with the question of how to respond to nihilism. Nihilism, often seen as a derogative term for a 'life-denying', destructive and perhaps most of all depressive philosophy, is what drove existentialists to write about the right response to a meaningless universe devoid of purpose. This latter diagnosis is what I shall refer to as existential nihilism: the denial of meaning and purpose, a view that not only existentialists but also a long line of philosophers in the empiricist tradition subscribe to. The absurd stems from the fact that though life is without meaning and the universe devoid of purpose, man still longs for meaning, significance and purpose. Inspired by Bojack Horseman and Rick and Morty, two modern existentialist masterpieces, this paper explores the various alternatives that have been offered for how to respond to the absurd, or, as Albert Camus puts it, the only "really serious philosophical problem", and concludes that the problem is compatible with a naturalistic worldview, thereby genuine and transcending existentialism.