Results for 'Catastrophic Risk'

966 found
  1. Catastrophic risk.H. Orri Stefánsson - 2020 - Philosophy Compass 15 (11):1-11.
    Catastrophic risk raises questions that are not only of practical importance, but also of great philosophical interest, such as how to define catastrophe and what distinguishes catastrophic outcomes from non-catastrophic ones. Catastrophic risk also raises questions about how to rationally respond to such risks. How to rationally respond arguably partly depends on the severity of the uncertainty, for instance, whether quantitative probabilistic information is available, or whether only comparative likelihood information is available, or neither (...)
    3 citations
  2. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various (...)
    11 citations
  3. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, (...)
    4 citations
  4. Global Catastrophic Risks by Chemical Contamination.Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risks that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, which could become a global catastrophic risk if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen now or could happen in (...)
  5. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence.Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe is high (...)
  6. Longtermism, Aggregation, and Catastrophic Risk.Emma J. Curran - manuscript
    Advocates of longtermism point out that interventions which focus on improving the prospects of people in the very far future will, in expectation, bring about a significant amount of good. Indeed, in expectation, such long-term interventions bring about far more good than their short-term counterparts. As such, longtermists claim we have compelling moral reason to prefer long-term interventions. In this paper, I show that longtermism is in conflict with plausible deontic scepticism about aggregation. I do so by demonstrating that, from (...)
    2 citations
  7. UAP and Global Catastrophic Risks.Alexey Turchin - manuscript
    Abstract: After the 2017 NY Times publication, the stigma attached to scientific discussion of the problem of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity, and especially the probability of global catastrophic risks? To answer this question, we assume that the 2004 Nimitz case was real and suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, (...)
  8. Continuity and catastrophic risk.H. Orri Stefánsson - 2022 - Economics and Philosophy 38 (2):266-274.
    Suppose that a decision-maker's aim, under certainty, is to maximise some continuous value, such as lifetime income or continuous social welfare. Can such a decision-maker rationally satisfy what has been called "continuity for easy cases" while at the same time satisfying what seems to be a widespread intuition against the full-blown continuity axiom of expected utility theory? In this note I argue that the answer is "no": given transitivity and a weak trade-off principle, continuity for easy cases violates the anti-continuity (...)
  9. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  10. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently, criticisms of autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative (...)
    1 citation
  11. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup (...)
    3 citations
  12. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, (...)
    1 citation
  13. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology.Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to a single pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability that the worst pandemic kills a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen naturally, (...)
  14. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that (...)
    12 citations
  15. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    5 citations
  16. Ethics of the scientist qua policy advisor: inductive risk, uncertainty, and catastrophe in climate economics.David M. Frank - 2019 - Synthese:3123-3138.
    This paper discusses ethical issues surrounding Integrated Assessment Models (IAMs) of the economic effects of climate change, and how climate economists acting as policy advisors ought to represent the uncertain possibility of catastrophe. Some climate economists, especially Martin Weitzman, have argued for a precautionary approach where avoiding catastrophe should structure climate economists’ welfare analysis. This paper details ethical arguments that justify this approach, showing how Weitzman’s “fat tail” probabilities of climate catastrophe pose ethical problems for widely used IAMs. The main (...)
    6 citations
  17. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on (...)
    5 citations
  18. Aquatic refuges for surviving a global catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we discuss the prospect of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who would be able to rebuild human civilization after a large catastrophe. We show that this is a very cost-effective way to build refuges, and that viable solutions exist for various budgets and timeframes. Nuclear submarines (...)
    2 citations
  19. Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of (...)
    6 citations
  20. Surviving global risks through the preservation of humanity's data on the Moon.Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica (in press).
    Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on a timescale of the order of 100 million years, a little-explored topic thus far. It is important that a (...)
  21. The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies.Alexey Turchin & Justin Shovelain - manuscript
    Abstract. This article presents a model of how the probability of global catastrophic risk changes in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least double exponential (...)
  22. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, on purely theoretical grounds, the kind of simulation in which humanity is most likely located. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  23. Risk, Precaution, and Causation.Masaki Ichinose - 2022 - Tetsugaku: International Journal of the Philosophical Association of Japan 6:22-53.
    This paper aims to scrutinize how the notion of risk should be understood and applied to possibly catastrophic cases. I begin by clarifying the standard usage of the notion of risk, particularly emphasizing the conceptual relation between risk and probability. Then, I investigate how to make decisions in the case of seemingly catastrophic disasters by contrasting the precautionary principle with the preventive (prevention) principle. Finally, I examine what kind of causal thinking tends to be actually (...)
  24. A Socialist Approach to Disaster Preparedness: A Leftist guide for the coming catastrophes.James Hughes - 2021 - After The Storm.
    Socialists have historically thought a lot about the catastrophic risks society faces. Today many DSA chapters have gotten involved in mutual aid to respond to the Covid crisis, generating a debate about how mutual aid fits into socialist work. One form of community engagement that is likely to be increasingly necessary, and is an opportunity for radicalizing angry neighbors, is disaster preparedness. While the prepper subculture is perceived as right-wing, and parts are tied into the militia movement, there are (...)
  25. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk.David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function (...)
  26. Islands as refuges for surviving global catastrophes.Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose: Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during global catastrophes. (...)
  27. Ecological Risk: Climate Change as Abstract-Corporeal Problem.Tom Sparrow - 2018 - Revista Latinoamericana de Estudios Sobre Cuerpos, Emociones y Sociedad 10 (28):88-97.
    This essay uses Ulrich Beck’s concept of risk society to understand the threat of catastrophic climate change. It argues that this threat is “abstract-corporeal”, and therefore a special kind of threat that poses special kinds of epistemic and ecological challenges. At the center of these challenges is the problem of human vulnerability, which entails a complex form of trust that both sustains and threatens human survival.
  28. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict will happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  29. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are (...)
  30. On the Limits of the Precautionary Principle.H. Orri Stefansson - 2019 - Risk Analysis 39 (6):1204-1222.
    The Precautionary Principle (PP) is an influential principle of risk management. It has been widely introduced into environmental legislation, and it plays an important role in most international environmental agreements. Yet, there is little consensus on precisely how to understand and formulate the principle. In this paper I prove some impossibility results for two plausible formulations of the PP as a decision-rule. These results illustrate the difficulty in making the PP consistent with the acceptance of any trade-offs between catastrophic risks and more ordinary goods.
    3 citations
  31. Co-evolutionary biosemantics of evolutionary risk at technogenic civilization: Hiroshima, Chernobyl – Fukushima and further….Valentin Cheshko & Valery Glazko - 2016 - International Journal of Environmental Problems 3 (1):14-25.
    From Chernobyl to Fukushima, it became clear that technology is a systemic evolutionary factor, and that the consequences of man-made disasters represent the actualization of risks related to changes in the elements of social heredity (cultural transmission). The uniqueness of the human phenomenon is a characteristic of the system arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. The distribution of emerging adaptive innovation within each module is in accordance with the two algorithms that are characterized by the (...)
  32. Leveraging Disaster: Securitization in the Shadow of an Environmental Catastrophe: The Case of the Safer Floating Oil Tanker.Pezzano Riccardo - 2024 - Dissertation, Leiden University
    Recent scholarship on climate and conflict has increasingly examined the dynamic relationship between environmental scarcities and geopolitical tensions. Overall, climate-related shocks often lead to escalated disputes over natural resources, thus positioning climate hazards not only as an environmental issue but also as a catalyst that intensifies existing geopolitical and social frictions. The discourse on climate and conflict has mostly centred on its direct effects on natural resources and environmental conditions. However, its role in precipitating conflicts underscores the critical need to (...)
  33. Rethinking the Redlines Against AI Existential Risks.Yi Zeng, Xin Guan, Enmeng Lu & Jinyu Fan - manuscript
    The ongoing evolution of advanced AI systems will have profound, enduring, and significant impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially causing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear redlines to prevent AI-induced existential risks by constraining and regulating advanced AI and their related AI actors. This paper explores different concepts of AI existential risk, connects the enactment of AI red lines to broader efforts addressing AI's impacts, constructs a theoretical framework for analyzing the direct impacts of AI existential risk, and upon that proposes a set of exemplary AI red lines. By contemplating AI existential risks and formulating these red lines, we aim to foster a deeper and systematic understanding of the potential dangers associated with advanced AI and the importance of proactive risk management. We hope this work will contribute to the strengthening and refinement of a comprehensive AI redline system for protecting humanity from AI existential risks.
  34. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real (...)
  35. Nuclear war as a predictable surprise.Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done − yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected (...)
  36. Deep Uncertainty and Incommensurability: General Cautions about Precaution.Rush T. Stewart - forthcoming - Philosophy of Science.
    The precautionary principle is invoked in a number of important personal and policy decision contexts. Peterson shows that certain ways of making the principle precise are inconsistent with other criteria of decision-making. Some object that the results do not apply to cases of deep uncertainty or value incommensurability which are alleged to be in the principle’s wheelhouse. First, I show that Peterson’s impossibility results can be generalized considerably to cover cases of both deep uncertainty and incommensurability. Second, I contrast an (...)
  37. A trilemma for the lexical utility model of the precautionary principle.H. Orri Stefánsson - forthcoming - Philosophical Studies:1-17.
    Bartha and DesRoches (2021) and Steel and Bartha (2023) argue that we should understand the precautionary principle as the injunction to maximise lexical utilities. They show that the lexical utility model has important pragmatic advantages. Moreover, the model has the theoretical advantage of satisfying all axioms of expected utility theory except continuity. In this paper I raise a trilemma for any attempt at modelling the precautionary principle with lexical utilities: it permits choice cycles or leads to paralysis or implies that (...)
    1 citation
  38. On theory X and what matters most.Simon Beard & Patrick Kaczmarek - 2022 - In Jeff McMahan, Timothy Campbell, Ketan Ramakrishnan & Jimmy Goodrich (eds.), Ethics and Existence: The Legacy of Derek Parfit. New York, NY: Oxford University Press. pp. 358-386.
    One of Derek Parfit’s greatest legacies was the search for Theory X, a theory of population ethics that avoided all the implausible conclusions and paradoxes that have dogged the field since its inception: the Absurd Conclusion, the Repugnant Conclusion, the Non-Identity Problem, and the Mere Addition Paradox. In recent years, it has been argued that this search is doomed to failure and no satisfactory population axiology is possible. This chapter reviews Parfit’s life’s work in the field and argues that he (...)
  39. Nuclear Fine-Tuning and the Illusion of Teleology.Ember Reed - 2022 - Sound Ideas.
    Recent existential-risk thinkers have noted that the analysis of the fine-tuning argument for God’s existence, and the analysis of certain forms of existential risk, employ similar types of reasoning. This paper argues that insofar as the “many worlds objection” undermines the inference to God’s existence from universal fine-tuning, a similar many worlds objection undermines the inference that the historic risk of global nuclear catastrophe has been low from the fact that no such catastrophe has occurred in (...)
  40. Presumptuous Philosopher Proves Panspermia.Alexey Turchin - manuscript
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to the hypothesis that postulates the existence of a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). There are two types of anthropic reasoning: SIA and SSA. SIA implies that my existence is an argument that larger (...)
  41. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
  42. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  43. Doing Good Badly? Philosophical Issues Related to Effective Altruism.Michael Plant - 2019 - Dissertation, Oxford University
    Suppose you want to do as much good as possible. What should you do? According to members of the effective altruism movement—which has produced much of the thinking on this issue and counts several moral philosophers as its key protagonists—we should prioritise among the world’s problems by assessing their scale, solvability, and neglectedness. Once we’ve done this, the three top priorities, not necessarily in this order, are (1) aiding the world’s poorest people by providing life-saving medical treatments or alleviating poverty (...)
  44. Do More Informed Citizens Make Better Climate Policy Decisions?Michael Lokshin, Ivan Torre, Michael Hannon & Miguel Purroy - manuscript
    This study explores the relationship between perceptions of catastrophic events and beliefs about climate change. Using data from the 2023 Life in Transition Survey, the study finds that contrary to conventional wisdom, more accurate knowledge about past catastrophes is associated with lower concern about climate change. The paper proposes that heightened threat sensitivity may underlie both the tendency to overestimate disaster impacts and increased concern about climate change. The findings challenge the assumption that a more informed citizenry necessarily leads (...)
  45. Catching Treacherous Turn: A Model of the Multilevel AI Boxing.Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. Among several possibilities, the confinement of AI in a box is considered a low-quality possible solution for AI safety. However, some treacherous AIs could be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field (...)
  46. Multilevel Strategy for Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality.Alexey Turchin - manuscript
    Abstract: The field of life extension is full of ideas, but they are unstructured. Here we suggest a comprehensive strategy for reaching personal immortality based on the idea of multilevel defense, where the next life-preserving plan is implemented if the previous one fails, but all plans need to be prepared simultaneously in advance. The first plan, plan A, is surviving until the creation of advanced AI by fighting aging and other causes of death and extending one’s life. Plan B is cryonics, (...)
  47. Les risques majeurs et l'action publique.Céline Grislain-Letremy, Reza Lahidji & Philippe Mongin - 2012 - Paris: La Documentation Française.
    By major risks, we mean those associated with events whose adverse consequences, for humanity or for the environment, are of exceptional gravity. We will not add that these events are of extreme physical intensity, nor that they occur only rarely, for that is not always the case. Only major risks of a civil nature are considered in this work, and, more narrowly, natural risks, such as flooding and marine submersion, illustrated by the storm Xynthia in 2010, (...)
  48. Reconciling Regulation with Scientific Autonomy in Dual-Use Research.Nicholas G. Evans, Michael J. Selgelid & Robert Mark Simpson - 2022 - Journal of Medicine and Philosophy 47 (1):72-94.
    In debates over the regulation of communication related to dual-use research, the risks that such communication creates must be weighed against the value of scientific autonomy. The censorship of such communication seems justifiable in certain cases, given the potentially catastrophic applications of some dual-use research. This conclusion, however, gives rise to another kind of danger: that regulators will use overly simplistic cost-benefit analysis to rationalize excessive regulation of scientific research. In response to this, we show how institutional design (...)
    5 citations
  49. Normal Accidents of Expertise.Stephen P. Turner - 2010 - Minerva 48 (3):239-258.
    Charles Perrow used the term normal accidents to characterize a type of catastrophic failure that resulted when complex, tightly coupled production systems encountered a certain kind of anomalous event. These were events in which systems failures interacted with one another in a way that could not be anticipated, and could not be easily understood and corrected. Systems of the production of expert knowledge are increasingly becoming tightly coupled. Unlike classical science, which operated with a long time horizon, many current (...)
    3 citations
  50. Long-Term Trajectories of Human Civilization.Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
    11 citations
Showing results 1–50 of 966