Results for 'Catastrophic Risk'

998 found
  1. Catastrophic risk.H. Orri Stefánsson - 2020 - Philosophy Compass 15 (11):1-11.
    Catastrophic risk raises questions that are not only of practical importance, but also of great philosophical interest, such as how to define catastrophe and what distinguishes catastrophic outcomes from non-catastrophic ones. Catastrophic risk also raises questions about how to rationally respond to such risks. How to rationally respond arguably partly depends on the severity of the uncertainty, for instance, whether quantitative probabilistic information is available, or whether only comparative likelihood information is available, or neither (...)
    1 citation
  2. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various (...)
    10 citations
  3. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, (...)
  4. Global Catastrophic Risks by Chemical Contamination.Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risk that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, and thus could become a global catastrophic risk event if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may be occurring now or could happen in (...)
  5. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence.Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe is high (...)
  6. Longtermism, Aggregation, and Catastrophic Risk.Emma J. Curran - manuscript
    Advocates of longtermism point out that interventions which focus on improving the prospects of people in the very far future will, in expectation, bring about a significant amount of good. Indeed, in expectation, such long-term interventions bring about far more good than their short-term counterparts. As such, longtermists claim we have compelling moral reason to prefer long-term interventions. In this paper, I show that longtermism is in conflict with plausible deontic scepticism about aggregation. I do so by demonstrating that, from (...)
    2 citations
  7. Continuity and catastrophic risk.H. Orri Stefánsson - 2022 - Economics and Philosophy 38 (2):266-274.
    Suppose that a decision-maker's aim, under certainty, is to maximise some continuous value, such as lifetime income or continuous social welfare. Can such a decision-maker rationally satisfy what has been called "continuity for easy cases" while at the same time satisfying what seems to be a widespread intuition against the full-blown continuity axiom of expected utility theory? In this note I argue that the answer is "no": given transitivity and a weak trade-off principle, continuity for easy cases violates the anti-continuity (...)
  8. UAP and Global Catastrophic Risks.Alexey Turchin - manuscript
    Abstract: After the 2017 NY Times publication, the stigma attached to scientific discussion of the problem of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity, and especially the probability of global catastrophic risks? To answer this question, we assume that the 2004 Nimitz case was real, and we suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, (...)
  9. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of a SETI-attack: ETI existence, the possibility of AI, (...)
  10. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup (...)
    3 citations
  11. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently criticisms against autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative (...)
    1 citation
  12. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology.Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to a single pandemic is small, as it would not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability that the worst pandemic kills a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen naturally, (...)
  13. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures (volume and pages not yet defined).
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, (...)
    1 citation
  14. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    1 citation
  15. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that (...)
    12 citations
  16. Ethics of the scientist qua policy advisor: inductive risk, uncertainty, and catastrophe in climate economics.David M. Frank - 2019 - Synthese:3123-3138.
    This paper discusses ethical issues surrounding Integrated Assessment Models (IAMs) of the economic effects of climate change, and how climate economists acting as policy advisors ought to represent the uncertain possibility of catastrophe. Some climate economists, especially Martin Weitzman, have argued for a precautionary approach where avoiding catastrophe should structure climate economists’ welfare analysis. This paper details ethical arguments that justify this approach, showing how Weitzman’s “fat tail” probabilities of climate catastrophe pose ethical problems for widely used IAMs. The main (...)
    5 citations
  17. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on (...)
    5 citations
  18. Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of (...)
    6 citations
  19. Aquatic refuges for surviving a global catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the possibility of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines (...)
    2 citations
  20. Risk, Precaution, and Causation.Masaki Ichinose - 2022 - Tetsugaku: International Journal of the Philosophical Association of Japan 6:22-53.
    This paper aims to scrutinize how the notion of risk should be understood and applied to possibly catastrophic cases. I begin with clarifying the standard usage of the notion of risk, particularly emphasizing the conceptual relation between risk and probability. Then, I investigate how to make decisions in the case of seemingly catastrophic disasters by contrasting the precautionary principle with the preventive (prevention) principle. Finally, I examine what kind of causal thinking tends to be actually (...)
  21. Ecological Risk: Climate Change as Abstract-Corporeal Problem.Tom Sparrow - 2018 - Revista Latinoamericana de Estudios Sobre Cuerpos, Emociones y Sociedad 10 (28):88-97.
    This essay uses Ulrich Beck’s concept of risk society to understand the threat of catastrophic climate change. It argues that this threat is “abstract-corporeal”, and therefore a special kind of threat that poses special kinds of epistemic and ecological challenges. At the center of these challenges is the problem of human vulnerability, which entails a complex form of trust that both sustains and threatens human survival.
  22. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict will happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  23. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, based on pure theoretical reasoning, the question of what kind of simulation humanity is most likely located in. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  24. The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies.Alexey Turchin & Justin Shovelain - manuscript
    Abstract: This article presents a model of how the probability of global catastrophic risks changes in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents. Also, the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least double exponential (...)
  25. A Socialist Approach to Disaster Preparedness: A Leftist guide for the coming catastrophes.James Hughes - 2021 - After The Storm.
    Socialists have historically thought a lot about the catastrophic risks society faces. Today many DSA chapters have gotten involved in mutual aid to respond to the Covid crisis, generating a debate about how mutual aid fits into socialist work. One form of community engagement that is likely to be increasingly necessary, and is an opportunity for radicalizing angry neighbors, is disaster preparedness. While the prepper subculture is perceived as right-wing, and parts are tied into the militia movement, there are (...)
  26. Surviving global risks through the preservation of humanity's data on the Moon.Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica:in press.
    Many global catastrophic risks threaten human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that a (...)
  27. Islands as refuges for surviving global catastrophes.Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose: Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years in places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during global catastrophes. (...)
  28. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk.David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function (...)
  29. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are (...)
  30. Co-evolutionary biosemantics of evolutionary risk at technogenic civilization: Hiroshima, Chernobyl – Fukushima and further….Valentin Cheshko & Valery Glazko - 2016 - International Journal of Environmental Problems 3 (1):14-25.
    From Chernobyl to Fukushima, it became clear that technology is a systemic evolutionary factor, and that the consequences of man-made disasters act as the actualization of risk related to changes in the elements of social heredity (cultural transmission). The uniqueness of the human phenomenon is a characteristic of the system arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. The distribution of emerging adaptive innovations within each module proceeds in accordance with two algorithms that are characterized by the (...)
  31. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real (...)
  32. On the Limits of the Precautionary Principle.H. Orri Stefansson - 2019 - Risk Analysis 39 (6):1204-1222.
    The Precautionary Principle (PP) is an influential principle of risk management. It has been widely introduced into environmental legislation, and it plays an important role in most international environmental agreements. Yet, there is little consensus on precisely how to understand and formulate the principle. In this paper I prove some impossibility results for two plausible formulations of the PP as a decision-rule. These results illustrate the difficulty in making the PP consistent with the acceptance of any trade-offs between (...) risks and more ordinary goods.
    3 citations
  33. Nuclear war as a predictable surprise.Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done, yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected (...)
  34. A trilemma for the lexical utility model of the precautionary principle.H. Orri Stefánsson - forthcoming - Philosophical Studies:1-17.
    Bartha and DesRoches (2021) and Steel and Bartha (2023) argue that we should understand the precautionary principle as the injunction to maximise lexical utilities. They show that the lexical utility model has important pragmatic advantages. Moreover, the model has the theoretical advantage of satisfying all axioms of expected utility theory except continuity. In this paper I raise a trilemma for any attempt at modelling the precautionary principle with lexical utilities: it permits choice cycles or leads to paralysis or implies that (...)
  35. On theory X and what matters most.Simon Beard & Patrick Kaczmarek - 2022 - In Jeff McMahan, Tim Campbell, James Goodrich & Ketan Ramakrishan (eds.), Ethics and Existence: The Legacy of Derek Parfit. Oxford: Oxford University Press. pp. 358-386.
    One of Derek Parfit’s greatest legacies was the search for Theory X, a theory of population ethics that avoided all the implausible conclusions and paradoxes that have dogged the field since its inception: the Absurd Conclusion, the Repugnant Conclusion, the Non-Identity Problem, and the Mere Addition Paradox. In recent years, it has been argued that this search is doomed to failure and no satisfactory population axiology is possible. This chapter reviews Parfit’s life’s work in the field and argues that he (...)
  36. Presumptuous Philosopher Proves Panspermia.Alexey Turchin - manuscript
    Abstract: The presumptuous philosopher (PP) thought experiment lends more credence to a hypothesis that postulates the existence of a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). There are two types of anthropic reasoning: SIA and SSA. SIA implies that my existence is an argument that larger (...)
  37. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to make such predictions is the analysis of the convergent drives of any future AI, a line of work started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  38. Doing Good Badly? Philosophical Issues Related to Effective Altruism.Michael Plant - 2019 - Dissertation, Oxford University
    Suppose you want to do as much good as possible. What should you do? According to members of the effective altruism movement—which has produced much of the thinking on this issue and counts several moral philosophers as its key protagonists—we should prioritise among the world’s problems by assessing their scale, solvability, and neglectedness. Once we’ve done this, the three top priorities, not necessarily in this order, are (1) aiding the world’s poorest people by providing life-saving medical treatments or alleviating poverty (...)
  39. Les risques majeurs et l'action publique.Céline Grislain-Letremy, Reza Lahidji & Philippe Mongin - 2012 - Paris: La Documentation Française.
    By major risks, we mean those attached to events whose adverse consequences, for humanity or for the environment, are of exceptional gravity. We will not add that these events are of extreme physical intensity, nor that they occur only rarely, for that is not always the case. Only major risks of a civil nature are considered in this book, and more narrowly natural risks, such as those of flooding and marine submersion, illustrated by the storm Xynthia in 2010, of (...)
  40. Catching Treacherous Turn: A Model of the Multilevel AI Boxing.Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. Among several possibilities, the confinement of AI in a box is considered a low-quality possible solution for AI safety. However, some treacherous AIs could be stopped by effective confinement if it is used as an additional measure. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field (...)
  41. Multilevel Strategy for Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality.Alexey Turchin - manuscript
    Abstract: The field of life extension is full of ideas, but they are unstructured. Here we suggest a comprehensive strategy for reaching personal immortality based on the idea of multilevel defense, where the next life-preserving plan is implemented if the previous one fails, but all plans need to be prepared simultaneously in advance. The first plan, plan A, is surviving until the creation of advanced AI by fighting aging and other causes of death and extending one’s life. Plan B is cryonics, (...)
  42. Long-Term Trajectories of Human Civilization.Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
    10 citations
  43. Reconciling Regulation with Scientific Autonomy in Dual-Use Research.Nicholas G. Evans, Michael J. Selgelid & Robert Mark Simpson - 2022 - Journal of Medicine and Philosophy 47 (1):72-94.
    In debates over the regulation of communication related to dual-use research, the risks that such communication creates must be weighed against the value of scientific autonomy. The censorship of such communication seems justifiable in certain cases, given the potentially catastrophic applications of some dual-use research. This conclusion, however, gives rise to another kind of danger: that regulators will use overly simplistic cost-benefit analysis to rationalize excessive regulation of scientific research. In response to this, we show how institutional design (...)
    2 citations
  44. If now isn't the most influential time ever, when is? [REVIEW]Kritika Maheshwari - 2020 - The Philosopher 108:94-101.
  45. A Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World.Alexey Turchin - manuscript
    Abstract: Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA (...)
  46. Normal Accidents of Expertise.Stephen P. Turner - 2010 - Minerva 48 (3):239-258.
    Charles Perrow used the term normal accidents to characterize a type of catastrophic failure that resulted when complex, tightly coupled production systems encountered a certain kind of anomalous event. These were events in which systems failures interacted with one another in a way that could not be anticipated, and could not be easily understood and corrected. Systems of the production of expert knowledge are increasingly becoming tightly coupled. Unlike classical science, which operated with a long time horizon, many current (...)
    3 citations
  47. Science and Religion Shift in the First Three Months of the Covid-19 Pandemic.Margaret Boone Rappaport, Christopher Corbally, Riccardo Campa & Ziba Norman - 2020 - Studia Humana 10 (1):1-17.
    The goal of this pilot study is to investigate expressions of the collective disquiet of people in the first months of the Covid-19 pandemic, and to try to understand how they manage covert risk, especially with religion and magic. Four co-authors living in early hot spots of the pandemic speculate on the roles of science, religion, and magic in the latest global catastrophe. They delve into the consolidation that should be occurring worldwide because of a common, viral enemy, but find (...)
  48. Robustness to Fundamental Uncertainty in AGI Alignment.G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary (...)
  49. Social Justice and the Future of Flood Insurance.John O'Neill & Martin O'Neill - 2012 - Joseph Rowntree Foundation.
    What would be a fair model for flood insurance? Catastrophic flooding has become increasingly frequent in the UK and, with climate change, is likely to become even more frequent in the future. With the UK's current flood insurance regime ending in 2013, we argue that: there is an overwhelming case for rejecting a free market in flood insurance after 2013; this market-based approach threatens to leave many thousands of properties uninsurable, leading to extensive social blight; (...)
  50. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition.Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds, could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent (...)
Showing 1-50 of 998