Results for 'Global catastrophic risk'

1000+ found
  1. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen (...)
    10 citations
  2. Global Catastrophic Risks by Chemical Contamination.Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risks, estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, which could become a global catastrophic risk if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen now (...)
  3. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence.Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe (...)
  4. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI), connected with the possibility of finding an alien transmission that includes the description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  5. UAP and Global Catastrophic Risks.Alexey Turchin - manuscript
    Abstract: After the 2017 NY Times publication, the stigma attached to scientific discussion of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity and, especially, the probability of global catastrophic risks? To answer this question, we assume that the Nimitz case in 2004 was real, and we suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware (...)
  6. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures (forthcoming).
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this (...)
    1 citation
  7. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a (...)
    3 citations
  8. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently, criticism of autonomous weapons was presented in a video in which an AI-powered drone kills a person. However, some have said that this video is a distraction from the real risk of AI: the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed (...)
    1 citation
  9. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology.Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to a single pandemic is small, as it would not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability of the worst pandemic killing a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen (...)
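The back-of-the-envelope arithmetic in the multipandemic abstract above can be checked with a short sketch. This is our illustration, not the authors' model: the world population of 8 billion and the independence of the pandemics are assumptions we add.

```python
# Sketch of the multipandemic arithmetic above. Assumptions (ours, not
# the authors'): each of n simultaneous pandemics kills a given person
# independently with probability p_death; world population is 8 billion.
def expected_survivors(n_pandemics: int,
                       p_death: float = 0.5,
                       population: float = 8e9) -> float:
    # A person survives all n pandemics with probability (1 - p_death)**n.
    p_survive_all = (1.0 - p_death) ** n_pandemics
    return population * p_survive_all

print(expected_survivors(30))  # ~7.45 expected survivors out of 8 billion
```

With 30 worst-case pandemics the expected number of survivors drops from billions to single digits, which is the sense in which such a combination kills essentially everyone.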
  10. The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies.Alexey Turchin & Justin Shovelain - manuscript
    Abstract. This article presents a model of the change in the probability of global catastrophic risks in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least (...)
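A minimal toy version of the kind of model the abstract above describes might look as follows. Every number here (doubling time, per-agent annual risk, initial agent count) is an illustrative assumption of ours, not the authors' calibration.

```python
# Toy model (our sketch): the number of agents with access to a
# catastrophe-capable technology doubles every `doubling_years`, and
# each agent independently triggers a global catastrophe with a small
# annual probability `p_per_agent`.
def p_catastrophe_by(t_years: int,
                     p_per_agent: float = 1e-6,
                     n0: float = 1.0,
                     doubling_years: float = 2.0) -> float:
    survive = 1.0
    for t in range(t_years):
        agents = n0 * 2 ** (t / doubling_years)
        survive *= (1.0 - p_per_agent) ** agents  # survive year t
    return 1.0 - survive  # cumulative catastrophe probability

for t in (10, 20, 30, 40):
    print(t, p_catastrophe_by(t))
```

In this sketch the cumulative risk stays negligible for decades and then rises steeply, because the number of capable agents, and hence the annual hazard, grows exponentially.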
  11. Aquatic refuges for surviving a global catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we discuss the prospect of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who would be able to rebuild human civilization after a large catastrophe. We show that this is a very cost-effective way to build refuges, and that viable solutions exist for various budgets and timeframes. Nuclear submarines (...)
    2 citations
  12. Islands as refuges for surviving global catastrophes.Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose: Islands have long been discussed as refuges from global catastrophes; this paper evaluates them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years in places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during (...) catastrophes.
    Methodology: Preliminary horizon scanning based on the application of the research principles established in the previous global catastrophic literature to the existing geographic data.
    Findings: The large number of islands on Earth, and their diverse conditions, increase the chance that one of them will provide protection from a catastrophe. Additionally, this protection could be increased if an island were used as a base for a nuclear submarine refuge combined with underground bunkers and/or extremely long-term data storage. The requirements for survival on islands, their vulnerabilities, and ways to mitigate and adapt to risks are explored. Several existing islands, suitable for survival of different types of risk, timing, and budgets, are examined. Islands suitable for different types of refuges, and other island-like options that could also provide protection, are also discussed.
    Originality/value: The possible use of islands as refuges from social collapse and existential risks has not been previously examined systematically. This article contributes to the expanding research on survival scenarios.
  13. Surviving global risks through the preservation of humanity's data on the Moon.Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica (in press).
    Many global catastrophic risks threaten human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hope that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that (...)
  14. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs (...)
    12 citations
  15. Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of (...)
    6 citations
  16. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, based on purely theoretical reasoning, what kind of simulation humanity is most likely located in. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  17. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk.David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can (...)
  18. Catastrophic risk.H. Orri Stefánsson - 2020 - Philosophy Compass 15 (11):1-11.
    Catastrophic risk raises questions that are not only of practical importance, but also of great philosophical interest, such as how to define catastrophe and what distinguishes catastrophic outcomes from non-catastrophic ones. Catastrophic risk also raises questions about how to rationally respond to such risks. How to rationally respond arguably partly depends on the severity of the uncertainty, for instance, whether quantitative probabilistic information is available, or whether only comparative likelihood information is available, or neither (...)
    1 citation
  19. A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming.Alexey Turchin - manuscript
    Humanity may underestimate the rate of natural global catastrophes because of the survival bias (“anthropic shadow”). But the resulting reduction of the Earth’s future habitability duration is not very large in most plausible cases (1-2 orders of magnitude) and thus it looks like we still have at least millions of years. However, anthropic shadow implies anthropic fragility: we are more likely to live in a world where a sterilizing catastrophe is long overdue and could be triggered by unexpectedly small (...)
  20. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, (...)
  21. Longtermism, Aggregation, and Catastrophic Risk.Emma J. Curran - manuscript
    Advocates of longtermism point out that interventions which focus on improving the prospects of people in the very far future will, in expectation, bring about a significant amount of good. Indeed, in expectation, such long-term interventions bring about far more good than their short-term counterparts. As such, longtermists claim we have compelling moral reason to prefer long-term interventions. In this paper, I show that longtermism is in conflict with plausible deontic scepticism about aggregation. I do so by demonstrating that, from (...)
    2 citations
  22. Continuity and catastrophic risk.H. Orri Stefánsson - 2022 - Economics and Philosophy 38 (2):266-274.
    Suppose that a decision-maker's aim, under certainty, is to maximise some continuous value, such as lifetime income or continuous social welfare. Can such a decision-maker rationally satisfy what has been called "continuity for easy cases" while at the same time satisfying what seems to be a widespread intuition against the full-blown continuity axiom of expected utility theory? In this note I argue that the answer is "no": given transitivity and a weak trade-off principle, continuity for easy cases violates the anti-continuity (...)
  23. Presumptuous Philosopher Proves Panspermia.Alexey Turchin - manuscript
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to hypotheses that postulate the existence of a larger number of observers. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). There are two types of anthropic reasoning: SIA and SSA. SIA implies that my existence is an argument that larger (...)
  24. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One route to such prediction is the analysis of the convergent drives of any future AI, an approach started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI's need to wage war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  25. Doing Good Badly? Philosophical Issues Related to Effective Altruism.Michael Plant - 2019 - Dissertation, Oxford University
    Suppose you want to do as much good as possible. What should you do? According to members of the effective altruism movement—which has produced much of the thinking on this issue and counts several moral philosophers as its key protagonists—we should prioritise among the world’s problems by assessing their scale, solvability, and neglectedness. Once we’ve done this, the three top priorities, not necessarily in this order, are (1) aiding the world’s poorest people by providing life-saving medical treatments or alleviating poverty (...)
  26. Catching Treacherous Turn: A Model of the Multilevel AI Boxing.Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. Among several possibilities, the confinement of AI in a box is considered a low-quality solution for AI safety. However, some treacherous AIs could be stopped by effective confinement used as an additional measure. Here, we propose an idealized model of the best possible confinement, aggregating all known ideas in the (...)
  27. Multilevel Strategy for Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality.Alexey Turchin - manuscript
    Abstract: The field of life extension is full of ideas, but they are unstructured. Here we suggest a comprehensive strategy for reaching personal immortality based on the idea of multilevel defense, where the next life-preserving plan is implemented if the previous one fails, but all plans need to be prepared simultaneously in advance. The first plan, Plan A, is surviving until the creation of advanced AI by fighting aging and other causes of death and extending one's life. Plan B is cryonics, (...)
  28. On theory X and what matters most.Simon Beard & Patrick Kaczmarek - 2022 - In Jeff McMahan, Tim Campbell, James Goodrich & Ketan Ramakrishan (eds.), Ethics and Existence: The Legacy of Derek Parfit. Oxford: Oxford University Press. pp. 358-386.
    One of Derek Parfit’s greatest legacies was the search for Theory X, a theory of population ethics that avoided all the implausible conclusions and paradoxes that have dogged the field since its inception: the Absurd Conclusion, the Repugnant Conclusion, the Non-Identity Problem, and the Mere Addition Paradox. In recent years, it has been argued that this search is doomed to failure and no satisfactory population axiology is possible. This chapter reviews Parfit’s life’s work in the field and argues that he (...)
  29. Dark Matters and Hidden Variables of Unitary Science: How Neglected Complexity Generates Mysteries and Crises, from Quantum Mechanics and Cosmology to Genetics and Global Development Risks.Andrei P. Kirilyuk - manuscript
    The unreduced many-body interaction problem solution, absent in usual science framework, reveals a new quality of emerging multiple, equally real but mutually incompatible system configurations, or “realisations”, giving rise to the universal concept of dynamic complexity and chaoticity. Their imitation by a single, “average” realisation or trajectory in usual theory (corresponding to postulated “exact” or perturbative problem solutions) is a rough simplification of reality underlying all stagnating and emerging problems of conventional (unitary) science, often in the form of missing, or (...)
  30. Nuclear war as a predictable surprise.Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done − yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected (...)
  31. If now isn't the most influential time ever, when is? [REVIEW]Kritika Maheshwari - 2020 - The Philosopher 108:94-101.
  32. Science and Religion Shift in the First Three Months of the Covid-19 Pandemic.Margaret Boone Rappaport, Christopher Corbally, Riccardo Campa & Ziba Norman - 2020 - Studia Humana 10 (1):1-17.
    The goal of this pilot study is to investigate expressions of the collective disquiet of people in the first months of the Covid-19 pandemic, and to try to understand how they manage covert risk, especially with religion and magic. Four co-authors living in early hot spots of the pandemic speculate on the roles of science, religion, and magic in the latest global catastrophe. They delve into the consolidation that should be occurring worldwide because of a common, viral enemy, but (...)
  33. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    1 citation
  34. A Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World.Alexey Turchin - manuscript
    Abstract: Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA (...)
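Of the four forms listed in the abstract above, Gott's DA is the easiest to state quantitatively: if our moment of observation is not special, then with 95% confidence it falls in the middle 95% of the phenomenon's total lifetime. The following sketch is our illustration, not the paper's; the ~200,000-year age of Homo sapiens is an assumed input.

```python
# Gott's "delta t" argument: with probability `confidence` the observed
# past duration t_past falls in the middle `confidence` fraction of the
# total lifetime; solving for the remaining lifetime gives the bounds
# (the classic [t/39, 39t] interval at 95% confidence).
def gott_interval(t_past: float, confidence: float = 0.95):
    lo_frac = (1.0 - confidence) / 2.0   # e.g. 0.025
    hi_frac = 1.0 - lo_frac              # e.g. 0.975
    # Remaining duration = t_past * (1 - f) / f for observed fraction f.
    return (t_past * lo_frac / hi_frac, t_past * hi_frac / lo_frac)

lo, hi = gott_interval(200_000)  # assumed age of Homo sapiens in years
print(f"between {lo:,.0f} and {hi:,.0f} more years")
```

For a 200,000-year past, the 95% interval runs from roughly 5,000 to about 7.8 million more years, which illustrates how wide the prediction is even when the argument is accepted.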
  35. Risk-driven global compliance regimes in banking and accounting: the new Law Merchant.James Franklin - 2005 - Law, Probability and Risk 4 (4):237-250.
    Powerful, technically complex international compliance regimes have developed recently in certain professions that deal with risk: banking (the Basel II regime), accountancy (IFRS) and the actuarial profession. The need to deal with major risks has acted as a strong driver of international co-operation to create enforceable international semilegal systems, as happened earlier in such fields as international health regulations. This regulation in technical fields contrasts with the failure of an international general-purpose political and legal regime to develop. We survey (...)
    2 citations
  36. Responding to the Call through Translating Science into Impact: Building an Evidence-Based Approaches to Effectively Curb Public Health Emergencies [Covid-19 Crisis]. [REVIEW]Morufu Olalekan Raimi, Kalada Godson Mcfubara, Oyeyemi Sunday Abisoye, Clinton Ifeanyichukwu Ezekwe, Olawale Henry Sawyerr & Gift Aziba-Anyam Raimi - 2021 - Global Journal of Epidemiology and Infectious Disease 1:12-45.
    COVID-19 demonstrated a global catastrophe that touched everybody, including the scientific community. As we respond and recover rapidly from this pandemic, there is an opportunity to guarantee that the fabric of our society includes sustainability, fairness, and care. However, approaches to environmental health attempt to decrease the population burden of COVID-19, toward saving patients from becoming ill along with preserving the allocation of clinical resources and public safety standards. This paper explores environmental and public health evidence-based practices toward responding (...)
  37. Ethics of the scientist qua policy advisor: inductive risk, uncertainty, and catastrophe in climate economics.David M. Frank - 2019 - Synthese:3123-3138.
    This paper discusses ethical issues surrounding Integrated Assessment Models (IAMs) of the economic effects of climate change, and how climate economists acting as policy advisors ought to represent the uncertain possibility of catastrophe. Some climate economists, especially Martin Weitzman, have argued for a precautionary approach where avoiding catastrophe should structure climate economists’ welfare analysis. This paper details ethical arguments that justify this approach, showing how Weitzman’s “fat tail” probabilities of climate catastrophe pose ethical problems for widely used IAMs. The main (...)
    5 citations
  38. Forever and Again: Necessary Conditions for “Quantum Immortality” and its Practical Implications.Alexey Turchin - 2018 - Journal of Evolution and Technology 28 (1).
    This article explores the theoretical conditions necessary for “quantum immortality” (QI) as well as its possible practical implications. It is demonstrated that QI is a particular case of “multiverse immortality” (MI), which is based on two main assumptions: the very large size of the Universe (not necessarily because of quantum effects), and a copy-friendly theory of personal identity. It is shown that a popular objection about the lowering of the world-share (measure) of an observer in the case of QI doesn’t (...)
  39. Climate Change and the Threat of Disaster: The Moral Case for Taking Out Insurance at Our Grandchildren's Expense.Matthew Rendall - 2011 - Political Studies 59 (4):884-99.
    Is drastic action against global warming essential to avoid impoverishing our descendants? Or does it mean robbing the poor to give to the rich? We do not yet know. Yet most of us can agree on the importance of minimising expected deprivation. Because of the vast number of future generations, if there is any significant risk of catastrophe, this implies drastic and expensive carbon abatement unless we discount the future. I argue that we should not discount. Instead, the (...)
    8 citations
  40. Catastrophic Times. Against Equivalencies of History and Vulnerability in the «Anthropocene».Ralf Gisinger - 2023 - Filosofia Revista da Faculdade de Letras da Universidade do Porto 39 (Philosophy and Catastrophe):61-77.
    With catastrophic events of «nature» like global warming, arguments emerge that insinuate an equivalence of vulnerability, responsibility or being affected by these catastrophes. Such an alleged equivalence when facing climate catastrophe is already visible, for example, in the notion of the «Anthropocene» itself, which obscures both causes and various vulnerabilities in a homogenized as well as universalized concept of humanity (anthropos). Taking such narratives as a starting point, the paper explores questions about the connection between catastrophe, temporality, and (...)
  41. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on (...)
    5 citations
  42. Climate Risk Management.Klaus Keller, Casey Helgeson & Vivek Srikrishnan - 2021 - Annual Review of Earth and Planetary Sciences 49:95–116.
    Accelerating global climate change drives new climate risks. People around the world are researching, designing, and implementing strategies to manage these risks. Identifying and implementing sound climate risk management strategies poses nontrivial challenges including (a) linking the required disciplines, (b) identifying relevant values and objectives, (c) identifying and quantifying important uncertainties, (d) resolving interactions between decision levers and the system dynamics, (e) quantifying the trade-offs between diverse values under deep and dynamic uncertainties, (f) communicating to inform decisions, and (...)
  43. Climate Change, Moral Bioenhancement and the Ultimate Mostropic.Jon Rueda - 2020 - Ramon Llull Journal of Applied Ethics 11:277-303.
    Tackling climate change is one of the most demanding challenges of humanity in the 21st century. Still, the efforts to mitigate the current environmental crisis do not seem enough to deal with the increased existential risks for the human and other species. Persson and Savulescu have proposed that our evolutionarily forged moral psychology is one of the impediments to facing as enormous a problem as global warming. They suggested that if we want to address properly some of the most (...)
  44. COVID-19 Pandemic as an Indicator of Existential Evolutionary Risk of the Anthropocene (Anthropological Origin and Global Political Mechanisms).Valentin Cheshko & Nina Konnova - 2021 - In MOChashin O. Kristal (ed.), Bioethics: from theory to practice. pp. 29-44.
    The coronavirus pandemic, like its predecessors (AIDS, Ebola, etc.), is evidence of the evolutionary instability of the socio-cultural and ecological niche created by humankind, the main factor in the evolutionary success of our biological species and of the civilization it has created. At least, this applies to the modern global civilization, usually called technogenic or technological, although it exists in several varieties. As we hope to show, the current crisis has roots that are no less ontological than epistemological; (...)
  45. La catastrophe écologique, les gilets jaunes et le sabotage de la démocratie.Donato Bergandi, Fabienne Galangau-Querat & Hervé Lelièvre - manuscript
    Caste: a group distinguished by its privileges and by its exclusionary attitude toward anyone who does not belong to it. Larousse -/- The fuel price increase proposed to combat climate change and to implement the principles of the «ecological transition» adopted by France at COP21 gave rise to the gilets jaunes (yellow vests) movement. More broadly, a large share of the French population is affected, the part that lives (...)
  46. An ethical analysis of vaccinating children against COVID-19: benefits, risks, and issues of global health equity [version 2; peer review: 1 approved, 1 approved with reservations].Rachel Gur-Arie, Steven R. Kraaijeveld & Euzebiusz Jamrozik - forthcoming - Wellcome Open Research.
    COVID-19 vaccination of children has begun in various high-income countries with regulatory approval and general public support, but largely without careful ethical consideration. This trend is expected to extend to other COVID-19 vaccines and lower ages as clinical trials progress. This paper provides an ethical analysis of COVID-19 vaccination of healthy children. Specifically, we argue that it is currently unclear whether routine COVID-19 vaccination of healthy children is ethically justified in most contexts, given the minimal direct benefit that COVID-19 vaccination (...)
  47. On Risk and Rationality.Brad Armendt - 2014 - Erkenntnis 79 (S6):1-9.
    It is widely held that the influence of risk on rational decisions is not entirely explained by the shape of an agent’s utility curve. Buchak (Risk and Rationality, Oxford University Press, Oxford, 2013) presents an axiomatic decision theory, risk-weighted expected utility theory (REU), in which decision weights are the agent’s subjective probabilities modified by his risk-function r. REU is briefly described, and the global applicability of r is discussed. Rabin’s (Econometrica 68:1281–1292, 2000) (...)
  48. Global factors which influence the directions of social development.Sergii Sardak, M. Korneyev, A. Simakhova & O. Bilskaya - 2017 - Problems and Perspectives in Management 15 (3):323–333.
    This study identifies the global factors that condition the direction of social development. Global threats were evaluated and defined as dangerous processes, phenomena, and situations that harm the health, safety, well-being, and lives of all humanity and require removal. Global risks, in turn, were defined as events or conditions that, if they occur, may cause a significant negative effect for several countries or spheres within a strategic period. (...)
  49. Risk, Precaution, and Causation.Masaki Ichinose - 2022 - Tetsugaku: International Journal of the Philosophical Association of Japan 6:22-53.
    This paper aims to scrutinize how the notion of risk should be understood and applied to potentially catastrophic cases. I begin by clarifying the standard usage of the notion of risk, particularly emphasizing the conceptual relation between risk and probability. Then, I investigate how to make decisions in the case of seemingly catastrophic disasters by contrasting the precautionary principle with the preventive (prevention) principle. Finally, I examine what kind of causal thinking tends to be actually (...)
  50. Ecological Risk: Climate Change as Abstract-Corporeal Problem.Tom Sparrow - 2018 - Revista Latinoamericana de Estudios Sobre Cuerpos, Emociones y Sociedad 10 (28):88-97.
    This essay uses Ulrich Beck’s concept of risk society to understand the threat of catastrophic climate change. It argues that this threat is “abstract-corporeal”, and therefore a special kind of threat that poses special kinds of epistemic and ecological challenges. At the center of these challenges is the problem of human vulnerability, which entails a complex form of trust that both sustains and threatens human survival.