Results for 'existential catastrophe'

1000+ found
  1. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    2 citations
  2. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this (...)
    1 citation
  3. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does (...)
    5 citations
  4. Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of (...)
    6 citations
  5. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
    1 citation
  6. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    12 citations
  7. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    10 citations
  8. Global Catastrophic Risks by Chemical Contamination.Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risk that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of human male sperm counts hint at accumulating toxic exposure, which could become a global catastrophic risk event if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may be happening now or could happen in the future: autocatalytic (...)
  9. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence.Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe is (...)
  10. Aquatic refuges for surviving a global catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines (...)
    2 citations
  11. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  12. UAP and Global Catastrophic Risks.Alexey Turchin - manuscript
    Abstract: After the 2017 NY Times publication, the stigma around scientific discussion of the problem of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity, and especially the probability of global catastrophic risks? To answer this question, we assume that the Nimitz case in 2004 was real, and we suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, malfunction, (...)
  13. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is a backup (...)
    3 citations
  14. Islands as refuges for surviving global catastrophes.Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival during global catastrophes. (...)
  15. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding of alien transmission which includes description of AI system aimed on self-replication (SETI-attack). A scenario of potential vulnerability is proposed as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  16. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk.David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can function as (...)
  17. The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies.Alexey Turchin & Justin Shovelain - manuscript
    Abstract. This article presents a model of the change in the probability of global catastrophic risks in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least double exponential (...)
  18. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently criticisms against autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow (...)
    1 citation
  19. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology.Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to a single pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability that the worst pandemic kills any given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen naturally, (...)
  20. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  21. Robustness to Fundamental Uncertainty in AGI Alignment.G. Gordon Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary (...)
  22. Instrumental Divergence.J. Dmitri Gallow - forthcoming - Philosophical Studies:1-27.
    The thesis of instrumental convergence holds that a wide range of ends have common means: for instance, self preservation, desire preservation, self improvement, and resource acquisition. Bostrom contends that instrumental convergence gives us reason to think that "the default outcome of the creation of machine superintelligence is existential catastrophe". I use the tools of decision theory to investigate whether this thesis is true. I find that, even if intrinsic desires are randomly selected, instrumental rationality induces biases towards certain (...)
  23. Non-Additive Axiologies in Large Worlds.Christian J. Tarsney & Teruji Thomas - 2020
    Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction is practically important: additive axiologies support 'arguments from astronomical scale' which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a (...)
    3 citations
  24. Robustness to fundamental uncertainty in AGI alignment.G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary (...)
  25. Long-Term Trajectories of Human Civilization.Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist.
    Design/methodology/approach This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos.
    Findings Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation and astronomical trajectories appear possible.
    Originality/value Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.
    10 citations
  26. Nuclear war as a predictable surprise.Matthew Rendall - 2022 - Global Policy 13 (5):782-791.
    Like asteroids, hundred-year floods and pandemic disease, thermonuclear war is a low-frequency, high-impact threat. In the long run, catastrophe is inevitable if nothing is done − yet each successive government and generation may fail to address it. Drawing on risk perception research, this paper argues that psychological biases cause the threat of nuclear war to receive less attention than it deserves. Nuclear deterrence is, moreover, a ‘front-loaded good’: its benefits accrue disproportionately to proximate generations, whereas much of the expected (...)
  27. When is a Techno-Fix Legitimate? The Case of Viticultural Climate Resilience.Rune Nydal, Giovanni De Grandis & Lars Ursin - 2023 - Journal of Agricultural and Environmental Ethics 36 (1):1-17.
    Climate change is an existential risk reinforced by ordinary actions in affluent societies—often silently present in comfortable and enjoyable habits. This silence is sometimes broken, presenting itself as a nagging reminder of how our habits fuel a catastrophe. As a case in point, global warming has created a state of urgency among wine makers in Spain, as the alcohol level has risen to a point where it jeopardises wine quality and thereby Spanish viticulture. Efforts are currently being made (...)
  28. Crises, and the Ethic of Finitude.Ryan Wasser - 2020 - Human Arenas 4 (3):357-365.
    In his postapocalyptic novel, Those Who Remain, G. Michael Hopf (2016) makes an important observation about the effect crises can have on human psychology by noting that "hard times create strong [humans]" (loc. 200). While the catastrophic effects of the recent COVID-19 outbreak are incontestable, there are arguments to be made that the situation itself could be materia prima of a more grounded, and authentic generation of humanity, at least in theory. In this article I draw on Heidegger's early, implicit (...)
  29. Co-evolutionary biosemantics of evolutionary risk at technogenic civilization: Hiroshima, Chernobyl – Fukushima and further….Valentin Cheshko & Valery Glazko - 2016 - International Journal of Environmental Problems 3 (1):14-25.
    From Chernobyl to Fukushima, it became clear that the technology is a system evolutionary factor, and the consequences of man-made disasters, as the actualization of risk related to changes in the social heredity (cultural transmission) elements. The uniqueness of the human phenomenon is a characteristic of the system arising out of the nonlinear interaction of biological, cultural and techno-rationalistic adaptive modules. Distribution emerging adaptive innovation within each module is in accordance with the two algorithms that are characterized by the dominance (...)
  30. Climate Change, Moral Bioenhancement and the Ultimate Mostropic.Jon Rueda - 2020 - Ramon Llull Journal of Applied Ethics 11:277-303.
    Tackling climate change is one of the most demanding challenges of humanity in the 21st century. Still, the efforts to mitigate the current environmental crisis do not seem enough to deal with the increased existential risks for the human and other species. Persson and Savulescu have proposed that our evolutionarily forged moral psychology is one of the impediments to facing as enormous a problem as global warming. They suggested that if we want to address properly some of the most (...)
    5 citations
  31. A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming.Alexey Turchin - manuscript
    Humanity may underestimate the rate of natural global catastrophes because of the survival bias (“anthropic shadow”). But the resulting reduction of the Earth’s future habitability duration is not very large in most plausible cases (1-2 orders of magnitude) and thus it looks like we still have at least millions of years. However, anthropic shadow implies anthropic fragility: we are more likely to live in a world where a sterilizing catastrophe is long overdue and could be triggered by unexpectedly small (...)
  32. Mad Speculation and Absolute Inhumanism: Lovecraft, Ligotti, and the Weirding of Philosophy.Ben Woodard - 2011 - Continent 1 (1):3-13.
    continent. 1.1: 3-13. Introduction. I want to propose, as a trajectory into the philosophically weird, an absurd theoretical claim and pursue it, or perhaps more accurately, construct it as I point to it, collecting the ground work behind me like the Perpetual Train from China Mieville's Iron Council which puts down track as it moves, reclaiming it along the way. The strange trajectory is the following: Kant's critical philosophy and much of continental philosophy which has followed, (...)
    4 citations
  33. A Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World.Alexey Turchin - manuscript
    Abstract: Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA (...)
  34. La vocación filosófica gadameriana como profesión intempestiva. Introducción y traducción de "Wissenschaft als Beruf. Über den Ruf und Beruf der Wissenschaft in unserer Zeit" de Hans-Georg Gadamer.Facundo Norberto Bey - 2023 - Endoxa 52:271-302.
    On 27 September 1943, Hans-Georg Gadamer published a brief but significant essay in the conservative newspaper Leipziger Neueste Nachrichten und Handels-Zeitung, entitled "Wissenschaft als Beruf. Über den Ruf und Beruf der Wissenschaft in unserer Zeit". The article, which revisited the problem of the value and position of science and philosophy in the midst of the Second World War, was never reprinted in his collected works, neither in the ten volumes edited by the publisher Mohr Siebeck nor by (...)
  35. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore what is the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We firstly explore the question of what kind of simulation in which humanity is most likely located based on pure theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  36. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  37. If now isn't the most influential time ever, when is? [REVIEW]Kritika Maheshwari - 2020 - The Philosopher 108:94-101.
  38. Dossier Chris Marker: The Suffering Image.Gavin Keeney - 2012 - Cambridge Scholars Press.
    This study firstly addresses three threads in Chris Marker’s work – theology, Marxism, and Surrealism – through a mapping of the work of both Giorgio Agamben and Jacques Derrida onto the varied production of his film and photographic work. Notably, it is late Agamben and late Derrida that is utilized, as both began to exit so-called post-structuralism proper with the theological turn in the late 1980s and early 1990s. It addresses these threads through the means to ends employed and as (...)
  39. Surviving global risks through the preservation of humanity's data on the Moon.Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica:in press.
    Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that a potential (...)
  40. Presumptuous Philosopher Proves Panspermia.Alexey Turchin - manuscript
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to a hypothesis which postulates the existence of a larger number of observers than to other hypotheses. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). There are two types of anthropic reasoning: SIA and SSA. SIA implies that my existence is an argument that larger (...)
  41. Catastrophic risk.H. Orri Stefánsson - 2020 - Philosophy Compass 15 (11):1-11.
    Catastrophic risk raises questions that are not only of practical importance, but also of great philosophical interest, such as how to define catastrophe and what distinguishes catastrophic outcomes from non-catastrophic ones. Catastrophic risk also raises questions about how to rationally respond to such risks. How to rationally respond arguably partly depends on the severity of the uncertainty, for instance, whether quantitative probabilistic information is available, or whether only comparative likelihood information is available, or neither type of information. Finally, catastrophic (...)
    1 citation
  42. Catastrophic Times. Against Equivalencies of History and Vulnerability in the «Anthropocene».Ralf Gisinger - 2023 - Filosofia Revista da Faculdade de Letras da Universidade do Porto 39 (Philosophy and Catastrophe):61-77.
    With catastrophic events of «nature» like global warming, arguments emerge that insinuate an equivalence of vulnerability, responsibility or being affected by these catastrophes. Such an alleged equivalence when facing climate catastrophe is already visible, for example, in the notion of the «Anthropocene» itself, which obscures both causes and various vulnerabilities in a homogenized as well as universalized concept of humanity (anthropos). Taking such narratives as a starting point, the paper explores questions about the connection between catastrophe, temporality, and (...)
  43. Rape Myths, Catastrophe, and Credibility.Emily C. R. Tilton - 2022 - Episteme:1-17.
    There is an undeniable tendency to dismiss women’s sexual assault allegations out of hand. However, this tendency is not monolithic—allegations that black men have raped white women are often met with deadly seriousness. I argue that contemporary rape culture is characterized by the interplay between rape myths that minimize rape, and myths that catastrophize rape. Together, these two sets of rape myths distort the epistemic resources that people use when assessing rape allegations. These distortions result in the unjust exoneration of (...)
  44. Extending Existential Feeling Through Sensory Substitution.Jussi A. Saarinen - 2023 - Synthese 201 (2):1-24.
    In current philosophy of mind, there is lively debate over whether emotions, moods, and other affects can extend to comprise elements beyond one’s organismic boundaries. At the same time, there has been growing interest in the nature and significance of so-called existential feelings, which, as the term suggests, are feelings of one’s overall being in the world. In this article, I bring these two strands of investigation together to ask: Can the material underpinnings of existential feelings extend beyond (...)
  45. Existential phenomenology and qualitative research.Anthony Vincent Fernandez - 2024 - In Kevin Aho, Megan Altman & Hans Pedersen (eds.), The Routledge Handbook of Contemporary Existentialism. Routledge.
    This chapter provides an overview of how existential phenomenology has influenced qualitative research methods in the social, health, educational, and psychological sciences. It focuses specifically on how the concepts of “existential structures,” or “existentials”—such as selfhood, temporality, spatiality, affectivity, and embodiment—have been used in qualitative research. After providing a brief introduction to what qualitative research is and why philosophers should be interested in it, the chapter provides clear, straightforward examples of how qualitative researchers (...)
  46. Non-catastrophic presupposition failure.Stephen Yablo - 2006 - In Judith Thomson & Alex Byrne (eds.), Content and Modality: Themes From the Philosophy of Robert Stalnaker. Oxford University Press.
  47. Ontological Catastrophe: Zizek and the Paradoxical Metaphysics of German Idealism.Joseph Carew - 2014 - Ann Arbor: Open Humanities Press.
    In Ontological Catastrophe, Joseph Carew takes up the central question guiding Slavoj Žižek’s philosophy: How could something like phenomenal reality emerge out of the meaninglessness of the Real? Carefully reconstructing and expanding upon his controversial reactualization of German Idealism, Carew argues that Žižek offers us an original, but perhaps terrifying, response: experience is possible only if we presuppose a prior moment of breakdown as the ontogenetic basis of subjectivity. Drawing upon resources found in Žižek, Lacanian psychoanalysis, and post-Kantian philosophy, (...)
  48. Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
  49. La catastrophe écologique, les gilets jaunes et le sabotage de la démocratie.Donato Bergandi, Fabienne Galangau-Querat & Hervé Lelièvre - manuscript
    Caste: a group distinguished by its privileges and its exclusionary attitude toward anyone who does not belong to the group. Larousse -/- The fuel price increase proposed to fight climate change and to implement the principles of the «ecological transition» adopted by France at COP21 gave rise to the yellow vests (gilets jaunes) movement. More broadly, a large part of the French population is affected, the part that lives (...)
  50. Existential Import Today: New Metatheorems; Historical, Philosophical, and Pedagogical Misconceptions.John Corcoran & Hassan Masoud - 2015 - History and Philosophy of Logic 36 (1):39-61.
    Contrary to common misconceptions, today's logic is not devoid of existential import: the universalized conditional ∀x[S → P] implies its corresponding existentialized conjunction ∃x[S & P], not in all cases, but in some. We characterize the proexamples by proving the Existential-Import Equivalence: the antecedent S of the universalized conditional alone determines whether the universalized conditional has existential import, i.e. whether it implies its corresponding existentialized conjunction. A predicate is an open formula having only x free. (...)