Results for 'existential risks'

775 found
  1. Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with (...)
    3 citations
  2. Global Catastrophic and Existential Risks Communication Scale.Alexey Turchin & David Denkenberger - 2018 - Futures: forthcoming.
    Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and (...) risks. In this article, inspired by the Torino scale of asteroid danger, we suggest a color-coded scale to communicate the magnitude of global catastrophic and existential risks. The scale is based on the probability intervals of risks in the next century, if they are available. The risks' estimations could be adjusted based on their severities and other factors. The scale covers not only existential risks but also smaller-scale global catastrophic risks. It consists of six color levels, which correspond to previously suggested levels of prevention activity. We estimate artificial intelligence risks as "red", while "orange" risks include nanotechnology, synthetic biology, full-scale nuclear war and a large global agricultural shortfall (caused by regional nuclear war, coincident extreme weather, etc.). The risks of natural pandemic, supervolcanic eruption and global warming are marked as "yellow" and the danger from asteroids is "green". Keywords: global catastrophic risks; existential risks; Torino scale; policy; risk probability.
    1 citation
  3. Existential Risks: New Zealand Needs a Method to Agree on a Value Framework and How to Quantify Future Lives at Risk.Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and (...)
  4. Approaches to the Prevention of Global Catastrophic Risks.Alexey Turchin - 2018 - Human Prospect 7 (2):52-65.
    Many global catastrophic and existential risks (X-risks) threaten the existence of humankind. There are also many ideas for their prevention, but the meta-problem is that these ideas are not structured. This lack of structure means it is not easy to choose the right plan(s) or to implement them in the correct order. I suggest using a “Plan A, Plan B” model, which has shown its effectiveness in planning actions in unpredictable environments. In this approach, Plan B is (...)
    3 citations
  5. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen (...)
    5 citations
  6. Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters (...)
    1 citation
  7. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore, on purely theoretical grounds, what kind of simulation humanity is most likely located in. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we (...)
    1 citation
  8. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence.Alexey Turchin - manuscript
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe is high (...)
  9. Global Catastrophic Risks by Chemical Contamination.Alexey Turchin - manuscript
    Abstract: Global chemical contamination is an underexplored source of global catastrophic risks that is estimated to have a low a priori probability. However, events such as the decline of pollinating insect populations and the lowering of the human male sperm count hint at some accumulation of toxic exposure, which could become a global catastrophic risk event if not prevented by future medical advances. We identified several potentially dangerous sources of global chemical contamination, which may happen now or could happen in the future: (...)
  10. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as (...)
    6 citations
  11. UAP and Global Catastrophic Risks.Alexey Turchin - manuscript
    Abstract: After the 2017 NY Times publication, the stigma attached to scientific discussion of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity and, especially, the probability of global catastrophic risks? To answer this question, we assume that the Nimitz case in 2004 was real and suggest a classification of the possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, (...)
  12. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk.David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can (...)
  13. Configuration of Stable Evolutionary Strategy of Homo Sapiens and Evolutionary Risks of Technological Civilization (the Conceptual Model Essay).Valentin T. Cheshko, Lida V. Ivanitskaya & Yulia V. Kosova - 2014 - Biogeosystem Technique 1 (1):58-68.
    Stable evolutionary strategy of Homo sapiens (SESH) is built in accordance with the modular and hierarchical principle and consists of the same type of self-replicating elements, i.e. it is a system of systems. On the top level of the organization of SESH is the superposition of genetic, social, cultural and techno-rationalistic complexes. The components of this triad differ in the mechanism of cycles of generation - replication - transmission - fixing/elimination of adaptively relevant information. This mechanism is implemented either in accordance (...)
    2 citations
  14. Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  15. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission which includes a description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of (...)
  16. Surviving Global Risks Through the Preservation of Humanity's Data on the Moon.Alexey Turchin & D. Denkenberger - 2018 - Acta Astronautica: in press.
    Many global catastrophic risks are threatening human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hopes that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation of an order of magnitude of 100 million years, a little-explored topic thus far. It is important that a (...)
  17. Configuration of Stable Evolutionary Strategy of Homo Sapiens and Evolutionary Risks of Technological Civilization (the Conceptual Model Essay).Valentin T. Cheshko, Lida V. Ivanitskaya & Yulia V. Kosova - 2014 - Biogeosystem Technique 1 (1):58-68.
    Stable evolutionary strategy of Homo sapiens (SESH) is built in accordance with the modular and hierarchical principle and consists of the same type of self-replicating elements, i.e. it is a system of systems. On the top level of the organization of SESH is the superposition of genetic, social, cultural and techno-rationalistic complexes. The components of this triad differ in the mechanism of cycles of generation - replication - transmission - fixing/elimination of adaptively relevant information. This mechanism is implemented either in accordance (...)
    4 citations
  18. UN75 ↔ Towards Security Council Reform ↔ Metaphysical, Ontological, and Existential Statuses of the Veto Right (1).Vladimir Rogozhin - manuscript
    Year after year, some of us, people of planet Earth, intensify attacks on the veto right in the UN Security Council. They consciously or unconsciously ignore its metaphysical, ontological and existential statuses, established in 1945 by the founders of the United Nations as a result of the multimillion sacrificial struggle of all Humanity against nazism. Perhaps this is due to a misunderstanding of the metaphysics of international relations, the enduring existential significance of the veto for the (...)
  19. Islands as Refuges for Surviving Global Catastrophes.Alexey Turchin & Brian Patrick Green - 2018 - Foresight.
    Purpose Islands have long been discussed as refuges from global catastrophes; this paper will evaluate them systematically, discussing both the positives and negatives of islands as refuges. There are examples of isolated human communities surviving for thousands of years on places like Easter Island. Islands could provide protection against many low-level risks, notably including bio-risks. However, they are vulnerable to tsunamis, bird-transmitted diseases, and other risks. This article explores how to use the advantages of islands for survival (...)
  20. A Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World.Alexey Turchin - manuscript
    Abstract: Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA (...)
  21. Wireheading as a Possible Contributor to Civilizational Decline.Alexey Turchin - manuscript
    Abstract: Advances in new technologies create new ways to stimulate the pleasure center of the human brain via new chemicals, direct application of electricity, electromagnetic fields, “reward hacking” in games and social networks, and in the future, possibly via genetic manipulation, nanorobots and AI systems. This may have two consequences: a) human life may become more interesting, b) humans may stop participating in any external activities, including work, maintenance, reproduction, and even caring for their own health, which could slowly contribute (...)
  22. Levels of Self-Improvement in AI and Their Implications for AI Safety.Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we (...)
  23. Presumptuous Philosopher Proves Panspermia.Alexey Turchin - manuscript
    Abstract. The presumptuous philosopher (PP) thought experiment lends more credence to a hypothesis that postulates the existence of a larger number of observers than other hypotheses do. The PP was suggested as a purely speculative endeavor. However, there is a class of real-world observer-selection effects where it could be applied, and one of them is the possibility of interstellar panspermia (IP). PP suggests that universes with interstellar panspermia will have orders of magnitude more civilizations than universes without IP, and (...)
  24. First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI which is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  25. Narrow AI Nanny: Reaching Strategic Advantage Via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  26. Aquatic Refuges for Surviving a Global Catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are (...)
    2 citations
  27. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    2 citations
  28. Protogeometer: Falling Into Future.Vladimir Rogozhin - 2014 - FQXi Essay Contest 2014.
    Universe silence … Why? TechnoSfera … Where does it move? BioSfera … Where is the "non-return point"? NooSfera … What to do? The deep mind looks for primordial senses of the "LifeWorld" (LebensWelt). Consciousness, matter, memory … Self-Consciousness … Consciousness is attracting senses vector magnitude, intentional effect of absolute complexity. The Vector of Consciousness - the Triune Vector of absolute forms of existence of matter (limit states), the Vector of the Absolute Existential Field of the Universe, a polyvalent sense phenomenon of (...)
  29. United Humanity: From "UN 2.0" to "UN 3.0" The Conceptual Model of the United Nations for the XXI Century.Vladimir Rogozhin - 2018 - Academia.
    The conceptual model of United Nations reform - "UN 3.0" - includes the General Program of Action on UN Reform, consisting of two stages. The first stage, for 2020-2025, envisages the transformation of the main organs of the UN - the General Assembly and the Security Council - with measures to improve the effectiveness of the management system, address the "veto problem" and the problem of financing, improve staff work and administrative and financial control, strengthen UN media, and improve work with the global civil (...)
  30. Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions.Alexey Turchin - manuscript
    The effective altruism movement aims to save lives in the most cost-effective ways. In the future, technology will allow radical life extension, and anyone who survives until that time will gain potentially indefinite life extension. Fighting aging now increases the number of people who will survive until radical life extension becomes possible. We suggest a simple model, where radical life extension is achieved in 2100, the human population is 10 billion, and life expectancy is increased by simple geroprotectors like metformin (...)
  31. “Cheating Death in Damascus” Solution to the Fermi Paradox.Alexey Turchin & Roman Yampolskiy - manuscript
    One of the possible solutions of the Fermi paradox is that all civilizations go extinct because they hit some Late Great Filter. Such a universal Late Great Filter must be an unpredictable event that all civilizations unexpectedly encounter, even if they try to escape extinction. This is similar to the "Death in Damascus" paradox from decision theory. This unpredictable Late Great Filter could, however, be escaped by choosing a random strategy for humanity's future development. However, if all civilizations act randomly, (...)
  32. Hans Jonas E Il Tramonto Dell'uomo.Roberto Franzini Tibaldeo & Paolo Becchi - 2016 - Annuario Filosofico 32:245-264.
    The article deals with present-day challenges related to the employment of technology to reduce the exposure of the human being to the risks and vulnerability of his or her existential condition. According to certain transhumanist and posthumanist thinkers, as well as some supporters of human enhancement, essential features of the human being, such as vulnerability and mortality, ought to be thoroughly overcome. The aim of this article is twofold: on the one hand, we wish to (...)
    2 citations
  33. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology.Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to one pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability that the worst pandemic kills a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen (...)
  34. Zwischen Welt- und Kultursicherung. Erkenntnis und Sozialität vor dem Hintergrund kritischer Mythentheorien bei Adorno, Horkheimer und Baudrillard.Maximilian Runge - 2015
    In this thesis I try to evaluate the risks and potentials of modern and archaic myths for human existence in a holistic approach. After Adorno's and Horkheimer's critique that enlightenment would still be mythical, the positive aspects of myth and ancient religion were - with a few exceptions (e.g. Blumenberg, Eliade) - mostly neglected: in the analysis of Critical Theory myth only serves power, and its misuse in fascist and capitalistic societies is inevitable; therefore any hint of mythological structures needs (...)
  35. Bioeconomics, Biopolitics and Bioethics: Evolutionary Semantics of Evolutionary Risk (Anthropological Essay).V. T. Cheshko - 2016 - Bioeconomics and Ecobiopolitic 1 (2).
    An attempt at a trans-disciplinary analysis of the evolutionary value of bioethics is made. Currently, there are High Tech schemes for the management and control of the genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). The biological, socio-cultural and technological factors are included in the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization formed mainly in the 17th-18th centuries and are experiencing ever-increasing and destabilizing risk-taking (...)
  36. Running Risks Morally.Brian Weatherson - 2014 - Philosophical Studies 167 (1):141-163.
    I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple—there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort (...)
    52 citations
  37. Taking Risks Behind the Veil of Ignorance.Lara Buchak - 2017 - Ethics 127 (3):610-644.
    A natural view in distributive ethics is that everyone's interests matter, but the interests of the relatively worse off matter more than the interests of the relatively better off. I provide a new argument for this view. The argument takes as its starting point the proposal, due to Harsanyi and Rawls, that facts about distributive ethics are discerned from individual preferences in the "original position." I draw on recent work in decision theory, along with an intuitive principle about risk-taking, to (...)
    7 citations
  38. Existential Nihilism: The Only Really Serious Problem in Philosophy.Walter Veit - 2018 - Journal of Camus Studies 2018:211-232.
    Since Friedrich Nietzsche, philosophers have grappled with the question of how to respond to nihilism. Nihilism, often seen as a derogative term for a ‘life-denying’, destructive and perhaps most of all depressive philosophy is what drove existentialists to write about the right response to a meaningless universe devoid of purpose. This latter diagnosis is what I shall refer to as existential nihilism, the denial of meaning and purpose, a view that not only existentialists but also a long line of (...)
    1 citation
  39. Risks of Artificial General Intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue "Risks of artificial general intelligence", Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342 (...)
    3 citations
  40. Existential Conservatism.David McPherson - 2019 - Philosophy 94 (3):383-407.
    This essay articulates a kind of conservatism that it argues is the most fundamental and important kind of conservatism, viz. existential conservatism, which involves an affirmative and appreciative stance towards the given world. While this form of conservatism can be connected to political conservatism, as seen with Roger Scruton, it need not be, as seen with G. A. Cohen. It is argued that existential conservatism should be embraced whether or not one embraces political conservatism, though it is also (...)
    2 citations
  41. Editorial: Risks of General Artificial Intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  42. Kant on Existential Import.Alberto Vanzo - 2014 - Kantian Review 19 (2):207-232.
    This article reconstructs Kant's view on the existential import of categorical sentences. Kant is widely taken to have held that affirmative sentences (the A and I sentences of the traditional square of opposition) have existential import, whereas negative sentences (E and O) lack existential import. The article challenges this standard interpretation. It is argued that Kant ascribes existential import only to some affirmative synthetic sentences. However, the reasons for this do not fall within the remit of (...)
  43. Risky Killing: How Risks Worsen Violations of Objective Rights.Seth Lazar - 2019 - Journal of Moral Philosophy 16 (1):1-26.
    I argue that riskier killings of innocent people are, other things equal, objectively worse than less risky killings. I ground these views in considerations of disrespect and security. Killing someone more riskily shows greater disrespect for him by more grievously undervaluing his standing and interests, and more seriously undermines his security by exposing a disposition to harm him across all counterfactual scenarios in which the probability of killing an innocent person is that high or less. I argue that the salient (...)
  44. The Motivations and Risks of Machine Ethics.Stephen Cave, Rune Nyrup, Karina Vold & Adrian Weller - 2019 - Proceedings of the IEEE 107 (3):562-574.
    Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the computational limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to (...)
  45. Implicit Bias, Ideological Bias, and Epistemic Risks in Philosophy.Uwe Peters - 2019 - Mind and Language 34 (3):393-419.
    It has been argued that implicit biases are operative in philosophy and lead to significant epistemic costs in the field. Philosophers working on this issue have focussed mainly on implicit gender and race biases. They have overlooked ideological bias, which targets political orientations. Psychologists have found ideological bias in their field and have argued that it has negative epistemic effects on scientific research. I relate this debate to the field of philosophy and argue that if, as some studies suggest, the (...)
  46. Depression as Existential Feeling or de-Situatedness? Distinguishing Structure From Mode in Psychopathology.Anthony Vincent Fernandez - 2014 - Phenomenology and the Cognitive Sciences 13 (4):595-612.
    In this paper I offer an alternative phenomenological account of depression as consisting of a degradation of the degree to which one is situated in and attuned to the world. This account contrasts with recent accounts of depression offered by Matthew Ratcliffe and others. Ratcliffe develops an account in which depression is understood in terms of deep moods, or existential feelings, such as guilt or hopelessness. Such moods are capable of limiting the kinds of significance and meaning that one (...)
  47. New Zealand Children’s Experiences of Online Risks and Their Perceptions of Harm. Evidence From Ngā Taiohi Matihiko o Aotearoa – New Zealand Kids Online.Edgar Pacheco & Neil Melhuish - 2020 - Netsafe.
    While children’s experiences of online risks and harm is a growing area of research in New Zealand, public discussion on the matter has largely been informed by mainstream media’s fixation on the dangers of technology. At best, debate on risks online has relied on overseas evidence. However, insights reflecting the New Zealand context and based on representative data are still needed to guide policy discussion, create awareness, and inform the implementation of prevention and support programmes for children. This (...)
  48. Existential Dynamics of Theorizing Black Invisibility.Lewis R. Gordon - 1997 - In Existence in Black: An Anthology of Black Existential Philosophy. Routledge.
  49. Space Colonization and Existential Risk.Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because (1) doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and (2) there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on (1), my primary goal in this article is to challenge (2): there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
  50. The Hardness of the Iconic Must: Can Peirce’s Existential Graphs Assist Modal Epistemology?C. Legg - 2012 - Philosophia Mathematica 20 (1):1-24.
    Charles Peirce's diagrammatic logic — the Existential Graphs — is presented as a tool for illuminating how we know necessity, in answer to Benacerraf's famous challenge that most ‘semantics for mathematics’ do not ‘fit an acceptable epistemology’. It is suggested that necessary reasoning is in essence a recognition that a certain structure has the particular structure that it has. This means that, contra Hume and his contemporary heirs, necessity is observable. One just needs to pay attention, not merely to (...)