  • Existential risks: analyzing human extinction scenarios and related hazards.Nick Bostrom - 2002 - Journal of Evolution and Technology 9 (1).
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from (...)
  • Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. (...)
  • Is Species Integrity a Human Right? A Rights Issue Emerging from Individual Liberties with New Technologies.Lantz Fleming Miller - 2014 - Human Rights Review 15 (2):177-199.
    Currently, some philosophers and technicians propose to change the fundamental constitution of Homo sapiens, as by significantly altering the genome, implanting microchips in the brain, and pursuing related techniques. Among these proposals are aspirations to guide humanity’s evolution into new species. Some philosophers have countered that such species alteration is unethical and have proposed international policies to protect species integrity; yet, it remains unclear on what basis such right to species integrity would rest. An answer may come from an unexpected (...)
  • Technological revolutions: Ethics and policy in the dark.Nick Bostrom - 2007
    Technological revolutions are among the most important things that happen to humanity. Ethical assessment in the incipient stages of a potential technological revolution faces several difficulties, including the unpredictability of their long‐term impacts, the problematic role of human agency in bringing them about, and the fact that technological revolutions rewrite not only the material conditions of our existence but also reshape culture and even – perhaps – human nature. This essay explores some of these difficulties and the challenges they pose (...)
  • Superlongevity and utilitarianism.Mark Walker - 2007 - Australasian Journal of Philosophy 85 (4):581-595.
    Peter Singer has argued that there are good utilitarian reasons for rejecting the prospect of superlongevity: developing technology to double (or more) the average human lifespan. I argue against Singer's view on two fronts. First, empirical research on happiness indicates that the later years of life are (on average) the happiest, and there is no reason to suppose that this trend would not continue if superlongevity were realized. Second, it is argued that there are good reasons to suppose that there (...)
  • Three's a crowd: On causes, entropy and physical eschatology. [REVIEW]Milan M. Ćirković & Vesna Milošević-Zdjelar - 2004 - Foundations of Science 9 (1):1-24.
    Recent discussions of the origins of the thermodynamical temporal asymmetry (the arrow of time) by Huw Price and others are critically assessed. This serves as a motivation for consideration of the relationship between thermodynamical and cosmological causes. Although the project of clarification of the thermodynamical explanandum is certainly welcome, Price excludes another interesting option, at least as viable as the sort of Acausal-Particular approach he favors, and arguably more in the spirit of Boltzmann himself. Thus, the competition of explanatory projects includes three horses, not two. In addition, it is the Acausal-Particular approach that could benefit enormously (...)
  • Expected choiceworthiness and fanaticism.Calvin Baker - 2024 - Philosophical Studies 181 (5).
    Maximize Expected Choiceworthiness (MEC) is a theory of decision-making under moral uncertainty. It says that we ought to handle moral uncertainty in the way that Expected Value Theory (EVT) handles descriptive uncertainty. MEC inherits from EVT the problem of fanaticism. Roughly, a decision theory is fanatical when it requires our decision-making to be dominated by low-probability, high-payoff options. Proponents of MEC have offered two main lines of response. The first is that MEC should simply import whatever are the best solutions (...)
  • Risk, Non-Identity, and Extinction.Kacper Kowalczyk & Nikhil Venkatesh - 2024 - The Monist 107 (2):146–156.
    This paper examines a recent argument in favour of strong precautionary action—possibly including working to hasten human extinction—on the basis of a decision-theoretic view that accommodates the risk-attitudes of all affected while giving more weight to the more risk-averse attitudes. First, we dispute the need to take into account other people’s attitudes towards risk at all. Second we argue that a version of the non-identity problem undermines the case for doing so in the context of future people. Lastly, we suggest (...)
  • Respect for others' risk attitudes and the long‐run future.Andreas L. Mogensen - 2024 - Noûs 58 (4):1017-1031.
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk‐avoidant risk function. This, in turn, has been claimed to require the use of a risk‐avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human survival. I (...)
  • The Case for Animal-Inclusive Longtermism.Gary David O’Brien - forthcoming - Journal of Moral Philosophy:1-24.
    Longtermism is the view that positively influencing the long-term future is one of the key moral priorities of our time. Longtermists generally focus on humans, and neglect animals. This is a mistake. In this paper I will show that the basic argument for longtermism applies to animals at least as well as it does to humans, and that the reasons longtermists have given for ignoring animals do not withstand scrutiny. Because of their numbers, their capacity for suffering, and our ability (...)
  • Concepts of Existential Catastrophe.Hilary Greaves - 2024 - The Monist 107 (2):109-129.
    The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind (...)
  • Why Not Effective Altruism?Richard Yetter Chappell - 2024 - Public Affairs Quarterly 38 (1):3-21.
    Effective altruism sounds so innocuous—who could possibly be opposed to doing good more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions and argues that the core “beneficentric” ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but all should share the basic goals or values underlying effective altruism.
  • Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being.S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2):157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral concern in (...)
  • Mistakes in the moral mathematics of existential risk.David Thorstad - 2024 - Ethics 135 (1):122-150.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (...)
  • The epistemic challenge to longtermism.Christian Tarsney - 2023 - Synthese 201 (6):1-37.
    Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict — perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that (...)
  • On Two Arguments for Fanaticism.Jeffrey Sanford Russell - 2023 - Noûs 58 (3):565-595.
    Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? *Fanaticism* says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better. I consider two related recent arguments for Fanaticism: Beckstead and Thomas's argument from *strange dependence on space and time*, and (...)
  • High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation.David Thorstad - 2023 - Philosophy and Public Affairs 51 (4):373-412.
  • The Rebugnant Conclusion: Utilitarianism, Insects, Microbes, and AI Systems.Jeff Sebo - 2023 - Ethics, Policy and Environment 26 (2):249-264.
    This paper considers questions that small animals and AI systems raise for utilitarianism. Specifically, if these beings have more welfare than humans and other large animals, then utilitarianism implies that we should prioritize them, all else equal. This could lead to a ‘rebugnant conclusion’, according to which we should, say, create large populations of small animals rather than small populations of large animals. It could also lead to a ‘Pascal’s bugging’, according to which we should, say, prioritize large populations of (...)
  • A Paradox for Tiny Probabilities and Enormous Values.Nick Beckstead & Teruji Thomas - 2021 - Noûs.
    We begin by showing that every theory of the value of uncertain prospects must have one of three unpalatable properties. _Reckless_ theories recommend giving up a sure thing, no matter how good, for an arbitrarily tiny chance of enormous gain; _timid_ theories permit passing up an arbitrarily large potential gain to prevent a tiny increase in risk; _non-transitive_ theories deny the principle that, if A is better than B and B is better than C, then A must be better than (...)
  • Ecocentrism and Biosphere Life Extension.Karim Jebari & Anders Sandberg - 2022 - Science and Engineering Ethics 28 (6):1-19.
    The biosphere represents the global sum of all ecosystems. According to a prominent view in environmental ethics, ecocentrism, these ecosystems matter for their own sake, and not only because they contribute to human ends. As such, some ecocentrists are critical of the modern industrial civilization, and a few even argue that an irreversible collapse of the modern industrial civilization would be a good thing. However, taking a longer view and considering the eventual destruction of the biosphere by astronomical processes, we (...)
  • Longtermism, Aggregation, and Catastrophic Risk.Emma J. Curran - manuscript
    Advocates of longtermism point out that interventions which focus on improving the prospects of people in the very far future will, in expectation, bring about a significant amount of good. Indeed, in expectation, such long-term interventions bring about far more good than their short-term counterparts. As such, longtermists claim we have compelling moral reason to prefer long-term interventions. In this paper, I show that longtermism is in conflict with plausible deontic scepticism about aggregation. I do so by demonstrating that, from (...)
  • Unfinished Business.Jonathan Knutzen - 2023 - Philosophers' Imprint 23 (1): 4, 1-15.
    According to an intriguing though somewhat enigmatic line of thought first proposed by Jonathan Bennett, if humanity went extinct any time soon this would be unfortunate because important business would be left unfinished. This line of thought remains largely unexplored. I offer an interpretation of the idea that captures its intuitive appeal, is consistent with plausible constraints, and makes it non-redundant to other views in the literature. The resulting view contrasts with a welfare-promotion perspective, according to which extinction would be (...)
  • The Cosmic Significance of Directed Panspermia: Should Humanity Spread Life to Other Solar Systems?Oskari Sivula - 2022 - Utilitas 34 (2):178-194.
    The possibility of seeding other planets with life poses a tricky dilemma. On the one hand, directed panspermia might be extremely good, while, on the other, it might be extremely bad depending on what factors are taken into consideration. Therefore, we need to understand better what is ethically at stake with planetary seeding. I map out possible conditions under which humanity should spread life to other solar systems. I identify two key variables that affect the desirability of propagating life throughout (...)
  • Moral demands and the far future.Andreas L. Mogensen - 2020 - Philosophy and Phenomenological Research 103 (3):567-585.
  • Maximal Cluelessness.Andreas Mogensen - 2021 - Philosophical Quarterly 71 (1):141-162.
    I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance and inscrutability of the indirect effects of our actions, conjoined with the plausibility of a permissive decision principle governing cases of deep uncertainty, known as the maximality rule. I conclude that (...)
  • Effective Altruism.Theron Pummer & William MacAskill - 2020 - International Encyclopedia of Ethics.
    In this entry, we discuss both the definition of effective altruism and objections to effective altruism, so defined.
  • Space Colonization and Existential Risk.Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the former claim, my primary goal in this article is to challenge the latter: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
  • Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
  • How Much is Rule-Consequentialism Really Willing to Give Up to Save the Future of Humanity?Patrick Kaczmarek - 2017 - Utilitas 29 (2):239-249.
    Brad Hooker argues that the cost of inculcating in everyone the 'prevent disaster' rule places a limit on its demandingness. My aim in this article is to show that this is not true of existential risk reduction. However, this does not spell trouble, for the reason that removing persistent global harms significantly improves our long-run chances of survival. We can expect things to get better, not worse, for our population.
  • Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  • Doing Less Than Best.Emma J. Curran - 2023 - Dissertation, University of Cambridge
    This thesis is about the moral reasons we have to do less than best. It consists of six chapters. Part I of the thesis proposes, extends, and defends reasons to do less than best. In Chapter One (“The Conditional Obligation”) I outline and reject two recent arguments from Joe Horton and Theron Pummer for the claim that we have a conditional obligation to bring about the most good. In Chapter Two (“Agglomeration and Agent-Relative Costs”) I argue that agent-relative costs can (...)
  • The Moral Consideration of Artificial Entities: A Literature Review.Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Aligning artificial intelligence with human values: reflections from a phenomenological perspective.Shengnan Han, Eugene Kelly, Shahrokh Nikou & Eric-Oluf Svee - 2022 - AI and Society 37 (4):1383-1395.
    Artificial Intelligence (AI) must be directed at humane ends. The development of AI has produced great uncertainties of ensuring AI alignment with human values (AI value alignment) through AI operations from design to use. For the purposes of addressing this problem, we adopt the phenomenological theories of material values and technological mediation to be that beginning step. In this paper, we first discuss the AI value alignment from the relevant AI studies. Second, we briefly present what are material values and (...)
  • Endangering humanity: an international crime?Catriona McKinnon - 2017 - Canadian Journal of Philosophy 47 (2-3):395-415.
    In the Anthropocene, human beings are capable of bringing about globally catastrophic outcomes that could damage conditions for present and future human life on Earth in unprecedented ways. This paper argues that the scale and severity of these dangers justifies a new international criminal offence of ‘postericide’ that would protect present and future people against wrongfully created dangers of near extinction. Postericide is committed by intentional or reckless systematic conduct that is fit to bring about near human extinction. The paper (...)
  • Big Historical Foundations for Deep Future Speculations: Cosmic Evolution, Atechnogenesis, and Technocultural Civilization.Cadell Last - 2017 - Foundations of Science 22 (1):39-124.
    Big historians are attempting to construct a general holistic narrative of human origins enabling an approach to studying the emergence of complexity, the relation between evolutionary processes, and the modern context of human experience and actions. In this paper I attempt to explore the past and future of cosmic evolution within a big historical foundation characterized by physical, biological, and cultural eras of change. From this analysis I offer a model of the human future that includes an addition and/or reinterpretation (...)
  • What to Enhance: Behaviour, Emotion or Disposition?Karim Jebari - 2014 - Neuroethics 7 (3):253-261.
    As we learn more about the human brain, novel biotechnological means to modulate human behaviour and emotional dispositions become possible. These technologies could be used to enhance our morality. Moral bioenhancement, an instance of human enhancement, alters a person’s dispositions, emotions or behaviour in order to make that person more moral. I will argue that moral bioenhancement could be carried out in three different ways. The first strategy, well known from science fiction, is behavioural enhancement. The second strategy, favoured by (...)
  • Moral Uncertainty About Population Axiology.Hilary Greaves & Toby Ord - 2017 - Journal of Ethics and Social Philosophy 12 (2):135-167.
    Given the deep disagreement surrounding population axiology, one should remain uncertain about which theory is best. However, this uncertainty need not leave one neutral about which acts are better or worse. We show that, as the number of lives at stake grows, the Expected Moral Value approach to axiological uncertainty systematically pushes one toward choosing the option preferred by the Total View and critical-level views, even if one’s credence in those theories is low.
  • Practical Ethics Given Moral Uncertainty.William MacAskill - 2019 - Utilitas 31 (3):231-245.
    A number of philosophers have claimed that we should take not just empirical uncertainty but also fundamental moral uncertainty into account in our decision-making, and that, despite widespread moral disagreement, doing so would allow us to draw robust lessons for some issues in practical ethics. In this article, I argue that, so far, the implications for practical ethics have been drawn too simplistically. First, the implications of moral uncertainty for normative ethics are far more wide-ranging than has been noted so (...)
  • Space colonization remains the only long-term option for humanity: A reply to Torres.Milan Ćirković - 2019 - Futures 105:166-173.
    Recent discussion of the alleged adverse consequences of space colonization by Phil Torres in this journal is critically assessed. While the concern for suffering risks should be part of any strategic discussion of the cosmic future of humanity, the Hobbesian picture painted by Torres is largely flawed and unpersuasive. Instead, there is a very real risk that the skeptical arguments will be taken too seriously and future human flourishing in space delayed or prevented.
  • Enhancing a Person, Enhancing a Civilization: A Research Program at the Intersection of Bioethics, Future Studies, and Astrobiology.Milan M. Ćirković - 2017 - Cambridge Quarterly of Healthcare Ethics 26 (3):459-468.
    There are manifold intriguing issues located within largely unexplored borderlands of bioethics, future studies, and astrobiology. Human enhancement has for quite some time been among the foci of bioethical debates, but the same cannot be said about its global, transgenerational, and even cosmological consequences. In recent years, discussions of posthuman and, in general terms, postbiological civilization have slowly gained a measure of academic respect, in parallel with the renewed interest in the entire field of future studies and the great strides (...)
  • Who's really afraid of AI?: Anthropocentric bias and postbiological evolution.Milan M. Ćirković - 2022 - Belgrade Philosophical Annual 35:17-29.
    The advent of artificial intelligence (AI) systems has provoked a lot of discussion in epistemological, bioethical, and risk-analytic terms, much of it rather paranoid in nature. Unless one takes an extreme anthropocentric and chronocentric stance, this process can be safely regarded as part and parcel of the sciences of the origin. In this contribution, I would like to suggest that at least four different classes of arguments could be brought forth against the proposition that AI - either human-level or (...)
  • Climate Change: Evidence of Human Causes and Arguments for Emissions Reduction.Seth D. Baum, Jacob D. Haqq-Misra & Chris Karmosky - 2012 - Science and Engineering Ethics 18 (2):393-410.
    In a recent editorial, Raymond Spier expresses skepticism over claims that climate change is driven by human actions and that humanity should act to avoid climate change. This paper responds to this skepticism as part of a broader review of the science and ethics of climate change. While much remains uncertain about the climate, research indicates that observed temperature increases are human-driven. Although opinions vary regarding what should be done, prominent arguments against action are based on dubious factual and ethical (...)
  • La importancia del futuro lejano: un examen de algunas de las principales objeciones al largoplacismo.Dayrón Terán Pintos - 2022 - Revista de Filosofía (Madrid):1-19.
    According to longtermism, the long-term effects of our actions are a crucial aspect of them. This is because the future, given its extension, will presumably contain most of the beings that will ever exist. There are, however, various objections that question the viability of the longtermist proposal, arguing that we would have reasons to prioritize the short term. These objections point to problems related to the representation of individuals who do not yet exist, the situation of (...)
  • Non-Additive Axiologies in Large Worlds.Christian Tarsney & Teruji Thomas - 2024 - Ergo: An Open Access Journal of Philosophy 11.
    Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction appears to be practically important: among other things, additive axiologies generally assign great importance to large changes in population size, and therefore tend to strongly prioritize the long-term survival of humanity over (...)
  • Saved by the Dark Forest: How a Multitude of Extraterrestrial Civilizations Can Prevent a Hobbesian Trap.Karim Jebari & Andrea S. Asker - 2024 - The Monist 107 (2):176-189.
    The possibility of extraterrestrial intelligence (ETI) exists despite no observed evidence, and the risks and benefits of actively searching for ETI (Active SETI) have been debated. Active SETI has been criticized for potentially exposing humanity to existential risk, and a recent game-theoretical model highlights the Hobbesian trap that could occur following contact if mutual distrust leads to mutual destruction. We argue that observing a nearby ETI would suggest the existence of many unobserved ETI. This would expand the game and implies (...)
  • Reconciliation between factions focused on near-term and long-term artificial intelligence.Seth D. Baum - 2018 - AI and Society 33 (4):565-572.
    Artificial intelligence experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes realignment to two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons and a “societalist faction” that seeks to develop AI (...)
  • Expanding Opportunity in the Anthropocene.Rasmus Karlsson - 2017 - Ethics, Policy and Environment 20 (3):240-242.
    The pre-modern world was one of gross inequalities and abject poverty. Yet, over the last two hundred years, social investments have unlocked the productive capacity and imagination of billions (Li...
  • Discourse analysis of academic debate of ethics for AGI.Ross Graham - 2022 - AI and Society 37 (4):1519-1532.
    Artificial general intelligence is a greatly anticipated technology with non-trivial existential risks, defined as machine intelligence with competence as great/greater than humans. To date, social scientists have dedicated little effort to the ethics of AGI or AGI researchers. This paper employs inductive discourse analysis of the academic literature of two intellectual groups writing on the ethics of AGI—applied and/or ‘basic’ scientific disciplines henceforth referred to as technicians (e.g., computer science, electrical engineering, physics), and philosophy-adjacent disciplines henceforth referred to as PADs (...)
  • Justifying the Precautionary Principle as a political principle.Lilian Bermejo-Luque & Javier Rodríguez-Alcázar - 2023 - Ethics in Science and Environmental Politics 23:7-22.
    Our aim is to defend the Precautionary Principle (PP) against the main theoretical and practical criticisms that it has raised by proposing a novel conception and a specific formulation of the principle. We first address the theoretical concerns against the idea of there being a principle of precaution by arguing for a distinctively political conception of the PP as opposed to a moral one. Our claim is that the rationale of the PP is grounded in the fact that contemporary societies (...)
  • Life-Suspending Technologies, Cryonics, and Catastrophic Risks.Andrea Sauchelli - 2024 - Science and Engineering Ethics 30 (37):1-16.
    I defend the claim that life-suspending technologies can constitute a catastrophic and existential security factor for risks structurally similar to those related to climate change. The gist of the argument is that, under certain conditions, life-suspending technologies such as cryonics can provide self-interested actors with incentives to efficiently tackle such risks—in particular, they provide reasons to overcome certain manifestations of generational egoism, a risk factor of several catastrophic and existential risks. Provided we have reasons to decrease catastrophic and existential risks (...)