  • Transhumanism and Marxism: Philosophical Connections.James Steinhoff - 2014 - Journal of Evolution and Technology 24 (2):1-16.
    There exists a real dearth of literature available to Anglophones dealing with philosophical connections between transhumanism and Marxism. This is surprising, given the existence of works on just this relation in the other major European languages and the fact that 47 per cent of people surveyed in the 2007 Interests and Beliefs Survey of the Members of the World Transhumanist Association identified as “left,” though not strictly Marxist (Hughes 2008). Rather than seeking to explain this dearth here, I aim to (...)
  • Global technology regulation and potentially apocalyptic technological threats.James J. Hughes - 2007 - In Fritz Allhoff, Patrick Lin, James Moor, John Weckert & Mihail C. Roco (eds.), Nanoethics: The Ethical and Social Implications of Nanotechnology. Wiley. pp. 201-214.
    In 2000 Bill Joy proposed that the best way to prevent technological apocalypse was to "relinquish" emerging bio-, info- and nanotechnologies. His essay introduced many watchdog groups to the dangers that futurists had been warning of for decades. One such group, ETC, has called for a moratorium on all nanotechnological research until all safety issues can be investigated and social impacts ameliorated. In this essay I discuss the differences and similarities of regulating bio- and nanotechnological innovation to the efforts to (...)
  • Swarms Are Hell: Warfare as an Anti-Transhuman Choice.Woody Evans - 2013 - Journal of Evolution and Technology 23 (1):56-60.
    The use of advanced technologies, even of so-called transhuman technology, does not make militaries transhuman. Transhumanism includes dimensions of ethics that are themselves in direct conflict with many transhuman capabilities of soldiers in warfare. The use of advanced weapons of mass destruction represents an anti-humanism that undermines the modern, open, and high-tech nation state.
  • Thinking inside the box: Using and controlling an oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - forthcoming - Minds and Machines.
  • Advantages of artificial intelligences, uploads, and digital minds.Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (1):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
  • Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • A history of transhumanist thought.Nick Bostrom - 2005 - Journal of Evolution and Technology 14 (1):1-25.
    The human desire to acquire new capacities is as ancient as our species itself. We have always sought to expand the boundaries of our existence, be it socially, geographically, or mentally. There is a tendency in at least some individuals always to search for a way around every obstacle and limitation to human life and happiness.
  • The problems with forbidding science.Gary E. Marchant & Lynda L. Pope - 2009 - Science and Engineering Ethics 15 (3):375-394.
    Scientific research is subject to a number of regulations which impose incidental (time, place), rather than substantive (type of research), restrictions on scientific research and the knowledge created through such research. In recent years, however, the premise that scientific research and knowledge should be free from substantive regulation has increasingly been called into question. Some have suggested that the law should be used as a tool to substantively restrict research which is dual-use in nature or which raises moral objections. There (...)
  • Mutual Aid as Effective Altruism.Ricky Mouser - 2023 - Kennedy Institute of Ethics Journal 33 (2):201-226.
    Effective altruism has a strategy problem. Overreliance on a strategy of donating to the most effective charities keeps us on the firefighter's treadmill, continually pursuing the next-highest quantifiable marginal gain. But on its own, this is politically shortsighted. Without any long-term framework within which these individual rescues fit together to bring about the greatest overall impact, we are almost certainly leaving a lot of value on the table. Thus, effective altruists' preferred means undercut their professed aims. Alongside the charity framework, (...)
  • Existentialist risk and value misalignment.Ariela Tubert & Justin Tiehen - forthcoming - Philosophical Studies.
    We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including transforming what (...)
  • The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies.Alexey Turchin & Justin Shovelain - manuscript
    This article presents a model of the change in the probability of global catastrophic risks in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least double exponential growth (...)
  • Existential Risk, Climate Change, and Nonideal Justice.Alex McLaughlin - 2024 - The Monist 107 (2):190-206.
    Climate change is often described as an existential risk to the human species, but this terminology has generally been avoided in the climate-justice literature in analytic philosophy. I investigate the source of this disconnect and explore the prospects for incorporating the idea of climate change as an existential risk into debates about climate justice. The concept of existential risk does not feature prominently in these discussions, I suggest, because assumptions that structure ‘ideal’ accounts of climate justice ensure that the prospect (...)
  • ‘Humanity’: Constitution, Value, and Extinction.Elizabeth Finneron-Burns - 2024 - The Monist 107 (2):99-108.
    When discussing the extinction of humanity, there does not seem to be any clear agreement about what ‘humanity’ really means. One aim of this paper is to show that it is a more slippery concept than it might at first seem. A second aim is to show the relationship between what constitutes or defines humanity and what gives it value. Often, whether and how we ought to prevent human extinction depends on what we take humanity to mean, which in turn (...)
  • Uses and Abuses of AI Ethics.Lily E. Frank & Michal Klincewicz - 2024 - In David J. Gunkel (ed.), Handbook on the Ethics of Artificial Intelligence. Edward Elgar Publishing.
    In this chapter we take stock of some of the complexities of the sprawling field of AI ethics. We consider questions like "what is the proper scope of AI ethics?" and "who counts as an AI ethicist?" At the same time, we flag several potential uses and abuses of AI ethics. These include challenges for the AI ethicist, including what qualifications they should have; the proper place and extent of futuring and speculation in the field; and the dilemmas concerning how (...)
  • Philosophical Examinations of the Anthropocene.Richard Sťahel (ed.) - 2023 - Bratislava: Institute of Philosophy, Slovak Academy of Sciences, v. v. i.
  • Mapping the potential AI-driven virtual hyper-personalised ikigai universe.Soenke Ziesche & Roman Yampolskiy - manuscript
    Ikigai is a Japanese concept, which, in brief, refers to the “reason or purpose to live”. I-risks have been identified as a category of risks complementing x-risks, i.e., existential risks, and s-risks, i.e., suffering risks, which describes undesirable future scenarios in which humans are deprived of the pursuit of their individual ikigai. While some developments in AI increase i-risks, there are also AI-driven virtual opportunities, which reduce i-risks by increasing the space of potential ikigais, largely due to developments in (...)
  • The Concept of Extinction: Epistemology, Responsibility, and Precaution.Fenner Stanley Tanswell - 2024 - Ethics, Policy and Environment 27 (2):205-226.
    Extinction is a concept of rapidly growing importance, with the world currently in the sixth mass extinction event and a biodiversity crisis. However, the concept of extinction has itself received surprisingly little attention from philosophers. I will first argue that in practice there is no single unified concept of extinction, but instead that its usage divides between descriptive, epistemic, and declarative concepts. I will then consider the epistemic challenges that arise in ascertaining whether a species has gone extinct, and how (...)
  • Are universal ethics necessary? And possible? A systematic theory of universal ethics and a code for global moral education.Enno A. Winkler - 2022 - SN Social Sciences 2.
    This paper analyzes the political, philosophical, societal, legal, educational, biological, psychological and technological reasons why there is an urgent need for basic intercultural and interfaith ethics in the world and whether it is possible to formulate a valid code of such ethics. It is shown that universal ethics could be founded on natural law, which can be understood in both religious and secular ways. Alternatively, universal ethics could be based on a single supreme principle that is independent of worldview and (...)
  • COVID-19 and Singularity: Can the Philippines Survive Another Existential Threat?Robert James M. Boyles, Mark Anthony Dacela, Tyrone Renzo Evangelista & Jon Carlos Rodriguez - 2022 - Asia-Pacific Social Science Review 22 (2):181–195.
    In general, existential threats are those that may potentially result in the extinction of the entire human species, if not significantly endanger its living population. These threats include, but are not limited to, pandemics and the impacts of a technological singularity. As regards pandemics, significant work has already been done on how to mitigate, if not prevent, the aftereffects of this type of disaster. For one, certain problem areas on how to properly manage pandemic responses have already been identified, (...)
  • Transhumanism and Personal Identity.James Hughes - 2013 - In Max More & Natasha Vita-More (eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. Chichester, West Sussex, UK: Wiley-Blackwell. pp. 227-234.
    Enlightenment values are built around the presumption of an independent rational self, citizen, consumer and pursuer of self-interest. Even the authoritarian and communitarian variants of the Enlightenment presumed the existence of autonomous individuals, simply arguing for greater weight to be given to their collective interests. Since Hume, however, radical Enlightenment empiricists have called into question the existence of a discrete, persistent self. Today neuroscientific reductionism has contributed to the rejection of an essentialist model of personal identity. Contemporary transhumanism has yet (...)
  • Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  • Towards a Theory of Posthuman Care: Real Humans and Caring Robots.Amelia DeFalco - 2020 - Body and Society 26 (3):31-60.
    This essay interrogates the common assumption that good care is necessarily human care. It looks to disruptive fictional representations of robot care to assist its development of a theory of posthuman care that jettisons the implied anthropocentrism of ethics of care philosophy but retains care’s foregrounding of entanglement, embodiment and obligation. The essay reads speculative representations of robot care, particularly the Swedish television programme Äkta människor (Real Humans), alongside ethics of care philosophy and critical posthumanism to highlight their synergetic critiques (...)
  • On the Logical Impossibility of Solving the Control Problem.Caleb Rudnick - manuscript
    In the philosophy of artificial intelligence (AI) we are often warned of machines built with the best possible intentions, killing everyone on the planet and in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we’re ever to live in that utopia (or just avoid dystopia) it’s necessary we solve the control problem. The control problem asks how humans (...)
  • Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What does Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed and (...)
  • Fully Autonomous AI.Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
  • Risk management standards and the active management of malicious intent in artificial superintelligence.Patrick Bradley - 2020 - AI and Society 35 (2):319-328.
    The likely near future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on risk management tools used in artificial intelligence development, the current global standard for risk management, (...)
  • Can a Robot Pursue the Good? Exploring Artificial Moral Agency.Amy Michelle DeBaets - 2014 - Journal of Evolution and Technology 24 (3):76-86.
    In this essay I will explore an understanding of the potential moral agency of robots, arguing that the key characteristics of physical embodiment, adaptive learning, empathy in action, and a teleology toward the good are the primary necessary components for a machine to become a moral agent. In this context, other possible options will be rejected as necessary for moral agency, including simplistic notions of intelligence, computational power, and rule-following, complete freedom, a sense of God, and an immaterial soul. I (...)
  • Extraterrestrial artificial intelligences and humanity's cosmic future: Answering the Fermi paradox through the construction of a Bracewell-Von Neumann AGI.Tomislav Miletić - 2015 - Journal of Evolution and Technology 25 (1):56-73.
    A probable solution of the Fermi paradox, and a necessary step in humanity’s cosmic development, is the construction of a Bracewell-Von Neumann Artificial General Intelligence. The use of BN probes is the most plausible method of initial galactic exploration and communication for advanced ET civilizations, and our own cosmic evolution lies firmly in the utilization of, and cooperation with, AGI agents. To establish these claims, I explore the most credible developmental path from carbon-based life forms to planetary civilizations and AI (...)
  • Agential Risks: A Comprehensive Introduction.Phil Torres - 2016 - Journal of Evolution and Technology 26 (2):31-47.
    The greatest existential threats to humanity stem from increasingly powerful advanced technologies. Yet the “risk potential” of such tools can only be realized when coupled with a suitable agent who, through error or terror, could use the tool to bring about an existential catastrophe. While the existential risk literature has provided many accounts of how advanced technologies might be misused and abused to cause unprecedented harm, no scholar has yet explored the other half of the agent-tool coupling, namely the agent. (...)
  • The Vulnerable World Hypothesis.Nick Bostrom - 2018
    Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: (...)
  • The Termination Risks of Simulation Science.Preston Greene - 2020 - Erkenntnis 85 (2):489-509.
    Historically, the hypothesis that our world is a computer simulation has struck many as just another improbable-but-possible “skeptical hypothesis” about the nature of reality. Recently, however, the simulation hypothesis has received significant attention from philosophers, physicists, and the popular press. This is due to the discovery of an epistemic dependency: If we believe that our civilization will one day run many simulations concerning its ancestry, then we should believe that we are probably in an ancestor simulation right now. This essay (...)
  • Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
  • Aquatic refuges for surviving a global catastrophe.Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are (...)
  • Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  • From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence.Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
  • The politics of transhumanism and the techno‐millennial imagination, 1626–2030.James J. Hughes - 2012 - Zygon 47 (4):757-776.
    Transhumanism is a modern expression of ancient and transcultural aspirations to radically transform human existence, socially and bodily. Before the Enlightenment these aspirations were only expressed in religious millennialism, magical medicine, and spiritual practices. The Enlightenment channeled these desires into projects to use science and technology to improve health, longevity, and human abilities, and to use reason to revolutionize society. Since the Enlightenment, techno‐utopian movements have dynamically interacted with supernaturalist millennialism, sometimes syncretically, and often in violent opposition. Today the transhumanist (...)
  • Astronomical Waste: The Opportunity Cost of Delayed Technological Development.Nick Bostrom - 2003 - Utilitas 15 (3):308-314.
    With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for standard utilitarians is not that we ought to maximize the pace of technological (...)
  • Anthropic shadow: observation selection effects and human extinction risks.Milan M. Ćirković, Anders Sandberg & Nick Bostrom - unknown
    We describe a significant practical consequence of taking anthropic biases into account in deriving predictions for rare stochastic catastrophic events. The risks associated with catastrophes such as asteroidal/cometary impacts, supervolcanic episodes, and explosions of supernovae/gamma-ray bursts are based on their observed frequencies. As a result, the frequencies of catastrophes that destroy or are otherwise incompatible with the existence of observers are systematically underestimated. We describe the consequences of the anthropic bias for estimation of catastrophic risks, and suggest some directions for (...)
  • Künstliche Intelligenz: Chancen und Risiken.Adriano Mannino, David Althaus, Jonathan Erhardt, Lukas Gloor, Adrian Hutter & Thomas Metzinger - 2015 - Diskussionspapiere der Stiftung für Effektiven Altruismus 2:1-17.
    Google's acquisition of the AI company DeepMind for around half a billion US dollars signaled a year ago that promising results are expected from AI research. At the latest since well-known scientists such as Stephen Hawking and entrepreneurs such as Elon Musk and Bill Gates began warning that artificial intelligence poses a threat to humanity, the topic of AI has been making waves. The Effective Altruism Foundation (EAS, formerly GBS Switzerland), with the support of experts in computer science and AI, has produced a comprehensive discussion paper on the opportunities (...)
  • Would Human Extinction Be Morally Wrong?Franco Palazzi - 2014 - Philosophia 42 (4):1063-1084.
    This article casts light on the moral implications of the possibility of human extinction, with a specific focus on extinction caused by an interruption in human reproduction. In the first two paragraphs, I show that moral philosophy has not yet given promising explanations for the wrongness of this kind of extinction. Specifically, the second paragraph contains a detailed rejection of John Leslie’s main claims on the morality of extinction. In the third paragraph, I offer a demonstration of the fact that (...)
  • The Hermeneutic Challenge of Genetic Engineering: Habermas and the Transhumanists.Andrew Edgar - 2009 - Medicine, Health Care and Philosophy 12 (2):157-167.
    The purpose of this paper is to explore the impact that developments in transhumanist technologies may have upon human cultures, and to do so by exploring a potential debate between Habermas and the transhumanists. Transhumanists, such as Nick Bostrom, typically see the potential in genetic and other technologies for positively expanding and transcending human nature. In contrast, Habermas is a representative of those who are fearful of this technology, suggesting that it will compound the deleterious effects of the colonisation of (...)
  • Human Extinction and AI: What We Can Learn from the Ultimate Threat.Andrea Lavazza & Murilo Vilaça - 2024 - Philosophy and Technology 37 (1):1-21.
    Human extinction is something generally deemed as undesirable, although some scholars view it as a potential solution to the problems of the Earth since it would reduce the moral evil and the suffering that are brought about by humans. We contend that humans collectively have absolute intrinsic value as sentient, conscious and rational entities, and we should preserve them from extinction. However, severe threats, such as climate change and incurable viruses, might push humanity to the brink of extinction. Should that (...)
  • Why Confucianism Matters in Ethics of Technology.Pak-Hang Wong - 2020 - In Shannon Vallor (ed.), The Oxford Handbook of Philosophy of Technology. New York, NY: Oxford University Press, USA.
    There are a number of recent attempts to introduce Confucian values to the ethical analysis of technology. These works, however, have not attended sufficiently to one central aspect of Confucianism, namely Ritual (‘Li’). Li is central to Confucian ethics, and it has been suggested that the emphasis on Li in Confucian ethics is what distinguishes it from other ethical traditions. Any discussion of Confucian ethics for technology, therefore, remains incomplete without accounting for Li. This chapter aims to elaborate on the (...)
  • The Philosophical Core of Effective Altruism.Brian Berkey - 2021 - Journal of Social Philosophy 52 (1):93-115.
    Effective altruism’s identity as both a philosophy and a social movement requires effective altruists to consider which philosophical commitments are essential, such that one must embrace them in order to count as an effective altruist, at least in part in the light of the goal of building a robust social movement capable of advancing its aims. The goal of building a social movement provides a strong reason for effective altruists to embrace an ecumenical set of core commitments. At the same (...)
  • The role of experts in the public perception of risk of artificial intelligence.Hugo Neri & Fabio Cozman - 2020 - AI and Society 35 (3):663-673.
    The goal of this paper is to describe the mechanism of the public perception of risk of artificial intelligence. For that we apply the social amplification of risk framework to the public perception of artificial intelligence using data collected from Twitter from 2007 to 2018. We analyzed when and how there appeared a significant representation of the association between risk and artificial intelligence in the public awareness of artificial intelligence. A significant finding is that the image of the risk of (...)
  • Whose Survival? A Critical Engagement with the Notion of Existential Risk.Philip Højme - 2019 - Scientia et Fides 7 (2):63-76.
    This paper provides a critique of Bostrom’s concern with existential risks, a critique which relies on Adorno and Horkheimer’s interpretation of the Enlightenment. Their interpretation is used to elicit the inner contradictions of transhumanist thought and to show the invalid premises on which it is based. By first outlining Bostrom’s position this paper argues that transhumanism reverts to myth in its attempt to surpass the human condition. Bostrom’s argument is based on three pillars, Maxipok, Parfitian population ethics and a universal (...)
  • The problem of superintelligence: political, not technological.Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  • Auto-Catastrophic Theory: the necessity of self-destruction for the formation, survival, and termination of systems.Marilena Kyriakidou - 2016 - AI and Society 31 (2):191-200.
    Systems evolve in order to adjust and survive. The paper’s contribution is that this evolvement is inadequate without an evolutionary telos. It is argued that without the presence of self-destruction in multiple levels of our existence and surroundings, our survival would have been impossible. This paper recognises an appreciation of auto-catastrophe at the cell level, in human attitudes (both as an individual and in societies), and extended to Earth and out to galaxies. Auto-Catastrophic Theory combines evolution with auto-catastrophic behaviours and (...)