
Citations of:

Artificial Intelligence as a Positive and Negative Factor in Global Risk

Eliezer Yudkowsky, in Nick Bostrom & Milan M. Cirkovic (eds.), Global Catastrophic Risks. Oxford University Press. pp. 308-345 (2008)

  • The singularity: A philosophical analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
  • Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
  • Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition. Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (2):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated (...)
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
  • Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • Artificial Intelligence and the Body: Dreyfus, Bickhard, and the Future of AI. Daniel Susser - 2013 - In Vincent Müller (ed.), Philosophy and Theory of Artificial Intelligence. Springer. pp. 277-287.
    For those who find Dreyfus’s critique of AI compelling, the prospects for producing true artificial human intelligence are bleak. An important question thus becomes, what are the prospects for producing artificial non-human intelligence? Applying Dreyfus’s work to this question is difficult, however, because his work is so thoroughly human-centered. Granting Dreyfus that the body is fundamental to intelligence, how are we to conceive of non-human bodies? In this paper, I argue that bringing Dreyfus’s work into conversation with the work of (...)
  • Classification of Approaches to Technological Resurrection. Alexey Turchin & Maxim Chernyakov - manuscript
    Death seems to be a permanent event, but there is no actual proof of its irreversibility. Here we list all known ways to resurrect the dead that do not contradict our current scientific understanding of the world. While no method is currently possible, many of those listed here may become feasible with future technological development, and it may even be possible to act now to increase their probability. The most well-known such approach to technological resurrection is cryonics. Another method (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • Aquatic refuges for surviving a global catastrophe. Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear submarines are (...)
  • Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  • From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence. Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
  • Künstliche Intelligenz: Chancen und Risiken [Artificial Intelligence: Opportunities and Risks]. Adriano Mannino, David Althaus, Jonathan Erhardt, Lukas Gloor, Adrian Hutter & Thomas Metzinger - 2015 - Diskussionspapiere der Stiftung für Effektiven Altruismus 2:1-17.
    Google's acquisition of the AI company DeepMind for around half a billion US dollars signalled a year ago that promising results are expected from AI research. At the latest since well-known scientists such as Stephen Hawking and entrepreneurs such as Elon Musk and Bill Gates began warning that artificial intelligence poses a threat to humanity, the topic of AI has been making waves. With the support of experts in computer science and AI, the Effective Altruism Foundation (EAS, formerly GBS Switzerland) has produced a comprehensive discussion paper on the opportunities (...)
  • Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
  • Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • 4.5 AI (Artificial Intelligence) and Ultimate Reality. Anthony Cristiano - 2020 - Ultimate Reality and Meaning 36 (3-4):127-143.
  • Problems with "Friendly AI". Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI with Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • Fully Autonomous AI. Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Echoes of myth and magic in the language of Artificial Intelligence. Roberto Musa Giuliano - 2020 - AI and Society 35 (4):1009-1024.
    To a greater extent than in other technical domains, research and progress in Artificial Intelligence has always been entwined with the fictional. Its language echoes strongly with other forms of cultural narratives, such as fairytales, myth and religion. In this essay we present varied examples that illustrate how these analogies have guided not only readings of the AI enterprise by commentators outside the community but also inspired AI researchers themselves. Owing to their influence, we pay particular attention to the similarities (...)
  • The race for an artificial general intelligence: implications for public policy. Wim Naudé & Nicola Dimitri - 2020 - AI and Society 35 (2):367-379.
    An arms race for an artificial general intelligence would be detrimental for and even pose an existential threat to humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Thus, given the difficulty of AGI, the number of competing teams is unlikely (...)
  • Why we need friendly AI. Luke Muehlhauser & Nick Bostrom - 2014 - Think 13 (36):41-47.
    Humans will not always be the most intelligent agents on Earth, the ones steering the future. What will happen to us when we no longer play that role, and how can we prepare for this transition?
  • A democratic way of controlling artificial general intelligence. Jussi Salmi - forthcoming - AI and Society:1-7.
    The problem of controlling an artificial general intelligence has fascinated both scientists and science-fiction writers for centuries. Today that problem is becoming more important because the time when we may have a superhuman intelligence among us is within the foreseeable future. Current average estimates place that moment to before 2060. Some estimates place it as early as 2040, which is quite soon. The arrival of the first AGI might lead to a series of events that we have not seen before: (...)
  • Corporate Agency and Possible Futures. Tim Mulgan - 2018 - Journal of Business Ethics 154 (4):901-916.
    We need an account of corporate agency that is temporally robust – one that will help future people to cope with challenges posed by corporate groups in a range of credible futures. In particular, we need to bequeath moral resources that enable future people to avoid futures dominated by corporate groups that have no regard for human beings. This paper asks how future philosophers living in broken or digital futures might re-imagine contemporary debates about corporate agency. It argues that the (...)
  • Making the black box society transparent. Daniel Innerarity - 2021 - AI and Society 36 (3):975-981.
    The growing presence of smart devices in our lives turns all of society into something largely unknown to us. The strategy of demanding transparency stems from the desire to reduce the ignorance to which this automated society seems to condemn us. An evaluation of this strategy first requires that we distinguish the different types of non-transparency. Once we reveal the limits of the transparency needed to confront these devices, the article examines the alternative strategy of explainable artificial intelligence and concludes (...)
  • Democratizing cognitive technology: a proactive approach. Marcello Ienca - 2019 - Ethics and Information Technology 21 (4):267-280.
    Cognitive technology is an umbrella term sometimes used to designate the realm of technologies that assist, augment or simulate cognitive processes or that can be used for the achievement of cognitive aims. This technological macro-domain encompasses both devices that directly interface the human brain as well as external systems that use artificial intelligence to simulate or assist (aspects of) human cognition. As they hold the promise of assisting and augmenting human cognitive capabilities both individually and collectively, cognitive technologies could produce, (...)
  • Who Should Die? The Ethics of Killing in War. Ryan Jenkins & Bradley Strawser (eds.) - 2017 - New York: Oxford University Press.
    This volume collects influential and groundbreaking philosophical work on killing in war. A "who's who" of contemporary scholars, this volume serves as a convenient and authoritative collection uniquely suited for university-level teaching and as a reference for ethicists, policymakers, stakeholders, and any student of the morality of war.
  • The problem of superintelligence: political, not technological. Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  • Risk management standards and the active management of malicious intent in artificial superintelligence. Patrick Bradley - 2020 - AI and Society 35 (2):319-328.
    The likely near future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on risk management tools used in artificial intelligence development, the current global standard for risk management, (...)
  • Synthesizing Methuselah: The Question of Artificial Agelessness. Richard B. Gibson - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):60-75.
    As biological organisms, we age and, eventually, die. However, age’s deteriorating effects may not be universal. Some theoretical entities, due to their synthetic composition, could exist independently from aging—artificial general intelligence (AGI). With adequate resource access, an AGI could theoretically be ageless and would be, in some sense, immortal. Yet, this need not be inevitable. Designers could imbue AGIs with artificial mortality via an internal shut-off point. The question, though, is, should they? Should researchers curtail an AGI’s potentially endless lifespan (...)
  • Discourse analysis of academic debate of ethics for AGI. Ross Graham - 2022 - AI and Society 37 (4):1519-1532.
    Artificial general intelligence is a greatly anticipated technology with non-trivial existential risks, defined as machine intelligence with competence as great/greater than humans. To date, social scientists have dedicated little effort to the ethics of AGI or AGI researchers. This paper employs inductive discourse analysis of the academic literature of two intellectual groups writing on the ethics of AGI—applied and/or ‘basic’ scientific disciplines henceforth referred to as technicians (e.g., computer science, electrical engineering, physics), and philosophy-adjacent disciplines henceforth referred to as PADs (...)
  • Do No Harm Policy for Minds in Other Substrates. Soenke Ziesche & Roman V. Yampolskiy - 2019 - Journal of Ethics and Emerging Technologies 29 (2):1-11.
    Various authors have argued that in the future not only will it be technically feasible for human minds to be transferred to other substrates, but this will become, for most humans, the preferred option over the current biological limitations. It has even been claimed that such a scenario is inevitable in order to solve the challenging, but imperative, multi-agent value alignment problem. In all these considerations, it has been overlooked that, in order to create a suitable environment for a particular (...)
  • Why AI shall emerge in the one of possible worlds? Ignacy Sitnicki - 2019 - AI and Society 34 (2):365-371.
    The aim of this paper is to present some philosophical considerations about the supposed AI emergence in the future. However, the predicted timeline of this process is uncertain. To avoid any kind of speculations on the proposed analysis from a scientific point of view, a metaphysical approach is undertaken as a modal context of the discussion. I argue that modal claim about possible AI emergence at a certain point of time in the future is justified from a temporal perspective. Therefore, (...)
  • Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence. Elias G. Carayannis & John Draper - 2023 - AI and Society 38 (6):2679-2692.
    This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation–state to war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war or even war on each other, in part due to the 1945 UN Charter, which states Member States should “refrain (...)
  • Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. Phil Torres - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)