References

  • The Weirdness of the World. Eric Schwitzgebel - 2024 - Princeton University Press.
    How all philosophical explanations of human consciousness and the fundamental structure of the cosmos are bizarre, and why that’s a good thing. Do we live inside a simulated reality or a pocket universe embedded in a larger structure about which we know virtually nothing? Is consciousness a purely physical matter, or might it require something extra, something nonphysical? According to the philosopher Eric Schwitzgebel, it’s hard to say. In The Weirdness of the World, Schwitzgebel argues that the answers to these fundamental (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2012 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them. After an introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • The brain as artificial intelligence: prospecting the frontiers of neuroscience. Steve Fuller - 2019 - AI and Society 34 (4):825-833.
    This article explores the proposition that the brain, normally seen as an organ of the human body, should be understood as a biologically based form of artificial intelligence, in the course of which the case is made for a new kind of ‘brain exceptionalism’. After noting that such a view was generally assumed by the founders of AI in the 1950s, the argument proceeds by drawing on the distinction between science—in this case neuroscience—adopting a ‘telescopic’ or a ‘microscopic’ orientation to (...)
  • Eight Kinds of Critters: A Moral Taxonomy for the Twenty-Second Century. Michael Bess - 2018 - Journal of Medicine and Philosophy 43 (5):585-612.
    Over the coming century, the accelerating advance of bioenhancement technologies, robotics, and artificial intelligence (AI) may significantly broaden the qualitative range of sentient and intelligent beings. This article proposes a taxonomy of such beings, ranging from modified animals to bioenhanced humans to advanced forms of robots and AI. It divides these diverse beings into three moral and legal categories—animals, persons, and presumed persons—describing the moral attributes and legal rights of each category. In so doing, the article sets forth a framework (...)
  • Reconciliation between factions focused on near-term and long-term artificial intelligence. Seth D. Baum - 2018 - AI and Society 33 (4):565-572.
    Artificial intelligence experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes realignment to two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons and a “societalist” faction that seeks to develop AI (...)
  • The Moral Case for Long-Term Thinking. Hilary Greaves, William MacAskill & Elliott Thornley - forthcoming - In Natalie Cargill & Tyler M. John (eds.), The Long View: Essays on Policy, Philanthropy, and the Long-Term Future. London: FIRST. pp. 19-28.
    This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does not depend on (...)
  • The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk. David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to a complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility, which can function as a (...)
  • What is the upper limit of value? David Manheim & Anders Sandberg - manuscript
    How much value can our decisions create? We argue that unless our current understanding of physics is wrong in fairly fundamental ways, there exists an upper limit of value relevant to our decisions. First, due to the speed of light and the definition and conception of economic growth, the limit to economic growth is a restrictive one. Additionally, a related, far larger but still finite limit exists for value in a much broader sense, due to the physics of information and (...)
  • The grateful Un-dead? Philosophical and Social Implications of Mind-Uploading. Ivan William Kelly - manuscript
    The popular belief that our mind either depends on or (in stronger terms) is identical with brain functions and processes, along with the belief that advances in virtual-reality technology and computing will continue, has contributed to the contention that one day (perhaps this century) it may be possible to transfer one’s mind (or a simulated copy) into another body (physical or virtual). This is called mind-uploading or whole brain emulation. This paper serves as an introduction to the area and (...)
  • Long-Term Trajectories of Human Civilization. Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
  • Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are comparable to risks of extinction in both severity and probability. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  • Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)