  • Superintelligence as Superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0. New York, USA: Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  • Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Nick Bostrom - unknown.
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from (...)
  • Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • Leakproofing the Singularity. Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the ‘leakproofing’ of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence.
  • The Singularity Beyond Philosophy of Mind. Eric Steinhart - 2012 - Journal of Consciousness Studies 19 (7-8).
    Thought about the singularity intersects the philosophy of mind in deep and important ways. However, thought about the singularity also intersects many other areas of philosophy, including the history of philosophy, metaphysics, the philosophy of science, and the philosophy of religion. I point to some of those intersections. Singularitarian thought suggests that many of the objects and processes that once lay in the domain of revealed religion now lie in the domain of pure computer science.
  • The Singularity: A Philosophical Analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10).
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
  • Self-Improving AI: An Analysis. [REVIEW] John Storrs Hall - 2007 - Minds and Machines 17 (3):249-259.
    Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Computing Machinery and Morality. Blay Whitby - 2008 - AI and Society 22 (4):551-563.
    Artificial Intelligence (AI) is a technology widely used to support human decision-making. Current areas of application include financial services, engineering, and management. A number of attempts to introduce AI decision support systems into areas which more obviously include moral judgement have been made. These include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals. Responding to these developments raises a complex set of moral questions. This paper proposes a clearer replacement question (...)