References
  • Superintelligence: Paths, Dangers, Strategies. Nick Bostrom - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of (...)
    307 citations
  • The Singularity: A Philosophical Analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
    118 citations
  • Staring into the Singularity. Eliezer Yudkowsky - manuscript.
    Contents: 1: The End of History; 2: The Beyondness of the Singularity; 2.1: The Definition of Smartness; 2.2: Perceptual Transcends; 2.3: Great Big Numbers; 2.4: Smarter Than We Are; 3: Sooner Than You Think; 4: Uploading; 5: The Interim Meaning of Life; 6: Getting to the Singularity.
    5 citations
  • Nine Ways to Bias Open-Source AGI Toward Friendliness. Ben Goertzel & Joel Pitt - 2011 - Journal of Evolution and Technology 22 (1):116-131.
    While it seems unlikely that any method of guaranteeing human-friendliness on the part of advanced Artificial General Intelligence systems will be possible, this doesn’t mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not (...)
    8 citations
  • Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood? Ben Goertzel - 2012 - Journal of Consciousness Studies 19 (1-2):96.
    Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e., at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity eternally, or to delay the Singularity until humanity more (...)
    7 citations