  • Superintelligence: Paths, Dangers, Strategies. Nick Bostrom - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of (...)
  • The abolitionist project. David Pearce - manuscript.
  • Facing up to the problem of consciousness. David Chalmers - 1995 - Journal of Consciousness Studies 2 (3):200-19.
    To make progress on the problem of consciousness, we have to confront it directly. In this paper, I first isolate the truly hard part of the problem, separating it from more tractable parts and giving an account of why it is so difficult to explain. I critique some recent work that uses reductive methods to address consciousness, and argue that such methods inevitably fail to come to grips with the hardest part of the problem. Once this failure is recognized, the (...)
  • Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of comparable severity and probability to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  • Facing up to the problem of consciousness. David J. Chalmers - 1996 - Toward a Science of Consciousness: 5-28.
  • Artificial Intelligence as a Positive and Negative Factor in Global Risk. Eliezer Yudkowsky - 2008 - In Nick Bostrom & Milan M. Cirkovic (eds.), Global Catastrophic Risks. Oxford University Press. pp. 308-345.
  • Why I Want to be a Posthuman When I Grow Up. Nick Bostrom - 2013 - In Max More & Natasha Vita-More (eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. Chichester, West Sussex, UK: Wiley-Blackwell. pp. 28-53.
    The term “posthuman” has been used in very different senses by different authors. I am sympathetic to the view that the word often causes more confusion than clarity, and that we might be better off replacing it with some alternative vocabulary.