References
  • The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
  • Superintelligence: Paths, Dangers, Strategies. Nick Bostrom - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of (...)
  • The Tragedy of the Commons. Garrett Hardin - 1968 - Science 162 (3859):1243-1248.
    At the end of a thoughtful article on the future of nuclear war, Wiesner and York concluded: "Both sides in the arms race are... confronted by the dilemma of steadily increasing military power and steadily decreasing national security. It is our considered professional judgment that this dilemma has no technical solution. If the great powers continue to look for solutions in the area of science and technology only, the result will be to worsen the situation."
  • (1 other version) The Singularity: A Philosophical Analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
  • Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • (1 other version) The Singularity: A Philosophical Analysis. David J. Chalmers - 2016 - In Uzi Awret (ed.), The Singularity: Could Artificial Intelligence Really Out-Think Us? Imprint Academic. pp. 12-88.
  • The Unilateralist’s Curse and the Case for a Principle of Conformity. Nick Bostrom, Thomas Douglas & Anders Sandberg - 2016 - Social Epistemology 30 (4):350-371.
    In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s (...)
  • Speculations Concerning the First Ultraintelligent Machine. I. J. Good - 1965 - In F. Alt & M. Rubinoff (eds.), Advances in Computers, Volume 6. Academic Press.
  • Intelligence Explosion: Evidence and Import. Luke Muehlhauser & Anna Salamon - 2012 - In Amnon H. Eden & James H. Moor (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer. pp. 15-40.