
References in:

Future progress in artificial intelligence: A survey of expert opinion

In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571 (2016)

  • What Computers Can’t Do: A Critique of Artificial Reason. Hubert L. Dreyfus - 2014 - In Bernard Williams (ed.), Essays and Reviews: 1959-2002. Princeton: Princeton University Press. pp. 90-100. (74 citations)
  • The Singularity is Near: When Humans Transcend Biology. Ray Kurzweil - 2005 - Viking Press. (291 citations)
    A controversial scientific vision predicts a time in which humans and machines will merge and create a new form of non-biological intelligence, explaining how the occurrence will solve such issues as pollution, hunger, and aging.
  • Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI). (3 citations)
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342 - (...)
  • Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume). Vincent C. Müller (ed.) - 2012 - Springer. (2 citations)
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
  • Is there a future for AI without representation? Vincent C. Müller - 2007 - Minds and Machines 17 (1):101-115. (8 citations)
    This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents - which, however, turns out to be crucial. Some of the more recent cognitive science suggests that we might do well to dispose of (...)
  • A History of First Step Fallacies. Hubert L. Dreyfus - 2012 - Minds and Machines 22 (2):87-99. (6 citations)
    In the 1960s, without realizing it, AI researchers were hard at work finding the features, rules, and representations needed for turning rationalist philosophy into a research program, and by so doing AI researchers condemned their enterprise to failure. About the same time, a logician, Yehoshua Bar-Hillel, pointed out that AI optimism was based on what he called the “first step fallacy”. First step thinking has the idea of a successful last step built in. Limited early success, however, is not a (...)
  • Superintelligence: Paths, Dangers, Strategies. Nick Bostrom - 2014 - Oxford University Press. (288 citations)
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of (...)
  • Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301. (3 citations)
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
  • How long before superintelligence? Nick Bostrom - 1998 - International Journal of Futures Studies 2. (34 citations)
    This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; (...)