
Citations of:

Intelligence Explosion: Evidence and Import

In Amnon H. Eden & James H. Moor (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer. pp. 15-40 (2012)

  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
  • A philosophical view on singularity and strong AI. Christian Hugo Hoffmann - forthcoming - AI and Society: 1-18.
    More intellectual modesty, but also conceptual clarity, is urgently needed in AI, perhaps more than in many other disciplines. AI research has been shaped by hype and hubris since its early beginnings in the 1950s. For instance, the Nobel laureate Herbert Simon predicted after his participation in the Dartmouth workshop that “machines will be capable, within 20 years, of doing any work that a man can do”. And in some circles, expectations remain high to overblown today. This paper addresses (...)
  • How feasible is the rapid development of artificial superintelligence? Kaj Sotala - 2017 - Physica Scripta 92 (11).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)
  • Racing to the precipice: a model of artificial intelligence development. Stuart Armstrong, Nick Bostrom & Carl Shulman - 2016 - AI and Society 31 (2):201-206.
  • Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  • Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D’Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
  • Polity Without Politics? Artificial Intelligence Versus Democracy: Lessons From Neal Asher’s Polity Universe. Ivana Damnjanović - 2015 - Bulletin of Science, Technology and Society 35 (3-4):76-83.
    Is it time for politics and political theory to face the challenge of artificial intelligence (AI)? It seems to be the case that political theory constantly lags behind technological developments. With rapid developments in the field of AI, a common estimate is that technological singularity will probably happen in the next 50 to 200 years. Even regardless of the time frame, the very possibility of superhumanly smart AIs poses serious political questions and calls for some serious political decisions. Luckily, some (...)