Contents
60 entries found; showing 1–50.
  1. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting (...)
  2. A Proposed Taxonomy for the Evolutionary Stages of Artificial Intelligence: Towards a Periodisation of the Machine Intellect Era.Demetrius Floudas - manuscript
    As artificial intelligence (AI) systems continue their rapid advancement, a framework for contextualising the major transitional phases in the development of machine intellect becomes increasingly vital. This paper proposes a novel chronological classification scheme to characterise the key temporal stages in AI evolution. The Prenoëtic era, spanning all of history prior to the year 2020, is defined as the preliminary phase before substantive artificial intellect manifestations. The Protonoëtic period, which humanity has recently entered, denotes the initial emergence of advanced foundation (...)
  3. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict will happen in the middle of the 21st century; but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  4. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
In this article we explore a promising route to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  5. Raising an AI Teenager.Catherine Stinson - forthcoming - In David Friedell (ed.), The Philosophy of Ted Chiang. Palgrave MacMillan.
  6. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
    5 citations
  7. Computers will not acquire general intelligence, but may still rule the world.Ragnar Fjelland - 2024 - Cosmos+Taxis 12 (5+6):58-68.
Jobst Landgrebe’s and Barry Smith’s book Why Machines Will Never Rule the World argues that artificial general intelligence (AGI) will never be realized. Drawing on theories of complexity, they argue that it is not only technically but mathematically impossible to realize AGI. The book is the result of cooperation between a philosopher and a mathematician. In addition to a thorough treatment of the mathematical modelling of complex systems, the book addresses many fundamental philosophical questions. The authors show that philosophy is still (...)
  8. Artificial Intelligence 2024 - 2034: What to expect in the next ten years.Demetrius Floudas - 2024 - 'Agi Talks' Series at Daniweb.
In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In response, he proposes a provocative International Control Treaty. -/- According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  10. Varför AI inte kommer att ta över världen [Why AI will not take over the world]. [REVIEW]Peter Gärdenfors - 2024 - Sans 2.
    Review of Jobst Landgrebe and Barry Smith, Why Machines Will Never Rule the World (Routledge, 2023).
  10. Why Machines Won't Rule the World. [REVIEW]Peter Gärdenfors - 2024 - Sans 2.
This is a review of Jobst Landgrebe and Barry Smith, Why Machines Will Never Rule the World (Routledge, 2023).
  11. In Our Own Image: What the Quest for Artificial General Intelligence Can Teach Us About Being Human.Janna Hastings - 2024 - Cosmos+Taxis 12 (5+6):1-4.
In August 2022, only a few months before ChatGPT was released, Barry Smith, a well-known contemporary philosopher, together with Jobst Landgrebe, an artificial intelligence entrepreneur, published a book entitled Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Landgrebe and Smith 2022). In this important, dense and far-reaching work, Landgrebe and Smith argue from the mathematical theory of complex systems, and a sophisticated analysis of the capabilities of human intelligence, that AGI—at the level of human intelligence—will never be possible. (...)
  12. Intelligence. And what computers still can’t do.Jobst Landgrebe & Barry Smith - 2024 - Cosmos+Taxis 12 (5+6):104-114.
We comment on the collection of papers inspired by our book Why Machines Will Never Rule the World and published in volume 12 (5+6) of the journal Cosmos+Taxis. We summarize the arguments made by the contributors about what we say in the book, and then show where we disagree.
  13. Is Artificial General Intelligence Impossible?William J. Rapaport - 2024 - Cosmos+Taxis 12 (5+6):5-22.
    In their Why Machines Will Never Rule the World, Landgrebe and Smith (2023) argue that it is impossible for artificial general intelligence (AGI) to succeed, on the grounds that it is impossible to perfectly model or emulate the “complex” “human neurocognitive system”. However, they do not show that it is logically impossible; they only show that it is practically impossible using current mathematical techniques. Nor do they prove that there could not be any other kinds of theories than those in (...)
    1 citation
  14. Intelligence, from Natural Origins to Artificial Frontiers - Human Intelligence vs. Artificial Intelligence.Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving in parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged (...)
  15. L’intelligenza artificiale non dominerà il mondo (interview, with English translation).Pierangelo Soldavini & Barry Smith - 2024 - Il Sole di 24 Ore 2024.
Artificial intelligence is man's attempt to use software to emulate the intelligence of human beings. But the complexity of the human neurological system formed in the course of evolution is impossible to replicate: "Human languages and societies are complex systems, indeed complex systems of many complex systems," so much so that their mathematical modeling is impossible. Barry Smith, philosopher and professor at the University at Buffalo, shows no uncertainty about this. His latest book, written with Jobst Landgrebe, a mathematician and (...)
  16. Semi-Autonomous Godlike Artificial Intelligence (SAGAI) is conceivable but how far will it resemble Kali or Thor?Robert West - 2024 - Cosmos+Taxis 12 (5+6):69-75.
The world of artificial intelligence appears to be in rapid transition, and claims that artificial general intelligence is impossible are competing with concerns that we may soon be seeing Artificial Godlike Intelligence and that we should be very afraid of this prospect. This article discusses the issues from a psychological and social perspective and suggests that with the advent of Generative Artificial Intelligence, something that looks to humans like Artificial General Intelligence has become a distinct possibility, as is the idea (...)
  17. Artificial Consciousness: Misconception(s) of a Self-Fulfilling Prophecy.Birgitta Dresp-Langley - 2023 - Qeios.
    The rise of Artificial Intelligence (AI) has produced prophets and prophecies announcing that the age of artificial consciousness is near. Not only does the mere idea that any machine could ever possess the full potential of human consciousness suggest that AI could replace the role of God in the future, it also puts into question the fundamental human right to freedom and dignity. This position paper takes the stand that, in the light of all we currently know about brain evolution (...)
  18. Why Machines Will Never Rule the World – On AI and Faith.Jobst Landgrebe, Barry Smith & Jamie Franklin - 2023 - Irreverend: Faith and Current Affairs.
    Transcript of an interview on the podcast Irreverend: Faith and Current Affairs.
  19. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition.Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated (...)
  20. GOLEMA XIV prognoza rozwoju ludzkiej cywilizacji a typologia osobliwości technologicznych.Rachel Palm - 2023 - Argument: Biannual Philosophical Journal 13 (1):75–89.
    The GOLEM XIV’s forecast for the development of the human civilisation and a typology of technological singularities: In the paper, a conceptual analysis of technological singularity is conducted and results in the concept differentiated into convergent singularity, existential singularity, and forecasting singularity, based on selected works of Ray Kurzweil, Nick Bostrom, and Vernor Vinge respectively. A comparison is made between the variants and the forecast of GOLEM XIV (a quasi-alter ego and character by Stanisław Lem) for the possible development of (...)
  21. Teogonia technologiczna. Nominalistyczna koncepcja bóstwa dla transhumanizmu i posthumanizmu [Technological theogony: A nominalist conception of deity for transhumanism and posthumanism].Rachel 'Preppikoma' Palm - 2022 - In Kamila Grabowska-Derlatka, Jakub Gomułka & Rachel 'Preppikoma' Palm (eds.), PhilosophyPulp: Vol. 2. Kraków, Poland: Wydawnictwo Libron. pp. 129–143.
  22. Why AI will never rule the world (interview).Luke Dormehl, Jobst Landgrebe & Barry Smith - 2022 - Digital Trends.
Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity — for years, AI experts and non-experts alike have fretted over (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. -/- According to the theory, advances in AI — specifically of the machine learning type that’s able to take on new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. (...)
  23. Why Machines Will Never Rule the World: Artificial Intelligence without Fear.Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
    4 citations
  24. Everything and More: The Prospects of Whole Brain Emulation.Eric Mandelbaum - 2022 - Journal of Philosophy 119 (8):444-459.
    Whole Brain Emulation has been championed as the most promising, well-defined route to achieving both human-level artificial intelligence and superintelligence. It has even been touted as a viable route to achieving immortality through brain uploading. WBE is not a fringe theory: the doctrine of Computationalism in philosophy of mind lends credence to the in-principle feasibility of the idea, and the standing of the Human Connectome Project makes it appear to be feasible in practice. Computationalism is a popular, independently plausible theory, (...)
    7 citations
  25. Review of Orsola Rignani, "Umani di nuovo. Con il postumano e Michel Serres". [REVIEW]Fabio Vergine - 2022 - Kaiak. A Philosophical Journey 1.
  26. Binding the Smart City Human-Digital System with Communicative Processes.Brandt Dainow - 2021 - In Michael Nagenborg, Taylor Stone, Margoth González Woge & Pieter E. Vermaas (eds.), Technology and the City: Towards a Philosophy of Urban Technologies. Springer Verlag. pp. 389-411.
This chapter will explore the dynamics of power underpinning ethical issues within smart cities via a new paradigm derived from Systems Theory. The smart city is an expression of technology as a socio-technical system. The vision of the smart city contains a deep fusion of many different technical systems into a single integrated "ambient intelligence" (ETICA Project, 2010, p. 102). Citizens of the smart city will not experience a succession of different technologies, but a single intelligent and responsive environment through (...)
  27. Kantian Notion of freedom and Autonomy of Artificial Agency.Manas Sahu - 2021 - Prometeica - Revista De Filosofía Y Ciencias 23:136-149.
The objective of this paper is to provide a critical analysis of the Kantian notion of freedom (especially the problem of the third antinomy and its resolution in the Critique of Pure Reason), its significance in the contemporary debate on free will and determinism, and the possibility of autonomy of artificial agency in the Kantian paradigm of autonomy. Kant's resolution of the third antinomy, by positing the ground in the noumenal self, resolves the problem of the antinomies; however, it invites an explanatory gap (...)
  28. Conservation of a Circle Explains (the Human) Mind.Ilexa Yardley - 2021 - Https://Medium.Com/the-Circular-Theory.
    Conservation of a circle explains (the human) mind.
  29. Measuring the intelligence of an idealized mechanical knowing agent.Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes (...)
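    A rough formal gloss of that truncated definition, in notation of my own choosing rather than the paper's (Int, A, n, and the knowledge predicate are labels introduced here for illustration only):
    \[ \mathrm{Int}(A) \;=\; \sup \{\, \alpha \text{ a computable ordinal} \;:\; \exists n \; A \text{ knows that } n \text{ is a code for } \alpha \,\} \]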
    3 citations
  30. (1 other version)Gli ominoidi o gli androidi distruggeranno la Terra? Una recensione di Come Creare una Mente (How to Create a Mind) di Ray Kurzweil (2012) (recensione rivista nel 2019).Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV USA: Reality Press. pp. 150-162.
Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which are philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical confusions about what (...)
  31. 类人猿或安卓会毁灭地球吗?*雷·库兹韦尔(2012年)关于如何创造心灵的评论 (Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012)) (2019年修订版).Michael Richard Starks - 2020 - In 欢迎来到地球上的地狱 婴儿,气候变化,比特币,卡特尔,中国,民主,多样性,养成基因,平等,黑客,人权,伊斯兰教,自由主义,繁荣,网络,混乱。饥饿,疾病,暴力,人工智能,战争. Las Vegas, NV USA: Reality Press. pp. 146-158.
Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical gibberish as to what these facts mean. The clear distinctions that Wittgenstein described some 80 years ago between scientific matters and their descriptions by various language games are rarely taken into account, and so one is alternately wowed by the science and dismayed by the incoherence of its analysis. So it is with this volume. -/- If one is to create a mind more or less like ours, one needs to have a rational logical structure and an understanding of the two systems of thought (dual process theory). If one is to philosophize about this, one needs to understand the distinction between scientific issues of fact and philosophical issues of how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism; but Kurzweil, like most students of behavior, is largely clueless about these. He is enchanted by models, theories and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts etc. are just ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (AI's most famous critic) likes to say, clear conditions of satisfaction (COS)). I have attempted to make a start on this in my recent writings. -/- Those wishing a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle, 2nd ed. (2019). Those interested in more of my writings may see Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet. Articles and Reviews 2006-2019, 3rd ed. (2019), and Suicidal Utopian Delusions in the 21st Century, 4th ed. (2019).
  32. Could You Merge With AI? Reflections on the Singularity and Radical Brain Enhancement.Cody Turner & Susan Schneider - 2020 - In Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.), The Oxford Handbook of Ethics of AI. Oxford Handbooks. pp. 307-325.
    This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts concerning whether the radical AI-based enhancements that transhumanists advocate will accomplish the transhumanist goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do their job for (...)
    2 citations
  33. Occam's Razor For Big Data?Birgitta Dresp-Langley - 2019 - Applied Sciences 3065 (9):1-28.
    Detecting quality in large unstructured datasets requires capacities far beyond the limits of human perception and communicability and, as a result, there is an emerging trend towards increasingly complex analytic solutions in data science to cope with this problem. This new trend towards analytic complexity represents a severe challenge for the principle of parsimony (Occam’s razor) in science. This review article combines insight from various domains such as physics, computational science, data engineering, and cognitive science to review the specific properties (...)
  34. ¿Los hominoides o androides destruirán la tierra? — Una revisión de ‘Cómo Crear una Mente’ (How to Create a Mind) por Ray Kurzweil (2012) (revisión revisada 2019).Michael Richard Starks - 2019 - In Delirios Utópicos Suicidas en el Siglo 21 La filosofía, la naturaleza humana y el colapso de la civilización Artículos y reseñas 2006-2019 4TH Edición. Reality Press. pp. 250-262.
Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works, these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of (...)
  35. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
    2 citations
  36. How Philosophy of Mind Can Shape the Future.Susan Schneider & Pete Mandik - 2017 - In Amy Kind (ed.), Philosophy of Mind in the Twentieth and Twenty-First Centuries: The History of the Philosophy of Mind, Volume 6. New York: Routledge. pp. 303-319.
    2 citations
  37. How feasible is the rapid development of artificial superintelligence?Kaj Sotala - 2017 - Physica Scripta 11 (92).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)
    1 citation
  38. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
    10 citations
  39. New developments in the philosophy of AI.Vincent C. Müller - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer.
The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    11 citations
  40. (1 other version)Future progress in artificial intelligence: A survey of expert opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    39 citations
  41. (1 other version)Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012).Michael Starks - 2016 - In Suicidal Utopian Delusions in the 21st Century: Philosophy, Human Nature and the Collapse of Civilization-- Articles and Reviews 2006-2017 2nd Edition Feb 2018. Las Vegas, USA: Reality Press. pp. 675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long term significance of the work. Normally however the scientific matters of fact are generously interlarded with (...)
  42. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    1 citation
  43. Editorial: Risks of artificial intelligence.Vincent C. Müller - 2015 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
    1 citation
  44. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    12 citations
  45. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW]Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  46. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Contents: "Risks of general artificial intelligence", Vincent C. Müller, pp. 297-301; "Autonomous technology and the greater human good", Steve Omohundro, pp. 303-315; "The errors, insights and lessons of famous AI predictions – and what they mean for the future", Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pp. 317-342; (...)
    3 citations
  47. Editorial: Risks of general artificial intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  48. Future progress in artificial intelligence: A poll among experts.Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science (...)
    4 citations
  49. (1 other version)Introduction to JCS singularity edition.Uziel Awret - 2012 - Journal of Consciousness Studies 19 (1-2):7-15.
This is the editor's introduction to the double 2012 JCS edition on the Singularity.
  50. (1 other version)introduction to singularity edition of JCS.Uziel Awret - 2012 - Journal of Consciousness Studies 19 (1-2):7-15.
This special interactive interdisciplinary issue of JCS on the singularity and the future relationship of humanity and AI is the first of two issues centered on David Chalmers’ 2010 JCS article ‘The Singularity: A Philosophical Analysis’. These issues include more than 20 solicited commentaries to which Chalmers responds. To quote Chalmers: -/- "One might think that the singularity would be of great interest to academic philosophers, cognitive scientists, and artificial intelligence researchers. In practice, this has not been the case. Good (...)