34 found
  1. Should Machines Be Tools or Tool-Users? Clarifying Motivations and Assumptions in the Quest for Superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting (...)
  2. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  3. Measuring the Intelligence of an Idealized Mechanical Knowing Agent.Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes (...)
    3 citations
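    For orientation, a rough symbolic rendering of the definition quoted in the abstract above (the label Int(A) is my own, and the paper's precise conventions for ordinal codes are not reproduced here):
    \mathrm{Int}(A) = \sup\{\alpha : \alpha \text{ is a computable ordinal and } A \text{ knows, of some } n \in \mathbb{N}, \text{ that } n \text{ codes } \alpha\}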
  4. Gli ominoidi o gli androidi distruggeranno la Terra? Una recensione di Come Creare una Mente (How to Create a Mind) di Ray Kurzweil (2012) (recensione rivista nel 2019).Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV, USA: Reality Press. pp. 150-162.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which are philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical confusions about what (...)
  5. 类人猿或安卓会毁灭地球吗?*雷·库兹韦尔(2012年)关于如何创造心灵的评论 (Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012)) (2019年修订版).Michael Richard Starks - 2020 - In 欢迎来到地球上的地狱: 婴儿,气候变化,比特币,卡特尔,中国,民主,多样性,养成基因,平等,黑客,人权,伊斯兰教,自由主义,繁荣,网络,混乱。饥饿,疾病,暴力,人工智能,战争. Las Vegas, NV USA: Reality Press. pp. 146-158.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical nonsense about what these facts mean. The clear distinctions described by Wittgenstein some 80 years ago between scientific issues and the descriptions given of them by the various language games are seldom considered, so one is alternately wowed by the science and dismayed by the incoherence of its analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs a logical structure for rationality and an understanding of the two systems of thought (dual process theory). If one is to philosophize about this, one needs to understand the distinction between scientific questions of fact and the philosophical question of how language works in the context at issue, and how to avoid the traps of reductionism and scientism, but Kurzweil, like most students of behavior, is largely clueless. He is enchanted by models, theories and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are only ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (the best-known critic of AI) likes to say, clear conditions of satisfaction (COS)). I have attempted to make a start on this in my recent writings. Those wishing a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle, 2nd ed (2019). Those interested in more of my writings may see Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2019, 3rd ed (2019) and Suicidal Utopian Delusions in the 21st Century, 4th ed (2019).
  6. Could You Merge With AI? Reflections on the Singularity and Radical Brain Enhancement.Cody Turner & Susan Schneider - 2020 - In Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.), The Oxford Handbook of Ethics of AI. Oxford University Press. pp. 307-325.
  7. ¿Los hominoides o androides destruirán la tierra? — Una revisión de ‘Cómo Crear una Mente’ (How to Create a Mind) por Ray Kurzweil (2012) (revisión revisada 2019).Michael Richard Starks - 2019 - In Delirios Utópicos Suicidas en el Siglo 21 La filosofía, la naturaleza humana y el colapso de la civilización Artículos y reseñas 2006-2019 4a Edición. Las Vegas, NV USA: Reality Press. pp. 250-262.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of (...)
  8. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refer to machines capable of moral reasoning, judgment, and decision-making. (...)
    1 citation
  9. How Philosophy of Mind Can Shape the Future.Susan Schneider & Pete Mandik - 2018 - In Amy Kind (ed.), Philosophy of Mind in the Twentieth and Twenty-first Centuries. New York, NY, USA: pp. 303-319.
    2 citations
  10. How Feasible is the Rapid Development of Artificial Superintelligence?Kaj Sotala - 2017 - Physica Scripta 92 (11).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)
    1 citation
  11. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
    6 citations
  12. Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012).Michael Starks - 2017 - In Suicidal Utopian Delusions in the 21st Century 4th ed (2019). Henderson, NV USA: Michael Starks. pp. 675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long term significance of the work. Normally however the scientific matters of fact are generously interlarded with (...)
  13. New Developments in the Philosophy of AI.Vincent Müller - 2016 - In Fundamental Issues of Artificial Intelligence. Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    7 citations
  14. Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    1 citation
  15. Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  16. Future Progress in Artificial Intelligence: A Survey of Expert Opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer. pp. 553-571.
    There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    17 citations
  17. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage on human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    8 citations
  18. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW]Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  19. Risks of Artificial General Intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Contents: Risks of general artificial intelligence (Vincent C. Müller, pages 297-301); Autonomous technology and the greater human good (Steve Omohundro, pages 303-315); The errors, insights and lessons of famous AI predictions – and what they mean for the future (Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342); (...)
    3 citations
  20. Editorial: Risks of General Artificial Intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  21. Future Progress in Artificial Intelligence: A Poll Among Experts.Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science (...)
    2 citations
  22. Philosophy and Theory of Artificial Intelligence.Vincent C. Müller (ed.) - 2013 - Springer.
    [Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp. ] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This (...)
    1 citation
  23. Introduction to JCS Singularity Edition.Uziel Awret - 2012 - Journal of Consciousness Studies 19 (1-2):7-15.
    This is the editor's introduction to the double 2012 JCS edition on the Singularity.
  24. Introduction to Singularity Edition of JCS.Uziel Awret - 2012 - Journal of Consciousness Studies 19 (1-2):7-15.
    This special interactive interdisciplinary issue of JCS on the singularity and the future relationship of humanity and AI is the first of two issues centered on David Chalmers’ 2010 JCS article ‘The Singularity: A Philosophical Analysis’. These issues include more than 20 solicited commentaries to which Chalmers responds. To quote Chalmers: -/- "One might think that the singularity would be of great interest to academic philosophers, cognitive scientists, and artificial intelligence researchers. In practice, this has not been the case. Good (...)
  25. Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
    1 citation
  26. Theory and Philosophy of AI (Minds and Machines, 22/2 - Special Volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  27. The Disconnection Thesis.David Roden - 2012 - In A. Eden, J. H. Søraker, E. Steinhart & A. H. Moore (eds.), The Singularity Hypothesis: A Scientific and Philosophical Assessment. Springer.
    In his 1993 article ‘The Coming Technological Singularity: How to survive in the posthuman era’ the computer scientist Vernor Vinge speculated that developments in artificial intelligence might reach a point where improvements in machine intelligence result in smart AIs producing ever-smarter AIs. According to Vinge the ‘singularity’, as he called this threshold of recursive self-improvement, would be a ‘transcendental event’ transforming life on Earth in ways that unaugmented humans are not equipped to envisage. In this paper I argue Vinge’s idea (...)
    3 citations
  28. Advantages of Artificial Intelligences, Uploads, and Digital Minds.Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (01):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
    4 citations
  29. The Singularity Beyond Philosophy of Mind.Eric Steinhart - 2012 - Journal of Consciousness Studies 19 (7-8):131-137.
    Thought about the singularity intersects the philosophy of mind in deep and important ways. However, thought about the singularity also intersects many other areas of philosophy, including the history of philosophy, metaphysics, the philosophy of science, and the philosophy of religion. I point to some of those intersections. Singularitarian thought suggests that many of the objects and processes that once lay in the domain of revealed religion now lie in the domain of pure computer science.
  30. Last Man or Overman? Transhuman Appropriations of a Nietzschean Theme.Michael E. Zimmerman - 2011 - Hedgehog Review 13 (2):31-44.
    To what extent can Nietzsche's idea of the Overman be used in connection with transhumanist notions of highly advanced humans and even posthumans?
    1 citation
  31. After the Humans Are Gone.Eric Dietrich - 2007 - Philosophy Now 61 (May/June):16-19.
    Recently, on the History Channel, artificial intelligence (AI) was singled out, with much wringing of hands, as one of the seven possible causes of the end of human life on Earth. I argue that the wringing of hands is quite inappropriate: the best thing that could happen to humans, and the rest of life on planet Earth, would be for us to develop intelligent machines and then usher in our own extinction.
    6 citations
  32. Ethical Issues in Advanced Artificial Intelligence.Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
    14 citations
  33. Human ≠ AGI.Roman Yampolskiy - manuscript
    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
  34. On Controllability of Artificial Intelligence.Roman Yampolskiy - manuscript
    Invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI can’t be fully controlled. Consequences of (...)
    2 citations