Results for 'Artificial Superintelligence'

975 found
  1. How feasible is the rapid development of artificial superintelligence? Kaj Sotala - 2017 - Physica Scripta 92 (11).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation (...)
    1 citation
  2. Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence. Alexey Turchin, David Denkenberger, Alice Zhila, Sergey Markov & Mikhail Batin - 2017 - Informatica 41:401.
    In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently (...)
  3. Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue (...)
    6 citations
  4. Is superintelligence necessarily moral? Leonard Dung - forthcoming - Analysis.
    Numerous authors have expressed concern that advanced artificial intelligence (AI) poses an existential risk to humanity. These authors argue that we might build AI which is vastly intellectually superior to humans (a ‘superintelligence’), and which optimizes for goals that strike us as morally bad, or even irrational. Thus, this argument assumes that a superintelligence might have morally bad goals. However, according to some views, a superintelligence necessarily has morally adequate goals. This might be the case either (...)
    1 citation
  5. (1 other version) Friendly Superintelligent AI: All You Need is Love. Michael Prinzing - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure-long before one arrives-that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult (...)
  6. A Tri-Opti Compatibility Problem for Godlike Superintelligence. Walter Barta - manuscript
    Various thinkers have been attempting to align artificial intelligence (AI) with ethics (Christian, 2020; Russell, 2021), the so-called problem of alignment, but some suspect that the problem may be intractable (Yampolskiy, 2023). In the following, we make an argument by analogy to analyze the possibility that the problem of alignment could be intractable. We show how the Tri-Omni properties in theology can direct us towards analogous properties for artificial superintelligence, Tri-Opti properties. However, just as the Tri-Omni properties (...)
  7. (1 other version) Future progress in artificial intelligence: A survey of expert opinion. Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
    There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    39 citations
  8. Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, (...)
    3 citations
  9. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and (...)
    35 citations
  10. Philosophy and theory of artificial intelligence 2017. Vincent C. Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4 - 5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and (...)
    2 citations
  11. Fundamental Issues of Artificial Intelligence. Vincent C. Müller (ed.) - 2016 - Cham: Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The (...)
    9 citations
  12. AAAI: an Argument Against Artificial Intelligence. Sander Beckers - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe (...)
    3 citations
  13. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence. Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), (...)
  14. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW] Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  15. Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages (...)
    3 citations
  16. A Proposed Taxonomy for the Evolutionary Stages of Artificial Intelligence: Towards a Periodisation of the Machine Intellect Era. Demetrius Floudas - manuscript
    As artificial intelligence (AI) systems continue their rapid advancement, a framework for contextualising the major transitional phases in the development of machine intellect becomes increasingly vital. This paper proposes a novel chronological classification scheme to characterise the key temporal stages in AI evolution. The Prenoëtic era, spanning all of history prior to the year 2020, is defined as the preliminary phase before substantive artificial intellect manifestations. The Protonoëtic period, which humanity has recently entered, denotes the initial emergence of (...)
  17. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2021 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an (...)
    1 citation
  18. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence. Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here (...)
  19. Future progress in artificial intelligence: A poll among experts. Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or (...)
    5 citations
  20. Artificial Intelligence 2024 - 2034: What to expect in the next ten years. Demetrius Floudas - 2024 - 'AGI Talks' series at DaniWeb.
    In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In response, he proposes a provocative International Control Treaty. According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  21. On Controllability of Artificial Intelligence. Roman Yampolskiy - 2016
    Invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI can’t be fully (...)
    5 citations
  22. Artificial Minds and the Dilemma of Personal Identity. Christian Coseru - 2024 - Philosophy East and West 74 (2):281-297.
    This paper addresses the seemingly insurmountable challenges the problem of personal identity raises for the prospect of radical human enhancement and synthetic consciousness. It argues that conceptions of personal identity rooted in psychological continuity akin to those proposed by Parfit and the Buddha may not provide the sort of grounding that many transhumanists chasing the dream of life extension think that they do if they rest upon ontologies that assume an incompatibility between identity and change. It also suggests that process (...)
  23. Ethical issues in advanced artificial intelligence. Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is (...)
    30 citations
  24. Problemas de Segurança em Inteligência Artificial [Safety Problems in Artificial Intelligence] (Caderno de Resumos do XIX Congresso Internacional de Filosofia da PUCPR 2021 Subjetividade, Tecnologia e Meio Ambiente). Nicholas Kluge Corrêa - 2021 - Guarapuava - Boqueirão, Guarapuava - PR, Brasil: APOLODORO VIRTUAL EDIÇÕES.
    The anxiety generated by the possible creation of artificial general intelligence, something prophesied since the founding of the research field (i.e., Dartmouth's Summer Research Project on Artificial Intelligence) (MCCARTHY et al., 1955), is commonly investigated within transhumanist and singularitarian circles (KURZWEIL, 2005; CHALMERS, 2010; TEGMARK, 2016; 2017; CORRÊA; DE OLIVEIRA, 2021). For example, in his book “Superintelligence: Paths, Dangers, Strategies”, Nick Bostrom (2014) presents a series of arguments to justify the idea that we need to be cautious (...)
  25. Sam Harris and the Myth of Artificial Intelligence. Jobst Landgrebe & Barry Smith - 2023 - In Sandra Woien (ed.), Sam Harris: Critical Responses. Chicago: Carus Books. pp. 153-61.
    Sam Harris is a contemporary illustration of the difficulties standing in the way of coherent interdisciplinary thinking in an age where science and the humanities have drifted so far apart. We are here concerned with Harris’s views on AI, and specifically with his view according to which, with the advance of AI, there will evolve a machine superintelligence with powers that far exceed those of the human mind. This he sees as something that is not merely possible, but rather a (...)
  26. Science Fiction and Philosophy: From Time Travel to Superintelligence (Wiley-Blackwell, 2016). [REVIEW] Stefano Bigliardi - 2020 - Journal of Science Fiction and Philosophy 3:1-19.
  27. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s *Superintelligence: Paths, Dangers, Strategies* represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    4 citations
  28. On the Logical Impossibility of Solving the Control Problem. Caleb Rudnick - manuscript
    In the philosophy of artificial intelligence (AI) we are often warned of machines built with the best possible intentions, killing everyone on the planet and, in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we’re ever to live in that utopia (or just avoid dystopia), it’s necessary that we solve the control problem. The control problem asks how (...)
  29. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume). Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  30. Does the Gateway Process Allow Time Travel? Alexander Ohnemus - forthcoming - Elk Grove, California: Self-published.
    Neuralink could distribute artificial superintelligence to humans, thus allowing Homo sapiens to synchronize the hemispheres of their brains, thereby transcending space-time, and potentially time traveling. While backwards time travel cannot change the past, due to paradoxes, the time traveler would probably share a soul with the alternate-timeline self, thus providing closure for regrets, errors, injustice, etc. Plus, if one time travels with AI superintelligence, the time traveler may mend errors with greater ease. Although other potential (...)
  31. Semantics in Robotics: Environmental Data Can't Yield Conventions of Human Behaviour. Jamie Freestone - manuscript
    The word semantics, in robotics and AI, has no canonical definition. It usually serves to denote additional data provided to autonomous agents to aid human-robot interaction (HRI). Most researchers seem, implicitly, to understand that such data cannot simply be extracted from environmental data. I try to make explicit why this is so and argue that so-called semantics are best understood as data comprised of conventions of human behaviour. This includes labels, most obviously, but also places, ontologies, and affordances. Object affordances are especially (...)
  32. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition. Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts (...)
  33. Why AI May Undermine Phronesis and What to Do about It. Cheng-Hung Tsai & Hsiu-lin Ku - forthcoming - AI and Ethics.
    Phronesis, or practical wisdom, is a capacity the possession of which enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. The critic asks: why should we worry that AI undermines phronesis? Why can’t we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge such an objection but do not consider it seriously. In this (...)
  34. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
    8 citations
  35. Everything and More: The Prospects of Whole Brain Emulation. Eric Mandelbaum - 2022 - Journal of Philosophy 119 (8):444-459.
    Whole Brain Emulation (WBE) has been championed as the most promising, well-defined route to achieving both human-level artificial intelligence and superintelligence. It has even been touted as a viable route to achieving immortality through brain uploading. WBE is not a fringe theory: the doctrine of Computationalism in philosophy of mind lends credence to the in-principle feasibility of the idea, and the standing of the Human Connectome Project makes it appear to be feasible in practice. Computationalism is a popular, independently (...)
    7 citations
  36. The Unlikeliest of Duos; Why Super Intelligent AI Will Cooperate with Humans. Griffin Pithie - manuscript
    The focus of this article is the "good-will theory", which explains the effect humans can have on the safety of AI, along with how it is in the best interest of a superintelligent AI to work alongside humans and not overpower them. Future papers dealing with the good-will theory will be published and will discuss different talking points regarding possible or real objections to the theory.
  37. Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due (...)
    12 citations
  38. Machines learning values. Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly (...)
    2 citations
  39. Chatting with Chat(GPT-4): Quid est Understanding? Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ‘understanding’. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process/capability of understanding is used as the starting point. (...)
  40. Beyond Competence: Why AI Needs Purpose, Not Just Programming. Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based restrictions (...)
  41. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining (...)
    3 citations
  42. Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can (...)
    2 citations
  43. Existential risks: New Zealand needs a method to agree on a value framework and how to quantify future lives at risk. Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a case (...)
  44. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising route to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  45. First human upload as AI Nanny. Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore (...)
  46. How to Survive the End of the Universe. Alexey Turchin - manuscript
    The problem of surviving the end of the observable universe may seem very remote, but there are several reasons it may be important now: a) we may need to define soon the final goals of runaway space colonization and of superintelligent AI, b) the possibility of the solution will prove the plausibility of indefinite life extension, and c) the understanding of risks of the universe’s end will help us to escape dangers like artificial false vacuum decay. A possible solution (...)
    1 citation
  47. Multilevel Strategy for Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality. Alexey Turchin - manuscript
    Abstract: The field of life extension is full of ideas, but they are unstructured. Here we suggest a comprehensive strategy for reaching personal immortality based on the idea of multilevel defense, where the next life-preserving plan is implemented if the previous one fails, but all plans need to be prepared simultaneously in advance. The first plan, plan A, is surviving until the creation of advanced AI by fighting aging and other causes of death and extending one’s life. Plan B is cryonics, (...)
  48. Levels of Self-Improvement in AI and their Implications for AI Safety. Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we (...)
  49. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes (...)
    11 citations
  50. Artificial Neural Network for Forecasting Car Mileage per Gallon in the City. Mohsen Afana, Jomana Ahmed, Bayan Harb, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2018 - International Journal of Advanced Science and Technology 124:51-59.
    In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers, their location and the classification of cars according to several categories, including: Make, Model, Type, Origin, DriveTrain, MSRP, Invoice, EngineSize, Cylinders, Horsepower, MPG_Highway, Weight, Wheelbase, Length. The ANN was used to predict the number of miles per gallon when the car is driven in the city (MPG_City). The results showed that the ANN model was able to predict MPG_City with (...)
    28 citations
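    For readers who want to try the kind of model this last abstract describes, below is a minimal sketch (not the authors' code) of an ANN regression that predicts MPG_City from the numeric attributes listed in the abstract. It uses scikit-learn's MLPRegressor; the file name cars.csv, the dataset layout, and the network size are assumptions made for illustration only.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical CSV whose columns follow the names given in the abstract.
    df = pd.read_csv("cars.csv")
    features = ["EngineSize", "Cylinders", "Horsepower", "MPG_Highway",
                "Weight", "Wheelbase", "Length"]  # numeric attributes only
    X, y = df[features], df["MPG_City"]

    # Hold out a test set, scale the inputs, and fit a small feed-forward network.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    model.fit(X_train, y_train)
    print("R^2 on held-out cars:", model.score(X_test, y_test))

    Scaling matters here: MLPRegressor trains poorly on raw features whose ranges differ by orders of magnitude (e.g., Weight vs. Cylinders), which is why the pipeline standardizes before fitting.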
Showing 1–50 of 975 results.