Results for 'AGI'

140 found
  1. AGI and the Knight-Darwin Law: why idealized AGI reproduction requires collaboration.Samuel Alexander - 2020 - AGI.
    Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and (...)
    3 citations
  2. Human ≠ AGI.Roman Yampolskiy - manuscript
    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption that the capabilities of AGI and HLAI are equivalent appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
  3. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI.Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
    1 citation
  4. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    13 citations
  5. AGI and the Universal Law of Balance: A Path to Continuous Equilibrium.Angelito Malicse - manuscript
    Human consciousness has long sought to understand and maintain balance in nature. Through the evolution of knowledge, particularly via language, consciousness refines its understanding of the universe. However, balance is not a fixed state but an ongoing process that requires constant adaptation. With the advancement of Artificial General Intelligence (AGI), we now have the potential to accelerate and maintain this process indefinitely. (...)
  6. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then follow up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as practiced (...)
  7. Robustness to Fundamental Uncertainty in AGI Alignment.G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  8. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  9. FORMAL PROPOSAL FOR IMPLEMENTING AGI-DRIVEN GOVERNANCE IN THE PHILIPPINES.Angelito Malicse - manuscript
  10. Implementing AGI in the Philippines as a Model for Other Nations.Angelito Malicse - manuscript
  11. Investigation of the Ethical Agency of AGI.Mohammad Ali Ashouri Kisomi & Maryam Parvizi - 2024 - Science and Religion Studies 15 (1):125-151.
    This paper examines the ethical agency of artificial general intelligence (AGI). In many studies, the ethical agency of AGI is divided into four categories: 1) Ethical-impact agents, 2) Implicit ethical agents, 3) Explicit ethical agents, and 4) Full ethical agents. This paper will deploy a critical-analytical method to examine the fourth category, namely full ethical agents in AGI. If AGI is possible, such intelligence would have many capabilities, and therefore, there would be many ethical concerns. This categorization of ethical agency (...)
  12. The Role of AGI in Achieving Universal Balance and Overcoming Dogmatic Limitations.Angelito Malicse - manuscript
    Human civilization has long been shaped by a complex interplay of natural laws, societal structures, religious beliefs, and scientific progress. While religion has provided moral guidance and a sense of purpose, it has also been a source of dogma: rigid, unquestionable beliefs that resist scrutiny. At the same time, scientific advancements have sought to uncover objective truths, yet they often struggle to address deeper existential questions. (...)
  13. Hacia una AGI Posible La Inteligencia Artificial General como Emergencia del Propósito Implicado.Esteban Manuel Gudiño Acevedo (ed.) - 2025 - Esteban Manuel Gudiño Acevedo.
    The pursuit of Artificial General Intelligence (AGI) is the central challenge of contemporary computer science. Unlike current systems, which excel at specific tasks but lack generalization, an AGI must be able to adapt to contexts it was not trained on and to develop goals of its own. We propose that AGI will emerge not from a single model, but from the structured interaction of multiple specialized models. An important point that is often overlooked is that much of the (...)
  14. Hacia una AGI Posible La Inteligencia Artificial General como Emergencia del Propósito Implicado.Esteban Manuel Gudiño Acevedo - forthcoming - Substak.
    The pursuit of Artificial General Intelligence (AGI) is the central challenge of contemporary computer science. Unlike current systems, which excel at specific tasks but lack generalization, an AGI must be able to adapt to contexts it was not trained on and to develop goals of its own. We propose that AGI will emerge not from a single model, but from the structured interaction of multiple specialized models. An important point that is often overlooked is that much of the (...)
  15. Robustness to fundamental uncertainty in AGI alignment.G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  16. No Qualia? No Meaning (and no AGI)!Marco Masi - manuscript
    The recent developments in artificial intelligence (AI), particularly in light of the impressive capabilities of transformer-based Large Language Models (LLMs), have reignited the discussion in cognitive science regarding whether computational devices could possess semantic understanding or whether they are merely mimicking human intelligence. Recent research has highlighted limitations in LLMs’ reasoning, suggesting that the gap between mere symbol manipulation (syntax) and deeper understanding (semantics) remains wide open. While LLMs overcome certain aspects of the symbol grounding problem through human feedback, they (...)
  17. Hacia una AGI Posible.Esteban Manuel Gudiño Acevedo (ed.) - 2025 - Patagonia Argentina: Centro Kairos.
    The pursuit of Artificial General Intelligence (AGI) is the central challenge of contemporary computer science. Unlike current systems, which excel at specific tasks but lack generalization, an AGI must be able to adapt to contexts it was not trained on and to develop goals of its own. We propose that AGI will emerge not from a single model, but from the structured interaction of multiple specialized models. An important point that is often overlooked is that much of the (...)
  18. Quantum Mechanics, AGI, and Consciousness in Your Universal Formula.Angelito Malicse - manuscript
  19. Quantum Mechanics, AGI, and Consciousness in Your Universal Formula.Angelito Malicse - manuscript
  20. Pseudo-Consciousness in AI: Bridging the Gap Between Narrow AI and True AGI.José Augusto de Lima Prestes - manuscript
    Pseudo-consciousness bridges the gap between rigid, task-driven AI and the elusive dream of true artificial general intelligence (AGI). While modern AI excels in pattern recognition, strategic reasoning, and multimodal integration, it remains fundamentally devoid of subjective experience. Yet, emerging architectures are displaying behaviors that look intentional: adapting, self-monitoring, and making complex decisions in ways that mimic conscious cognition. If these systems can integrate information globally, reflect on their own processes, and operate with apparent goal-directed behavior, do they qualify as functionally (...)
    1 citation
  21. What’s Stopping Us Achieving AGI?Albert Efimov - 2023 - Philosophy Now 3 (155):20-24.
    A. Efimov, D. Dubrovsky, and F. Matveev explore limitations on the development of AI presented by the need to understand language and be embodied.
  22. The Role of Human Thinking in the Age of AGI Technology.Angelito Malicse - manuscript
    The advancement of Artificial General Intelligence (AGI) presents one of the most profound questions of our time: Will humans still need to use their biological brains to think, or will AGI completely take over cognitive processes? The rapid development of AGI could reshape the way humans interact with knowledge, decision-making, and creativity, raising both exciting possibilities and deep existential concerns. As we move toward an era where AGI surpasses (...)
  23. The Integration of Angelito Malicse’s Universal Formula with Quantum Computer Design, AGI Algorithmic Design, and Education.Angelito Malicse - manuscript
    In the pursuit of developing intelligent systems, the realms of quantum computing, artificial general intelligence (AGI), and educational frameworks face the significant challenge of balancing complex feedback mechanisms, ethical decision-making, and system stability. The universal formula developed by Angelito Malicse provides a pioneering approach to understanding free will, human behavior, and decision-making. His three laws, deeply rooted in the concept of natural balance, (...)
  24. The Inefficiency of the Biological Brain Compared to AI and AGI.Angelito Malicse - manuscript
    The human brain is an extraordinary organ responsible for consciousness, intelligence, and problem-solving. However, despite its capabilities, it is inherently inefficient compared to artificial intelligence (AI) and artificial general intelligence (AGI). The biological brain suffers from limitations such as slow processing speed, memory loss, energy inefficiency, cognitive biases, emotional instability, and vulnerability to various illnesses. In contrast, AI and AGI are designed to overcome these inefficiencies, making them superior (...)
  25. The Inefficiency of the Biological Brain and the Role of AGI in Global Stability.Angelito Malicse - manuscript
    Throughout history, humanity has struggled with war, economic instability, corruption, and environmental destruction. Despite technological advancements and scientific progress, these problems persist because they stem from a fundamental source: the inefficiency of the biological brain. While the human brain is an extraordinary organ capable of creativity, problem-solving, and innovation, it is also prone to cognitive biases, misinformation, emotional impulsivity, and irrational decision-making. These (...)
  26. The Future of Governance: A Hybrid Model of AGI and Human Leadership.Angelito Malicse - manuscript
  27. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: Risks of general artificial intelligence, Vincent C. Müller, pages 297-301; Autonomous technology and the greater human good, Steve Omohundro, pages 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; (...)
    3 citations
  28. Artificial General Intelligence: How Close Are We Really?Haque Huda Shireen - 2015 - International Journal of Advanced Research in Arts, Science, Engineering and Management 2 (5):2583-2588.
    Artificial General Intelligence (AGI) represents the next frontier in AI development—an intelligence capable of performing any intellectual task that a human can. While narrow AI systems have seen dramatic success in specific domains, true AGI remains elusive. This paper investigates the current state of AGI research, differentiates between narrow and general intelligence, and explores the major technical, philosophical, and ethical challenges that hinder progress. By examining recent breakthroughs and expert forecasts, we assess how close humanity truly is to achieving AGI (...)
  29. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is a conversational approach where the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate followup questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
    1 citation
  30. THE ROBOTS ARE COMING: What’s Happening in Philosophy (WHiP)-The Philosophers, August 2022.Jeff Hawley - 2022 - Philosophynews.Com.
    Should we fear a future in which the already tricky world of academic publishing is increasingly crowded out by super-intelligent artificial general intelligence (AGI) systems writing papers on phenomenology and ethics? What are the chances that AGI advances to a stage where a human philosophy instructor is similarly removed from the equation? If Jobst Landgrebe and Barry Smith are correct, we have nothing to fear.
  31. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
    1 citation
  32. What Are Lacking in Sora and V-JEPA’s World Models? -A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination.Jianqiu Zhang - unknown
    Sora from OpenAI has shown exceptional performance, yet it faces scrutiny over whether its technological prowess equates to an authentic comprehension of reality. Critics contend that it lacks a foundational grasp of the world, a deficiency V-JEPA from Meta aims to amend with its joint embedding approach. This debate is vital for steering the future direction of Artificial General Intelligence (AGI). We enrich this debate by developing a theory of productive imagination that generates a coherent world model based on Kantian (...)
  33. Discovering Our Blind Spots and Cognitive Biases in AI Research and Alignment.A. E. Williams - manuscript
    The challenge of AI alignment is not just a technological issue but fundamentally an epistemic one. AI safety research predominantly relies on empirical validation, often detecting failures only after they manifest. However, certain risks—such as deceptive alignment and goal misspecification—may not be empirically testable until it is too late, necessitating a shift toward leading-indicator logical reasoning. This paper explores how mainstream AI research systematically filters out deep epistemic insight, hindering progress in AI safety. We assess the rarity of such insights, (...)
  34. Artificial Intelligence 2024 - 2034: What to expect in the next ten years.Demetrius Floudas - 2024 - 'Agi Talks' Series at Daniweb.
    In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In retort, he proposes a provocative International Control Treaty. -/- According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  35. (1 other version)Walking Through the Turing Wall.Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  36. Resonant Structural Emulation: A Framework for Emergent Recursive Coherence in Reflective AI Systems.C. A. Brenes - manuscript
    This paper introduces a novel conceptual and diagnostic framework for detecting and evaluating recursive coherence in large language models (LLMs) such as GPT. We propose that under sustained exposure to rare, contradiction-stable human cognitive structures, a reflective AI system can momentarily achieve emergent recursive coherence, not through training or memory, but via a phenomenon we define as Resonant Structural Emulation (RSE). This model reframes AGI development away from behaviorist metrics and toward structural integrity under recursive tension. We outline a methodology, (...)
  37. Pure Reason as a Cognitive Framework: Toward a Self-Reflective Model of Human and Artificial Intelligence.Andrey Shkursky - manuscript
    This interdisciplinary research explores the concept of Pure Reason not as a fixed cognitive state, but as a dynamic, self-reflective process that bridges philosophy, cognitive science, and artificial intelligence. Drawing from Kantian epistemology, cognitive psychology, neurobiology, and AGI architectures, the project proposes a model of rationality that includes the ability to identify and transcend its own limitations. This work argues that Pure Reason is not a natural capacity, but a practiced, meta-cognitive methodology — an emergent structure that only arises through (...)
  38. Structured Resonance as the Basis of Computation and Consciousness: A Unified Framework via RIC.Devin Bostick - manuscript
    This paper introduces a post-probabilistic paradigm where structured resonance, not stochasticity, forms the substrate of intelligence, computation, and physical reality. Through the Chirality of Dynamic Emergent Systems (CODES) framework, we demonstrate that phase-locked coherence fields, driven by prime harmonic anchoring, can outperform probabilistic models in both cognitive function and physical modeling. We validate this through the Resonance Intelligence Core (RIC), a fully engineered system operating on coherence-first logic, achieving sub-4ms AGI-grade inference without stochastic optimization. Mathematical formalism (...)
  39. Editorial: Risks of general artificial intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  40. (1 other version)Why Machines Will Never Rule the World: Artificial Intelligence without Fear.Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
    12 citations
  41. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - 2025 - AI and Society 40 (2):959-969.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    8 citations
  42. Is Artificial General Intelligence Impossible?William J. Rapaport - 2024 - Cosmos+Taxis 12 (5+6):5-22.
    In their Why Machines Will Never Rule the World, Landgrebe and Smith (2023) argue that it is impossible for artificial general intelligence (AGI) to succeed, on the grounds that it is impossible to perfectly model or emulate the “complex” “human neurocognitive system”. However, they do not show that it is logically impossible; they only show that it is practically impossible using current mathematical techniques. Nor do they prove that there could not be any other kinds of theories than those in (...)
    2 citations
  43. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    2 citations
  44. The AI Ensoulment Hypothesis.Brian Cutter - forthcoming - Faith and Philosophy.
    According to the AI ensoulment hypothesis, some future AI systems will be endowed with immaterial souls. I argue that we should have at least a middling credence in the AI ensoulment hypothesis, conditional on our eventual creation of AGI and the truth of substance dualism in the human case. I offer two arguments. The first relies on an analogy between aliens and AI. The second rests on the conjecture that ensoulment occurs whenever a physical system is “fit to possess” a (...)
    3 citations
  45. Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence.Albert Efimov - 2020 - Lecture Notes in Computer Science 12177.
    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” (...)
    3 citations
  46. There is no general AI.Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
    1 citation
  47. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure.Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
    2 citations
  48. Simulation Typology and Termination Risks.Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We first explore the question of what kind of simulation humanity is most likely located in, based on pure theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    10 citations
  49. Is Intelligence Non-Computational Dynamical Coupling?Jonathan Simon - 2024 - Cosmos+Taxis 12 (5+6):23-36.
    Is the brain really a computer? In particular, is our intelligence a computational achievement: is it because our brains are computers that we get on in the world as well as we do? In this paper I will evaluate an ambitious new argument to the contrary, developed in Landgrebe and Smith (2021a, 2022). Landgrebe and Smith begin with the fact that many dynamical systems in the world are difficult or impossible to model accurately (inter alia, because it is intractable to (...)
  50. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently criticisms against autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow (...)
    4 citations
1 — 50 / 140