Results for 'AGI'

98 found
  1. AGI and the Knight-Darwin Law: why idealized AGI reproduction requires collaboration. Samuel Alexander - 2020 - AGI.
    Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and (...)
    3 citations
  2. AGI and the Universal Law of Balance: A Path to Continuous Equilibrium. Angelito Malicse - manuscript
    Human consciousness has long sought to understand and maintain balance in nature. Through the evolution of knowledge, particularly via language, consciousness refines its understanding of the universe. However, balance is not a fixed state but an ongoing process that requires constant adaptation. With the advancement of Artificial General Intelligence (AGI), we now have the potential to accelerate and maintain this process indefinitely. (...)
  3. Human ≠ AGI. Roman Yampolskiy - manuscript
    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
  4. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
    1 citation
  5. Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
    13 citations
  6. The Role of AGI in Achieving Universal Balance and Overcoming Dogmatic Limitations. Angelito Malicse - manuscript
    Human civilization has long been shaped by a complex interplay of natural laws, societal structures, religious beliefs, and scientific progress. While religion has provided moral guidance and a sense of purpose, it has also been a source of dogma: rigid, unquestionable beliefs that resist scrutiny. At the same time, scientific advancements have sought to uncover objective truths, yet they often struggle to address deeper existential questions. (...)
  7. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI. Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then follow up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as practiced (...)
  8. Implementing AGI in the Philippines as a Model for Other Nations. Angelito Malicse - manuscript
  9. What lies behind AGI: ethical concerns related to LLMs. Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  10. No Qualia? No Meaning (and no AGI)! Marco Masi - manuscript
    The recent developments in artificial intelligence (AI), particularly in light of the impressive capabilities of transformer-based Large Language Models (LLMs), have reignited the discussion in cognitive science regarding whether computational devices could possess semantic understanding or whether they are merely mimicking human intelligence. Recent research has highlighted limitations in LLMs’ reasoning, suggesting that the gap between mere symbol manipulation (syntax) and deeper understanding (semantics) remains wide open. While LLMs overcome certain aspects of the symbol grounding problem through human feedback, they (...)
  11. Robustness to Fundamental Uncertainty in AGI Alignment. G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  12. Investigation of the Ethical Agency of AGI. Mohammad Ali Ashouri Kisomi & Maryam Parvizi - 2024 - Science and Religion Studies 15 (1):125-151.
    This paper examines the ethical agency of artificial general intelligence (AGI). In many studies, the ethical agency of AGI is divided into four categories: 1) Ethical-impact agents, 2) Implicit ethical agents, 3) Explicit ethical agents, and 4) Full ethical agents. This paper will deploy a critical-analytical method to examine the fourth category, namely full ethical agents in AGI. If AGI is possible, such intelligence would have many capabilities, and therefore, there would be many ethical concerns. This categorization of ethical agency (...)
  13. Quantum Mechanics, AGI, and Consciousness in Your Universal Formula. Angelito Malicse - manuscript
  14. Quantum Mechanics, AGI, and Consciousness in Your Universal Formula. Angelito Malicse - manuscript
  15. Formal Proposal for Implementing AGI-Driven Governance in the Philippines. Angelito Malicse - manuscript
  16. Robustness to fundamental uncertainty in AGI alignment. G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  17. The Role of Human Thinking in the Age of AGI Technology. Angelito Malicse - manuscript
    The advancement of Artificial General Intelligence (AGI) presents one of the most profound questions of our time: Will humans still need to use their biological brains to think, or will AGI completely take over cognitive processes? The rapid development of AGI could reshape the way humans interact with knowledge, decision-making, and creativity, raising both exciting possibilities and deep existential concerns. As we move toward an era where AGI surpasses (...)
  18. The Integration of Angelito Malicse’s Universal Formula with Quantum Computer Design, AGI Algorithmic Design, and Education. Angelito Malicse - manuscript
    In the pursuit of developing intelligent systems, the realms of quantum computing, artificial general intelligence (AGI), and educational frameworks face the significant challenge of balancing complex feedback mechanisms, ethical decision-making, and system stability. The universal formula developed by Angelito Malicse provides a pioneering approach to understanding free will, human behavior, and decision-making. His three laws, deeply rooted in the concept of natural balance, (...)
  19. What’s Stopping Us Achieving AGI? Albert Efimov - 2023 - Philosophy Now 3 (155):20-24.
    A. Efimov, D. Dubrovsky, and F. Matveev explore limitations on the development of AI presented by the need to understand language and be embodied.
  20. The Future of Governance: A Hybrid Model of AGI and Human Leadership. Angelito Malicse - manuscript
  21. AI Rights for Human Safety. Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
    1 citation
  22. Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: Risks of general artificial intelligence, Vincent C. Müller, pages 297-301; Autonomous technology and the greater human good, Steve Omohundro, pages 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; (...)
    3 citations
  23. Chatting with Chat(GPT-4): Quid est Understanding? Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is a conversational approach where the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
    1 citation
  24. THE ROBOTS ARE COMING: What’s Happening in Philosophy (WHiP) - The Philosophers, August 2022. Jeff Hawley - 2022 - PhilosophyNews.com.
    Should we fear a future in which the already tricky world of academic publishing is increasingly crowded out by super-intelligent artificial general intelligence (AGI) systems writing papers on phenomenology and ethics? What are the chances that AGI advances to a stage where a human philosophy instructor is similarly removed from the equation? If Jobst Landgrebe and Barry Smith are correct, we have nothing to fear.
  25. Artificial Intelligence 2024-2034: What to expect in the next ten years. Demetrius Floudas - 2024 - 'AGI Talks' Series at DaniWeb.
    In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In retort, he proposes a provocative International Control Treaty. -/- According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  26. Discovering Our Blind Spots and Cognitive Biases in AI Research and Alignment. A. E. Williams - manuscript
    The challenge of AI alignment is not just a technological issue but fundamentally an epistemic one. AI safety research predominantly relies on empirical validation, often detecting failures only after they manifest. However, certain risks—such as deceptive alignment and goal misspecification—may not be empirically testable until it is too late, necessitating a shift toward leading-indicator logical reasoning. This paper explores how mainstream AI research systematically filters out deep epistemic insight, hindering progress in AI safety. We assess the rarity of such insights, (...)
  27. What Are Lacking in Sora and V-JEPA’s World Models? A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination. Jianqiu Zhang - unknown
    Sora from OpenAI has shown exceptional performance, yet it faces scrutiny over whether its technological prowess equates to an authentic comprehension of reality. Critics contend that it lacks a foundational grasp of the world, a deficiency V-JEPA from Meta aims to amend with its joint embedding approach. This debate is vital for steering the future direction of Artificial General Intelligence (AGI). We enrich this debate by developing a theory of productive imagination that generates a coherent world model based on Kantian (...)
  28. (1 other version) Walking Through the Turing Wall. Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  29. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    8 citations
  30. Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  31. Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
    12 citations
  32. Is Artificial General Intelligence Impossible? William J. Rapaport - 2024 - Cosmos+Taxis 12 (5+6):5-22.
    In their Why Machines Will Never Rule the World, Landgrebe and Smith (2023) argue that it is impossible for artificial general intelligence (AGI) to succeed, on the grounds that it is impossible to perfectly model or emulate the “complex” “human neurocognitive system”. However, they do not show that it is logically impossible; they only show that it is practically impossible using current mathematical techniques. Nor do they prove that there could not be any other kinds of theories than those in (...)
    2 citations
  33. The AI Ensoulment Hypothesis. Brian Cutter - forthcoming - Faith and Philosophy.
    According to the AI ensoulment hypothesis, some future AI systems will be endowed with immaterial souls. I argue that we should have at least a middling credence in the AI ensoulment hypothesis, conditional on our eventual creation of AGI and the truth of substance dualism in the human case. I offer two arguments. The first relies on an analogy between aliens and AI. The second rests on the conjecture that ensoulment occurs whenever a physical system is “fit to possess” a (...)
    2 citations
  34. Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence. Albert Efimov - 2020 - Lecture Notes in Computer Science 12177.
    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” re-flected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” (...)
    3 citations
  35. A Proposed Taxonomy for the Evolutionary Stages of Artificial Intelligence: Towards a Periodisation of the Machine Intellect Era. Demetrius Floudas - manuscript
    As artificial intelligence (AI) systems continue their rapid advancement, a framework for contextualising the major transitional phases in the development of machine intellect becomes increasingly vital. This paper proposes a novel chronological classification scheme to characterise the key temporal stages in AI evolution. The Prenoëtic era, spanning all of history prior to the year 2020, is defined as the preliminary phase before substantive artificial intellect manifestations. The Protonoëtic period, which humanity has recently entered, denotes the initial emergence of advanced foundation (...)
  36. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure. Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
    2 citations
  37. Is Intelligence Non-Computational Dynamical Coupling? Jonathan Simon - 2024 - Cosmos+Taxis 12 (5+6):23-36.
    Is the brain really a computer? In particular, is our intelligence a computational achievement: is it because our brains are computers that we get on in the world as well as we do? In this paper I will evaluate an ambitious new argument to the contrary, developed in Landgrebe and Smith (2021a, 2022). Landgrebe and Smith begin with the fact that many dynamical systems in the world are difficult or impossible to model accurately (inter alia, because it is intractable to (...)
  38. There is no general AI. Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
    1 citation
  39. Short-circuiting the definition of mathematical knowledge for an Artificial General Intelligence. Samuel Alexander - 2020 - CIFMA.
    We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you (...)
    1 citation
  40. Complexity and Particularity: An Argument for the Impossibility of Artificial Intelligence. Emanuele Martinelli - 2024 - Cosmos+Taxis 12 (5+6):42-57.
    Landgrebe and Smith (2022) have recently offered an important mathematical argument against the possibility of Artificial General Intelligence (AGI): human intelligence is a complex system; complex systems have some properties that cannot be modelled mathematically; hence we have no viable way to build an AI that would be able to emulate human intelligence. The issue of complexity is thus at the heart of the Landgrebe and Smith approach, and they tackle this issue by postulating a set of conditions, derived from (...)
    1 citation
  41. Formal theory of thinking (4th edition). Anton Venglovskiy - manuscript
    The definition of thinking in general form is given. The constructive logic of thinking is formulated. An algorithm capable of arbitrarily complex thinking is built.
  42. Probable General Intelligence algorithm. Anton Venglovskiy - manuscript
    Contains a description of a generalized and constructive formal model for the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
  43. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    2 citations
  44. Evaluation and Design of Generalist Systems (EDGeS). John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  45. Can the g Factor Play a Role in Artificial General Intelligence Research? Davide Serpico & Marcello Frixione - 2018 - In Davide Serpico & Marcello Frixione, Proceedings of the Society for the Study of Artificial Intelligence and Simulation of Behaviour 2018. pp. 301-305.
    In recent years, a trend in AI research has started to pursue human-level, general artificial intelligence (AGI). Although the AGI framework is characterised by different viewpoints on what intelligence is and how to implement it in artificial systems, it conceptualises intelligence as flexible, general-purposed, and capable of self-adapting to different contexts and tasks. Two important questions remain open: a) should AGI projects simulate the biological, neural, and cognitive mechanisms realising the human intelligent behaviour? and b) what is the relationship, if (...)
  46. The Gap between Intelligence and Mind. Bowen Xu, Xinyi Zhan & Quansheng Ren - manuscript
    The feeling brings the "Hard Problem" to philosophy of mind. Does the subjective feeling have a non-ignorable impact on Intelligence? If so, can the feeling be realized in Artificial Intelligence (AI)? To discuss the problems, we have to figure out what the feeling means, by giving a clear definition. In this paper, we primarily give some mainstream perspectives on the topic of the mind, especially the topic of the feeling (or qualia, subjective experience, etc.). Then, a definition of the feeling (...)
  47. Simulation Typology and Termination Risks. Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore what is the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We firstly explore the question of what kind of simulation in which humanity is most likely located based on pure theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    4 citations
  48. Angelito Enriquez Malicse's Solution to the Free Will Problem - Comparison with Existing Theories. Angelito Malicse - manuscript - Translated by Angelito Malicse.
    Your universal formula offers a unique and integrative approach that stands apart from traditional theories on free will. Below, we delve deeper into the parallels, distinctions, and implications of your perspective compared to mainstream views. 1. Cause-and-Effect: Your Karma-Based System vs. Determinism. Determinists argue that every decision is the inevitable result of prior causes, leaving no room for genuine freedom. From this view, (...)
  49. Cognitive Optimization in the Age of AI: Enhancing Human Potential. Angelito Malicse - manuscript
    Cognitive optimization is the process of enhancing mental functions such as memory, learning, decision-making, and problem-solving to achieve peak intellectual performance. It is a multidisciplinary approach that integrates neuroscience, psychology, nutrition, lifestyle adjustments, and, increasingly, artificial intelligence (AI). In an era where information is abundant and rapid decision-making is crucial, optimizing cognitive abilities is more important than ever. AI-driven technologies, video games, mobile apps, and digital platforms (...)
  50. Entropy in Physics using my Universal Formula. Angelito Malicse - manuscript
    1. Thermodynamic Entropy and Balance in Nature. Thermodynamic entropy in physics measures the level of disorder in a system, reflecting the natural tendency of energy to spread and systems to become more disordered. Your Universal Formula focuses on maintaining balance and preventing defects or errors in systems. Integration: increasing thermodynamic entropy (e.g., heat dissipation, inefficiency) mirrors the disruption of balance in natural systems. Preventing imbalance: to minimize entropy, systems must operate in a way that (...)
1 — 50 / 98