Results for 'artificial general intelligence, AGI, embodiment, autopoiesis, agency, autonomy, G-factor, AI tool, smartonium, superintelligence'

969 found
  1. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence. Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for (...)
  2. Can the g Factor Play a Role in Artificial General Intelligence Research? Davide Serpico & Marcello Frixione - 2018 - In Davide Serpico & Marcello Frixione (eds.), Proceedings of the Society for the Study of Artificial Intelligence and Simulation of Behaviour 2018. pp. 301-305.
    In recent years, a trend in AI research has started to pursue human-level, general artificial intelligence (AGI). Although the AGI framework is characterised by different viewpoints on what intelligence is and how to implement it in artificial systems, it conceptualises intelligence as flexible, general-purposed, and capable of self-adapting to different contexts and tasks. Two important questions remain open: a) should AGI projects simulate the biological, neural, and cognitive mechanisms realising the human intelligent behaviour? and b) what (...)
  3. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and (...)
    32 citations
  4. Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good - Steve Omohundro - pages 303-315 - - - The errors, insights and lessons of famous AI predictions – and what they mean for the future - Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh (...)
    3 citations
  5. Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence. Albert Efimov - 2020 - Lecture Notes in Computer Science 12177.
    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed (...)
    3 citations
  6. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
    1 citation
  7. Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can (...)
    3 citations
  8. Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions (...)
    2 citations
  9. AI Rights for Human Safety. Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with (...)
  10. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of (...)
    1 citation
  11. Action and Agency in Artificial Intelligence: A Philosophical Critique. Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
    The objective of this work is to explore the notion of “action” and “agency” in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notion of “action” and “agency” in artificial intelligence. Hence, both a metaphysical and cognitive analysis is employed in the investigation of the quiddity and nature of action and agency per se, and how they are, by extension employed in the language and science (...)
    1 citation
  12. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values. Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have several (...)
  13. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Simone Grassini - 2023 - Frontiers in Psychology 14:1191628.
    The rapid advancement of artificial intelligence (AI) has generated an increasing demand for tools that can assess public attitudes toward AI. This study proposes the development and the validation of the AI Attitude Scale (AIAS), a concise self-report instrument designed to evaluate public perceptions of AI technology. The first version of the AIAS that the present manuscript proposes comprises five items, including one reverse-scored item, which aims to gauge individuals’ beliefs about AI’s influence on their lives, careers, and humanity (...)
  14. Chatting with Chat(GPT-4): Quid est Understanding? Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is conversational: the author (labeled as user) prompts Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  15. What’s Stopping Us Achieving AGI? Albert Efimov - 2023 - Philosophy Now 3 (155):20-24.
    A. Efimov, D. Dubrovsky, and F. Matveev explore limitations on the development of AI presented by the need to understand language and be embodied.
  16. In Our Own Image: What the Quest for Artificial General Intelligence Can Teach Us About Being Human. Janna Hastings - 2024 - Cosmos+Taxis 12 (5+6):1-4.
    In August 2022, only a few months before ChatGPT was released, Barry Smith, well-known contemporary philosopher, together with Jobst Landgrebe, artificial intelligence entrepreneur, published a book entitled Why Machines will Never Rule the World: Artificial Intelligence without Fear (Landgrebe and Smith 2022). In this important, dense and far-reaching work, Landgrebe and Smith argue from the mathematical theory of complex systems, and a sophisticated analysis of the capabilities of human intelligence, that AGI—at the level of human intelligence—will never (...)
  17. Human ≠ AGI. Roman Yampolskiy - manuscript
    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
  18. Artificial Intelligence as Art – What the Philosophy of Art can offer the understanding of AI and Consciousness. Hutan Ashrafian - manuscript
    Defining Artificial Intelligence and Artificial General Intelligence remain controversial and disputed. They stem from a longer-standing controversy of what is the definition of consciousness, which if solved could possibly offer a solution to defining AI and AGI. Central to these problems is the paradox that appraising AI and Consciousness requires epistemological objectivity of domains that are ontologically subjective. I propose that applying the philosophy of art, which also aims to define art through a lens of epistemological objectivity (...)
  19. There is no general AI. Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this (...)
  20. What Are Lacking in Sora and V-JEPA’s World Models? A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination. Jianqiu Zhang - unknown
    Sora from OpenAI has shown exceptional performance, yet it faces scrutiny over whether its technological prowess equates to an authentic comprehension of reality. Critics contend that it lacks a foundational grasp of the world, a deficiency V-JEPA from Meta aims to amend with its joint embedding approach. This debate is vital for steering the future direction of Artificial General Intelligence (AGI). We enrich this debate by developing a theory of productive imagination that generates a coherent world model based (...)
  21. Is Artificial General Intelligence Impossible? William J. Rapaport - 2024 - Cosmos+Taxis 12 (5+6):5-22.
    In their Why Machines Will Never Rule the World, Landgrebe and Smith (2023) argue that it is impossible for artificial general intelligence (AGI) to succeed, on the grounds that it is impossible to perfectly model or emulate the “complex” “human neurocognitive system”. However, they do not show that it is logically impossible; they only show that it is practically impossible using current mathematical techniques. Nor do they prove that there could not be any other kinds of theories than (...)
    1 citation
  22. Human Autonomy in the Age of Artificial Intelligence. C. Prunkl - 2022 - Nature Machine Intelligence 4 (2):99-101.
    Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.
    4 citations
  23. Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry (...)
    4 citations
  24. Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe (...)
    12 citations
  25. (1 other version) Friendly Superintelligent AI: All You Need is Love. Michael Prinzing - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure—long before one arrives—that any superintelligence we build will consistently act in ways congenial to our interests. This is a very (...)
  26. Toward a social theory of Human-AI Co-creation: Bringing techno-social reproduction and situated cognition together with the following seven premises. Manh-Tung Ho & Quan-Hoang Vuong - manuscript
    This article synthesizes current theoretical attempts to understand human-machine interactions and introduces seven premises for understanding our emerging dynamics with increasingly competent, pervasive, and instantly accessible algorithms. The hope is that these seven premises can build toward a social theory of human-AI co-creation. The focus on human-AI co-creation is intended to emphasize two factors. The first is the fact that our machine learning systems are socialized. The second is the coevolving nature of the human mind and AI systems as smart devices form an (...)
  27. A way forward for responsibility in the age of AI. Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  28. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    4 citations
  29. AI training data, model success likelihood, and informational entropy-based value. Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping points (...)
  30. On Controllability of Artificial Intelligence. Roman Yampolskiy - 2016
    The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI can’t (...)
    5 citations
  31. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence. Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here (...)
  32. Nietzsche's Three Metamorphoses and Their Relevance to Artificial Intelligence Development. Beni Beeri Issembert - unknown
    This opinion paper delves into the philosophical underpinnings and implications of artificial intelligence (AI) development through the lens of Friedrich Nietzsche's "Three Metamorphoses," exploring the stages from the camel, through the lion, to the envisioned child phase within the AI context. Amidst growing concerns over AI's ethical ramifications, including job displacement, biased decision-making, and misuse potential, this analysis seeks to provide a comprehensive framework for understanding AI's evolution and its socio-technical effects on society. The discourse begins by contextualizing AI (...)
  33. Artificial intelligence and human autonomy: the case of driving automation. Fabio Fossa - 2024 - AI and Society:1-12.
    The present paper aims at contributing to the ethical debate on the impacts of artificial intelligence (AI) systems on human autonomy. More specifically, it intends to offer a clearer understanding of the design challenges to the effort of aligning driving automation technologies to this ethical value. After introducing the discussion on the ambiguous impacts that AI systems exert on human autonomy, the analysis zooms in on how the problem has been discussed in the literature on connected and automated vehicles (...)
  34. What lies behind AGI: ethical concerns related to LLMs. Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  35. Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence. Alexey Turchin, David Denkenberger, Alice Zhila, Sergey Markov & Mikhail Batin - 2017 - Informatica 41:401.
    In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently (...)
  36. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an (...)
    1 citation
  37. Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values (...)
    8 citations
  38. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they (...)
    6 citations
  39. The Gap between Intelligence and Mind. Bowen Xu, Xinyi Zhan & Quansheng Ren - manuscript
    The feeling (quale) brings the "Hard Problem" to philosophy of mind. Does the subjective feeling have a non-ignorable impact on Intelligence? If so, can the feeling be realized in Artificial Intelligence (AI)? To discuss the problems, we have to figure out what the feeling means, by giving a clear definition. In this paper, we primarily give some mainstream perspectives on the topic of the mind, especially the topic of the feeling (or qualia, subjective experience, etc.). Then, a definition of (...)
  40. Complexity and Particularity: An Argument for the Impossibility of Artificial Intelligence. Emanuele Martinelli - 2024 - Cosmos+Taxis 12 (5+6):42-57.
    Landgrebe and Smith (2022) have recently offered an important mathematical argument against the possibility of Artificial General Intelligence (AGI): human intelligence is a complex system; complex systems have some properties that cannot be modelled mathematically; hence we have no viable way to build an AI that would be able to emulate human intelligence. The issue of complexity is thus at the heart of the Landgrebe and Smith approach, and they tackle this issue by postulating a set of conditions, (...)
  41. Why Machines Will Never Rule the World: Artificial Intelligence without Fear by Jobst Landgrebe & Barry Smith (Book review). [REVIEW] Walid S. Saba - 2022 - Journal of Knowledge Structures and Systems 3 (4):38-41.
    Whether it was John Searle’s Chinese Room argument (Searle, 1980) or Roger Penrose’s argument for the non-computable nature of a mathematician’s insight – an argument based on Gödel’s incompleteness theorem (Penrose, 1989) – we have always had skeptics who questioned the possibility of realizing strong Artificial Intelligence (AI), or what has become known as Artificial General Intelligence (AGI). But this new book by Landgrebe and Smith (henceforth, L&S) is perhaps the strongest argument ever made against strong (...)
  42. Conversational AI for Psychotherapy and Its Role in the Space of Reason. Jana Sedlakova - 2024 - Cosmos+Taxis 12 (5+6):80-87.
    The recent book by Landgrebe and Smith (2022) offers compelling arguments against the possibility of Artificial General Intelligence (AGI) as well as against the idea that machines have the abilities to master human language, human social interaction and morality. Their arguments leave open, however, a problem on the side of the imaginative power of humans to perceive more than there is and treat AIs as humans and social actors independent of their actual properties and abilities or lack thereof. (...)
  43. Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be (...)
  44. Robustness to Fundamental Uncertainty in AGI Alignment. G. G. Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  45. AI language models cannot replace human research participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - 2024 - AI and Society 39 (5):2603-2605.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  46. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to fall in the middle of the 21st century; but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either an AGI capable of acting completely independently in the real world and of winning most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  47. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning, and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and (...)
    1 citation
  48. Acquisition of Autonomy in Biotechnology and Artificial Intelligence.Philippe Gagnon, Mathieu Guillermin, Olivier Georgeon, Juan R. Vidal & Béatrice de Montera - 2020 - In S. Hashimoto N. Callaos (ed.), Proceedings of the 11th International Multi-Conference on Complexity, Informatics and Cybernetics: IMCIC 2020, Volume II. Winter Garden: International Institute for Informatics and Systemics. pp. 168-172.
    This presentation discusses a notion encountered across disciplines and in different facets of human activity: autonomous activity. We engage it in an interdisciplinary way. We start by considering the reactions and behaviors of biological entities to biotechnological intervention. An attempt is made to characterize the degree of freedom of embryos and clones, which show openness to different outcomes when the epigenetic developmental landscape is factored in. We then consider the claim made in programming and artificial intelligence that automata could (...)
  49.
    Minds, Brains, AI.Jay Seitz - manuscript
    In the last year or so (and going back many decades), there have been extensive claims by major computational scientists, engineers, and others that AGI (artificial general intelligence) is 5 or 10 years away, but without a scintilla of scientific evidence for a broad body of these claims: computers will become conscious, have a "theory of mind," think and reason, become more intelligent than humans, and so on. But the claims are science fiction, not science. This (...)
  50. A Holistic Test for (Artificial) General Intelligence.Gomez-Ramirez Danny A. J. & Kieninger Judith - manuscript
    We approach the notion of general (global) human intelligence as a markedly multifaceted concept that can be tested in at least seventy specific scenarios. We say that an agent has Artificial Global Intelligence (AGLI) if it is able to perform in an intelligent manner on at least the collection of tasks defining the former scenarios. In particular, based on Gardner's theory of multiple intelligences, we describe the design of a concrete test for AGLI made in such a way that (...)