Results for 'artificial general intelligence'

998 found
  1. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán (...)
    3 citations
  2. Artificial general intelligence through visual pattern recognition: an analysis of the Phaeaco cognitive architecture.Safal Aryal -
    In the mid-1960s, Soviet computer scientist Mikhail Moiseevich Bongard created sets of visual puzzles where the objective was to spot an easily justifiable difference between two sides of a single image (for instance, white shapes vs black shapes, etc...). The idea was that these puzzles could be used to teach computers the general faculty of abstraction: perhaps by learning to spot the differences between these sorts of images, a computational agent could learn about inference in general. Considered a (...)
  3. Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence.Albert Efimov - 2020 - Lecture Notes in Computer Science 12177.
    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone, Turing unknowingly (...)
    3 citations
  4. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have (...)
  5. Short-circuiting the definition of mathematical knowledge for an Artificial General Intelligence.Samuel Alexander - forthcoming - Lecture Notes in Computer Science.
    We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the (...)
    1 citation
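    The proposal summarized in this abstract lends itself to a one-line set-theoretic statement. The following is only a rough sketch (the exact wording of the enumeration command is truncated above, so c below stands in for it): the knowledge of an AGI A, restricted to the language of Epistemic Arithmetic, would be
    \[
      K_A \;=\; \{\, \varphi \in \mathcal{L}_{\mathrm{EA}} \;:\; A \text{ would include } \varphi \text{ in the enumeration it produces if given the command } c \,\},
    \]
    so that A is declared to know \(\varphi\) if and only if \(\varphi \in K_A\).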
  6. Can the g Factor Play a Role in Artificial General Intelligence Research?Davide Serpico & Marcello Frixione - 2018 - In Proceedings of the Society for the Study of Artificial Intelligence and Simulation of Behaviour 2018. pp. 301-305.
    In recent years, a trend in AI research has started to pursue human-level, general artificial intelligence (AGI). Although the AGI framework is characterised by different viewpoints on what intelligence is and how to implement it in artificial systems, it conceptualises intelligence as flexible, general-purpose, and capable of self-adapting to different contexts and tasks. Two important questions remain open: a) should AGI projects simulate the biological, neural, and cognitive mechanisms realising human intelligent behaviour? and (...)
  7. Probable General Intelligence algorithm.Anton Venglovskiy - manuscript
    Contains a description of a generalized and constructive formal model for the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
  8. Editorial: Risks of general artificial intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we (...)
    3 citations
  9. Advantages of artificial intelligences, uploads, and digital minds.Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (01):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The (...)
    4 citations
  10. Artificial Intelligence as Art – What the Philosophy of Art can offer the understanding of AI and Consciousness.Hutan Ashrafian -
    Defining Artificial Intelligence and Artificial General Intelligence remains controversial and disputed. Both definitions stem from a longer-standing controversy over the definition of consciousness, which, if solved, could possibly offer a solution to defining AI and AGI. Central to these problems is the paradox that appraising AI and Consciousness requires epistemological objectivity of domains that are ontologically subjective. I propose that applying the philosophy of art, which also aims to define art through a lens of (...)
  11. Artificial Intelligence and the Notions of the “Natural” and the “Artificial”.Justin Nnaemeka Onyeukaziri - 2022 - Journal of Data Analysis 17 (4):101-116.
    This paper argues that negating the ontological difference between the natural and the artificial is not plausible, nor is the reduction of the natural to the artificial (or vice versa) possible, unless one intends to empty the semantic content of the terms “natural” and “artificial.” Most philosophical discussions on Artificial Intelligence (AI) have always been in relation to the human person, especially as it relates to human intelligence, consciousness and/or mind (...)
  12. One decade of universal artificial intelligence.Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67--88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound, and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, (...)
    3 citations
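    For context, the AIXI agent referred to in this abstract is standardly defined as an expectimax expression over all environment programs consistent with the agent's interaction history. The following is only a sketch of that standard formulation (notation follows Hutter's published definition, not anything in this listing):
    \[
      a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},
    \]
    where U is a universal (monotone) Turing machine, q ranges over environment programs, \(\ell(q)\) is the length of q, the \(a_i\), \(o_i\), \(r_i\) are actions, observations, and rewards, and m is the planning horizon.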
  13. Thoughts on Artificial Intelligence and the Origin of Life Resulting from General Relativity, with Neo-Darwinist Reference to Human Evolution and Mathematical Reference to Cosmology.Rodney Bartlett - manuscript
    When this article was first planned, writing was going to be exclusively about two things - the origin of life and human evolution. But it turned out to be out of the question for the author to restrict himself to these biological and anthropological topics. A proper understanding of them required answering questions like “What is the nature of the universe – the home of life – and how did it originate?”, “How can time travel be removed from fantasy and (...)
  14. On Controllability of Artificial Intelligence.Roman Yampolskiy - manuscript
    The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments as well as supporting evidence from multiple domains indicating that advanced AI (...)
    3 citations
  15. Punishing Artificial Intelligence: Legal Fiction or Science Fiction.Alexander Sarch & Ryan Abbott - 2019 - UC Davis Law Review 53:323-384.
    Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where (...)
    1 citation
  16. How does Artificial Intelligence Pose an Existential Risk?Karina Vold & Daniel R. Harris - forthcoming - In Carissa Véliz (ed.), Oxford Handbook of Digital Ethics.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses (...)
  17. Why Machines Will Never Rule the World: Artificial Intelligence without Fear by Jobst Landgrebe & Barry Smith (Book review). [REVIEW]Walid S. Saba - 2022 - Journal of Knowledge Structures and Systems 3 (4):38-41.
    Whether it was John Searle’s Chinese Room argument (Searle, 1980) or Roger Penrose’s argument for the non-computable nature of a mathematician’s insight – an argument based on Gödel’s Incompleteness theorem (Penrose, 1989) – we have always had skeptics who questioned the possibility of realizing strong Artificial Intelligence (AI), or what has become known as Artificial General Intelligence (AGI). But this new book by Landgrebe and Smith (henceforth, L&S) is perhaps the strongest argument ever made (...)
  18. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made (...)
    24 citations
  19. Can Artificial Intelligence (Re)Define Creativity?Dessislava Fessenko - 2022 - In EthicAI=LABS Project. Sofia: DA LAB Foundation /Goethe-institut Sofia. pp. 34-48.
    What is the essential ingredient of creativity that only humans – and not machines – possess? Can artificial intelligence help refine the notion of creativity by reference to that essential ingredient? How / do we need to redefine our conceptual and legal frameworks for rewarding creativity because of this new qualifying – actually creatively significant – factor? -/- Those are the questions tackled in this essay. The author’s conclusion is that consciousness, experiential states (such as a raw feel (...)
  20. Why Machines Will Never Rule the World: Artificial Intelligence without Fear.Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, (...)
    1 citation
  21. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is (...)
    1 citation
  22. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools (...)
  23. Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents.Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose (...)
    3 citations
  24. The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence.David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three (...)
    9 citations
  25. Philosophy and Theory of Artificial Intelligence.Vincent Müller (ed.) - 2013 - Springer.
    [Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp. ] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human (...)
    1 citation
  26. The fetish of artificial intelligence. In response to Iason Gabriel’s “Towards a Theory of Justice for Artificial Intelligence”.Albert Efimov - forthcoming - Philosophy Science.
    The article presents the grounds for defining the fetish of artificial intelligence (AI). The fundamental differences between AI and all previous technological innovations are highlighted, relating primarily to its entry into the human cognitive sphere and to fundamentally new, uncontrolled consequences for society. Convincing arguments are presented that the leaders of the globalist project are the main beneficiaries of the AI fetish. This is clearly manifested in the works of philosophers close to big technology corporations and their mega-projects. (...)
  27. What Is Intelligence in the Context of AGI?Dan J. Bruiger - manuscript
    Lack of coherence in concepts of intelligence has implications for artificial intelligence. ‘Intelligence’ is an abstraction grounded in human experience while supposedly freed from the embodiment that is the basis of that experience. In addition to physical instantiation, embodiment is a condition of dependency, of an autopoietic system upon an environment, which thus matters to the system itself. The autonomy and general capability sought in artificial general intelligence implies artificially re-creating the organism’s (...)
  28. Ethical issues in advanced artificial intelligence.Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a (...)
    26 citations
  29. Theological Foundations for Moral Artificial Intelligence.Mark Graves - 2022 - Journal of Moral Theology 11 (Special Issue 1):182-211.
    The expanding social role and continued development of artificial intelligence (AI) needs theological investigation of its anthropological and moral potential. A pragmatic theological anthropology adapted for AI can characterize moral AI as experiencing its natural, social, and moral world through interpretations of its external reality as well as its self-reckoning. Systems theory can further structure insights into an AI social self that conceptualizes itself within Ignacio Ellacuria’s historical reality and its moral norms through Thomistic ideogenesis. This enables a (...)
  30. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of.Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the (...)
  31. First Steps Towards an Ethics of Robots and Artificial Intelligence.John Tasioulas - 2019 - Journal of Practical Ethics 7 (1):61-95.
    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In (...)
    7 citations
  32. ¿What is Artificial Intelligence?Fabio Morandín-Ahuerma - 2022 - Int. J. Res. Publ. Rev 3 (12):1947-1951.
    Artificial intelligence (AI) is the capacity of a machine or computer system to simulate and perform tasks that would normally require human intelligence, such as logical reasoning, learning, and problem solving. Artificial intelligence relies on algorithms and machine-learning technologies to give machines the ability to apply certain cognitive skills and to carry out tasks on their own, autonomously or semi-autonomously. Artificial intelligence is distinguished by (...)
  33. Artificial Brains and Hybrid Minds.Paul Schweizer - 2017 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017. Cham, Switzerland: Springer. pp. 81-91.
    The paper develops two related thought experiments exploring variations on an ‘animat’ theme. Animats are hybrid devices with both artificial and biological components. Traditionally, ‘components’ have been construed in concrete terms, as physical parts or constituent material structures. Many fascinating issues arise within this context of hybrid physical organization. However, within the context of functional/computational theories of mentality, demarcations based purely on material structure are unduly narrow. It is abstract functional structure which does the key work in characterizing the (...)
  34. A Cartesian critique of the artificial intelligence.Rajakishore Nath - 2010 - Philosophical Papers and Review 3 (2):27-33.
    This paper deals with the philosophical problems concerned with research in the field of artificial intelligence (AI), in particular with problems arising out of claims that AI exhibits ‘consciousness’, ‘thinking’ and other ‘inner’ processes and that they simulate human intelligence and cognitive processes in general. The argument is to show how Cartesian mind is non-mechanical. Descartes’ concept of ‘I think’ presupposes subjective experience, because it is ‘I’ who experiences the world. Likewise, Descartes’ notion of ‘I’ negates (...)
  35. Updating the Frame Problem for Artificial Intelligence Research.Lisa Miracchi - 2020 - Journal of Artificial Intelligence and Consciousness 7 (2):217-230.
    The Frame Problem is the problem of how one can design a machine to use information so as to behave competently, with respect to the kinds of tasks a genuinely intelligent agent can reliably, effectively perform. I will argue that the way the Frame Problem is standardly interpreted, and so the strategies considered for attempting to solve it, must be updated. We must replace overly simplistic and reductionist assumptions with more sophisticated and plausible ones. In particular, the standard interpretation assumes (...)
  36. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure.Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we (...)
    1 citation
  37. Legg-Hutter universal intelligence implies classical music is better than pop music for intellectual training.Samuel Alexander - 2019 - The Reasoner 13 (11):71-72.
    In their thought-provoking paper, Legg and Hutter consider a certain abstraction of an intelligent agent, and define a universal intelligence measure, which assigns every such agent a numerical intelligence rating. We will briefly summarize Legg and Hutter’s paper, and then give a tongue-in-cheek argument that if one’s goal is to become more intelligent by cultivating music appreciation, then it is better to use classical music (such as Bach, Mozart, and Beethoven) than to use more recent pop (...)
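    For reference, the universal intelligence measure that Legg and Hutter define, and that this note plays on, is standardly written as a complexity-weighted sum of expected returns; the sketch below follows their published definition rather than anything specific to this note:
    \[
      \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},
    \]
    where E is the class of computable, reward-summable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V^{\pi}_{\mu}\) is the expected total reward earned by agent (policy) \(\pi\) interacting with \(\mu\).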
  38. Human ≠ AGI.Roman Yampolskiy - manuscript
    The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments. However, the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
  39. There is no general AI.Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue (...)
  40. Peter Dauvergne. AI in the Wild: Sustainability in the Age of Artificial Intelligence. [REVIEW]Philip J. Walsh - 2022 - Environmental Ethics 44 (2):185-186.
  41. The Gap between Intelligence and Mind.Bowen Xu, Xinyi Zhan & Quansheng Ren - manuscript
    The feeling (quale) brings the "Hard Problem" to philosophy of mind. Does the subjective feeling have a non-ignorable impact on Intelligence? If so, can the feeling be realized in Artificial Intelligence (AI)? To discuss the problems, we have to figure out what the feeling means, by giving a clear definition. In this paper, we primarily give some mainstream perspectives on the topic of the mind, especially the topic of the feeling (or qualia, subjective experience, etc.). Then, a (...)
  42. Natural intelligence and anthropic reasoning.Predrag Slijepcevic - 2020 - Biosemiotics 13 (tba):1-23.
    This paper aims to justify the concept of natural intelligence in the biosemiotic context. I will argue that the process of life is (i) a cognitive/semiotic process and (ii) that organisms, from bacteria to animals, are cognitive or semiotic agents. To justify these arguments, the neural-type intelligence represented by the form of reasoning known as anthropic reasoning will be compared and contrasted with types of intelligence explicated by four disciplines of biology – relational biology, evolutionary epistemology, biosemiotics (...)
  43. How feasible is the rapid development of artificial superintelligence?Kaj Sotala - 2017 - Physica Scripta 11 (92).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, (...)
    1 citation
  44. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a (...)
    8 citations
  45. The Value Alignment Problem.Dan J. Bruiger - manuscript
    The Value Alignment Problem (VAP) presupposes that artificial general intelligence (AGI) is desirable and perhaps inevitable. As usually conceived, it is one side of the more general issue of mutual control between agonistic agents. To be fully autonomous, an AI must be an autopoietic system (an agent), with its own purposiveness. In the case of such systems, Bostrom’s orthogonality thesis is untrue. The VAP reflects the more general problem of interfering in complex systems, entraining the (...)
  46. Can reinforcement learning learn itself? A reply to 'Reward is enough'.Samuel Allen Alexander - forthcoming - CIFMA 2021.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al. did not consider in their paper, namely, that aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment (...)
  47. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping (...)
  48. Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence.Alexey Turchin -
    In this article, a classification of the global catastrophic risks connected with the possible existence (or non-existence) of extraterrestrial intelligence is presented. If there are no extra-terrestrial intelligences (ETIs) in our light cone, it either means that the Great Filter is behind us, and thus some kind of periodic sterilizing natural catastrophe, like a gamma-ray burst, should be given a higher probability estimate, or that the Great Filter is ahead of us, and thus a future global catastrophe is high (...)
  49. Genes, Affect, and Reason: Why Autonomous Robot Intelligence Will Be Nothing Like Human Intelligence.Henry Moss - 2016 - Techné: Research in Philosophy and Technology 20 (1):1-15.
    Abstract: Many believe that, in addition to cognitive capacities, autonomous robots need something similar to affect. As in humans, affect, including specific emotions, would filter robot experience based on a set of goals, values, and interests. This narrows behavioral options and avoids combinatorial explosion or regress problems that challenge purely cognitive assessments in a continuously changing experiential field. Adding human-like affect to robots is not straightforward, however. Affect in organisms is an aspect of evolved biological systems, from the taxes of (...)
  50. An Analysis of the Interaction Between Intelligent Software Agents and Human Users.Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions (...)
    27 citations
Results 1–50 of 998