  • Husserl’s concept of transcendental consciousness and the problem of AI consciousness.Zbigniew Orbik - forthcoming - Phenomenology and the Cognitive Sciences:1-20.
    Edmund Husserl, the founder of phenomenological philosophy, developed the concept of the so-called pure transcendental consciousness. The author of the article asks whether the concept of consciousness understood this way can constitute a model for AI consciousness. It should be remembered that transcendental consciousness is the result of the use of the phenomenological method, the essence of which is referring to experience (“back to things themselves”). Therefore, one can legitimately ask whether the consciousness that AI can achieve can possess the (...)
  • Intelligent Behaviour.Dimitri Coelho Mollo - 2022 - Erkenntnis 89 (2):705-721.
    The notion of intelligence is relevant to several fields of research, including cognitive and comparative psychology, neuroscience, artificial intelligence, and philosophy, among others. However, there is little agreement within and across these fields on how to characterise and explain intelligence. I put forward a behavioural, operational characterisation of intelligence that can play an integrative role in the sciences of intelligence, as well as preserve the distinctive explanatory value of the notion, setting it apart from the related concepts of cognition and (...)
  • Intelligence Socialism.Carlotta Pavese - forthcoming - Oxford Studies in Philosophy of Mind.
    From artistic performances in the visual arts and in music to motor control in gymnastics, from tool use to chess and language, humans excel in a variety of skills. On the plausible assumption that skillful behavior is a visible manifestation of intelligence, a theory of intelligence—whether human or not—should be informed by a theory of skills. More controversial is the question as to whether, in order to theorize about intelligence, we should study certain skills in particular. My target is the (...)
  • Technological singularity and transhumanism.Piero Gayozzo - 2021 - Teknokultura. Revista de Cultura Digital y Movimientos Sociales 18 (2):195-200.
    The technological innovations of the Fourth Industrial Revolution have facilitated the formulation of strategies to transcend human limitations; strategies that are widely supported by the transhumanist philosophy. The purpose of this article is to explain the relationship between ‘transhumanism’ and ‘technological singularity’, to which end the Fourth Industrial Revolution and transhumanism are also briefly covered. Subsequently, the three main models of technological singularity are evaluated and a definition of this futuristic concept is offered. Finally, the author provides a reflection on (...)
  • Universal Agent Mixtures and the Geometry of Intelligence.Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
  • Extended subdomains: a solution to a problem of Hernández-Orallo and Dowe.Samuel Allen Alexander - 2021 - In Samuel Allen Alexander & Marcus Hutter (eds.), AGI.
    This is a paper about the general theory of measuring or estimating social intelligence via benchmarks. Hernández-Orallo and Dowe described a problem with certain proposed intelligence measures. The problem suggests that those intelligence measures might not accurately capture social intelligence. We argue that Hernández-Orallo and Dowe's problem is even more general than how they stated it, applying to many subdomains of AGI, not just the one subdomain in which they stated it. We then propose a solution. In our solution, instead (...)
  • Intelligence as Accurate Prediction.Trond A. Tjøstheim & Andreas Stephens - 2022 - Review of Philosophy and Psychology 13 (2):475-499.
    This paper argues that intelligence can be approximated by the ability to produce accurate predictions. It is further argued that general intelligence can be approximated by context dependent predictive abilities combined with the ability to use working memory to abstract away contextual information. The flexibility associated with general intelligence can be understood as the ability to use selective attention to focus on specific aspects of sensory impressions to identify patterns, which can then be used to predict events in novel situations (...)
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear.Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
  • In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2022 - AI and Society 37 (1):319-330.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with (...)
  • Can reinforcement learning learn itself? A reply to 'Reward is enough'.Samuel Allen Alexander - 2021 - CIFMA.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al. did not consider in their paper, namely, that aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  • Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  • Extending Environments To Measure Self-Reflection In Reinforcement Learning.Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - 2022 - Journal of Artificial General Intelligence 13 (1).
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a (...)
  • Reward is enough.David Silver, Satinder Singh, Doina Precup & Richard S. Sutton - 2021 - Artificial Intelligence 299 (C):103535.
  • Computer models solving intelligence test problems: Progress and implications.José Hernández-Orallo, Fernando Martínez-Plumed, Ute Schmid, Michael Siebers & David L. Dowe - 2016 - Artificial Intelligence 230 (C):74-107.
  • Artificial Intelligence, Values, and Alignment.Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • Building Thinking Machines by Solving Animal Cognition Tasks.Matthew Crosby - 2020 - Minds and Machines 30 (4):589-615.
    In ‘Computing Machinery and Intelligence’, Turing, sceptical of the question ‘Can machines think?’, quickly replaces it with an experimentally verifiable test: the imitation game. I suggest that for such a move to be successful the test needs to be relevant, expansive, solvable by exemplars, unpredictable, and lead to actionable research. The Imitation Game is only partially successful in this regard and its reliance on language, whilst insightful for partially solving the problem, has put AI progress on the wrong foot, prescribing (...)
  • The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI.Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
  • How feasible is the rapid development of artificial superintelligence?Kaj Sotala - 2017 - Physica Scripta 92 (11).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)
  • Intuition, intelligence, data compression.Jens Kipper - 2019 - Synthese 198 (Suppl 27):6469-6489.
    The main goal of my paper is to argue that data compression is a necessary condition for intelligence. One key motivation for this proposal stems from a paradox about intuition and intelligence. For the purposes of this paper, it will be useful to consider playing board games—such as chess and Go—as a paradigm of problem solving and cognition, and computer programs as a model of human cognition. I first describe the basic components of computer programs that play board games, namely (...)
  • 20 years after The Embodied Mind - why is cognitivism alive and kicking?Vincent C. Müller - 2013 - In Blay Whitby & Joel Parthmore (eds.), Re-Conceptualizing Mental "Illness": The View from Enactivist Philosophy and Cognitive Science - AISB Convention 2013. AISB. pp. 47-49.
    I want to suggest that the major influence of classical arguments for embodiment like “The Embodied Mind” by Varela, Thompson & Rosch (1991) has been a changing of positions rather than a refutation: Cognitivism has found ways to retreat and regroup at positions that have better fortification, especially when it concerns theses about artificial intelligence or artificial cognitive systems. For example: a) ‘Agent-based cognitivism’ that understands humans as taking in representations of the world, doing rule-based processing and then acting on (...)
  • Revisiting Turing and His Test: Comprehensiveness, Qualia, and the Real World.Vincent C. Müller & Aladdin Ayesh (eds.) - 2012 - AISB.
    Proceedings of the papers presented at the Symposium on "Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World" at the 2012 AISB and IACAP Symposium that was held in the Turing year 2012, 2–6 July at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test --- Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents - Michael Zillich: My Robot is Smarter than Your Robot: On the Need for (...)
  • Machines and the Moral Community.Erica L. Neely - 2013 - Philosophy and Technology 27 (1):97-111.
    A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as to any (...)
  • Safety Engineering for Artificial General Intelligence.Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • Advantages of artificial intelligences, uploads, and digital minds.Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (01):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
  • The troublesome explanandum in Plantinga’s argument against naturalism.Yingjin Xu - 2011 - International Journal for Philosophy of Religion 69 (1):1-15.
    Intending to have a constructive dialogue with the combination of evolutionary theory (E) and metaphysical naturalism (N), Alvin Plantinga’s evolutionary argument against naturalism (EAAN) takes the reliability of human cognition (in normal environments) as a purported explanandum and E&N as a purported explanans. Then, he considers whether E&N can offer a good explanans for this explanandum, and his answer is negative (an answer employed by him to produce a defeater for N). But I will argue that the whole EAAN goes (...)
  • Can Machines Think? An Old Question Reformulated.Achim Hoffmann - 2010 - Minds and Machines 20 (2):203-212.
    This paper revisits the often debated question Can machines think? It is argued that the usual identification of machines with the notion of algorithm has been both counter-intuitive and counter-productive. This is based on the fact that the notion of algorithm just requires an algorithm to contain a finite but arbitrary number of rules. It is argued that intuitively people tend to think of an algorithm to have a rather limited number of rules. The paper will further propose a modification (...)
  • Can Computational Intelligence Model Phenomenal Consciousness?Eduardo C. Garrido Merchán & Sara Lumbreras - 2023 - Philosophies 8 (4):70.
    Consciousness and intelligence are properties that can be misunderstood as necessarily dependent. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been presented as an argument to establish that machines experience some sort of consciousness. Following Russell's analogy, if a machine can do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities (...)
  • Artificial superintelligence and its limits: why AlphaZero cannot become a general agent.Karim Jebari & Joakim Lundborg - forthcoming - AI and Society.
    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe. Among those concerned about existential risk related to artificial intelligence, it is common to assume that AI will not only be very intelligent, but also be a general agent. This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between (...)
  • Ako a čím sa od seba odlišujú slabo, stredne a silne usmernené procesy [How and in what respects weakly, moderately and strongly directed processes differ from one another].Robert Burgan - 2012 - E-Logos 19 (1):1-31.
    In this paper we attempt to justify distinguishing three types of processes in the observed universe - weakly, moderately and strongly directed processes - on the basis of the differing degree of autonomy of their structural elements and the differing degree or intensity of the laws by which they are directed or governed. Individual and concrete processes are thus essentially identical with the individual and concrete systems through which, in which and by means of which they are fully realized, so that they always and everywhere possess their own substantial content. On this basis we then distinguish weakly directed processes (...)
  • An object-oriented view on problem representation as a search-efficiency facet: Minds vs. machines. [REVIEW]Reza Zamani - 2010 - Minds and Machines 20 (1):103-117.
    From an object-oriented perspective, this paper investigates the interdisciplinary aspects of problem representation as well as the differences between representation of problems in the mind and that in the machine. By defining an object as a combination of a symbol-structure and its associated operations, it shows how the representation of problems can become related to control, which conducts the search in finding a solution. Different types of representation of problems in the machine are classified into four categories, and in a similar (...)
  • Measuring universal intelligence: Towards an anytime intelligence test.José Hernández-Orallo & David L. Dowe - 2010 - Artificial Intelligence 174 (18):1508-1539.
  • Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents.Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer (...)
  • Towards a unified framework for developing ethical and practical Turing tests.Balaji Srinivasan & Kushal Shah - 2019 - AI and Society 34 (1):145-152.
    Since Turing proposed the first test of intelligence, several modifications have been proposed with the aim of making Turing’s proposal more realistic and applicable in the search for artificial intelligence. In the modern context, it turns out that some of these definitions of intelligence and the corresponding tests merely measure computational power. Furthermore, in the framework of the original Turing test, for a system to prove itself to be intelligent, a certain amount of deceit is implicitly required which can have (...)
  • A Formal Mathematical Model of Cognitive Radio.Ramy A. Fathy, Ahmed A. Abdel-Hafez & Abd El-Halim A. Zekry - 2013 - International Journal of Computer and Information Technology 2 (4).
  • On Potential Cognitive Abilities in the Machine Kingdom.José Hernández-Orallo & David L. Dowe - 2013 - Minds and Machines 23 (2):179-210.
    Animals, including humans, are usually judged on what they could become, rather than what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, which can be accelerated or spoilt depending on how the environment, training or education is. The term ‘potential ability’ usually refers to how quick and likely the process of attaining the ability is. In principle, things should not be different (...)
  • The assumptions on knowledge and resources in models of rationality.Pei Wang - 2011 - International Journal of Machine Consciousness 3 (01):193-218.
    Intelligence can be understood as a form of rationality, in the sense that an intelligent system does its best when its knowledge and resources are insufficient with respect to the problems to be solved. The traditional models of rationality typically assume some form of sufficiency of knowledge and resources, so cannot solve many theoretical and practical problems in Artificial Intelligence (AI). New models based on the Assumption of Insufficient Knowledge and Resources (AIKR) cannot be obtained by minor revisions or extensions (...)
  • Expression unleashed in artificial intelligence.Ekaterina I. Tolstaya, Abhinav Gupta & Edward Hughes - 2023 - Behavioral and Brain Sciences 46:e16.
    The problem of generating generally capable agents is an important frontier in artificial intelligence (AI) research. Such agents may demonstrate open-ended, versatile, and diverse modes of expression, similar to humans. We interpret the work of Heintz & Scott-Phillips as a minimal sufficient set of socio-cognitive biases for the emergence of generally expressive AI, separate yet complementary to existing algorithms.
  • Plans or Outcomes: How Do We Attribute Intelligence to Others?Marta Kryven, Tomer D. Ullman, William Cowan & Joshua B. Tenenbaum - 2021 - Cognitive Science 45 (9):e13041.
    Humans routinely make inferences about both the contents and the workings of other minds based on observed actions. People consider what others want or know, but also how intelligent, rational, or attentive they might be. Here, we introduce a new methodology for quantitatively studying the mechanisms people use to attribute intelligence to others based on their behavior. We focus on two key judgments previously proposed in the literature: judgments based on observed outcomes (you're smart if you won the game) and (...)
  • Language and Intelligence.Carlos Montemayor - 2021 - Minds and Machines 31 (4):471-486.
    This paper explores aspects of GPT-3 that have been discussed as harbingers of artificial general intelligence and, in particular, linguistic intelligence. After introducing key features of GPT-3 and assessing its performance in the light of the conversational standards set by Alan Turing in his seminal paper from 1950, the paper elucidates the difference between clever automation and genuine linguistic intelligence. A central theme of this discussion on genuine conversational intelligence is that members of a linguistic community never merely respond “algorithmically” (...)
  • Approval-directed agency and the decision theory of Newcomb-like problems.Caspar Oesterheld - 2019 - Synthese 198 (Suppl 27):6491-6504.
    Decision theorists disagree about how instrumentally rational agents, i.e., agents trying to achieve some goal, should behave in so-called Newcomb-like problems, with the main contenders being causal and evidential decision theory. Since the main goal of artificial intelligence research is to create machines that make instrumentally rational decisions, the disagreement pertains to this field. In addition to the more philosophical question of what the right decision theory is, the goal of AI poses the question of how to implement any given (...)
  • SAT: a methodology to assess the social acceptance of innovative AI-based technologies.Carmela Occhipinti, Antonio Carnevale, Luigi Briguglio, Andrea Iannone & Piercosma Bisconti - 2022 - Journal of Information, Communication and Ethics in Society 1 (In press).
    Purpose: The purpose of this paper is to present the conceptual model of an innovative methodology (SAT) to assess the social acceptance of technology, especially focusing on artificial intelligence (AI)-based technology. Design/methodology/approach: After a review of the literature, this paper presents the main lines by which SAT stands out from current methods, namely, a four-bubble approach and a mix of qualitative and quantitative techniques that offer assessments that look at technology as a socio-technical system. Each bubble determines the social (...)
  • Intelligence at any price? A criterion for defining AI.Mihai Nadin - 2023 - AI and Society 38 (5):1813-1817.
    According to how AI has defined itself from its beginning, thinking in non-living matter, i.e., without life, is possible. The premise of symbolic AI is that, by operating on representations of reality, machines can understand it. When this assumption did not work as expected, the mathematical model of the neuron became the engine of artificial “brains.” Connectionism followed. Currently, in the context of Machine Learning success, attempts are made at integrating the symbolic and connectionist paths. There is hope that Artificial General (...)
  • Twenty Years Beyond the Turing Test: Moving Beyond the Human Judges Too.José Hernández-Orallo - 2020 - Minds and Machines 30 (4):533-562.
    In the last 20 years the Turing test has been left further behind by new developments in artificial intelligence. At the same time, however, these developments have revived some key elements of the Turing test: imitation and adversarialness. On the one hand, many generative models, such as generative adversarial networks, build imitators under an adversarial setting that strongly resembles the Turing test. The term “Turing learning” has been used for this kind of setting. On the other hand, AI benchmarks are (...)
  • Computational Functionalism for the Deep Learning Era.Ezequiel López-Rubio - 2018 - Minds and Machines 28 (4):667-688.
    Deep learning is a kind of machine learning which happens in a certain type of artificial neural networks called deep networks. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance in many intelligent tasks. This poses the question whether this performance is caused by such similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two important reasons for the success of deep learning, namely the extraction of successively (...)
  • Claims and challenges in evaluating human-level intelligent systems.John E. Laird, Robert Wray, Robert Marinier & Pat Langley - 2009 - In B. Goertzel, P. Hitzler & M. Hutter (eds.), Proceedings of the Second Conference on Artificial General Intelligence. Atlantis Press.
  • Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems.Seth D. Baum - 2020 - Philosophy and Technology 34 (1):45-63.
    This paper considers the question: In what ways can artificial intelligence assist with interdisciplinary research for addressing complex societal problems and advancing the social good? Problems such as environmental protection, public health, and emerging technology governance do not fit neatly within traditional academic disciplines and therefore require an interdisciplinary approach. However, interdisciplinary research poses large cognitive challenges for human researchers that go beyond the substantial challenges of narrow disciplinary research. The challenges include epistemic divides between disciplines, the massive bodies of (...)
  • Minimum message length and statistically consistent invariant (objective?) Bayesian probabilistic inference—from (medical) “evidence”.David L. Dowe - 2008 - Social Epistemology 22 (4):433 – 460.
    “Evidence” in the form of data collected and analysis thereof is fundamental to medicine, health and science. In this paper, we discuss the “evidence-based” aspect of evidence-based medicine in terms of statistical inference, acknowledging that this latter field of statistical inference often also goes by various near-synonymous names—such as inductive inference (amongst philosophers), econometrics (amongst economists), machine learning (amongst computer scientists) and, in more recent times, data mining (in some circles). Three central issues to this discussion of “evidence-based” are (i) (...)
  • Reframing Ethical Theory, Pedagogy, and Legislation to Bias Open Source AGI Towards Friendliness and Wisdom.John Gray Cox - 2015 - Journal of Evolution and Technology 25 (2):39-54.
    Hopes for biasing the odds towards the development of AGI that is human-friendly depend on finding and employing ethical theories and practices that can be incorporated successfully in the construction; programming and/or developmental growth; education and mature life world of future AGI. Mainstream ethical theories are ill-adapted for this purpose because of their mono-logical decision procedures which aim at “Golden rule” style principles and judgments which are objective in the sense of being universal and absolute. A much more helpful framework (...)