  • Building Thinking Machines by Solving Animal Cognition Tasks. Matthew Crosby - 2020 - Minds and Machines 30 (4):589-615.
    In ‘Computing Machinery and Intelligence’, Turing, sceptical of the question ‘Can machines think?’, quickly replaces it with an experimentally verifiable test: the imitation game. I suggest that for such a move to be successful the test needs to be relevant, expansive, solvable by exemplars, unpredictable, and lead to actionable research. The Imitation Game is only partially successful in this regard and its reliance on language, whilst insightful for partially solving the problem, has put AI progress on the wrong foot, prescribing (...)
  • A truly human interface: interacting face-to-face with someone whose words are determined by a computer program. Kevin Corti & Alex Gillespie - 2015 - Frontiers in Psychology 6:145265.
    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents ("echoborgs") capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. (...)
  • Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour. Cameron Buckner - 2023 - British Journal for the Philosophy of Science 74 (3):681-712.
    The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the (...)
  • Subjectness of Intelligence: Quantum-Theoretic Analysis and Ethical Perspective. Ilya A. Surov & Elena N. Melnikova - forthcoming - Foundations of Science.
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • Anthropomorphism in AI. Arleen Salles, Kathinka Evers & Michele Farisco - 2020 - American Journal of Bioethics Neuroscience 11 (2):88-95.
    AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized. The general public’s anthropomorphic attitudes and some of their ethical consequences have been widely discussed in (...)
  • Truth, Lies and New Weapons Technologies: Prospects for Jus in Silico? Esther D. Reed - 2022 - Studies in Christian Ethics 35 (1):68-86.
    This article tests the proposition that new weapons technology requires Christian ethics to dispense with the just war tradition (JWT) and argues for its development rather than dissolution. Those working in the JWT should be under no illusions, however, that new weapons technologies could (or do already) represent threats to the doing of justice in the theatre of war. These threats include weapons systems that deliver indiscriminate, disproportionate or otherwise unjust outcomes, or that are operated within (quasi-)legal frameworks marked by (...)
  • Rethinking Turing’s Test and the Philosophical Implications. Diane Proudfoot - 2020 - Minds and Machines 30 (4):487-512.
    In the 70 years since Alan Turing’s ‘Computing Machinery and Intelligence’ appeared in Mind, there have been two widely-accepted interpretations of the Turing test: the canonical behaviourist interpretation and the rival inductive or epistemic interpretation. These readings are based on Turing’s Mind paper; few seem aware that Turing described two other versions of the imitation game. I have argued that both readings are inconsistent with Turing’s 1948 and 1952 statements about intelligence, and fail to explain the design of his game. (...)
  • An Analysis of Turing’s Criterion for ‘Thinking’. Diane Proudfoot - 2022 - Philosophies 7 (6):124.
    In this paper I argue that Turing proposed a new approach to the concept of thinking, based on his claim that intelligence is an ‘emotional concept’; and that the response-dependence interpretation of Turing’s ‘criterion for “thinking”’ is a better fit with his writings than orthodox interpretations. The aim of this paper is to clarify the response-dependence interpretation, by addressing such questions as: What did Turing mean by the expression ‘emotional’? Is Turing’s criterion subjective? Are ‘emotional’ judgements decided by social consensus? (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence. Miranda Mowbray - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):517-528.
    The suggestion has been made that future advanced artificial intelligence (AI) that passes some consciousness-related criteria should be treated as having moral status, and therefore, humans would have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness; and argues against the moral status for AI on the grounds that human malware authors may design malware to fake consciousness. In fact, the article warns that (...)
  • Does Computation Reveal Machine Cognition? Prakash Mondal - 2014 - Biosemiotics 7 (1):97-110.
    This paper seeks to understand machine cognition. The nature of machine cognition has been shrouded in incomprehensibility. We have often encountered familiar arguments in cognitive science that human cognition is still faintly understood. This paper will argue that machine cognition is far less understood than even human cognition despite the fact that a lot about computer architecture and computational operations is known. Even if there have been putative claims about the transparency of the notion of machine computations, these claims do (...)
  • Twenty Years Beyond the Turing Test: Moving Beyond the Human Judges Too. José Hernández-Orallo - 2020 - Minds and Machines 30 (4):533-562.
    In the last 20 years the Turing test has been left further behind by new developments in artificial intelligence. At the same time, however, these developments have revived some key elements of the Turing test: imitation and adversarialness. On the one hand, many generative models, such as generative adversarial networks, build imitators under an adversarial setting that strongly resembles the Turing test. The term “Turing learning” has been used for this kind of setting. On the other hand, AI benchmarks are (...)
  • Computer models solving intelligence test problems: Progress and implications. José Hernández-Orallo, Fernando Martínez-Plumed, Ute Schmid, Michael Siebers & David L. Dowe - 2016 - Artificial Intelligence 230 (C):74-107.
  • The Turing Test is a Thought Experiment. Bernardo Gonçalves - 2023 - Minds and Machines 33 (1):1-31.
    The Turing test has been studied and run as a controlled experiment and found to be underspecified and poorly designed. On the other hand, it has been defended and still attracts interest as a test for true artificial intelligence (AI). Scientists and philosophers regret the test’s current status, acknowledging that the situation is at odds with the intellectual standards of Turing’s works. This article refers to this as the Turing Test Dilemma, following the observation that the test has been under (...)
  • Galilean resonances: the role of experiment in Turing’s construction of machine intelligence. Bernardo Gonçalves - forthcoming - Annals of Science.
    In 1950, Alan Turing proposed his iconic imitation game, calling it a ‘test’, an ‘experiment’, and the ‘only really satisfactory support’ for his view that machines can think. Following Turing’s rhetoric, the ‘Turing test’ has been widely received as a kind of crucial experiment to determine machine intelligence. In later sources, however, Turing showed a milder attitude towards what he called his ‘imitation tests’. In 1948, Turing referred to the persuasive power of ‘the actual production of machines’ rather than (...)
  • Anthropomorphism in AI: Hype and Fallacy. Adriana Placani - 2024 - AI and Ethics.
    This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust. By focusing on these two dimensions of anthropomorphism in AI, the essay highlights negative (...)
  • Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication. Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)