  • Does ChatGPT have semantic understanding? Lisa Miracchi Titus - 2024 - Cognitive Systems Research 83 (101174):1-13.
    Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to (...)
  • The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind. Aaron Sloman - 1978 - Hassocks UK: Harvester Press.
    Extract from Hofstadter's review in the Bulletin of the American Mathematical Society (http://www.ams.org/journals/bull/1980-02-02/S0273-0979-1980-14752-7/S0273-0979-1980-14752-7.pdf): "Aaron Sloman is a man who is convinced that most philosophers and many other students of mind are in dire need of being convinced that there has been a revolution in that field happening right under their noses, and that they had better quickly inform themselves. The revolution is called "Artificial Intelligence" (AI), and Sloman attempts to impart to others the "enlightenment" which he clearly regrets not having (...)
  • Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace of progress in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 of the paper I (...)
  • What an Algorithm Is. Robin K. Hill - 2016 - Philosophy and Technology 29 (1):35-59.
    The algorithm, a building block of computer science, is defined from an intuitive and pragmatic point of view, through a methodological lens of philosophy rather than that of formal computation. The treatment extracts properties of abstraction, control, structure, finiteness, effective mechanism, and imperativity, and intentional aspects of goal and preconditions. The focus on the algorithm as a robust conceptual object obviates issues of correctness and minimality. Neither the articulation of an algorithm nor the dynamic process constitute the algorithm itself. Analysis (...)
  • Therapeutic Chatbots as Cognitive-Affective Artifacts. J. P. Grodniewicz & Mateusz Hohol - 2024 - Topoi 43 (3):795-807.
    Conversational Artificial Intelligence (CAI) systems (also known as AI “chatbots”) are among the most promising examples of the use of technology in mental health care. With already millions of users worldwide, CAI is likely to change the landscape of psychological help. Most researchers agree that existing CAIs are not “digital therapists” and using them is not a substitute for psychotherapy delivered by a human. But if they are not therapists, what are they, and what role can they play in mental (...)
  • Understanding Sophia? On human interaction with artificial agents. Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions of whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; and what the impact will be of an increasing dissolution of the distinction between simulated and real encounters. (1) To answer these questions, the paper (...)
  • Three myths of computer science. James H. Moor - 1978 - British Journal for the Philosophy of Science 29 (3):213-222.
  • The Turing Test is a Thought Experiment. Bernardo Gonçalves - 2023 - Minds and Machines 33 (1):1-31.
    The Turing test has been studied and run as a controlled experiment and found to be underspecified and poorly designed. On the other hand, it has been defended and still attracts interest as a test for true artificial intelligence (AI). Scientists and philosophers regret the test’s current status, acknowledging that the situation is at odds with the intellectual standards of Turing’s works. This article refers to this as the Turing Test Dilemma, following the observation that the test has been under (...)
  • The Computer Revolution in Philosophy. Martin Atkinson & Aaron Sloman - 1980 - Philosophical Quarterly 30 (119):178.
  • Turing test: 50 years later. Ayse Pinar Saygin, Ilyas Cicekli & Varol Akman - 2000 - Minds and Machines 10 (4):463-518.
    The Turing Test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This paper is a review of the past 50 years of the Turing Test. Philosophical debates, practical developments and repercussions in related disciplines are all covered. We discuss Turing's ideas in detail and present the important comments that have been made on them. Within this context, behaviorism, consciousness, the 'other minds' problem, and similar topics in philosophy of mind are discussed. We (...)
  • Capability Sensitive Design for Health and Wellbeing Technologies. Naomi Jacobs - 2020 - Science and Engineering Ethics 26 (6):3363-3391.
    This article presents the framework Capability Sensitive Design (CSD), which consists of merging the design methodology Value Sensitive Design (VSD) with Martha Nussbaum's capability theory. CSD aims to normatively assess technology design in general, and technology design for health and wellbeing in particular. Unique to CSD is its ability to account for human diversity and to counter (structural) injustices that manifest in technology design. The basic framework of CSD is demonstrated by applying it to the hypothetical design case of a (...)
  • The Turing test. Graham Oppy & D. Dowe - 2003 - Stanford Encyclopedia of Philosophy.
    This paper provides a survey of philosophical discussion of "the Turing Test". In particular, it provides a very careful and thorough discussion of the famous 1950 paper that was published in Mind.
  • Computational Functionalism for the Deep Learning Era. Ezequiel López-Rubio - 2018 - Minds and Machines 28 (4):667-688.
    Deep learning is a kind of machine learning which happens in a certain type of artificial neural networks called deep networks. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance in many intelligent tasks. This poses the question whether this performance is caused by such similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two important reasons for the success of deep learning, namely the extraction of successively (...)
  • The Turing test: The first fifty years. Robert M. French - 2000 - Trends in Cognitive Sciences 4 (3):115-121.
    The Turing Test, originally proposed as a simple operational definition of intelligence, has now been with us for exactly half a century. It is safe to say that no other single article in computer science, and few other articles in science in general, have generated so much discussion. The present article chronicles the comments and controversy surrounding Turing's classic article from its publication to the present. The changing perception of the Turing Test over the last fifty years has paralleled the (...)
  • Cognitive science in the era of artificial intelligence: A roadmap for reverse-engineering the infant language-learner. Emmanuel Dupoux - 2018 - Cognition 173 (C):43-59.
  • A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  • Chatbot breakthrough in the 2020s? An ethical reflection on the trend of automated consultations in health care. Jaana Parviainen & Juho Rantala - 2022 - Medicine, Health Care and Philosophy 25 (1):61-71.
    Many experts have emphasised that chatbots are not sufficiently mature to be able to technically diagnose patient conditions or replace the judgements of health professionals. The COVID-19 pandemic, however, has significantly increased the utilisation of health-oriented chatbots, for instance, as a conversational interface to answer questions, recommend care options, check symptoms and complete tasks such as booking appointments. In this paper, we take a proactive approach and consider how the emergence of task-oriented chatbots as partially automated consulting systems can influence (...)
  • Can robots make good models of biological behaviour? Barbara Webb - 2001 - Behavioral and Brain Sciences 24 (6):1033-1050.
    How should biological behaviour be modelled? A relatively new approach is to investigate problems in neuroethology by building physical robot models of biological sensorimotor systems. The explication and justification of this approach are here placed within a framework for describing and comparing models in the behavioural and biological sciences. First, simulation models – the representation of a hypothesis about a target system – are distinguished from several other relationships also termed “modelling” in discussions of scientific explanation. Seven dimensions on which (...)
  • How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room. William J. Rapaport - 2006 - Minds and Machines 16 (4):381-436.
    A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, (...)
  • Anthropomorphizing Machines: Reality or Popular Myth? Simon Coghlan - 2024 - Minds and Machines 34 (3):1-25.
    According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. (...)
  • Artificial Speech and Its Authors. Philip J. Nickel - 2013 - Minds and Machines 23 (4):489-502.
    Some of the systems used in natural language generation (NLG), a branch of applied computational linguistics, have the capacity to create or assemble somewhat original messages adapted to new contexts. In this paper, taking Bernard Williams’ account of assertion by machines as a starting point, I argue that NLG systems meet the criteria for being speech actants to a substantial degree. They are capable of authoring original messages, and can even simulate illocutionary force and speaker meaning. Background intelligence embedded in (...)
  • How to pass a Turing test: Syntactic semantics, natural-language understanding, and first-person cognition. William J. Rapaport - 2000 - Journal of Logic, Language, and Information 9 (4):467-490.
    I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax – a study of relations among symbols (including meanings) – and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling (...)
  • Editors' Introduction: Miscommunication. Patrick G. T. Healey, Jan P. de Ruiter & Gregory J. Mills - 2018 - Topics in Cognitive Science 10 (2):264-278.
    Healey et al. introduce the special issue with a brief overview of work on communication in the Cognitive Sciences and some of the historical and conceptual influences that have marginalized the study of miscommunication. Drawing on more recent work in Cognitive Science and Conversation Analysis they argue that miscommunication is in fact a highly structured, ubiquitous phenomenon that is fundamental to human interaction.
  • The right stuff. J. Christopher Maloney - 1987 - Synthese 70 (March):349-72.
  • The Questioning Turing Test. Nicola Damassino - 2020 - Minds and Machines 30 (4):563-587.
    The Turing Test is best regarded as a model to test for intelligence, where an entity’s intelligence is inferred from its ability to be attributed with ‘human-likeness’ during a text-based conversation. The problem with this model, however, is that it does not care if or how well an entity produces a meaningful conversation, as long as its interactions are humanlike enough. As a consequence, the TT attracts projects that concentrate on how best to fool the judges. In light of this, (...)
  • Optimizing Students’ Mental Health and Academic Performance: AI-Enhanced Life Crafting. Izaak Dekker, Elisabeth M. De Jong, Michaéla C. Schippers, Monique De Bruijn-Smolders, Andreas Alexiou & Bas Giesbers - 2020 - Frontiers in Psychology 11:535008.
    One in three university students experiences mental health problems during their study. A similar percentage leaves higher education without obtaining the degree for which they enrolled. Research suggests that both mental health problems and academic underperformance could be caused by students lacking control and purpose while they are adjusting to tertiary education. Currently, universities are not designed to cater to all the personal needs and mental health problems of large numbers of students at the start of their studies. Within the (...)
  • A narrative review of the active ingredients in psychotherapy delivered by conversational agents. Arthur Herbener, Michal Klincewicz & Malene Flensborg Damholdt - 2024 - Computers in Human Behavior Reports 14.
    The present narrative review seeks to unravel where we are now, and where we need to go to delineate the active ingredients in psychotherapy delivered by conversational agents (e.g., chatbots). While psychotherapy delivered by conversational agents has shown promising effectiveness for depression, anxiety, and psychological distress across several randomized controlled trials, little emphasis has been placed on the therapeutic processes in these interventions. The theoretical framework of this narrative review is grounded in prominent perspectives on the active ingredients in psychotherapy. (...)
  • The Computational Search for Unity: Synthesis in Generative AI. M. Beatrice Fazi - 2024 - Journal of Continental Philosophy 5 (1):31-56.
    The outputs of generative artificial intelligence (generative AI) are often called “synthetic” to imply that they are not natural but artificial. Against that use of the term, this article focuses on a different denotation of synthesis, stressing the unifying and compositional aspects of anything synthetic. The case of large language models (LLMs) is used as an example to address synthesis philosophically alongside notions of representation in contemporary computational systems. It is argued that synthesis in generative AI should be understood as (...)
  • The computational therapeutic: exploring Weizenbaum’s ELIZA as a history of the present. Caroline Bassett - 2019 - AI and Society 34 (4):803-812.
    This paper explores the history of ELIZA, a computer programme approximating a Rogerian therapist, developed by Joseph Weizenbaum at MIT in the 1970s, as an early AI experiment. ELIZA’s reception provoked Weizenbaum to re-appraise the relationship between ‘computer power and human reason’ and to attack the ‘powerful delusional thinking’ about computers and their intelligence that he understood to be widespread in the general public and also amongst experts. The root issue for Weizenbaum was whether human thought could be ‘entirely computable’. (...)
  • Revisiting Human-Agent Communication: The Importance of Joint Co-construction and Understanding Mental States. Stefan Kopp & Nicole Krämer - 2021 - Frontiers in Psychology 12:580955.
    The study of human-human communication and the development of computational models for human-agent communication have diverged significantly throughout the last decade. Yet, despite frequently made claims of “super-human performance” in, e.g., speech recognition or image processing, so far, no system is able to lead a half-decent coherent conversation with a human. In this paper, we argue that we must start to re-consider the hallmarks of cooperative communication and the core capabilities that we have developed for it, and which conversational agents (...)
  • Buber, educational technology, and the expansion of dialogic space. Rupert Wegerif & Louis Major - 2019 - AI and Society 34 (1):109-119.
    Buber’s distinction between the ‘I-It’ mode and the ‘I-Thou’ mode is seminal for dialogic education. While Buber introduces the idea of dialogic space, an idea which has proved useful for the analysis of dialogic education with technology, his account fails to engage adequately with the role of technology. This paper offers an introduction to the significance of the I-It/I-Thou duality of technology in relation with opening dialogic space. This is followed by a short schematic history of educational technology which reveals (...)
  • Relationalism through Social Robotics. Raya A. Jones - 2013 - Journal for the Theory of Social Behaviour 43 (4):405-424.
    Social robotics is a rapidly developing industry-oriented area of research, intent on making robots in social roles commonplace in the near future. This has led to rising interest in the dynamics as well as ethics of human-robot relationships, described here as a nascent relational turn. A contrast is drawn with the 1990s’ paradigm shift associated with relational-self themes in social psychology. Constructions of the human-robot relationship reproduce the “I-You-Me” dominant model of theorising about the self with biases that (as in (...)
  • Tracing the Seminal Notion of Accountability Across the Garfinkelian Œuvre. Timothy Koschmann - 2019 - Human Studies 42 (2):239-252.
    The notion of accountability was introduced by Harold Garfinkel in the opening pages of Studies in Ethnomethodology as part of his ‘central recommendation’ for sociological inquiry. Though the term itself first appears in the Studies, it will be argued that elements of the idea were already discernible in earlier writings. The current article traces the development of the notion from its early emergence in the proto-ethnomethodological period, through its elaboration in the Studies, and, finally, to its refinement in certain later (...)
  • Is it possible to grow an I–Thou relation with an artificial agent? A dialogistic perspective. Stefan Trausan-Matu - 2019 - AI and Society 34 (1):9-17.
    The paper analyzes if it is possible to grow an I–Thou relation in the sense of Martin Buber with an artificial, conversational agent developed with Natural Language Processing techniques. The requirements for such an agent, the possible approaches for the implementation, and their limitations are discussed. The relation of the achievement of this goal with the Turing test is emphasized. Novel perspectives on the I–Thou and I–It relations are introduced according to the sociocultural paradigm and Mikhail Bakhtin’s dialogism, polyphony inter-animation, (...)
  • A truly human interface: interacting face-to-face with someone whose words are determined by a computer program. Kevin Corti & Alex Gillespie - 2015 - Frontiers in Psychology 6:145265.
    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents ("echoborgs") capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. (...)
  • Entrainment and musicality in the human system interface. Satinder P. Gill - 2007 - AI and Society 21 (4):567-605.
    What constitutes our human capacity to engage and be in the same frame of mind as another human? How do we come to share a sense of what ‘looks good’ and what ‘makes sense’? How do we handle differences and come to coexist with them? How do we come to feel that we understand what someone else is experiencing? How are we able to walk in silence with someone familiar and be sharing a peaceful space? All of these aspects are (...)
  • The cognitive development of machine consciousness implementations. Raúl Arrabales, Agapito Ledezma & Araceli Sanchis - 2010 - International Journal of Machine Consciousness 2 (2):213-225.
  • The Oxford Handbook of Causal Reasoning. Michael Waldmann (ed.) - 2017 - Oxford, England: Oxford University Press.
    Causal reasoning is one of our most central cognitive competencies, enabling us to adapt to our world. Causal knowledge allows us to predict future events, or diagnose the causes of observed facts. We plan actions and solve problems using knowledge about cause-effect relations. Without our ability to discover and empirically test causal theories, we would not have made progress in various empirical sciences. In the past decades, the important role of causal knowledge has been discovered in many areas of cognitive (...)
  • Aaron Sloman, The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind [Review]. Stephen P. Stich - 1981 - Philosophical Review 90 (2):300-307.
  • Imitation and Large Language Models. Éloïse Boisseau - 2024 - Minds and Machines 34 (4):1-24.
    The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular—for what is the focus of this paper—the cognitive capacities of large language models (LLMs). The question whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see this concept of imitation being mobilised here for sometimes contradictory purposes. After illustrating and discussing how this (...)
  • Cooperation in Online Conversations: The Response Times as a Window Into the Cognition of Language Processing. Baptiste Jacquet, Jean Baratgin & Frank Jamet - 2019 - Frontiers in Psychology 10.
  • There is no general AI. Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
  • Representing practice in cognitive science. Lucy A. Suchman - 1988 - Human Studies 11 (2-3):305-325.
  • Philosophy of Artificial Intelligence: A Course Outline. William J. Rapaport - 1986 - Teaching Philosophy 9 (2):103-120.
    In the Fall of 1983, I offered a junior/senior-level course in Philosophy of Artificial Intelligence, in the Department of Philosophy at SUNY Fredonia, after returning there from a year’s leave to study and do research in computer science and artificial intelligence (AI) at SUNY Buffalo. Of the 30 students enrolled, most were computer science majors, about a third had no computer background, and only a handful had studied any philosophy. (I might note that enrollments have subsequently increased in the Philosophy Department’s (...)
  • Undecidability in the imitation game. Y. Sato & T. Ikegami - 2004 - Minds and Machines 14 (2):133-43.
    This paper considers undecidability in the imitation game, the so-called Turing Test. In the Turing Test, a human, a machine, and an interrogator are the players of the game. In our model of the Turing Test, the machine and the interrogator are formalized as Turing machines, allowing us to derive several impossibility results concerning the capabilities of the interrogator. The key issue is that the validity of the Turing test is not attributed to the capability of human or machine, but (...)
  • The Three Social Dimensions of Chatbot Technology. Mauricio Figueroa-Torres - 2024 - Philosophy and Technology 38 (1):1-23.
    The development and deployment of chatbot technology, while spanning decades and employing different techniques, require innovative frameworks to understand and interrogate their functionality and implications. A mere technocentric account of the evolution of chatbot technology does not fully illuminate how conversational systems are embedded in societal dynamics. This study presents a structured examination of chatbots across three societal dimensions, highlighting their roles as objects of scientific research, commercial instruments, and agents of intimate interaction. Through furnishing a dimensional framework for the (...)
  • Artificial intelligence assistants and risk: framing a connectivity risk narrative. Martin Cunneen, Martin Mullins & Finbarr Murphy - 2020 - AI and Society 35 (3):625-634.
    Our social relations are changing, we are now not just talking to each other, but we are now also talking to artificial intelligence (AI) assistants. We claim AI assistants present a new form of digital connectivity risk and a key aspect of this risk phenomenon is related to user risk awareness (or lack of) regarding AI assistant functionality. AI assistants present a significant societal risk phenomenon amplified by the global scale of the products and the increasing use in healthcare, education, (...)
  • Psychotherapy and Artificial Intelligence: A Proposal for Alignment. Flávio Luis de Mello & Sebastião Alves de Souza - 2019 - Frontiers in Psychology 10.
  • A dualist-interactionist perspective. John C. Eccles - 1980 - Behavioral and Brain Sciences 3 (3):430-431.
  • Toward the search for the perfect blade runner: a large-scale, international assessment of a test that screens for “humanness sensitivity”. Robert Epstein, Maria Bordyug, Ya-Han Chen, Yijing Chen, Anna Ginther, Gina Kirkish & Holly Stead - 2023 - AI and Society 38 (4):1543-1563.
    We introduce a construct called “humanness sensitivity,” which we define as the ability to recognize uniquely human characteristics. To evaluate the construct, we used a “concurrent study design” to conduct an internet-based study with a convenience sample of 42,063 people from 88 countries (52.4% from the U.S. and Canada). We sought to determine to what extent people could identify subtle characteristics of human behavior, thinking, emotions, and social relationships which currently distinguish humans from non-human entities such as bots. Many people were (...)