  • Anthropomorphizing Machines: Reality or Popular Myth? Simon Coghlan - 2024 - Minds and Machines 34 (3):1-25.
    According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. (...)
  • Personal AI, deception, and the problem of emotional bubbles. Philip Maxwell Thingbø Mlonyeni - forthcoming - AI and Society:1-12.
    Personal AI is a new type of AI companion, distinct from the prevailing forms of AI companionship. Instead of playing a narrow and well-defined social role, like friend, lover, caretaker, or colleague, with a set of pre-determined responses and behaviors, Personal AI is engineered to tailor itself to the user, including learning to mirror the user’s unique emotional language and attitudes. This paper identifies two issues with Personal AI. First, like other AI companions, it is deceptive about the presence of (...)
  • Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
  • All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
  • The expected AI as a sociocultural construct and its impact on the discourse on technology. Auli Viidalepp - 2023 - Dissertation, University of Tartu
    The thesis introduces and criticizes the discourse on technology, with a specific reference to the concept of AI. The discourse on AI is particularly saturated with reified metaphors which drive connotations and delimit understandings of technology in society. To better analyse the discourse on AI, the thesis proposes the concept of “Expected AI”, a composite signifier filled with historical and sociocultural connotations, and numerous referent objects. Relying on cultural semiotics, science and technology studies, and a diverse selection of heuristic concepts, (...)
  • Understanding Sophia? On human interaction with artificial agents. Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions of whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity, and thus quasi-personal status, to them beyond a certain level of simulation; and what the impact of an increasing dissolution of the distinction between simulated and real encounters will be. (1) To answer these questions, the paper (...)
  • Manipulation, injustice, and technology. Michael Klenk - 2022 - In Michael Klenk & Fleur Jongepier (eds.), The Philosophy of Online Manipulation. Routledge. pp. 108-131.
    This chapter defends the view that manipulated behaviour is explained by an injustice. Injustices that explain manipulated behaviour need not involve agential features such as intentionality. Therefore, technology can manipulate us, even if technological artefacts like robots, intelligent software agents, or other ‘mere tools’ lack agential features such as intentionality. The chapter thus sketches a comprehensive account of manipulated behaviour related to but distinct from existing accounts of manipulative behaviour. It then builds on that account to defend the possibility that (...)
  • Technology as Terrorism: Police Control Technologies and Drone Warfare. Jessica Wolfendale - 2021 - In Scott Robbins, Alastair Reed, Seamus Miller & Adam Henschke (eds.), Counter-Terrorism, Ethics, and Technology: Emerging Challenges At The Frontiers Of Counter-Terrorism. Springer. pp. 1-21.
    Debates about terrorism and technology often focus on the potential uses of technology by non-state terrorist actors and by states as forms of counterterrorism. Yet, little has been written about how technology shapes how we think about terrorism. In this chapter I argue that technology, and the language we use to talk about technology, constrains and shapes our understanding of the nature, scope, and impact of terrorism, particularly in relation to state terrorism. After exploring the ways in which technology shapes (...)
  • Living with AI personal assistant: an ethical appraisal. Lorraine K. C. Yeung, Cecilia S. Y. Tam, Sam S. S. Lau & Mandy M. Ko - forthcoming - AI and Society:1-16.
    Mark Coeckelbergh (Int J Soc Robot 1:217–221, 2009) argues that robot ethics should investigate what interaction with robots can do to humans rather than focusing on the robot’s moral status. We should ask what robots do to our sociality and whether human–robot interaction can contribute to the human good and human flourishing. This paper extends Coeckelbergh’s call and investigates what it means to live with disembodied AI-powered agents. We address the following question: Can the human–AI interaction contribute to our moral (...)
  • Can an AI-carebot be filial? Reflections from Confucian ethics. Kathryn Muyskens, Yonghui Ma & Michael Dunn - forthcoming - Nursing Ethics.
    This article discusses the application of artificially intelligent robots within eldercare and explores a series of ethical considerations, including the challenges that AI (Artificial Intelligence) technology poses to traditional Chinese Confucian filial piety. From the perspective of Confucian ethics, the paper argues that robots cannot adequately fulfill duties of care. Due to their detachment from personal relationships and interactions, the “emotions” of AI robots are merely performative reactions in different situations, rather than actual emotional abilities. No matter how “humanized” robots (...)
  • Robot Technology for the Elderly and the Value of Veracity: Disruptive Technology or Reinvigorating Entrenched Principles? Seppe Segers - 2022 - Science and Engineering Ethics 28 (6):1-14.
    The implementation of care robotics in care settings is identified by some authors as a disruptive innovation, in the sense that it will upend the praxis of care. It is an open ethical question whether this alleged disruption will also have a transformative impact on established ethical concepts and principles. One prevalent worry is that the implementation of care robots will turn deception into a routine component of elderly care, at least to the extent that these robots will function as (...)
  • Criticizing Danaher’s Approach to Superficial State Deception. Maciej Musiał - 2023 - Science and Engineering Ethics 29 (5):1-15.
    If existing or future robots appear to have some capacity, state or property, how can we determine whether they truly have it or whether we are deceived into believing so? John Danaher addresses this question by formulating his approach to what he refers to as superficial state deception (SSD) from the perspective of his theory termed ethical behaviourism (EB), which was initially designed to determine the moral status of robots. In summary, Danaher believes that focusing on behaviour is sufficient to (...)