References
  • Anthropomorphizing Machines: Reality or Popular Myth? Simon Coghlan - 2024 - Minds and Machines 34 (3):1-25.
    According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. (...)
  • Real feeling and fictional time in human-AI interactions. Joel Krueger & Tom Roberts - forthcoming - Topoi.
    As technology improves, artificial systems are increasingly able to behave in human-like ways: holding a conversation; providing information, advice, and support; or taking on the role of therapist, teacher, or counsellor. This enhanced behavioural complexity, we argue, encourages deeper forms of affective engagement on the part of the human user, with the artificial agent helping to stabilise, subdue, prolong, or intensify a person's emotional condition. Here, we defend a fictionalist account of human/AI interaction, according to which these encounters involve an (...)
  • Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication. Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)
  • The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  • Why Indirect Harms do not Support Social Robot Rights. Paula Sweeney - 2022 - Minds and Machines 32 (4):735-749.
    There is growing evidence to support the claim that we react differently to robots than we do to other objects. In particular, we react differently to robots with which we have some form of social interaction. In this paper I critically assess the claim that, due to our tendency to become emotionally attached to social robots, permitting their harm may be damaging for society and as such we should consider introducing legislation to grant social robots rights and protect them from (...)
  • Fictionalism about Chatbots. Fintan Mallory - 2023 - Ergo: An Open Access Journal of Philosophy 10.
    According to widely accepted views in metasemantics, the outputs of chatbots and other artificial text generators should be meaningless. They aren’t produced with communicative intentions and the systems producing them are not following linguistic conventions. Nevertheless, chatbots have assumed roles in customer service and healthcare, they are spreading information and disinformation and, in some cases, it may be more rational to trust the outputs of bots than those of our fellow human beings. To account for the epistemic role of chatbots (...)