  • Artificial Forms of Life. Sebastian Sunday Grève - 2023 - Philosophies 8 (5).
    The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical kind of possibility (...)
  • Expressive Avatars: Vitality in Virtual Worlds. David Ekdahl & Lucy Osler - 2023 - Philosophy and Technology 36 (2):1-28.
    Critics have argued that human-controlled avatar interactions fail to facilitate the kinds of expressivity and social understanding afforded by our physical bodies. We identify three claims meant to justify the supposed expressive limits of avatar interactions compared to our physical interactions. First, “The Limited Expressivity Claim”: avatars have a more limited expressive range than our physical bodies. Second, “The Inputted Expressivity Claim”: any expressive avatarial behaviour must be deliberately inputted by the user. Third, “The Decoding Claim”: users must infer or (...)
  • Could a robot feel pain? Amanda Sharkey - forthcoming - AI and Society:1-11.
    Questions about robots feeling pain are important because the experience of pain implies sentience and the ability to suffer. Pain is not the same as nociception, a reflex response to an aversive stimulus. The experience of pain in others has to be inferred. Danaher’s (Sci Eng Ethics 26(4):2023–2049, 2020. https://doi.org/10.1007/s11948-019-00119-x) ‘ethical behaviourist’ account claims that if a robot behaves in the same way as an animal that is recognised to have moral status, then its moral status should also be assumed. (...)
  • Reconfiguring the alterity relation: the role of communication in interactions with social robots and chatbots. Dakota Root - forthcoming - AI and Society:1-12.
    Don Ihde’s alterity relation focuses on the quasi-otherness of dynamic technologies that interact with humans. The alterity relation is one means to study relations between humans and artificial intelligence (AI) systems. However, research on alterity relations has not defined the difference between playing with a toy, using a computer, and interacting with a social robot or chatbot. We suggest that Ihde’s quasi-other concept fails to account for the interactivity, autonomy, and adaptability of social robots and chatbots, which more closely approach (...)
  • Disrupted self, therapy, and the limits of conversational AI. Dina Babushkina & Bas de Boer - forthcoming - Philosophical Psychology.
    Conversational agents (CA) are thought to be promising for psychotherapy because they give the impression of being able to engage in conversations with human users. However, given the high risk for therapy patients who are already in a vulnerable situation, there is a need to investigate the extent to which CA are able to contribute to therapy goals and to discuss CA’s limitations, especially in complex cases. In this paper, we understand psychotherapy as a way of dealing with existential situations (...)