  1. A Formal Account of AI Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness. Piercosma Bisconti, Letizia Aquilino, Antonella Marchetti & Daniele Nardi - forthcoming - AIES '24: Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society.
    This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to (...)
  2. The Psychological Implications of Companion Robots: A Theoretical Framework and an Experimental Setup. Nicoletta Massa, Piercosma Bisconti & Daniele Nardi - 2022 - International Journal of Social Robotics (Online): 1-14.
    In this paper we present a theoretical framework for understanding the psychological mechanisms underlying human-Companion Robot interactions. We first take the case of Sexual Robotics, where the psychological dynamics are most evident, and then extend the discussion to Companion Robotics in general. We begin by discussing the differences between a sex-toy and a Sexual Robot, concluding that the latter may establish a collusive and confirmative dynamic with the user. We claim that this collusiveness leads to two main consequences, (...)
  3. Companion robots: the hallucinatory danger of human-robot interactions. Piercosma Bisconti & Daniele Nardi - 2018 - In Piercosma Bisconti & Daniele Nardi (eds.), AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. pp. 17-22.
    The advent of so-called Companion Robots is raising many ethical concerns among scholars and in public opinion. Focusing mainly on robots caring for the elderly, in this paper we analyze these concerns to distinguish which are directly ascribable to robotics and which are instead preexistent. One of these is the “deception objection”, namely the ethical unacceptability of deceiving the user about the simulated nature of the robot’s behaviors. We argue for the inconsistency of this charge, as currently formulated. (...)