  • Generative AI and human–robot interaction: implications and future agenda for business, society and ethics. Bojan Obrenovic, Xiao Gu, Guoyu Wang, Danijela Godinic & Ilimdorjon Jakhongirov - forthcoming - AI and Society:1-14.
    The revolution of artificial intelligence (AI), particularly generative AI, and its implications for human–robot interaction (HRI) opened up the debate on crucial regulatory, business, societal, and ethical considerations. This paper explores essential issues from the anthropomorphic perspective, examining the complex interplay between humans and AI models in societal and corporate contexts. We provided a comprehensive review of existing literature on HRI, with a special emphasis on the impact of generative models such as ChatGPT. The scientometric study posits that due to (...)
  • ChatGPT: deconstructing the debate and moving it forward. Mark Coeckelbergh & David J. Gunkel - 2024 - AI and Society 39 (5):2221-2231.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two phased approach of deconstruction and (...)
  • Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
  • A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or DeepMind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature (...)
  • We need to talk about deception in social robotics! Amanda Sharkey & Noel Sharkey - 2020 - Ethics and Information Technology 23 (3):309-316.
    Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when (...)
  • Criticizing Danaher’s Approach to Superficial State Deception. Maciej Musiał - 2023 - Science and Engineering Ethics 29 (5):1-15.
    If existing or future robots appear to have some capacity, state or property, how can we determine whether they truly have it or whether we are deceived into believing so? John Danaher addresses this question by formulating his approach to what he refers to as superficial state deception (SSD) from the perspective of his theory termed ethical behaviourism (EB), which was initially designed to determine the moral status of robots. In summary, Danaher believes that focusing on behaviour is sufficient to (...)
  • Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. Particularly, this (...)
  • Investigating user perceptions of commercial virtual assistants: A qualitative study. Leilasadat Mirghaderi, Monika Sziron & Elisabeth Hildt - 2022 - Frontiers in Psychology 13.
    As commercial virtual assistants become an integrated part of almost every smart device that we use on a daily basis, including but not limited to smartphones, speakers, personal computers, watches, TVs, and TV sticks, there are pressing questions that call for the study of how participants perceive commercial virtual assistants and what relational roles they assign to them. Furthermore, it is crucial to study which characteristics of commercial virtual assistants are perceived as important for establishing affective interaction with commercial virtual (...)