Playing the Blame Game with Robots

In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI '21 Companion). New York, NY, USA: ACM, 2021.

Abstract

Recent research shows, somewhat astonishingly, that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. We hypothesized that people ascribe moral blame to AI systems because they consider them capable of entertaining inculpating mental states (what the law calls mens rea). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people's willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system's user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness, and (iii) that the latter, in turn, depends on the perceived "cognitive" capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.

Author Profiles

Michael T. Stuart
University of York
Markus Kneer
University of Graz
