A Comparative Defense of Self-Initiated Prospective Moral Answerability for Autonomous Robot Harm

Science and Engineering Ethics 29 (4):1-26 (2023)


As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could willingly make themselves answerable for whatever events ensue, even if those events stem from the robot’s autonomous decision(s). This blank check solution was originally proposed in the context of automated warfare (Champagne & Tonkens, 2015), but we extend it to cover all robots. We argue that, because moral answerability under the blank check proposal is accepted voluntarily and before bad outcomes are known, it proves superior to alternative ways of assigning blame. We end by highlighting how, in addition to being just, this self-initiated and prospective moral answerability for robot harm provides deterrence that the four other stances cannot match.

Author Profiles

Marc Champagne
Kwantlen Polytechnic University
Ryan Tonkens
Dalhousie University

