The value of responsibility gaps in algorithmic decision-making

Ethics and Information Technology 25 (1):1-11 (2023)

Abstract

Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to suggest that responsibility gaps should sometimes be welcomed, our argument is novel. Others have argued that responsibility gaps should sometimes be welcomed because they can reduce or eliminate the psychological burdens caused by tragic moral choice-situations. By contrast, our argument explains why responsibility gaps should sometimes be welcomed even in the absence of tragic moral choice-situations, and even in the absence of psychological burdens.

Author Profiles

Jakob Mainz
Aalborg University (PhD)
Lauritz Munch
Aarhus University
