Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement

Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22)

Abstract

When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap: because of the causal distance between the designers of autonomous systems and those systems' eventual outcomes, the dilution of agency within the large and complex teams that design them, and the impossibility of fully predicting how they will behave once deployed, it is unclear at a conceptual level who is morally responsible for the harms they cause. I review past work on this topic, criticizing it for offering workarounds rather than philosophical answers to the conceptual problem the responsibility gap presents. The view I develop, drawing on my earlier work on vicarious moral responsibility, explains why computing professionals are ethically required to take responsibility for the systems they design, despite not being blameworthy for the harms those systems may cause.

Author

Trystan S. Goetze
Cornell University
