Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement

Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22) (forthcoming)
When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap: because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, and the impossibility of fully predicting how autonomous systems will behave once deployed, it is unclear at a conceptual level who is morally responsible for harms caused by autonomous systems. I review past work on this topic, criticizing prior work for suggesting workarounds rather than philosophical answers to the conceptual problem presented by the responsibility gap. The view I develop, drawing on my earlier work on vicarious moral responsibility, explains why computing professionals are ethically required to take responsibility for the systems they design, despite not being blameworthy for the harms these systems may cause.
Archival date: 2022-05-10