The Point of Blaming AI Systems

Journal of Ethics and Social Philosophy 27 (2) (2024)

Abstract

As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense to extend our blaming practices to these systems. In the paper, we argue for the admittedly surprising thesis that this question should be answered in the affirmative: contrary to what one might initially think, it can make a lot of sense to blame AI systems, since, as we furthermore argue, many of the important functions that are fulfilled by blaming humans can also be served by blaming AI systems. The paper concludes that this result gives us a good pro tanto reason to extend our blame practices to AI systems.

Author Profiles

Hannah Altehenger
Universität Konstanz
Leonhard Menges
University of Salzburg
