Against the Double Standard Argument in AI Ethics

Philosophy and Technology 37 (1):1-5 (2024)

Abstract

In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers, and that the concern that opaque AI is insufficiently transparent is therefore mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in the way that humans are. Rather, the concern is that the way in which opaque AI is opaque is very different from the way in which humans are opaque. What matters is the degree to which the opaque processes of a class of decision makers are stable, uniform, and safe. These processes have those features to a higher degree in humans than in opaque AI. We should therefore require AI to be more transparent than we require humans to be.

Scott Hill
University of Innsbruck
