AI and the expert; a blueprint for the ethical use of opaque AI

AI and Society: 1–12 (forthcoming)

Abstract

The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”: whether the standards for transparency in AI ought to be higher than, or equivalent to, those we apply to ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards with our standards of transparency for other opaque systems is an appropriate starting point. I suggest, however, that a more fruitful exploration of this question will involve a different comparison class. We routinely treat judgments made by highly trained experts in specialized fields as fair or well-grounded even though, by the nature of the expert/layperson division of epistemic labor, an expert will not be able to provide an explanation of the reasoning behind these judgments that makes sense to most other people. Nevertheless, laypeople are thought to be acting reasonably, and ethically, in deferring to experts’ judgments on matters within their areas of specialization. I suggest that we reframe our question regarding the appropriate standards of transparency in AI as one that asks when, why, and to what degree it would be ethical to accept opacity in AI. I argue that our epistemic relation to certain opaque AI technologies may be relevantly similar to the layperson’s epistemic relation to the expert in certain respects, such that the successful expert/layperson division of epistemic labor can serve as a blueprint for the ethical use of opaque AI.

Author's Profile

Amber Ross
University of Florida
