Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science

Social Epistemology Review and Reply Collective (forthcoming)

Abstract

Can AI developers be held epistemically responsible for the processing performed by their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it is worth keeping in mind that a degree of uncertainty about conclusions is inevitable even in entirely human-based empirical science, because in induction there is always a risk of getting it wrong. Keeping this in mind may help us appreciate that requiring full transparency from AI systems before epistemically trusting their outputs might be unusually (and potentially overly) demanding.

Author's Profile

Uwe Peters
Utrecht University
