Interpretability and Unification

Philosophy and Technology 35 (2):1-6 (2022)
Abstract
In a recent reply to our article, “What is Interpretability?,” Prasetya argues against our position that artificial neural networks are explainable. Prasetya claims that our indefeasibility thesis—that adding complexity to an explanation of a phenomenon does not make the phenomenon any less explainable—is false. More precisely, he argues that unificationist explanations are defeasible to increasing complexity, and thus that we may be unable to provide such explanations of highly complex AI models. The reply highlights an important lacuna in our original paper—the omission of the unificationist account of explanation—and affords us the opportunity to respond. Here, we argue that artificial neural networks are explainable in a way that should satisfy unificationists and that interpretability methods present ways in which ML theories can achieve unification.
PhilPapers/Archive ID: BRUIAU
Archival date: 2022-04-26
Added to PP: 2022-04-24
