Interpretability and Unification

Philosophy and Technology 35 (2):1-6 (2022)

Abstract

In a recent reply to our article, "What is Interpretability?," Prasetya argues against our position that artificial neural networks are explainable. He claims that our indefeasibility thesis (that adding complexity to an explanation of a phenomenon does not make the phenomenon any less explainable) is false. More precisely, Prasetya argues that unificationist explanations are defeasible in the face of increasing complexity, and thus we may not be able to provide such explanations of highly complex AI models. The reply highlights an important lacuna in our original paper, namely the omission of the unificationist account of explanation, and affords us the opportunity to respond. Here, we argue that artificial neural networks are explainable in a way that should satisfy unificationists, and that interpretability methods present ways in which ML theories can achieve unification.
