ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher

Philosophy and Technology 35 (2):1-9 (2022)

Abstract

In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them less explainable. In this reply, it is argued that Erasmus et al. left one influential account of explanation out of their discussion: the Unificationist model. On the Unificationist model, the features that make something an explanation are sensitive to complexity. Therefore, on the Unificationist model, ANNs (and other Machine Learning models) are not explainable. It is emphasized that Erasmus et al.’s general strategy is correct: the literature on explainable Artificial Intelligence can benefit from drawing on philosophical accounts of explanation. However, philosophical accounts of explanation do not settle the question of whether ANNs are explainable, because they do not unanimously hold that explanation is invariant with regard to complexity.

Author's Profile

Yunus Prasetya
Yale-NUS College
