Axe the X in XAI: A Plea for Understandable AI

In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer (forthcoming)

Abstract

In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be resolved by adopting any of four extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks just as they would be to any natural phenomenon is mistaken. I also provide a more general argument as to why the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation. It would be more fruitful to use the label “understandable AI” to avoid the confusion that surrounds the goal and purposes of XAI. In the second half of the chapter, I argue for a pragmatic conception of understanding that is better suited to play the central role attributed to explanation in XAI. Following De Regt (2017) and Kuorikoski and Ylikoski (2015), the conditions of satisfaction for understanding an ML system are fleshed out in terms of an agent’s success in using the system.

Author's Profile

Andrés Páez
University of the Andes

Analytics

Added to PP
2024-03-06
