Explaining Explanations in AI

FAT* 2019 Proceedings 1 (forthcoming)
Abstract
Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict which decisions the complex system will make and, most importantly, how it might break. However, when considering any such model, it is important to remember Box's maxim that "all models are wrong but some are useful." We focus on the distinction between these models and explanations as understood in philosophy and sociology. These models can be seen as a "do-it-yourself kit" for explanations, allowing a practitioner to answer "what if" questions or generate contrastive explanations directly, without external assistance. Although this is a valuable ability, offering these models as explanations appears more demanding than necessary, and other forms of explanation may not carry the same trade-offs. We contrast the different schools of thought on what makes an explanation and suggest that machine learning might benefit from viewing the problem more broadly.
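
To illustrate the kind of simplified model the abstract refers to, here is a minimal sketch of a global surrogate: an interpretable decision tree trained to mimic the predictions of a more complex classifier. This is an illustrative example, not the authors' method; the synthetic dataset, the model choices, and the use of scikit-learn are assumptions made only to keep the sketch runnable.

```python
# Minimal sketch (illustrative only): a global surrogate model.
# A simple decision tree is fit to mimic the predictions of a more
# complex "black box" classifier, giving a simplified approximation
# of the true decision criteria -- useful, but, per Box, still wrong.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The complex system whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The simplified model is trained on the black box's *outputs*,
# so it approximates the decision criteria rather than the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")
```

The fidelity score measures how closely the surrogate tracks the black box; because the surrogate is deliberately simple, it can be inspected to answer "what if" questions, but its approximation is never exact.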
PhilPapers/Archive ID: MITEEI
Archival date: 2018-11-04
