The explanation game: a formal framework for interpretable machine learning

Synthese:1-32 (forthcoming)
Abstract
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on an optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
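The abstract's central construct, a three-dimensional Pareto frontier over explanatory accuracy, simplicity, and relevance, can be illustrated with a small sketch. The candidate explanations, their scores, and the scoring scale below are purely hypothetical (they do not come from the paper); the sketch only shows how a non-dominated set over three criteria might be computed.

```python
# Illustrative sketch only: candidate explanations and scores are invented,
# not taken from the paper. Each candidate is scored on the three criteria
# named in the abstract: (accuracy, simplicity, relevance), higher is better.

def dominates(a, b):
    """a dominates b if a scores at least as well on every criterion
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates: the Pareto frontier."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical candidate explanations for a single algorithmic prediction.
explanations = {
    "full causal model":   (0.95, 0.20, 0.60),
    "linear surrogate":    (0.70, 0.90, 0.70),
    "single-feature rule": (0.50, 0.95, 0.40),
    "dominated variant":   (0.50, 0.80, 0.40),  # beaten by the linear surrogate
}

front = pareto_front(list(explanations.values()))
```

Under these made-up scores, the first three candidates survive as mutually incomparable trade-offs, while the dominated variant is eliminated; the paper's game can be read as iteratively refining such a frontier through rounds of questions and answers.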
PhilPapers/Archive ID
WATTEG
Upload history
Archival date: 2021-06-08
Added to PP index
2020-04-03
