Explaining Go: Challenges in Achieving Explainability in AI Go Programs

Journal of Go Studies 17 (2):29-60 (2023)


There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models, the harder they are to explain. Go-playing AIs like AlphaGo and KataGo provide striking examples of this phenomenon. In this paper, I discuss a non-exhaustive list of the leading theories of explanation and what each of these theories would say about the explainability of AI-played moves in Go. Finally, I consider the possibility of ever explaining AI-played Go moves in a way that meets the four principles of XAI. I conclude, somewhat pessimistically, that Go is not as readily explainable as other domains. As such, the probability of having an XAI for Go that meets the four principles is low.

Author's Profile

Zack Garrett
University of Nebraska, Lincoln (PhD)

