Abstract
Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can exhibit pervasive biases against marginalized social groups and thereby undermine social justice. Explainable artificial intelligence (XAI) is a recent development that aims to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, the view that the detection and interpretation of algorithmic bias can be handled more or less independently by technical experts who specialize in XAI methods. Drawing on resources from feminist epistemology, we show why technical XAI is mistaken. Specifically, we demonstrate that the proper detection of algorithmic bias requires relevant interpretive resources, which can be made available, in practice, only by actively involving a diverse group of stakeholders. Finally, we suggest how feminist theories can help shape integrated XAI: an inclusive social-epistemic process that facilitates the amelioration of algorithmic bias.