Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies

IEEE Intelligent Systems (forthcoming)

Abstract

Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population and thus support generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = 220) published between 2012 and 2022. Most studies offered no rationale for their sample size. Moreover, most papers generalized their conclusions beyond their target population, and there was no evidence that broader conclusions in quantitative studies were correlated with larger samples. These methodological problems can impede evaluations of whether XAI systems implement the explainability called for in ethical frameworks. We outline principles for more inclusive XAI user studies.
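To illustrate the kind of sample-size justification the abstract finds lacking, the sketch below shows a standard a priori power analysis in Python using statsmodels. This is not a method from the paper itself; the effect size, alpha, and power values are placeholder assumptions a researcher would set for their own study design.

```python
# A minimal sketch of an a priori power analysis, one common way to
# justify a sample size before running an XAI user study. All numbers
# below are illustrative assumptions, not values from the paper.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed inputs: a medium effect (Cohen's d = 0.5), a 5% significance
# level, and an 80% chance of detecting the effect if it exists.
n_per_group = analysis.solve_power(
    effect_size=0.5,   # hypothetical expected effect of the explanation
    alpha=0.05,        # significance level
    power=0.80,        # desired statistical power
    alternative="two-sided",
)

print(f"Participants needed per condition: {n_per_group:.0f}")  # ~64
```

Reporting a calculation like this (and the assumptions behind it) would let readers judge whether a study's sample can support the generalizations its conclusions make.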

Author Profiles

Uwe Peters
Utrecht University
Mary Carman
University of the Witwatersrand
