Abstract
An increasing number of complex and important decisions are now made with the aid of opaque algorithms. This has prompted calls from both theorists and legislators for a right to an explanation of algorithmic decisions. In this paper, we argue that, in most cases and for most kinds of explanations, there is no such right. After distinguishing several things that might be meant by a ‘right to an explanation,’ we argue that, for the most plausible and popular versions (a right to an explanation of a specific bureaucratic decision, a right to an explanation grounded in public reason considerations, or a right to a higher-level explanation of an entire system of automated decision-making), such a right is either superfluous, impossible to obtain, or not the best way to secure the relevant normative goods. We also argue that proponents of a right to an explanation single out certain kinds of automated decisions as requiring more justification than comparable decisions in similar areas of social (and natural) science policy, a demarcation we find unjustified. While there will be some cases in which an explanation is the only thing that can secure an important normative good (e.g., when an explanation is the only thing that would catch an unjustified individual decision), we argue that these cases are too rare and scattered to ground something as weighty as a right to an explanation.