Abstract
Algorithmic risk assessment tools, such as COMPAS, are increasingly used in criminal justice systems to predict defendants' risk of reoffending. This paper argues that these tools may not only predict recidivism but also causally induce it through self-fulfilling predictions. We argue that such “performative” effects can yield severe harms both to individuals and to society at large, and that they give rise to epistemic-ethical responsibilities on the part of developers and users of risk assessment tools. To meet these responsibilities, we present a novel desideratum for algorithmic tools, called explainability-in-context, which requires clarifying how these tools causally interact with the social, technological, and institutional environments in which they are embedded. Risk assessment practices are thus subject to high epistemic standards that have not been sufficiently appreciated to date. Explainability-in-context, we contend, is a crucial goal to pursue in addressing the ethical challenges surrounding risk assessment tools.