Abstract
This paper concerns the double standard debate in the AI ethics literature, which revolves around the question of whether AI systems should be subject to different normative standards than humans. So far, the debate has centered on the desideratum of transparency: whether AI systems must be more transparent than humans in their decision-making processes for their use to be morally permissible. Some have argued that the same standards of transparency should apply to AI systems and humans alike; others have argued that AI systems should be held to higher standards of transparency than humans. In this paper, we first point out that structurally similar double standard debates can be had about other desiderata besides transparency, such as predictive accuracy. Second, we argue that, with respect to predictive accuracy, there are at least two reasons for holding AI systems to a lower standard than humans.