Abstract
Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that understanding the moral salience of algorithmic systems requires understanding the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making. Three of these involve failures to treat individual agents with the respect they deserve. The fourth involves distancing oneself from a morally suspect action by attributing one's decision to take that action to an algorithm, thereby laundering one's agency.