Abstract
Robotization is an increasingly pervasive feature of our lives. Robots with high
degrees of autonomy may cause harm, yet in sufficiently complex systems neither
the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human
desire for retribution faces a lack of appropriate subjects for retributive blame. The
potential social and moral implications of a retribution gap are considerable. I argue
that the retributive intuitions that feed into retribution gaps are best understood as
deontological intuitions. I apply a debunking argument for deontological intuitions
in order to show that retributive intuitions cannot be used to justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. I draw a parallel from recent work
on implicit biases to make a case for taking moral responsibility for retributive intuitions. In the same way that we can exert some form of control over our unwanted
implicit biases, we can and should do so for unjustified retributive intuitions in cases
of robot harm.