Abstract
In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many
hands and collective agency. A systematic analysis of the agential dimension of AI will lead me to outline a disjunction between the two problems: either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual
dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI
agency we begin with, the responsibility gap will boil down to one of these two moral problems. Moreover, I will contend that this conclusion reveals an underlying weakness in AI ethics: a lack of attention to the question of the disciplinary
boundaries of AI ethics. This absence has made it difficult to identify the specifics of the responsibility gap arising from
new AI systems as compared to the responsibility gaps found in other fields of applied ethics. Lastly, I will outline these specific aspects.