Abstract
AI systems have repeatedly been found to contain gender biases. As a result, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we draw on Axel Honneth’s theory of recognition to argue that AI’s gender biases are an ethical problem not only because they can lead to discrimination, but also because they resemble forms of misrecognition that can harm women’s self-development and self-worth. We further argue that Honneth’s theory of recognition offers a fruitful framework for understanding the psychological and normative implications of gender bias in modern technologies. Finally, our Honnethian analysis of gender bias in AI shows that the goal of responsible AI requires us to address these issues not only through technical interventions, but also through a change in how we grant and deny recognition to one another.