The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition

Philosophy and Technology 35 (2) (2022)

Abstract

AI systems have often been found to contain gender biases. As a result of these biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth's theory of recognition to argue that AI's gender biases are an ethical problem not only because they can lead to discrimination, but also because they resemble forms of misrecognition that can harm women's self-development and self-worth. We further argue that Honneth's theory of recognition offers a fruitful framework for improving our understanding of the psychological and normative implications of gender bias in modern technologies. Finally, our Honnethian analysis of gender bias in AI shows that the goal of responsible AI requires us to address these issues not only through technical interventions, but also through a change in how we grant and deny recognition to each other.

Author Profiles

Michał Wieczorek
Dublin City University
Rosalie Waelen
Universität Bonn

Analytics

Added to PP
2022-06-03
