Algorithmic Microaggressions

Feminist Philosophy Quarterly 8 (3) (2022)

Abstract

We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and erasure microaggressions, and argue that corporations are responsible for the microaggressions their algorithms create. As a case study, we look at the problems faced by Google’s autocomplete prediction and at the insufficiency of its solutions. The case study of autocomplete demonstrates our core claim that microaggressions constitute a distinct form of algorithmic bias and that identifying them as such allows us to avoid apparent solutions that recreate the same kinds of harms. Google has a responsibility to make information freely available without exposing users to degradation. To fulfill its duties to marginalized groups, Google must abandon the fiction of neutral prediction and instead embrace the liberatory power of suggestion.

Author Profiles

Benjamin Wald
University of Toronto, St. George Campus
Emma McClure
Saint Mary's University
