Abstract
We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and by highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping and erasure microaggressions, and argue that corporations are responsible for the microaggressions their algorithms create. As a case study, we examine the problems raised by Google’s autocomplete predictions and the insufficiency of Google’s solutions. The case study of autocomplete demonstrates our core claim: microaggressions constitute a distinct form of algorithmic bias, and identifying them as such allows us to avoid apparent solutions that recreate the same kinds of harms. Google has a responsibility to make information freely available without exposing users to degradation. To fulfill its duties to marginalized groups, Google must abandon the fiction of neutral prediction and instead embrace the liberatory power of suggestion.