The Bias Dilemma: The Ethics of Algorithmic Bias in Natural-Language Processing

Feminist Philosophy Quarterly 8 (3) (2022)

Abstract

Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, we would thereby often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we may perpetuate or amplify bias, even if we retain the relevant descriptively or ethically useful information. Understanding this dilemma provides a useful way of rethinking the ethics of algorithmic bias in NLP.

Author's Profile

Oisín Deery
York University

Analytics

Added to PP
2022-12-25
