Classifying Genetic Essentialist Biases using Large Language Models

Abstract

The rapid rise of generative AI, including LLMs, has prompted a great deal of concern, both within and beyond academia. One of these concerns is that generative models embed, reproduce, and thereby potentially perpetuate all manner of bias. The present study offers an alternative perspective: exploring the potential of LLMs to detect bias in human-generated text. Our target is genetic essentialism in obesity discourse in Australian print media. We develop and deploy an LLM-based classification model to evaluate a large sample of relevant articles (n=26,163). We show that our model detects genetic essentialist biases as reliably as human experts, and find that, while genes figure less prominently in popular discussions of obesity than previous work might suggest, when genetic information is invoked, it is often presented in a biased way. Implications for future work are discussed.
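
As a rough illustration of the kind of pipeline the abstract describes, the sketch below uses a zero-shot prompt to flag genetic essentialist framing in a single article. It is a minimal sketch under stated assumptions: the model name ("gpt-4o"), the prompt wording, and the two-label scheme are all hypothetical choices for illustration, not the authors' actual classifier, prompts, or validation procedure.

```python
# Illustrative sketch only: the paper's actual model, prompt, and label set are
# not reproduced here; "gpt-4o" and the prompt text below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You will read a news article excerpt about obesity. "
    "Label it GENETIC_ESSENTIALIST if it presents genes as fixed, "
    "deterministic causes of obesity; otherwise label it NOT_ESSENTIALIST. "
    "Reply with the label only.\n\nArticle:\n{article}"
)

def classify(article_text: str) -> str:
    """Return a coarse genetic-essentialism label for one article."""
    response = client.chat.completions.create(
        model="gpt-4o",   # hypothetical model choice
        temperature=0,    # deterministic output, so labels are reproducible
        messages=[{"role": "user", "content": PROMPT.format(article=article_text)}],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify("Scientists say a single 'fat gene' seals your weight at birth."))
```

In a study of this scale (n=26,163), such a call would be run over the full corpus and the resulting labels validated against expert human coders; the validation step is where the reported human-level reliability would be established.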

Author Profiles

Jack Chan
National University of Singapore
Ritsaart Reimann
Macquarie University
Kate Lynch
University of Melbourne
1 more
