Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic

Big Data and Society 10 (1) (2023)

Abstract

The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification of some terms related to biases in this particular context. We focus mainly on nonracial biases, which tend to receive less attention in the existing literature on biases in AI systems. We found that bias in AI systems used for COVID-19 can result in algorithmic injustice, and that the legal frameworks and strategies developed to prevent the emergence of bias have failed to adequately consider social determinants of health. Finally, we make some recommendations on how to include more diverse professional profiles in order to develop AI systems with the epistemic diversity needed to tackle AI biases during the COVID-19 pandemic and beyond.

Author Profiles

Txetxu Ausin
Spanish National Research Council (CSIC)
David Sevilla
Universitat Autonoma de Barcelona
Jon Rueda
University of Granada
1 more

Analytics

Added to PP
2023-06-28

Downloads
304 (#52,561)

6 months
221 (#11,042)
