A Cross-Cultural Examination of Fairness Beliefs in Human-AI Interaction

In Adam Dyrda, Maciej Juzaszek, Bartosz Biskup & Cuizhu Wang (eds.), Ethics of Institutional Beliefs: From Theoretical to Empirical. Edward Elgar (forthcoming)

Abstract

In this chapter, we integrate three distinct strands of thought to argue that the concept of “fairness” varies significantly across cultures. As a result, ensuring that human-AI interactions meet relevant fairness standards requires a deep understanding of the cultural contexts in which AI-enabled systems are deployed. Failure to do so will not only result in an AI-enabled system generating unfair outcomes, but will also degrade the legitimacy of and trust in the system. The first strand concerns the dominant approach taken in the technology industry to ensure that AI-enabled systems are fair. This approach reduces fairness to a mathematical formalism that can be applied universally, a quintessentially Western conception of fairness. The second strand concerns alternative conceptions of fairness rooted in the philosophical traditions of the East, namely Confucian virtue ethics. Understanding how individuals from diverse cultural backgrounds perceive fairness—particularly their beliefs about fairness in human-to-human interactions—is crucial for understanding how they will interpret fairness in human-AI interactions. Building on these philosophical and behavioral differences, as highlighted by empirical research, the third strand integrates insights from political science and interdisciplinary studies. This perspective offers valuable guidance on designing AI-enabled systems to align with contextually relevant standards of fairness. Examining existing beliefs about fairness within the context of institutional decision-making sheds light on what people expect from AI-generated decisions. These expectations often include a sufficient degree of transparency, clear lines of accountability, and mechanisms to contest decisions made by the system—all essential components of procedural fairness.
Rather than adopting a one-size-fits-all approach to ensuring fairness, the design and deployment of AI systems must carefully account for the operating environment, including the socio-political and cultural context, to ensure that the system aligns with the relevant standards of fairness.

Author Profiles

Xin Han
Montana State University-Bozeman
Marten H. L. Kaas
Charité Universitätsmedizin Berlin
Cuizhu Wang
Jagiellonian University
