Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs

British Journal for the Philosophy of Science (forthcoming)

Abstract

This paper examines two prominent formal trade-offs in artificial intelligence (AI)---between predictive accuracy and fairness, and between predictive accuracy and interpretability. These trade-offs have become a central focus in normative and regulatory discussions as policymakers seek to understand the value tensions that can arise in the social adoption of AI tools. The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values, implying unavoidable conflicts between those social objectives. In this paper, I challenge that prevalent interpretation by introducing a sociotechnical approach to examining the value implications of trade-offs. After providing a set-theoretic understanding of these model trade-offs, I argue that their implications cannot be understood in isolation from the technical, psychological, organizational, and social factors that shape the integration of AI models into decision-making pipelines. Specifically, I identify three key considerations---validity and instrumental relevance, compositionality, and dynamics---for contextualizing and characterizing these implications. These considerations reveal that the relationship between model trade-offs and corresponding values depends on critical choices and assumptions. Crucially, judicious sacrifices in one model property for another can, in fact, promote both sets of corresponding values. The proposed sociotechnical perspective thus shows that we can and should aspire to higher epistemic and ethical possibilities than the prevalent interpretation suggests, while offering practical guidance for achieving those outcomes. 
Finally, I draw out the broader implications of this perspective for AI design and governance, highlighting the need to broaden normative engagement across the AI lifecycle, develop legal and auditing tools sensitive to sociotechnical considerations, and rethink the vital role and appropriate structure of interdisciplinary collaboration in fostering a responsible AI workforce.

Author's Profile

Sina Fazelpour
Northeastern University

Added to PP
2024-05-30

Downloads
250 (#82,902)

6 months
132 (#36,673)

Historical graph of downloads since first upload
This graph includes both downloads from PhilArchive and clicks on external links on PhilPapers.
How can I increase my downloads?