  • Algorithmic Fairness Criteria as Evidence. Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial, as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second, constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion as a (...)
  • On the site of predictive justice. Seth Lazar & Jake Stone - 2024 - Noûs 58 (3):730-754.
    Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce (...)
  • Diversity in sociotechnical machine learning systems. Maria De-Arteaga & Sina Fazelpour - 2022 - Big Data and Society 9 (1).
    There has been a surge of recent interest in sociocultural diversity in machine learning research. Currently, however, there is a gap between discussions of measures and benefits of diversity in machine learning, on the one hand, and the broader research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because diversity is not a monolithic concept. Rather, different concepts of diversity are based on distinct rationales that should inform (...)
  • What’s Impossible about Algorithmic Fairness? Otto Sahlgren - 2024 - Philosophy and Technology 37 (4):1-23.
    The now well-known impossibility results of algorithmic fairness demonstrate that an error-prone predictive model cannot simultaneously satisfy two plausible conditions for group fairness apart from exceptional circumstances where groups exhibit equal base rates. The results sparked, and continue to shape, lively debates surrounding algorithmic fairness conditions and the very possibility of building fair predictive models. This article, first, highlights three underlying points of disagreement in these debates, which have led to diverging assessments of the feasibility of fairness in prediction-based decision-making. (...)
  • On Hedden's proof that machine learning fairness metrics are flawed. Anders Søgaard, Klemens Kappel & Thor Grünbaum - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Fairness is about the just distribution of society's resources, and in ML, the main resource being distributed is model performance, e.g. the translation quality produced by machine translation...
  • Disciplining Deliberation: A Socio-technical Perspective on Machine Learning Trade-Offs. Sina Fazelpour - forthcoming - British Journal for the Philosophy of Science.
    This paper examines two prominent formal trade-offs in artificial intelligence (AI): between predictive accuracy and fairness, and between predictive accuracy and interpretability. These trade-offs have become a central focus in normative and regulatory discussions as policymakers seek to understand the value tensions that can arise in the social adoption of AI tools. The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values, implying unavoidable conflicts between those social objectives. In this paper, I challenge that prevalent (...)
  • Conceptualizing Automated Decision-Making in Organizational Contexts. Anna Katharina Boos - 2024 - Philosophy and Technology 37 (3):1-30.
    Despite growing interest in automated (or algorithmic) decision-making (ADM), little work has been done to conceptually clarify the term. This article aims to tackle this issue by developing a conceptualization of ADM specifically tailored to organizational contexts. It has two main goals: (1) to meaningfully demarcate ADM from similar, yet distinct algorithm-supported practices; and (2) to draw internal distinctions such that different ADM types can be meaningfully distinguished. The proposed conceptualization builds on three arguments: First, ADM primarily refers to the (...)