  1. Algorithmic Fairness Criteria as Evidence. Will Fleisher - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Statistical fairness criteria are widely used for diagnosing and ameliorating algorithmic bias. However, these fairness criteria are controversial, as their use raises several difficult questions. I argue that the major problems for statistical algorithmic fairness criteria stem from an incorrect understanding of their nature. These criteria are primarily used for two purposes: first, evaluating AI systems for bias, and second, constraining machine learning optimization problems in order to ameliorate such bias. The first purpose typically involves treating each criterion as a (...)
  2. The Permissibility of Biased AI in a Biased World: An Ethical Analysis of AI for Screening and Referrals for Diabetic Retinopathy in Singapore. Kathryn Muyskens, Angela Ballantyne, Julian Savulescu, Harisan Unais Nasir & Anantharaman Muralidharan - 2025 - Asian Bioethics Review 17 (1):167-185.
    A significant ethical tension in resource allocation and public health ethics is that between utility and equity. We explore this tension in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, being less effective for the minority Malay population (...)
  3. Artificial Intelligence, Discrimination, Fairness, and Other Moral Concerns. Re’em Segev - 2024 - Minds and Machines 34 (4):1-22.
    Should the input data of artificial intelligence (AI) systems include factors such as race or sex when these factors may be indicative of morally significant facts? More importantly, is it wrong to rely on the output of AI tools whose input includes factors such as race or sex? And is it wrong to rely on the output of AI systems when it is correlated with factors such as race or sex (whether or not its input includes such factors)? The answers (...)
  4. Knowledge, algorithmic predictions, and action. Eleonora Cresto - 2024 - Asian Journal of Philosophy 3 (2):1-17.
    I discuss the epistemic status of algorithmic predictions in the legal realm. My main claim is that algorithmic predictions do not give us knowledge, not even probabilistic knowledge. The situation, however, is relevantly different from the one in which we find ourselves when assessing statistical evidence in general; rather, it is related to the fact that algorithmic fairness in legal contexts is essentially undetermined. In the light of this, we have to settle for justified beliefs and (...)