  • Justice by Algorithm: The Limits of AI in Criminal Sentencing. Isaac Taylor - 2023 - Criminal Justice Ethics 42 (3): 193-213.
    Criminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms (...)
  • Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans (...)
  • On the Site of Predictive Justice. Seth Lazar & Jake Stone - forthcoming - Noûs.
    Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce (...)
  • Dirty Data Labeled Dirt Cheap: Epistemic Injustice in Machine Learning Systems. Gordon Hull - 2023 - Ethics and Information Technology 25 (3): 1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their (...)
  • Statistical Evidence and Algorithmic Decision-Making. Sune Holm - 2023 - Synthese 202 (1): 1-16.
    The use of algorithms to support prediction-based decision-making is becoming commonplace in a range of domains including health, criminal justice, education, social services, lending, and hiring. An assumption governing such decisions is that there is a property Y such that individual a should be allocated resource R by decision-maker D if a is Y. When there is uncertainty about whether a is Y, algorithms may provide valuable decision support by accurately predicting whether a is Y on the basis of known (...)
  • Yet Another Impossibility Theorem in Algorithmic Fairness. Fabian Beigang - 2023 - Minds and Machines 33 (4): 715-735.
    In recent years, there has been a surge in research addressing the question of which properties predictive algorithms ought to satisfy in order to be considered fair. Three of the most widely discussed criteria of fairness are the criteria called equalized odds, predictive parity, and counterfactual fairness. In this paper, I will present a new impossibility result involving these three criteria of algorithmic fairness. In particular, I will argue that there are realistic circumstances under which any predictive algorithm that satisfies counterfactual (...)
  • An Impossibility Theorem for Base Rate Tracking and Equalized Odds. Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)