  • “Democratizing AI” and the Concern of Algorithmic Injustice. Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable notions (...)
  • Engineers on responsibility: feminist approaches to who’s responsible for ethical AI. Eleanor Drage, Kerry McInerney & Jude Browne - 2024 - Ethics and Information Technology 26 (1):1-13.
    Responsibility has become a central concept in AI ethics; however, little research has been conducted into practitioners’ personal understandings of responsibility in the context of AI, including how responsibility should be defined and who is responsible when something goes wrong. In this article, we present findings from a 2020–2021 data set of interviews with AI practitioners and tech workers at a single multinational technology company and interpret them through the lens of feminist political thought. We reimagine responsibility in the context (...)
  • Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
  • On the site of predictive justice. Seth Lazar & Jake Stone - 2024 - Noûs 58 (3):730-754.
    Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce (...)
  • (Some) Algorithmic Bias as Institutional Bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
    In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias as institutional bias in particular (as (...)
  • AI employment decision-making: integrating the equal opportunity merit principle and explainable AI. Gary K. Y. Chan - forthcoming - AI and Society:1-12.
    Artificial intelligence tools used in employment decision-making cut across the multiple stages of job advertisements, shortlisting, interviews and hiring, and actual and potential bias can arise in each of these stages. One major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by placing greater emphasis on enhancing the understanding of (...)
  • Democratic self-government and the algocratic shortcut: the democratic harms in algorithmic governance of society. Nardine Alnemr - 2024 - Contemporary Political Theory 23 (2):205-227.
    Algorithms are used to calculate and govern varying aspects of public life, making efficient use of the vast data available about citizens. On the assumption that algorithms are neutral and efficient in data-based decision making, they are deployed in areas such as criminal justice and welfare. This has ramifications for the ideal of democratic self-government, as algorithmic decisions are made without democratic deliberation, scrutiny or justification. In the book _Democracy without Shortcuts_, Cristina Lafont argued against “shortcutting” democratic self-government. Lafont’s critique of shortcuts (...)
  • SAF: Stakeholders’ Agreement on Fairness in the Practice of Machine Learning Development. Georgina Curto & Flavio Comim - 2023 - Science and Engineering Ethics 29 (4):1-19.
    This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology for translating the ethical principle of justice and fairness into the practice of ML development, as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in fairness decision making within ML design and to support ML development teams in identifying, mitigating and monitoring bias at each step of ML systems development. The process (...)
  • Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness. Ben Green - 2022 - Philosophy and Technology 35 (4):1-32.
    Efforts to promote equitable public policy with algorithms appear to be fundamentally constrained by the “impossibility of fairness” (an incompatibility between mathematical definitions of fairness). This technical limitation raises a central question about algorithmic fairness: How can computer scientists and policymakers support equitable policy reforms with algorithms? In this article, I argue that promoting justice with algorithms requires reforming the methodology of algorithmic fairness. First, I diagnose the problems of the current methodology for algorithmic fairness, which I call “formal algorithmic (...)
  • The credit they deserve: contesting predictive practices and the afterlives of red-lining. Emily Katzenstein - 2024 - Contemporary Political Theory 23 (3):371-391.
    Racial capitalism depends on the reproduction of an existing racialized economic order. In this article, I argue that the disavowal of past injustice is a central way in which this reproduction is ensured and that market-based forms of knowledge production, such as for-profit predictive practices, play a crucial role in facilitating this disavowal. Recent debates about the fairness of algorithms, data justice, and predictive policing have intensified long-standing controversies, both popular and academic, about the way in which statistical and financial (...)
  • Situating questions of data, power, and racial formation. Kathryn Henne & Renee Shelby - 2022 - Big Data and Society 9 (1).
    This special theme of Big Data & Society explores connections, relationships, and tensions that coalesce around data, power, and racial formation. The collection of articles and commentaries builds upon scholarly observations of data substantiating and transforming racial hierarchies. Contributors consider how racial projects intersect with interlocking systems of oppression across concerns of class, coloniality, dis/ability, gendered difference, and sexuality, across contexts and jurisdictions. In doing so, this special issue illuminates how data can both reinforce and challenge colorblind ideologies as well (...)