  • The Ideals Program in Algorithmic Fairness. Rush T. Stewart - forthcoming - AI and Society.
    I consider statistical criteria of algorithmic fairness from the perspective of the _ideals_ of fairness to which these criteria are committed. I distinguish and describe three theoretical roles such ideals might play. The usefulness of this program is illustrated by taking Base Rate Tracking and its ratio variant as a case study. I identify and compare the ideals of these two criteria, then consider them in each of the aforementioned three roles for ideals. This ideals program may present a way (...)
  • The Authority to Moderate: Social Media Moderation and its Limits. Bhanuraj Kashyap & Paul Formosa - 2023 - Philosophy and Technology 36 (4):1-22.
    The negative impacts of social media have given rise to philosophical questions around whether social media companies have the authority to regulate user-generated content on their platforms. The most popular justification for that authority is to appeal to private ownership rights. Social media companies own their platforms, and their ownership comes with various rights that ground their authority to moderate user-generated content on their platforms. However, we argue that ownership rights can be limited when their exercise results in significant harms (...)
  • Big Tech, Algorithmic Power, and Democratic Control. Ugur Aytac - forthcoming - Journal of Politics.
    This paper argues that instituting Citizen Boards of Governance (CBGs) is the optimal strategy to democratically contain Big Tech’s algorithmic powers in the digital public sphere. CBGs are bodies of randomly selected citizens that are authorized to govern the algorithmic infrastructure of Big Tech platforms. The main advantage of CBGs is to tackle the concentrated powers of private tech corporations without giving too much power to governments. I show why this is a better approach than ordinary state regulation or relying (...)
  • Informational Richness and its Impact on Algorithmic Fairness. Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Three Lessons For and From Algorithmic Discrimination. Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, and that algorithmic discrimination may often be discrimination between groups, as opposed to against groups (...)
  • “Just” Accuracy? Procedural Fairness Demands Explainability in AI‑Based Medical Resource Allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps (...)
  • Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups may exceed their performance for patients in traditionally advantaged groups, rendering the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms (...)
  • AI and Bureaucratic Discretion. Kate Vredenburgh - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
  • Introduction: Digital Technologies and Human Decision-Making. Sofia Bonicalzi, Mario De Caro & Benedetta Giovanola - 2023 - Topoi 42 (3):793-797.