  • Broomean(ish) Algorithmic Fairness? Clinton Castro - forthcoming - Journal of Applied Philosophy.
    Recently, there has been much discussion of ‘fair machine learning’: fairness in data-driven decision-making systems (which are often, though not always, made with assistance from machine learning systems). Notorious impossibility results show that we cannot have everything we want here. Such problems call for careful thinking about the foundations of fair machine learning. Sune Holm has identified one promising way forward, which involves applying John Broome's theory of fairness to the puzzles of fair machine learning. Unfortunately, his application of Broome's (...)
  • Deference to Opaque Systems and Morally Exemplary Decisions. James Fritz - forthcoming - AI and Society:1-13.
    Many have recently argued that there are weighty reasons against making high-stakes decisions solely on the basis of recommendations from artificially intelligent (AI) systems. Even if deference to a given AI system were known to reliably result in the right action being taken, the argument goes, that deference would lack morally important characteristics: the resulting decisions would not, for instance, be based on an appreciation of right-making reasons. Nor would they be performed from moral virtue; nor would they have moral (...)
  • Algorithms and dehumanization: a definition and avoidance model. Mario D. Schultz, Melanie Clegg, Reto Hofstetter & Peter Seele - forthcoming - AI and Society:1-21.
    Dehumanization by algorithms raises important issues for business and society. Yet, these issues remain poorly understood due to the fragmented nature of the evolving dehumanization literature across disciplines, originating from colonialism, industrialization, post-colonialism studies, contemporary ethics, and technology studies. This article systematically reviews the literature on algorithms and dehumanization (n = 180 articles) and maps existing knowledge across several clusters that reveal its underlying characteristics. Based on the review, we find that algorithmic dehumanization is particularly problematic for human resource management (...)
  • Algorithms Advise, Humans Decide: The Evidential Role of the Patient Preference Predictor. Nicholas Makins - forthcoming - Journal of Medical Ethics.
    An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP–that between algorithmic prediction and decision-making–and argue that much of the recent philosophical disagreement stems from this (...)
  • Understanding bias through diverse lenses. Katherine Puddifoot - 2024 - Philosophical Psychology 37 (6):1287-1296.
  • The (Dis)unity of Psychological (Social) Bias. Gabbrielle M. Johnson - 2024 - Philosophical Psychology 37 (6):1349-1377.
    This paper explores the complex nature of social biases, arguing for a functional framework that recognizes their unity and diversity. The functional approach posits that all biases share a common functional role in overcoming underdetermination. This framework, I argue, provides a comprehensive understanding of how all psychological biases, including social biases, are unified. I then turn to the question of disunity, demonstrating how psychological social biases differ systematically in the mental states and processes that constitute them. These differences indicate that (...)
  • Philosophical Investigations into AI Alignment: A Wittgensteinian Framework. José Antonio Pérez-Escobar & Deniz Sarikaya - 2024 - Philosophy and Technology 37 (3):1-25.
    We argue that the later Wittgenstein’s philosophy of language and mathematics, substantially focused on rule-following, is relevant to understanding and improving on the Artificial Intelligence (AI) alignment problem: his discussions of the categories that influence alignment between humans can inform which categories should be controlled to improve on the alignment problem when creating large data sets to be used by supervised and unsupervised learning algorithms, as well as when introducing hard-coded guardrails for AI models. We cast these (...)
  • Engineering Social Concepts: Feasibility and Causal Models. Eleonore Neufeld - forthcoming - Philosophy and Phenomenological Research.
    How feasible are conceptual engineering projects of social concepts that aim for the engineered concept to be widely adopted in ordinary everyday life? Predominant frameworks on the psychology of concepts that shape work on stereotyping, bias, and machine learning have grim implications for the prospects of conceptual engineers: conceptual engineering efforts are ineffective in promoting certain social-conceptual changes. Specifically, since conceptual components that give rise to problematic social stereotypes are sensitive to statistical structures of the environment, purely conceptual change won’t (...)
  • Listening to algorithms: The case of self-knowledge. Casey Doyle - forthcoming - European Journal of Philosophy.
    This paper begins with the thought that there is something out of place about offloading inquiry into one's own mind to AI. The paper's primary goal is to articulate the unease felt when considering cases of doing so. It draws a parallel with the use of algorithms in the criminal law: in both cases, one feels entitled to be treated as an exception to a verdict made on the basis of a certain kind of evidence. Then it identifies an account (...)
  • The Ethics of Belief (3rd edition). Rima Basu - forthcoming - In Kurt Sylvan, Ernest Sosa, Jonathan Dancy & Matthias Steup (eds.), The Blackwell Companion to Epistemology, 3rd edition. Wiley Blackwell.
    This chapter is a survey of the ethics of belief. It begins with the debate as it first emerges in the foundational dispute between W. K. Clifford and William James. Then it surveys how the disagreements between Clifford and James have shaped the work of contemporary theorists, touching on topics such as pragmatism, whether we should believe against the evidence, pragmatic and moral encroachment, doxastic partiality, and doxastic wronging.
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Unconscious Perception and Unconscious Bias: Parallel Debates about Unconscious Content. Gabbrielle Johnson - 2023 - In Uriah Kriegel (ed.), Oxford Studies in Philosophy of Mind Vol. 3. Oxford: Oxford University Press. pp. 87-130.
    The possibilities of unconscious perception and unconscious bias prompt parallel debates about unconscious mental content. This chapter argues that claims within these debates alleging the existence of unconscious content are made fraught by ambiguity and confusion with respect to the two central concepts they involve: consciousness and content. Borrowing conceptual resources from the debate about unconscious perception, the chapter distills the two conceptual puzzles concerning each of these notions and establishes philosophical strategies for their resolution. It then argues that empirical (...)
  • Human achievement and artificial intelligence. Brett Karlan - 2023 - Ethics and Information Technology 25 (3):1-12.
    In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond what any human can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game at (...)
  • Are You Anthropomorphizing AI? Ali Hasan - 2024 - Blog of the American Philosophical Association.
    I argue that, given the way that AI models work and the way that ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. There are good reasons to doubt that LLMs have anything like human understanding, and even if they have representations or meaningful contents in some sense, these are unlikely to correspond to our ordinary understanding of natural language. However, there are natural, and in some ways quite rational, pressures to anthropomorphize (...)
  • The Importance of Forgetting. Rima Basu - 2022 - Episteme 19 (4):471-490.
    Morality bears on what we should forget. Some aspects of our identity are meant to be forgotten and there is a distinctive harm that accompanies the permanence of some content about us, content that prompts a duty to forget. To make the case that forgetting is an integral part of our moral duties to others, the paper proceeds as follows. In §1, I make the case that forgetting is morally evaluable and I survey three kinds of forgetting: no-trace forgetting, archival (...)
  • Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237-264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian considerations. We (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is an ongoing debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend that these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest that the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need to explain each individual decision. But, as we show, both solutions have limitations—and it (...)
  • Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
  • Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions. Kirsten Martin & Ari Waldman - 2022 - Journal of Business Ethics 183 (3):653-670.
    Firms use algorithms to make important business decisions. To date, the algorithmic accountability literature has elided a fundamentally empirical question important to business ethics and management: Under what circumstances, if any, are algorithmic decision-making systems considered legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the impact of decision importance, governance, outcomes, and data inputs on perceptions of the legitimacy of algorithmic decisions made by firms. We find that many of the procedural governance (...)
  • Algorithms and the Individual in Criminal Law. Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
  • Algorithmic bias: Senses, sources, solutions. Sina Fazelpour & David Danks - 2021 - Philosophy Compass 16 (8):e12760.
    Data‐driven algorithms are widely used to make or assist decisions in sensitive domains, including healthcare, social services, education, hiring, and criminal justice. In various cases, such algorithms have preserved or even exacerbated biases against vulnerable communities, sparking a vibrant field of research focused on so‐called algorithmic biases. This research includes work on identification, diagnosis, and response to biases in algorithm‐based decision‐making. This paper aims to facilitate the application of philosophical analysis to these contested issues by providing an overview of three (...)
  • What's Fair about Individual Fairness? Will Fleisher - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I argue that (...)
  • On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false (...)
  • Are Algorithms Value-Free? Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
  • Oppressive Things. Shen-yi Liao & Bryce Huebner - 2020 - Philosophy and Phenomenological Research 103 (1):92-113.
    In analyzing oppressive systems like racism, social theorists have articulated accounts of the dynamic interaction and mutual dependence between psychological components, such as individuals’ patterns of thought and action, and social components, such as formal institutions and informal interactions. We argue for the further inclusion of physical components, such as material artifacts and spatial environments. Drawing on socially situated and ecologically embedded approaches in the cognitive sciences, we argue that physical components of racism are not only shaped by, but also (...)
  • Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; (...)
  • Assembled Bias: Beyond Transparent Algorithmic Bias. Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias _assembled bias._ Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature (...)
  • Explanation and the Right to Explanation. Elanor Taylor - 2024 - Journal of the American Philosophical Association 10 (3):467-482.
    In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this article, I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision making and recommend it as a way to pursue (...)
  • AI and bureaucratic discretion. Kate Vredenburgh - 2023 - Inquiry: An Interdisciplinary Journal of Philosophy.
  • Using (Un)Fair Algorithms in an Unjust World. Kasper Lippert-Rasmussen - 2022 - Res Publica 29 (2):283-302.
    Algorithm-assisted decision procedures—including some of the most high-profile ones, such as COMPAS—have been described as unfair because they compound injustice. The complaint is that in such procedures a decision disadvantaging members of a certain group is based on information reflecting the fact that the members of the group have already been unjustly disadvantaged. I assess this reasoning. First, I distinguish the anti-compounding duty from a related but distinct duty—the proportionality duty—from which at least some of the intuitive appeal of the (...)
  • Which Limitations Block Requirements? Amy Berg - 2023 - Moral Philosophy and Politics 10 (2):229-248.
    One of David Estlund’s key claims in Utopophobia is that theories of justice should not bend to human motivational limitations. Yet he does not extend this view to our cognitive limitations. This creates a dilemma. Theories of justice may ignore cognitive as well as motivational limitations—but this makes them so unrealistic as to be unrecognizable as theories of justice. Theories may bend to both cognitive and motivational limitations—but Estlund wants to reject this view. The other alternative is to find some (...)
  • Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust. Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithmic Decision-Making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust algorithmic agents to act (...)
  • The Permissibility of Biased AI in a Biased World: An Ethical Analysis of AI for Screening and Referrals for Diabetic Retinopathy in Singapore. Kathryn Muyskens, Angela Ballantyne, Julian Savulescu, Harisan Unais Nasir & Anantharaman Muralidharan - forthcoming - Asian Bioethics Review:1-19.
    A significant and important ethical tension in resource allocation and public health ethics is between utility and equity. We explore this tension between utility and equity in the context of health AI through an examination of a diagnostic AI screening tool for diabetic retinopathy developed by a team of researchers at Duke-NUS in Singapore. While this tool was found to be effective, it was not equally effective across every ethnic group in Singapore, being less effective for the minority Malay population (...)