  • Unconscious Perception and Unconscious Bias: Parallel Debates about Unconscious Content. Gabbrielle Johnson - 2023 - In Uriah Kriegel (ed.), Oxford Studies in Philosophy of Mind Vol. 3. Oxford: Oxford University Press. pp. 87-130.
    The possibilities of unconscious perception and unconscious bias prompt parallel debates about unconscious mental content. This chapter argues that claims within these debates alleging the existence of unconscious content are made fraught by ambiguity and confusion with respect to the two central concepts they involve: consciousness and content. Borrowing conceptual resources from the debate about unconscious perception, the chapter distills the two conceptual puzzles concerning each of these notions and establishes philosophical strategies for their resolution. It then argues that empirical (...)
  • The Ethics of Belief (3rd edition). Rima Basu - forthcoming - In Kurt Sylvan, Ernest Sosa, Jonathan Dancy & Matthias Steup (eds.), The Blackwell Companion to Epistemology, 3rd edition. Wiley Blackwell.
    This chapter is a survey of the ethics of belief. It begins with the debate as it first emerges in the foundational dispute between W. K. Clifford and William James. Then it surveys how the disagreements between Clifford and James have shaped the work of contemporary theorists, touching on topics such as pragmatism, whether we should believe against the evidence, pragmatic and moral encroachment, doxastic partiality, and doxastic wronging.
  • Are Algorithms Value-Free? Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
  • Engineering Social Concepts: Feasibility and Causal Models. Eleonore Neufeld - forthcoming - Philosophy and Phenomenological Research.
    How feasible are conceptual engineering projects of social concepts that aim for the engineered concept to be widely adopted in ordinary everyday life? Predominant frameworks on the psychology of concepts that shape work on stereotyping, bias, and machine learning have grim implications for the prospects of conceptual engineers: conceptual engineering efforts are ineffective in promoting certain social-conceptual changes. Specifically, since conceptual components that give rise to problematic social stereotypes are sensitive to statistical structures of the environment, purely conceptual change won’t (...)
  • Assembled Bias: Beyond Transparent Algorithmic Bias. Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias _assembled bias._ Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature (...)
  • AI and bureaucratic discretion. Kate Vredenburgh - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    1. Virginia Eubanks (2018, Chapter 4) tells the story of Pat Gordan, an intake screener in the Department of Human Services in Allegheny County, Pennsylvania. The Department deploys a risk assessme...
  • Explanation and the Right to Explanation. Elanor Taylor - 2023 - Journal of the American Philosophical Association 1:1-16.
    In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the (...)
  • Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
  • Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions. Kirsten Martin & Ari Waldman - 2022 - Journal of Business Ethics 183 (3):653-670.
    Firms use algorithms to make important business decisions. To date, the algorithmic accountability literature has elided a fundamentally empirical question important to business ethics and management: Under what circumstances, if any, are algorithmic decision-making systems considered legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the impact of decision importance, governance, outcomes, and data inputs on perceptions of the legitimacy of algorithmic decisions made by firms. We find that many of the procedural governance (...)
  • Using (Un)Fair Algorithms in an Unjust World. Kasper Lippert-Rasmussen - 2022 - Res Publica 29 (2):283-302.
    Algorithm-assisted decision procedures—including some of the most high-profile ones, such as COMPAS—have been described as unfair because they compound injustice. The complaint is that in such procedures a decision disadvantaging members of a certain group is based on information reflecting the fact that the members of the group have already been unjustly disadvantaged. I assess this reasoning. First, I distinguish the anti-compounding duty from a related but distinct duty—the proportionality duty—from which at least some of the intuitive appeal of the (...)
  • Oppressive Things. Shen-yi Liao & Bryce Huebner - 2020 - Philosophy and Phenomenological Research 103 (1):92-113.
    In analyzing oppressive systems like racism, social theorists have articulated accounts of the dynamic interaction and mutual dependence between psychological components, such as individuals’ patterns of thought and action, and social components, such as formal institutions and informal interactions. We argue for the further inclusion of physical components, such as material artifacts and spatial environments. Drawing on socially situated and ecologically embedded approaches in the cognitive sciences, we argue that physical components of racism are not only shaped by, but also (...)
  • Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; (...)
  • Human achievement and artificial intelligence. Brett Karlan - 2023 - Ethics and Information Technology 25 (3):1-12.
    In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond which any humans can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game at (...)
  • Algorithms and the Individual in Criminal Law. Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
  • On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false (...)
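To fix ideas about the statistical criteria Hedden examines, here is a minimal illustrative sketch in Python. Nothing in it comes from the paper itself: the data, group labels, and function names are hypothetical, and the criterion shown (equal false positive rates across groups) is only one instance of the family of criteria the abstract describes.

```python
# Illustrative sketch only (not from Hedden's paper): toy data and names are
# hypothetical. One family of statistical fairness criteria requires that an
# algorithm's false positive rate be (roughly) equal across groups.

def false_positive_rate(predictions, outcomes):
    """Share of actual negatives (outcome 0) that were flagged (prediction 1)."""
    negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(negatives) / len(negatives) if negatives else float("nan")

# Hypothetical predictions (1 = flagged as high risk) and observed outcomes
group_a = ([1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0])
group_b = ([0, 0, 1, 0, 0, 0], [1, 0, 0, 0, 0, 0])

fpr_a = false_positive_rate(*group_a)
fpr_b = false_positive_rate(*group_b)
print(f"false positive rate, group A: {fpr_a:.2f}; group B: {fpr_b:.2f}")
# An equal-false-positive-rate criterion counts the algorithm as fair
# (in this one respect) only if these two rates match.
```

The sketch only shows what such a criterion computes; whether criteria of this statistical form capture fairness is the question the paper takes up.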
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Algorithmic bias: Senses, sources, solutions. Sina Fazelpour & David Danks - 2021 - Philosophy Compass 16 (8):e12760.
    Data‐driven algorithms are widely used to make or assist decisions in sensitive domains, including healthcare, social services, education, hiring, and criminal justice. In various cases, such algorithms have preserved or even exacerbated biases against vulnerable communities, sparking a vibrant field of research focused on so‐called algorithmic biases. This research includes work on identification, diagnosis, and response to biases in algorithm‐based decision‐making. This paper aims to facilitate the application of philosophical analysis to these contested issues by providing an overview of three (...)
  • Listening to algorithms: The case of self‐knowledge. Casey Doyle - forthcoming - European Journal of Philosophy.
    This paper begins with the thought that there is something out of place about offloading inquiry into one's own mind to AI. The paper's primary goal is to articulate the unease felt when considering cases of doing so. It draws a parallel with the use of algorithms in the criminal law: in both cases one feels entitled to be treated as an exception to a verdict made on the basis of a certain kind of evidence. Then it identifies an account (...)
  • Egalitarian Machine Learning. Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237-264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian considerations. We (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • Which Limitations Block Requirements? Amy Berg - 2023 - Moral Philosophy and Politics 10 (2):229-248.
    One of David Estlund’s key claims in Utopophobia is that theories of justice should not bend to human motivational limitations. Yet he does not extend this view to our cognitive limitations. This creates a dilemma. Theories of justice may ignore cognitive as well as motivational limitations—but this makes them so unrealistic as to be unrecognizable as theories of justice. Theories may bend to both cognitive and motivational limitations—but Estlund wants to reject this view. The other alternative is to find some (...)
  • The Importance of Forgetting. Rima Basu - 2022 - Episteme 19 (4):471-490.
    Morality bears on what we should forget. Some aspects of our identity are meant to be forgotten and there is a distinctive harm that accompanies the permanence of some content about us, content that prompts a duty to forget. To make the case that forgetting is an integral part of our moral duties to others, the paper proceeds as follows. In §1, I make the case that forgetting is morally evaluable and I survey three kinds of forgetting: no-trace forgetting, archival (...)
  • Why you are (probably) anthropomorphizing AI. Ali Hasan - manuscript
    In this paper I argue that, given the way that AI models work and the way that ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. I start with the core idea, recently defended by Thomas Kelly (2022) among others, that bias involves a systematic departure from a genuine standard or norm. I briefly discuss how bias can take on different explicit, implicit, and “truly implicit” (Johnson 2021) forms such as bias by (...)
  • What's Fair about Individual Fairness? Will Fleisher - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I argue that (...)
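To illustrate the distance-metric formalization mentioned in Fleisher's abstract, here is a minimal sketch in Python of a simplified, deterministic version of the individual fairness condition: similar individuals should receive similar treatment, with both similarities measured by task-specific metrics. The scores, the similarity metric, and the applicant data are hypothetical, and the standard formulation compares distributions over outcomes rather than the point scores used here.

```python
# Illustrative sketch only (not from Fleisher's paper): a simplified, deterministic
# rendering of the individual fairness condition, which requires that
# |score(x) - score(y)| <= d(x, y) for every pair of individuals, where d is a
# task-specific similarity metric. All scores, metrics, and data are hypothetical.

from itertools import combinations

def satisfies_individual_fairness(individuals, score, distance):
    """Check the similar-treatment (Lipschitz-style) condition for every pair."""
    return all(
        abs(score(x) - score(y)) <= distance(x, y)
        for x, y in combinations(individuals, 2)
    )

# Hypothetical applicants represented as (repayment_history, current_debt) in [0, 1]
applicants = [(0.9, 0.1), (0.85, 0.15), (0.2, 0.8)]
score = lambda a: 0.5 * a[0] + 0.5 * (1 - a[1])               # toy credit score
distance = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])   # toy similarity metric

print(satisfies_individual_fairness(applicants, score, distance))  # True on this data
```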