References
  • Aspirational Affordances of AI. Sina Fazelpour & Meica Magnani - manuscript
    As artificial intelligence (AI) systems increasingly permeate processes of cultural and epistemic production, there are growing concerns about how their outputs may confine individuals and groups to static or restricted narratives about who or what they could be. In this paper, we advance the discourse surrounding these concerns by making three contributions. First, we introduce the concept of aspirational affordance to describe how technologies of representation---paintings, literature, photographs, films, or video games---shape the exercising of imagination, particularly as it pertains to (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Authenticity in algorithm-aided decision-making. Brett Karlan - 2024 - Synthese 204 (93):1-25.
    I identify an undertheorized problem with decisions we make with the aid of algorithms: the problem of inauthenticity. When we make decisions with the aid of algorithms, we can make ones that go against our commitments and values in a normatively important way. In this paper, I present a framework for algorithm-aided decision-making that can lead to inauthenticity. I then construct a taxonomy of the features of the decision environment that make such outcomes likely, and I discuss three possible solutions (...)
  • (Some) algorithmic bias as institutional bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
    In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias as institutional bias in particular (as (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2025 - Philosophical Studies 182 (1):55-85.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Hiring, Algorithms, and Choice: Why Interviews Still Matter. Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Artificial Intelligence, Discrimination, Fairness, and Other Moral Concerns. Re’em Segev - 2024 - Minds and Machines 34 (4):1-22.
    Should the input data of artificial intelligence (AI) systems include factors such as race or sex when these factors may be indicative of morally significant facts? More importantly, is it wrong to rely on the output of AI tools whose input includes factors such as race or sex? And is it wrong to rely on the output of AI systems when it is correlated with factors such as race or sex (whether or not its input includes such factors)? The answers (...)
  • A philosophical inquiry on the effect of reasoning in A.I models for bias and fairness. Aadit Kapoor - manuscript
    Advances in Artificial Intelligence (AI) have driven the evolution of reasoning in modern AI models, particularly with the development of Large Language Models (LLMs) and their "Think and Answer" paradigm. This paper explores the influence of human reinforcement on AI reasoning and its potential to enhance decision-making through dynamic human interaction. It analyzes the roots of bias and fairness in AI, arguing that these issues often stem from human data and reflect inherent human biases. The paper is structured as follows: (...)
  • (1 other version) If the Difference Principle Won’t Make a Real Difference in Algorithmic Fairness, What Will? Reuben Binns - 2024 - Philosophy and Technology 37 (4):1-8.
  • Algorithmic Decision-making, Statistical Evidence and the Rule of Law. Vincent Chiao - forthcoming - Episteme.
    The rapidly increasing role of automation throughout the economy, culture and our personal lives has generated a large literature on the risks of algorithmic decision-making, particularly in high-stakes legal settings. Algorithmic tools are charged with bias, shrouded in secrecy, and frequently difficult to interpret. However, these criticisms have tended to focus on particular implementations, specific predictive techniques, and the idiosyncrasies of the American legal-regulatory regime. They do not address the more fundamental unease about the prospect that we might one day (...)
  • Assembled Bias: Beyond Transparent Algorithmic Bias. Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias _assembled bias._ Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature (...)
  • Emergent Discrimination: Should We Protect Algorithmic Groups? Jannik Zeiser - forthcoming - Journal of Applied Philosophy.
    Discrimination is usually thought of in terms of socially salient groups, such as race or gender. Some scholars argue that the rise of algorithmic decision‐making poses a challenge to this notion. Algorithms are not bound by a social view of the world. Therefore, they may not only inherit pre‐existing social biases and injustices but may also discriminate based on entirely new categories that have little or no meaning to humans at all, such as ‘being born on a Tuesday’. Should this (...)
  • Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the Rashomon Set. Gordon Dai, Pavan Ravishankar, Rachel Yuan, Daniel B. Neill & Emily Black - manuscript
    When selecting a model from a set of equally performant models, how much unfairness can you really reduce? Is it important to be intentional about fairness when choosing among this set, or is arbitrarily choosing among the set of “good” models good enough? Recent work has highlighted that the phenomenon of model multiplicity—where multiple models with nearly identical predictive accuracy exist for the same task—has both positive and negative implications for fairness, from strengthening the enforcement of civil rights law in (...)
  • Book review: Jana Schaich Borg, Walter Sinnott-Armstrong and Vincent Conitzer, Moral AI and How We Get There. [REVIEW] Jyoti Kishore - forthcoming - Journal of Human Values.
  • (1 other version) If the Difference Principle Won’t Make a Real Difference in Algorithmic Fairness, What Will? [REVIEW] Reuben Binns - manuscript
    In ‘Rawlsian algorithmic fairness and a missing aggregation property of the difference Principle’, the authors argue that there is a false assumption in algorithmic fairness interventions inspired by John Rawls’ theory of justice. They argue that applying the difference principle at the level of a local algorithmic decision-making context (what they term a ‘constituent situation’), is neither necessary nor sufficient for the difference principle to be upheld at the aggregate level of society at large. I find these arguments compelling. They (...)