  • Explanation Hacking: The perils of algorithmic recourse. E. Sullivan & Atoosa Kasirzadeh - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    We argue that the trend toward providing users with feasible and actionable explanations of AI decisions—known as recourse explanations—comes with ethical downsides. Specifically, we argue that recourse explanations face several conceptual pitfalls and can lead to problematic explanation hacking, which undermines their ethical status. As an alternative, we advocate that explanations of AI decisions should aim at understanding.
  • Sensational Science, Archaic Hominin Genetics, and Amplified Inductive Risk. Joyce C. Havstad - 2022 - Canadian Journal of Philosophy 52 (3):295-320.
    More than a decade of exacting scientific research involving paleontological fragments and ancient DNA has lately produced a series of pronouncements about a purportedly novel population of archaic hominins dubbed “the Denisova.” The science involved in these matters is both technically stunning and, socially, at times a bit reckless. Here I discuss the responsibilities which scientists incur when they make inductively risky pronouncements about the different relative contributions by Denisovans to genomes of members of apparent subpopulations of current humans. This (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  • Science and values: a two-way direction. Emanuele Ratti & Federica Russo - 2024 - European Journal for Philosophy of Science 14 (1):1-23.
    In the science and values literature, scholars have shown how science is influenced and shaped by values, often in opposition to the ‘value free’ ideal of science. In this paper, we aim to contribute to the science and values literature by showing that the relation between science and values flows not only from values into scientific practice, but also from (allegedly neutral) science to values themselves. The extant literature in the ‘science and values’ field focuses by and large on reconstructing, (...)
  • Transparency is Surveillance. C. Thi Nguyen - 2021 - Philosophy and Phenomenological Research 105 (2):331-361.
    In her BBC Reith Lectures on Trust, Onora O’Neill offers a short, but biting, criticism of transparency. People think that trust and transparency go together but in reality, says O'Neill, they are deeply opposed. Transparency forces people to conceal their actual reasons for action and invent different ones for public consumption. Transparency forces deception. I work out the details of her argument and worsen her conclusion. I focus on public transparency – that is, transparency to the public over expert domains. (...)
  • On the Site of Predictive Justice. Seth Lazar & Jake Stone - forthcoming - Noûs.
    Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. We introduce (...)
  • The New Worries about Science. Janet A. Kourany - 2022 - Canadian Journal of Philosophy 52 (3):227-245.
    Science is based on facts—facts that are systematically gathered by a community of enquirers through detailed observation and experiment. In the twentieth century, however, philosophers of science claimed that the facts that scientists “gather” in this way are shaped by the theories scientists accept, and this seemed to threaten the authority of science. Call this the old worries about science. By contrast, what seemed not to threaten that authority were other factors that shaped the facts that scientists gather—for example, the mere questions scientists (...)
  • Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; (...)
  • Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease, like cancer or heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
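The optimisation this abstract describes can be made concrete with a minimal sketch. The example below is a generic illustration, not Karaca's own formulation: the cost weights, toy data, and function name are hypothetical, standing in for the value judgment that a false negative (a missed disease case) is graver than a false positive.

```python
# Illustrative sketch only (hypothetical costs and data, not from the paper):
# how a stance on inductive risk -- weighting missed disease cases more
# heavily than false alarms -- enters the construction of a binary
# classifier through the choice of decision threshold.
import numpy as np

def expected_misclassification_cost(y_true, scores, threshold,
                                    cost_fn=10.0, cost_fp=1.0):
    """Average cost of labelling patients with scores >= threshold as positive.

    cost_fn and cost_fp are hypothetical weights expressing how severe
    false negatives and false positives are judged to be.
    """
    y_pred = scores >= threshold
    fn = np.sum((y_true == 1) & ~y_pred)  # missed disease cases
    fp = np.sum((y_true == 0) & y_pred)   # false alarms
    return (cost_fn * fn + cost_fp * fp) / len(y_true)

# Toy data: true labels and model scores for ten patients.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.10, 0.40, 0.35, 0.20, 0.80, 0.60, 0.30, 0.15, 0.55, 0.05])

# Different cost weightings -- i.e., different stances on inductive risk --
# make different thresholds come out as "optimal" for the same model.
thresholds = np.linspace(0.0, 1.0, 101)
costs = [expected_misclassification_cost(y_true, scores, t) for t in thresholds]
print("cost-minimizing threshold:", thresholds[int(np.argmin(costs))])
```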
  • Algorithmic legitimacy in clinical decision-making. Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
    Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for discussing the legitimacy of algorithmic (...)
  • Epistemic Value of Digital Simulacra for Patients. Eleanor Gilmore-Szott - 2023 - American Journal of Bioethics 23 (9):63-66.
    Artificial Intelligence and Machine Learning (AI/ML) models introduce unique considerations when determining their epistemic value. Fortunately, existing work on the epistemic features of AI/ML can...
  • A Taxonomy of Transparency in Science. Kevin C. Elliott - 2022 - Canadian Journal of Philosophy 52 (3):342-355.
    Both scientists and philosophers of science have recently emphasized the importance of promoting transparency in science. For scientists, transparency is a way to promote reproducibility, progress, and trust in research. For philosophers of science, transparency can help address the value-ladenness of scientific research in a responsible way. Nevertheless, the concept of transparency is a complex one. Scientists can be transparent about many different things, for many different reasons, on behalf of many different stakeholders. This paper proposes a taxonomy that clarifies (...)
  • Predicting and explaining with machine learning models: Social science as a touchstone. Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have been shown to be largely unsuccessful – at least when measured by traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, we argue that, (...)
  • How to Philosophically Tackle Kinds without Talking About ‘Natural Kinds’. Ingo Brigandt - 2022 - Canadian Journal of Philosophy 52 (3):356-379.
    Recent rival attempts in the philosophy of science to put forward a general theory of the properties that all (and only) natural kinds across the sciences possess may have proven to be futile. Instead, I develop a general methodological framework for how to philosophically study kinds. Any kind has to be investigated and articulated together with the human aims that motivate referring to this kind, where different kinds in the same scientific domain can answer to different concrete aims. My core (...)
  • Engaging with science, values, and society: introduction. Ingo Brigandt - 2022 - Canadian Journal of Philosophy 52 (3):223-226.
    Philosophical work on science and values has come to engage with the concerns of society and of stakeholders affected by science and policy, leading to socially relevant philosophy of science and socially engaged philosophy of science. This special issue showcases instances of socially relevant philosophy of science, featuring contributions on a diversity of topics by Janet Kourany, Andrew Schroeder, Alison Wylie, Kristen Intemann, Joyce Havstad, Justin Biddle, Kevin Elliott, and Ingo Brigandt.
  • What Kind of Artificial Intelligence Should We Want for Use in Healthcare Decision-Making Applications? Jordan Joseph Wadden - 2021 - Canadian Journal of Bioethics / Revue canadienne de bioéthique 4 (1).
    The prospect of including artificial intelligence in clinical decision-making is an exciting next step for some areas of healthcare. This article provides an analysis of the available kinds of AI systems, focusing on macro-level characteristics. This includes examining the strengths and weaknesses of opaque systems and fully explainable systems. Ultimately, the article argues that “grey box” systems, which include some combination of opacity and transparency, ought to be used in healthcare settings.