References
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
  • On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Reclaiming AI as a Theoretical Tool for Cognitive Science. Iris van Rooij, Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova & Patricia Rich - 2024 - Computational Brain and Behavior 7:616–636.
    The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • Trust, Explainability and AI. Sam Baron - 2025 - Philosophy and Technology 38 (4):1-23.
    There has been a surge of interest in explainable artificial intelligence (XAI). It is commonly claimed that explainability is necessary for trust in AI, and that this is why we need it. In this paper, I argue that for some notions of trust it is plausible that explainability is indeed a necessary condition, but that these kinds of trust are not appropriate for AI. For notions of trust that are appropriate for AI, explainability is not a necessary condition. I thus (...)
  • Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • The Importance of Understanding Deep Learning. Tim Räz & Claus Beisbart - 2024 - Erkenntnis 89 (5).
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, _contra_ Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with (...)
  • A Metatheory of Classical and Modern Connectionism. Olivia Guest & Andrea E. Martin - manuscript
    Contemporary AI models owe much of their success and discontents to connectionism, a framework in cognitive science that has been (and continues to be) highly influential. Herein, we analyze artificial neural networks (ANNs): a) when used as scientific instruments of study; and b) when functioning as emergent arbiters of the zeitgeist in the cognitive, computational, and neural sciences. Building on our previous work with respect to analogizing between ANNs and cognition, brains, or behaviour (Guest & Martin, 2023), we use metatheoretical (...)
  • Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  • Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • Understanding with Toy Surrogate Models in Machine Learning. Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • On the Opacity of Deep Neural Networks. Anders Søgaard - 2023 - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Do ML models represent their targets? Emily Sullivan - forthcoming - Philosophy of Science.
    I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
  • The Epistemic Cost of Opacity: How the Use of Artificial Intelligence Undermines the Knowledge of Medical Doctors in High-Stakes Contexts. Eva Schmidt, Paul Martin Putora & Rianne Fijten - 2025 - Philosophy and Technology 38 (1):1-22.
    Artificial intelligence (AI) systems used in medicine are often very reliable and accurate, but at the price of their being increasingly opaque. This raises the question whether a system’s opacity undermines the ability of medical doctors to acquire knowledge on the basis of its outputs. We investigate this question by focusing on a case in which a patient’s risk of recurring breast cancer is predicted by an opaque AI system. We argue that, given the system’s opacity, as well as the (...)
  • Can AI Make Scientific Discoveries? Marianna Bergamaschi Ganapini - forthcoming - Philosophical Studies:1-19.
    AI technologies have recently shown remarkable capabilities in various scientific fields, such as drug discovery, medicine, climate modeling, and archaeology, primarily through their pattern recognition abilities. They can also generate hypotheses and suggest new research directions. While acknowledging AI’s potential to aid in scientific breakthroughs, the paper shows that current AI models do not meet the criteria for making independent scientific discoveries. Discovery is an epistemic achievement that requires a level of competence and self-reflectivity that AI does not yet possess.
  • ML interpretability: Simple isn't easy. Tim Räz - 2024 - Studies in History and Philosophy of Science Part A 103 (C):159-167.
  • Understanding climate phenomena with data-driven models. Benedikt Knüsel & Christoph Baumberger - 2020 - Studies in History and Philosophy of Science Part A 84 (C):46-56.
    In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational (...)
  • Understanding climate change with statistical downscaling and machine learning. Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2):1-21.
    Machine learning methods have recently created high expectations in the climate modelling context in view of addressing climate change, but they are often considered as non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • A Puzzle concerning Compositionality in Machines. Ryan M. Nefdt - 2020 - Minds and Machines 30 (1):47-75.
    This paper attempts to describe and address a specific puzzle related to compositionality in artificial networks such as Deep Neural Networks and machine learning in general. The puzzle identified here touches on a larger debate in Artificial Intelligence related to epistemic opacity but specifically focuses on computational applications of human-level linguistic abilities or properties and a special difficulty in relation to these. Thus, the resulting issue is both general and unique. A partial solution is suggested.
  • Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
  • AI and bureaucratic discretion. Kate Vredenburgh - 2023 - Inquiry: An Interdisciplinary Journal of Philosophy.
  • Looks Unhelpful. William E. S. McNeill - forthcoming - Mind.
    By looking at it you come to know that a thing is an apple. How? A natural answer is that this is down to how it looks – its superficial visual appearance. Looks Views treat our acquaintance with such looks as accounting for how visual knowledge is secured. Here I argue that for many pairings of properties and perceivers Looks Views will turn out not to work. We can visually track many properties through huge variation in things’ visual appearances. For (...)
  • Minds and Machines Special Issue: Machine Learning: Prediction Without Explanation? F. J. Boge, P. Grünke & R. Hillerbrand - 2022 - Minds and Machines 32 (1):1-9.
  • Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena. Timo Freiesleben, Gunnar König, Christoph Molnar & Álvaro Tejero-Cantero - 2024 - Minds and Machines 34 (3):1-39.
    To learn about real world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g. neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet, current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for designing IML methods—termed 'property descriptors'—that illuminate not just (...)
  • Effective theory building and manifold learning. David Peter Wallis Freeborn - 2025 - Synthese 205 (1):1-33.
    Manifold learning and effective model building are generally viewed as fundamentally different types of procedure. After all, in one we build a simplified model of the data; in the other, we construct a simplified model of another model. Nonetheless, I argue that certain kinds of high-dimensional effective model building, and effective field theory construction in quantum field theory, can be viewed as special cases of manifold learning. I argue that this helps to shed light on all of these techniques. (...)
  • Humanistic interpretation and machine learning. Juho Pääkkönen & Petri Ylikoski - 2021 - Synthese 199:1461–1497.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate the (...)
  • Epistemic Opacity and Scientific Realism and Anti-Realism. Jack Casey - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
  • Do artificial intelligence systems understand? Carlos Blanco Pérez & Eduardo Garrido-Merchán - 2024 - Claridades. Revista de Filosofía 16 (1):171-205.
    Are intelligent machines really intelligent? Is the underlying philosophical concept of intelligence satisfactory for describing how the present systems work? Is understanding a necessary and sufficient condition for intelligence? If a machine could understand, should we attribute subjectivity to it? This paper addresses the problem of deciding whether the so-called "intelligent machines" are capable of understanding, instead of merely processing signs. It deals with the relationship between syntax and semantics. The main thesis concerns the inevitability of semantics for any (...)
  • Epistemo-ethical constraints on AI-human decision making for diagnostic purposes. Dina Babushkina & Athanasios Votsis - 2022 - Ethics and Information Technology 24 (2).
    This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision making process and conceptualizes epistemo-ethical constraints on this process. We argue for the importance of the understanding of the underlying machine epistemology in order to raise awareness of and facilitate realistic expectations from AI as a decision support system, both among healthcare professionals and the potential benefiters. Understanding the epistemic abilities and limitations of such systems is essential if we are (...)
  • Disciplining Deliberation: A Socio-technical Perspective on Machine Learning Trade-Offs. Sina Fazelpour - forthcoming - British Journal for the Philosophy of Science.
    This paper examines two prominent formal trade-offs in artificial intelligence (AI): between predictive accuracy and fairness, and between predictive accuracy and interpretability. These trade-offs have become a central focus in normative and regulatory discussions as policymakers seek to understand the value tensions that can arise in the social adoption of AI tools. The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values, implying unavoidable conflicts between those social objectives. In this paper, I challenge that prevalent (...)
  • Predicting and explaining with machine learning models: Social science as a touchstone. Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have proven largely unsuccessful – at least when measured by traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, we argue that, (...)
  • The Problem of Differential Importability and Scientific Modeling. Anish Seal - 2024 - Philosophies 9 (6):164.
    The practice of science appears to involve “model-talk”. Scientists, one thinks, are in the business of giving accounts of reality. Scientists, in the process of furnishing such accounts, talk about what they call “models”. Philosophers of science have inspected what this talk of models suggests about how scientific theories manage to represent reality. There are, it seems, at least three distinct philosophical views on the role of scientific models in science’s portrayal of reality: the abstractionist view, the indirect fictionalist view, (...)
  • Deep Learning as Method-Learning: Pragmatic Understanding, Epistemic Strategies and Design-Rules. Phillip H. Kieval & Oscar Westerblad - manuscript
    We claim that scientists working with deep learning (DL) models exhibit a form of pragmatic understanding that is not reducible to or dependent on explanation. This pragmatic understanding comprises a set of learned methodological principles that underlie DL model design-choices and secure their reliability. We illustrate this action-oriented pragmatic understanding with a case study of AlphaFold2, highlighting the interplay between background knowledge of a problem and methodological choices involving techniques for constraining how a model learns from data. Building successful models (...)
  • The proper role of history in evolutionary explanations. Thomas A. C. Reydon - 2023 - Noûs 57 (1):162-187.
    Evolutionary explanations are not only common in the biological sciences, but also widespread outside biology. But an account of how evolutionary explanations perform their explanatory work is still lacking. This paper develops such an account. I argue that available accounts of explanations in evolutionary science miss important parts of the role of history in evolutionary explanations. I argue that the historical part of evolutionary science should be taken as having genuine explanatory force, and that it provides how‐possibly explanations sensu Dray. (...)
  • Epistemic Value of Digital Simulacra for Patients. Eleanor Gilmore-Szott - 2023 - American Journal of Bioethics 23 (9):63-66.
    Artificial Intelligence and Machine Learning (AI/ML) models introduce unique considerations when determining their epistemic value. Fortunately, existing work on the epistemic features of AI/ML can...
  • Hypothesis-driven science in large-scale studies: the case of GWAS. Sumana Sharma & James Read - 2021 - Biology and Philosophy 36 (5):1-21.
    It is now well-appreciated by philosophers that contemporary large-scale ‘-omics’ studies in biology stand in non-trivial relationships to more orthodox hypothesis-driven approaches. These relationships have been clarified by Ratti (2015); however, there remains much more to be said regarding how an important field of genomics cited in that work—‘genome-wide association studies’ (GWAS)—fits into this framework. In the present article, we propose a revision to Ratti’s framework more suited to studies such as GWAS. In the process of doing so, we introduce (...)
  • A Knower Without a Voice: Co-Reasoning with Machine Learning. Eleanor Gilmore-Szott & Ryan Dougherty - 2024 - American Journal of Bioethics 24 (9):103-105.
    Bioethical consensus promotes a shared decision making model, which requires healthcare professionals to partner their knowledge with that of their patients—who, at a minimum, contribute their valu...
  • Deep learning models and the limits of explainable artificial intelligence. Jens Christian Bjerring, Jakob Mainz & Lauritz Munch - 2025 - Asian Journal of Philosophy 4 (1):1-26.
    It has often been argued that we face a trade-off between accuracy and opacity in deep learning models. The idea is that we can only harness the accuracy of deep learning models by simultaneously accepting that the grounds for the models’ decision-making are epistemically opaque to us. In this paper, we ask the following question: what are the prospects of making deep learning models transparent without compromising on their accuracy? We argue that the answer to this question depends on which (...)
  • Testimony by LLMs. Jinhua He & Chen Yang - forthcoming - AI and Society.
    Artificial testimony generated by large language models (LLMs) can be a source of knowledge. However, the requirement that artificial testifiers must satisfy for successful knowledge acquisition is different from the requirement that human testifiers must satisfy. Correspondingly, the epistemic ground of artificial testimonial knowledge is not the well-known and accepted ones suggested by renowned epistemological theories of (human) testimony. Based on Thomas Reid’s old teaching, we suggest a novel epistemological theory of artificial testimony that for receivers to justifiably believe artificially (...)
  • Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models. Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the (...)
  • The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)