  • Mapping representational mechanisms with deep neural networks. Phillip Hintikka Kieval - 2022 - Synthese 200 (3):1-25.
    The predominance of machine learning-based techniques in cognitive neuroscience raises a host of philosophical and methodological concerns. Given the messiness of neural activity, modellers must make choices about how to structure their raw data to make inferences about encoded representations. This leads to a set of standard methodological assumptions about when abstraction is appropriate in neuroscientific practice. Yet, when made uncritically these choices threaten to bias conclusions about phenomena drawn from data. Contact between the practices of multivariate pattern analysis (...)
  • Humanistic interpretation and machine learning. Juho Pääkkönen & Petri Ylikoski - 2021 - Synthese 199:1461–1497.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate the (...)
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that are (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather than (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • A Puzzle concerning Compositionality in Machines. Ryan M. Nefdt - 2020 - Minds and Machines 30 (1):47-75.
    This paper attempts to describe and address a specific puzzle related to compositionality in artificial networks such as Deep Neural Networks and machine learning in general. The puzzle identified here touches on a larger debate in Artificial Intelligence related to epistemic opacity but specifically focuses on computational applications of human-level linguistic abilities or properties and a special difficulty in relation to these. Thus, the resulting issue is both general and unique. A partial solution is suggested.
  • Understanding climate phenomena with data-driven models. Benedikt Knüsel & Christoph Baumberger - 2020 - Studies in History and Philosophy of Science Part A 84 (C):46-56.
    In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational (...)
  • The predictive reframing of machine learning applications: good predictions and bad measurements. Alexander Martin Mussgnug - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    Supervised machine learning has found its way into ever more areas of scientific inquiry, where the outcomes of supervised machine learning applications are almost universally classified as predictions. I argue that what researchers often present as a mere terminological particularity of the field involves the consequential transformation of tasks as diverse as classification, measurement, or image segmentation into prediction problems. Focusing on the case of machine-learning-enabled poverty prediction, I explore how reframing a measurement problem as a prediction task alters (...)
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • Epistemo-ethical constraints on AI-human decision making for diagnostic purposes. Athanasios Votsis & Dina Babushkina - 2022 - Ethics and Information Technology 24 (2).
    This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision-making process and conceptualizes epistemo-ethical constraints on this process. We argue for the importance of understanding the underlying machine epistemology in order to raise awareness of and facilitate realistic expectations from AI as a decision support system, both among healthcare professionals and potential beneficiaries. Understanding the epistemic abilities and limitations of such systems is essential if we are (...)
  • Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - forthcoming - Philosophy of Science:1-13.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this paper, I argue that non-epistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  • Hypothesis-driven science in large-scale studies: the case of GWAS. Sumana Sharma & James Read - 2021 - Biology and Philosophy 36 (5):1-21.
    It is now well-appreciated by philosophers that contemporary large-scale ‘-omics’ studies in biology stand in non-trivial relationships to more orthodox hypothesis-driven approaches. These relationships have been clarified by Ratti; however, there remains much more to be said regarding how an important field of genomics cited in that work, ‘genome-wide association studies’ (GWAS), fits into this framework. In the present article, we propose a revision to Ratti’s framework more suited to studies such as GWAS. In the process of doing so, we introduce (...)
  • The Importance of Understanding Deep Learning. Tim Räz & Claus Beisbart - forthcoming - Erkenntnis:1-18.
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with DNNs. (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek (...)
  • Computers Are Syntax All the Way Down: Reply to Bozşahin. William J. Rapaport - 2019 - Minds and Machines 29 (2):227-237.
    A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In recent decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a series of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease, such as cancer or heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimisation of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
  • Agree to disagree: the symmetry of burden of proof in human–AI collaboration. Karin Rolanda Jongsma & Martin Sand - 2022 - Journal of Medical Ethics 48 (4):230-231.
    In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians, when physicians and AI disagree, which they call the rule of disagreement (RoD). The authors defend RoD based on three premises: first, they argue that in cases of disagreement in medical practice, there is an increased burden of proof for the physician in (...)
  • Understanding climate change with statistical downscaling and machine learning. Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2):1-21.
    Machine learning methods have recently created high expectations in the climate modelling context with a view to addressing climate change, but they are often considered non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • Can Robots Do Epidemiology? Machine Learning, Causal Inference, and Predicting the Outcomes of Public Health Interventions. Alex Broadbent & Thomas Grote - 2022 - Philosophy and Technology 35 (1):1-22.
    This paper argues that machine learning (ML) and epidemiology are on a collision course over causation. The discipline of epidemiology lays great emphasis on causation, while ML research does not. Some epidemiologists have proposed imposing what amounts to a causal constraint on ML in epidemiology, requiring it either to engage in causal inference or restrict itself to mere projection. We whittle down the issues to the question of whether causal knowledge is necessary for underwriting predictions about the outcomes of public health interventions. (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2022 - Minds and Machines 32 (1):43-75.
    Deep neural networks (DNNs) have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Minds and Machines Special Issue: Machine Learning: Prediction Without Explanation? F. J. Boge, P. Grünke & R. Hillerbrand - 2022 - Minds and Machines 32 (1):1-9.
  • On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • Models, Algorithms, and the Subjects of Transparency. Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic (...)
  • Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models. Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state of the art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the (...)