References

  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.Cynthia Rudin - 2019 - Nature Machine Intelligence 1.
  • Peeking inside the black-box: A survey on explainable artificial intelligence (XAI).A. Adadi & M. Berrada - 2018 - IEEE Access 6.
  • Understanding Deep Learning with Statistical Relevance.Tim Räz - 2022 - Philosophy of Science 89 (1):20-41.
    This paper argues that a notion of statistical explanation, based on Salmon’s statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon’s model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The (...)
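    As a rough sketch of the formal link this abstract invokes (standard information-theoretic notation, not taken from the paper itself): a statistic T = f(X) is sufficient for a target Y when I(T;Y) = I(X;Y), and minimal sufficient when it additionally minimizes I(X;T). The Information Bottleneck relaxes exact sufficiency into a trade-off between compression and relevance,
    \[
    \min_{p(t\mid x)} \; I(X;T) \;-\; \beta\, I(T;Y),
    \]
    where the I(X;T) term rewards compression (minimality) and larger values of the multiplier \(\beta\) weight relevance to Y more heavily (sufficiency).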
  • Transparency in Complex Computational Systems.Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • Explicating Objectual Understanding: Taking Degrees Seriously.Christoph Baumberger - 2019 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 50 (3):367-388.
    The paper argues that an account of understanding should take the form of a Carnapian explication and acknowledge that understanding comes in degrees. An explication of objectual understanding is defended, which helps to make sense of the cognitive achievements and goals of science. The explication combines a necessary condition with three evaluative dimensions: an epistemic agent understands a subject matter by means of a theory only if the agent commits herself sufficiently to the theory of the subject matter, and to (...)
  • Deep learning: A philosophical introduction.Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation (...)
  • Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • External representations and scientific understanding.Jaakko Kuorikoski & Petri Ylikoski - 2015 - Synthese 192 (12):3817-3837.
    This paper provides an inferentialist account of model-based understanding by combining a counterfactual account of explanation and an inferentialist account of representation with a view of modeling as extended cognition. This account makes it understandable how the manipulation of surrogate systems like models can provide genuinely new empirical understanding about the world. Similarly, the account provides an answer to the question how models, that always incorporate assumptions that are literally untrue of the model target, can still provide factive explanations. Finally, (...)
  • Understanding climate change with statistical downscaling and machine learning.Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2):1-21.
    Machine learning methods have recently created high expectations in the climate modelling context in view of addressing climate change, but they are often considered as non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning.Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • How to Tell When Simpler, More Unified, or Less Ad Hoc Theories Will Provide More Accurate Predictions.Malcolm R. Forster & Elliott Sober - 1994 - British Journal for the Philosophy of Science 45 (1):1-35.
    Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light (...)
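    For orientation, the Akaike result the abstract builds on can be stated in a standard form (conventions here may differ from Forster and Sober's own presentation): the quantity
    \[
    \log \Pr(\text{data} \mid L(F)) \;-\; k
    \]
    serves as an (approximately unbiased) estimate of the predictive accuracy of a model family F, where L(F) is the best-fitting (maximum-likelihood) member of F and k is the number of adjustable parameters. Multiplying by -2 yields the familiar AIC score, so extra parameters count against a family unless they sufficiently improve its fit to the data.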
  • Explaining Machine Learning Decisions.John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
  • The no-free-lunch theorems of supervised learning.Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory, that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias, that wants justification. We argue that many standard learning algorithms should rather (...)
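    The result under discussion can be glossed, in a simplified version of Wolpert's formulation for off-training-set (OTS) error, as the claim that any two learning algorithms A and B do equally well when performance is averaged uniformly over all possible target functions f:
    \[
    \sum_{f} E\big[\mathrm{err}_{\mathrm{OTS}} \mid f, d, A\big] \;=\; \sum_{f} E\big[\mathrm{err}_{\mathrm{OTS}} \mid f, d, B\big],
    \]
    with d the training sample. The skeptical reading, which the paper resists, is that no inductive bias is better justified than any other.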
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • A Contextual Approach to Scientific Understanding.Henk W. de Regt & Dennis Dieks - 2005 - Synthese 144 (1):137-170.
    Achieving understanding of nature is one of the aims of science. In this paper we offer an analysis of the nature of scientific understanding that accords with actual scientific practice and accommodates the historical diversity of conceptions of understanding. Its core idea is a general criterion for the intelligibility of scientific theories that is essentially contextual: which theories conform to this criterion depends on contextual factors, and can change in the course of time. Our analysis provides a general account of (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning.Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • MUDdy understanding.Daniel A. Wilkenfeld - 2017 - Synthese 194 (4).
    This paper focuses on two questions: Is understanding intimately bound up with accurately representing the world? Is understanding intimately bound up with downstream abilities? We will argue that the answer to both these questions is "yes", and for the same reason: both accuracy and ability are important elements of orthogonal evaluative criteria along which understanding can be assessed. More precisely, we will argue that representational-accuracy and intelligibility are good-making features of a state of understanding. Interestingly, both evaluative claims have been defended (...)
  • The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • Scientific explanation and the sense of understanding.J. D. Trout - 2002 - Philosophy of Science 69 (2):212-233.
    Scientists and laypeople alike use the sense of understanding that an explanation conveys as a cue to good or correct explanation. Although the occurrence of this sense or feeling of understanding is neither necessary nor sufficient for good explanation, it does drive judgments of the plausibility and, ultimately, the acceptability, of an explanation. This paper presents evidence that the sense of understanding is in part the routine consequence of two well-documented biases in cognitive psychology: overconfidence and hindsight. In light of (...)
  • The Importance of Understanding Deep Learning.Tim Räz & Claus Beisbart - 2024 - Erkenntnis 89 (5).
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with (...)
  • Euler’s Königsberg: the explanatory power of mathematics.Tim Räz - 2018 - European Journal for Philosophy of Science 8 (3):331-346.
    The present paper provides an analysis of Euler’s solutions to the Königsberg bridges problem. Euler proposes three different solutions to the problem, addressing their strengths and weaknesses along the way. I put the analysis of Euler’s paper to work in the philosophical discussion on mathematical explanations. I propose that the key ingredient to a good explanation is the degree to which it provides relevant information. Providing relevant information is based on knowledge of the structure in question, graphs in the present (...)
  • Learning from the Shape of Data.Sarita Rosenstock - 2021 - Philosophy of Science 88 (5):1033-1044.
    To make sense of large data sets, we often look for patterns in how data points are “shaped” in the space of possible measurement outcomes. The emerging field of topological data analysis offers a toolkit for formalizing the process of identifying such shapes. This article aims to discover why and how the resulting analysis should be understood as reflecting significant features of the systems that generated the data. I argue that a particular feature of TDA—its functoriality—is what enables TDA to (...)
  • Visualization as a Tool for Understanding.Henk W. de Regt - 2014 - Perspectives on Science 22 (3):377-396.
    The act of understanding is at the heart of all scientific activity; without it any ostensibly scientific activity is as sterile as that of a high school student substituting numbers into a formula. Ordinary language often uses visual metaphors in connection with understanding. When we finally understand what someone is trying to point out to us, we exclaim: “I see!” When someone really understands a subject matter, we say that she has “insight”. There appears to be a link between visualization (...)