  • Causality: Models, Reasoning, and Inference. Christopher Hitchcock - 2001 - Philosophical Review 110 (4):639-641.
    Book review of Judea Pearl's book of the same name.
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Deep learning: A philosophical introduction. Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks. Cameron Buckner - 2018 - Synthese (12):1-34.
    In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to (...)
  • Understanding as compression. Daniel A. Wilkenfeld - 2019 - Philosophical Studies 176 (10):2807-2831.
    What is understanding? My goal in this paper is to lay out a new approach to this question and clarify how that approach deals with certain issues. The claim is that understanding is a matter of compressing information about the understood so that it can be mentally useful. On this account, understanding amounts to having a representational kernel and the ability to use it to generate the information one needs regarding the target phenomenon. I argue that this ambitious new account (...)
  • The explanatory potential of artificial societies. Till Grüne-Yanoff - 2009 - Synthese 169 (3):539-555.
    It is often claimed that artificial society simulations contribute to the explanation of social phenomena. At the hand of a particular example, this paper argues that artificial societies often cannot provide full explanations, because their models are not or cannot be validated. Despite that, many feel that such simulations somehow contribute to our understanding. This paper tries to clarify this intuition by investigating whether artificial societies provide potential explanations. It is shown that these potential explanations, if they contribute to our (...)
  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Cynthia Rudin - 2019 - Nature Machine Intelligence 1.
  • Understanding climate change with statistical downscaling and machine learning. Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2):1-21.
    Machine learning methods have recently created high expectations in the climate modelling context in view of addressing climate change, but they are often considered as non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • How could models possibly provide how-possibly explanations? Philippe Verreault-Julien - 2019 - Studies in History and Philosophy of Science Part A 73:1-12.
    One puzzle concerning highly idealized models is whether they explain. Some suggest they provide so-called ‘how-possibly explanations’. However, this raises an important question about the nature of how-possibly explanations, namely what distinguishes them from ‘normal’, or how-actually, explanations? I provide an account of how-possibly explanations that clarifies their nature in the context of solving the puzzle of model-based explanation. I argue that the modal notions of actuality and possibility provide the relevant dividing lines between how-possibly and how-actually explanations. Whereas how-possibly (...)
  • Factive scientific understanding without accurate representation. Collin C. Rice - 2016 - Biology and Philosophy 31 (1):81-102.
    This paper analyzes two ways idealized biological models produce factive scientific understanding. I then argue that models can provide factive scientific understanding of a phenomenon without providing an accurate representation of the features of their real-world target system. My analysis of these cases also suggests that the debate over scientific realism needs to investigate the factive scientific understanding produced by scientists’ use of idealized models rather than the accuracy of scientific models themselves.
  • Understanding, explanation, and unification. Victor Gijsbers - 2013 - Studies in History and Philosophy of Science Part A 44 (3):516-522.
    In this article I argue that there are two different types of understanding: the understanding we get from explanations, and the understanding we get from unification. This claim is defended by first showing that explanation and unification are not as closely related as has sometimes been thought. A critical appraisal of recent proposals for understanding without explanation leads us to discuss the example of a purely classificatory biology: it turns out that such a science can give us understanding of the (...)
  • Understanding Deep Learning with Statistical Relevance. Tim Räz - 2022 - Philosophy of Science 89 (1):20-41.
    This paper argues that a notion of statistical explanation, based on Salmon’s statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon’s model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The (...)
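
A formal gloss on the last entry may help: the "minimal sufficient statistics" and "Information Bottleneck" notions that Räz's abstract invokes have a standard formulation in the information-theory literature. The sketch below states that standard form; the notation is an assumption of this gloss, not taken from Räz's own paper.

    % Information Bottleneck Lagrangian, in its standard form (general IB
    % literature; not Räz 2022's notation). T is a learned, compressed
    % representation of the input X; Y is the prediction target; I(.;.)
    % is mutual information; \beta > 0 trades compression for prediction.
    \min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta\, I(T;Y)
    % A statistic T = f(X) is sufficient for Y iff I(T;Y) = I(X;Y), and
    % minimal iff it also minimizes I(X;T) among sufficient statistics.
    % Large \beta pushes the IB optimum toward sufficiency, while the
    % I(X;T) penalty enforces minimality; in this sense the IB problem
    % generalizes minimal sufficient statistics to the lossy
    % representations that a deep network's layers compute.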