  • SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in black-box models. However, criticism has recently mounted that current xAI methods disagree with one another, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. Idealizations (...)
  • The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)
  • Predicting and explaining with machine learning models: Social science as a touchstone. Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have been shown to be largely unsuccessful – at least when measured by traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, we argue that, (...)
  • Do ML models represent their targets? Emily Sullivan - forthcoming - Philosophy of Science.
    I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
  • Understanding climate change with statistical downscaling and machine learning. Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2):1-21.
    Machine learning methods have recently created high expectations in the climate modelling context in view of addressing climate change, but they are often considered as non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • Conceptualizing understanding in explainable artificial intelligence (XAI): an abilities-based approach. Timo Speith, Barnaby Crook, Sara Mann, Astrid Schomäcker & Markus Langer - 2024 - Ethics and Information Technology 26 (2):1-15.
    A central goal of research in explainable artificial intelligence (XAI) is to facilitate human understanding. However, understanding is an elusive concept that is difficult to target. In this paper, we argue that a useful way to conceptualize understanding within the realm of XAI is via certain human abilities. We present four criteria for a useful conceptualization of understanding in XAI and show that these are fulfilled by an abilities-based approach: First, thinking about understanding in terms of specific abilities is motivated (...)