  • Understanding Deep Learning with Statistical Relevance. Tim Räz - 2022 - Philosophy of Science 89 (1):20-41.
    This paper argues that a notion of statistical explanation, based on Salmon’s statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon’s model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The (...)
  • Conceptualizing Uncertainty: The IPCC, Model Robustness and the Weight of Evidence. Margherita Harris - 2021 - Dissertation, London School of Economics.
    The aim of this thesis is to improve our understanding of how to assess and communicate uncertainty in areas of research deeply afflicted by it, the assessment and communication of which are made more fraught still by the studies’ immediate policy implications. The IPCC is my case study throughout the thesis, which consists of three parts. In Part 1, I offer a thorough diagnosis of conceptual problems faced by the IPCC uncertainty framework. The main problem I discuss is the persistent (...)
  • The Epistemic Benefits of Generalisation in Modelling II: Expressive Power and Abstraction. Aki Lehtinen - 2022 - Synthese 200 (2):1-24.
    This paper contributes to the philosophical accounts of generalisation in formal modelling by introducing a conceptual framework that allows for recognising generalisations that are epistemically beneficial in the sense of contributing to the truth of a model result or component. The framework is useful for modellers themselves because it is shown how to recognise different kinds of generalisation on the basis of changes in model descriptions. Since epistemically beneficial generalisations usually de-idealise the model, the paper proposes a reformulation of the (...)
  • The Epistemic Benefits of Generalisation in Modelling I: Systems and Applicability. Aki Lehtinen - 2021 - Synthese 199 (3-4):10343-10370.
    This paper provides a conceptual framework that allows for distinguishing between different kinds of generalisation and applicability. It is argued that generalising models may bring epistemic benefits. They do so if they show that restrictive and unrealistic assumptions do not threaten the credibility of results derived from models. There are two different notions of applicability, generic and specific, which give rise to three different kinds of generalizations. Only generalising a result brings epistemic benefits concerning the truth of model components or (...)
  • The Epistemic Value of Independent Lies: False Analogies and Equivocations. Margherita Harris - 2021 - Synthese 199 (5-6):14577-14597.
    Here I critically assess an argument put forward by Kuorikoski et al. (Br J Philos Sci, 61(3):541–567, 2010) for the epistemic import of model-based robustness analysis. I show that this argument is not sound since the sort of probabilistic independence on which it relies is unfeasible. By revising the notion of probabilistic independence imposed on the models’ results, I introduce a prima-facie more plausible argument. However, despite this prima-facie plausibility, I show that even this new argument is unsound in most (...)