  • The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis: 1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)
  • The Ethics and Epistemology of Explanatory AI in Medicine and Healthcare. Karin Jongsma, Martin Sand & Juan M. Durán - 2022 - Ethics and Information Technology 24 (4): 1-4.
  • Hypothesis-Driven Science in Large-Scale Studies: The Case of GWAS. Sumana Sharma & James Read - 2021 - Biology and Philosophy 36 (5): 1-21.
    It is now well-appreciated by philosophers that contemporary large-scale ‘-omics’ studies in biology stand in non-trivial relationships to more orthodox hypothesis-driven approaches. These relationships have been clarified by Ratti (2015); however, there remains much more to be said regarding how an important field of genomics cited in that work—‘genome-wide association studies’ (GWAS)—fits into this framework. In the present article, we propose a revision to Ratti’s framework more suited to studies such as GWAS. In the process of doing so, we introduce (...)
  • Understanding Climate Change with Statistical Downscaling and Machine Learning. Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2): 1-21.
    Machine learning methods have recently created high expectations in the climate modelling context in view of addressing climate change, but they are often considered as non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • Software Engineering Standards for Epidemiological Models. Jack K. Horner & John F. Symons - 2020 - History and Philosophy of the Life Sciences 42 (4): 1-24.
    There are many tangled normative and technical questions involved in evaluating the quality of software used in epidemiological simulations. In this paper we answer some of these questions and offer practical guidance to practitioners, funders, scientific journals, and consumers of epidemiological research. The heart of our paper is a case study of the Imperial College London COVID-19 simulator, set in the context of recent work in epistemology of simulation and philosophy of epidemiology.
  • Understanding Climate Phenomena with Data-Driven Models. Benedikt Knüsel & Christoph Baumberger - 2020 - Studies in History and Philosophy of Science Part A 84 (C): 46-56.
    In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational (...)
  • Analogue Models and Universal Machines: Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1): 111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • Deep Learning Applied to Scientific Discovery: A Hot Interface with Philosophy of Science. Louis Vervoort, Henry Shevlin, Alexey A. Melnikov & Alexander Alodjants - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (2): 339-351.
    We review publications in automated scientific discovery using deep learning, with the aim of shedding light on problems with strong connections to philosophy of science, of physics in particular. We show that core issues of philosophy of science, related notably to the nature of scientific theories, the nature of unification, and of causation, loom large in scientific deep learning. Therefore, advances in deep learning could, and ideally should, have impact on philosophy of science, and vice versa. We suggest lines of (...)
  • The Predictive Reframing of Machine Learning Applications: Good Predictions and Bad Measurements. Alexander Martin Mussgnug - 2022 - European Journal for Philosophy of Science 12 (3): 1-21.
    Supervised machine learning has found its way into ever more areas of scientific inquiry, where the outcomes of supervised machine learning applications are almost universally classified as predictions. I argue that what researchers often present as a mere terminological particularity of the field involves the consequential transformation of tasks as diverse as classification, measurement, or image segmentation into prediction problems. Focusing on the case of machine-learning-enabled poverty prediction, I explore how reframing a measurement problem as a prediction task alters (...)