References
  • The Problem of Differential Importability and Scientific Modeling. Anish Seal - 2024 - Philosophies 9 (6):164.
    The practice of science appears to involve “model-talk”. Scientists, one thinks, are in the business of giving accounts of reality. Scientists, in the process of furnishing such accounts, talk about what they call “models”. Philosophers of science have inspected what this talk of models suggests about how scientific theories manage to represent reality. There are, it seems, at least three distinct philosophical views on the role of scientific models in science’s portrayal of reality: the abstractionist view, the indirect fictionalist view, (...)
  • Understanding with Toy Surrogate Models in Machine Learning. Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)
  • Listening to algorithms: The case of self‐knowledge. Casey Doyle - forthcoming - European Journal of Philosophy.
    This paper begins with the thought that there is something out of place about offloading inquiry into one's own mind to AI. The paper's primary goal is to articulate the unease felt when considering cases of doing so. It draws a parallel with the use of algorithms in the criminal law: in both cases one feels entitled to be treated as an exception to a verdict made on the basis of a certain kind of evidence. Then it identifies an account (...)
  • ML interpretability: Simple isn't easy. Tim Räz - 2024 - Studies in History and Philosophy of Science Part A 103 (C):159-167.
  • Do ML models represent their targets? Emily Sullivan - forthcoming - Philosophy of Science.
    I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
  • AI and bureaucratic discretion. Kate Vredenburgh - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
  • Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  • Models, Algorithms, and the Subjects of Transparency. Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic (...)
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • The Importance of Understanding Deep Learning. Tim Räz & Claus Beisbart - 2024 - Erkenntnis 89 (5).
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, _contra_ Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with (...)
  • Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  • Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • Understanding climate change with statistical downscaling and machine learning. Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2):1-21.
    Machine learning methods have recently created high expectations in the climate modelling context in view of addressing climate change, but they are often considered as non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • Humanistic interpretation and machine learning. Juho Pääkkönen & Petri Ylikoski - 2021 - Synthese 199:1461–1497.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate the (...)
  • Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models. Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the (...)
  • A Knower Without a Voice: Co-Reasoning with Machine Learning. Eleanor Gilmore-Szott & Ryan Dougherty - 2024 - American Journal of Bioethics 24 (9):103-105.
    Bioethical consensus promotes a shared decision making model, which requires healthcare professionals to partner their knowledge with that of their patients—who, at a minimum, contribute their valu...
  • Epistemic Value of Digital Simulacra for Patients. Eleanor Gilmore-Szott - 2023 - American Journal of Bioethics 23 (9):63-66.
    Artificial Intelligence and Machine Learning (AI/ML) models introduce unique considerations when determining their epistemic value. Fortunately, existing work on the epistemic features of AI/ML can...
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • Deep Learning Applied to Scientific Discovery: A Hot Interface with Philosophy of Science. Louis Vervoort, Henry Shevlin, Alexey A. Melnikov & Alexander Alodjants - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (2):339-351.
    We review publications in automated scientific discovery using deep learning, with the aim of shedding light on problems with strong connections to philosophy of science, of physics in particular. We show that core issues of philosophy of science, related notably to the nature of scientific theories, the nature of unification, and the nature of causation, loom large in scientific deep learning. Therefore, advances in deep learning could, and ideally should, have impact on philosophy of science, and vice versa. We suggest lines of (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • Hypothesis-driven science in large-scale studies: the case of GWAS. Sumana Sharma & James Read - 2021 - Biology and Philosophy 36 (5):1-21.
    It is now well-appreciated by philosophers that contemporary large-scale ‘-omics’ studies in biology stand in non-trivial relationships to more orthodox hypothesis-driven approaches. These relationships have been clarified by Ratti (2015); however, there remains much more to be said regarding how an important field of genomics cited in that work—‘genome-wide association studies’ (GWAS)—fits into this framework. In the present article, we propose a revision to Ratti’s framework more suited to studies such as GWAS. In the process of doing so, we introduce (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Can Machine Learning Provide Understanding? How Cosmologists Use Machine Learning to Understand Observations of the Universe. Helen Meskhidze - 2023 - Erkenntnis 88 (5):1895-1909.
    The increasing precision of observations of the large-scale structure of the universe has created a problem for simulators: running the simulations necessary to interpret these observations has become impractical. Simulators have thus turned to machine learning (ML) algorithms instead. Though ML decreases computational expense, one might be worried about the use of ML for scientific investigations: How can algorithms that have repeatedly been described as black-boxes deliver scientific understanding? In this paper, I investigate how cosmologists employ ML, arguing that in (...)
  • On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems (...)
  • Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena. Timo Freiesleben, Gunnar König, Christoph Molnar & Álvaro Tejero-Cantero - 2024 - Minds and Machines 34 (3):1-39.
    To learn about real world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g. neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet, current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for designing IML methods—termed ‘property descriptors’—that illuminate not just (...)
  • Searching for Features with Artificial Neural Networks in Science: The Problem of Non-Uniqueness. Siyu Yao & Amit Hagar - 2024 - International Studies in the Philosophy of Science 37 (1):51-67.
    Artificial neural networks and supervised learning have become an essential part of science. Beyond using them for accurate input-output mapping, there is growing attention to a new feature-oriented approach. Under the assumption that networks optimised for a task may have learned to represent and utilise important features of the target system for that task, scientists examine how those networks manipulate inputs and employ the features networks capture for scientific discovery. We analyse this approach, show its hidden caveats, and suggest its (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemic justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • A Puzzle concerning Compositionality in Machines. Ryan M. Nefdt - 2020 - Minds and Machines 30 (1):47-75.
    This paper attempts to describe and address a specific puzzle related to compositionality in artificial networks such as Deep Neural Networks and machine learning in general. The puzzle identified here touches on a larger debate in Artificial Intelligence related to epistemic opacity but specifically focuses on computational applications of human level linguistic abilities or properties and a special difficulty with relation to these. Thus, the resulting issue is both general and unique. A partial solution is suggested.
  • The proper role of history in evolutionary explanations. Thomas A. C. Reydon - 2023 - Noûs 57 (1):162-187.
    Evolutionary explanations are not only common in the biological sciences, but also widespread outside biology. But an account of how evolutionary explanations perform their explanatory work is still lacking. This paper develops such an account. I argue that available accounts of explanations in evolutionary science miss important parts of the role of history in evolutionary explanations. I argue that the historical part of evolutionary science should be taken as having genuine explanatory force, and that it provides how‐possibly explanations sensu Dray. (...)
  • Do artificial intelligence systems understand? Carlos Blanco Pérez & Eduardo Garrido-Merchán - 2024 - Claridades. Revista de Filosofía 16 (1):171-205.
    Are intelligent machines really intelligent? Is the underlying philosophical concept of intelligence satisfactory for describing how the present systems work? Is understanding a necessary and sufficient condition for intelligence? If a machine could understand, should we attribute subjectivity to it? This paper addresses the problem of deciding whether the so-called “intelligent machines” are capable of understanding, instead of merely processing signs. It deals with the relationship between syntax and semantics. The main thesis concerns the inevitability of semantics for any (...)
  • On the Opacity of Deep Neural Networks. Anders Søgaard - forthcoming - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to (...)
  • Epistemo-ethical constraints on AI-human decision making for diagnostic purposes. Dina Babushkina & Athanasios Votsis - 2022 - Ethics and Information Technology 24 (2).
    This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision making process and conceptualizes epistemo-ethical constraints on this process. We argue for the importance of the understanding of the underlying machine epistemology in order to raise awareness of and facilitate realistic expectations from AI as a decision support system, both among healthcare professionals and the potential benefiters. Understanding the epistemic abilities and limitations of such systems is essential if we are (...)
  • Understanding climate phenomena with data-driven models. Benedikt Knüsel & Christoph Baumberger - 2020 - Studies in History and Philosophy of Science Part A 84 (C):46-56.
    In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational (...)
  • The predictive reframing of machine learning applications: good predictions and bad measurements. Alexander Martin Mussgnug - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    Supervised machine learning has found its way into ever more areas of scientific inquiry, where the outcomes of supervised machine learning applications are almost universally classified as predictions. I argue that what researchers often present as a mere terminological particularity of the field involves the consequential transformation of tasks as diverse as classification, measurement, or image segmentation into prediction problems. Focusing on the case of machine-learning enabled poverty prediction, I explore how reframing a measurement problem as a prediction task alters (...)
  • Negotiating becoming: a Nietzschean critique of large language models. Simon W. S. Fischer & Bas de Boer - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) structure the linguistic landscape by reflecting certain beliefs and assumptions. In this paper, we address the risk of people unthinkingly adopting and being determined by the values or worldviews embedded in LLMs. We provide a Nietzschean critique of LLMs and, based on the concept of will to power, consider LLMs as will-to-power organisations. This allows us to conceptualise the interaction between self and LLMs as power struggles, which we understand as negotiation. Currently, the invisibility and incomprehensibility (...)
  • Predicting and explaining with machine learning models: Social science as a touchstone. Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have been shown to be largely unsuccessful – at least when measured by traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, we argue that, (...)
  • Mapping representational mechanisms with deep neural networks. Phillip Hintikka Kieval - 2022 - Synthese 200 (3):1-25.
    The predominance of machine learning based techniques in cognitive neuroscience raises a host of philosophical and methodological concerns. Given the messiness of neural activity, modellers must make choices about how to structure their raw data to make inferences about encoded representations. This leads to a set of standard methodological assumptions about when abstraction is appropriate in neuroscientific practice. Yet, when made uncritically these choices threaten to bias conclusions about phenomena drawn from data. Contact between the practices of multivariate pattern analysis (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • Health Digital Twins, Legal Liability, and Medical Practice. Andreas Kuersten - 2023 - American Journal of Bioethics 23 (9):66-69.
    Digital twins for health care have the potential to significantly impact the provision of medical services. In addition to possible use in care, this technology could serve as a conduit by which no...
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a series of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • Agree to disagree: the symmetry of burden of proof in human–AI collaboration. Karin Rolanda Jongsma & Martin Sand - 2022 - Journal of Medical Ethics 48 (4):230-231.
    In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians, when physicians and AI disagree, which they call the rule of disagreement (RoD). The authors defend RoD based on three premises: first, they argue that in cases of disagreement in medical practice, there is an increased burden of proof for the physician in (...)
  • Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
  • Deep Learning-Aided Research and the Aim-of-Science Controversy. Yukinori Onishi - forthcoming - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie:1-19.
    The aim or goal of science has long been discussed by both philosophers of science and scientists themselves. In The Scientific Image (van Fraassen 1980), the aim of science is famously employed to characterize scientific realism and a version of anti-realism, called constructive empiricism. Since the publication of The Scientific Image, however, various changes have occurred in scientific practice. The increasing use of machine learning technology, especially deep learning (DL), is probably one of the major changes in the last decade. (...)