  • Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease, like cancer or heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimisation of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
  • Prediction Versus Understanding in Computationally Enhanced Neuroscience. M. Chirimuuta - 2020 - Synthese 199 (1-2):767-790.
    The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks used as a predictive tool in neuroscience. Following two case studies on the use of ANNs to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding (...)
  • Throwing Light on Black Boxes: Emergence of Visual Categories From Deep Learning. Ezequiel López-Rubio - 2020 - Synthese 198 (10):10021-10041.
    One of the best known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data. Moreover, it has complicated investigation into the role of symbols and language in cognition. This state of affairs has been radically changed by recent experimental findings in artificial deep learning research. Two kinds of artificial (...)
  • Algorithmic and Human Decision Making: For a Double Standard of Transparency. Mario Günther & Atoosa Kasirzadeh - forthcoming - AI and Society.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  • What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2020 - Philosophy and Technology.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Freedom at Work. Kate Vredenburgh - manuscript.
  • Randomised Controlled Trials in Medical AI: Ethical Considerations. Thomas Grote - forthcoming - Journal of Medical Ethics: medethics-2020-107166.
    In recent years, there has been a surge of high-profile publications on applications of artificial intelligence systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials, particularly randomised controlled trials, is gaining ground. However, an issue that has (...)
  • Understanding Climate Phenomena with Data-Driven Models. Benedikt Knüsel & Christoph Baumberger - 2020 - Studies in History and Philosophy of Science Part A 84:46-56.
    In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by greater graspability, state-of-the-art models have a higher representational accuracy and representational (...)