References
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Algorithmic bias: on the implicit biases of social technology. Gabbrielle M. Johnson - 2020 - Synthese 198 (10):9941-9961.
    Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, (...)
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • Mapping Value Sensitive Design onto AI for Social Good Principles. Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1 (3):283–296.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • On the ethics of algorithmic decision-making in healthcare. Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
  • Principles of Biomedical Ethics: Marking Its Fortieth Anniversary. James Childress & Tom Beauchamp - 2019 - American Journal of Bioethics 19 (11):9-12.
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Inductive Risk, Epistemic Risk, and Overdiagnosis of Disease. Justin B. Biddle - 2016 - Perspectives on Science 24 (2):192-205.
    Recent philosophers of science have not only revived the classical argument from inductive risk but extended it. I argue that some of the purported extensions do not fit cleanly within the schema of the original argument, and I discuss the problem of overdiagnosis of disease due to expanded disease definitions in order to show that there are some risks in the research process that are important and that very clearly fall outside of the domain of inductive risk. Finally, I (...)
  • Personal Knowledge. Michael Polanyi - 1958 - Chicago: University of Chicago Press.
    In this work the distinguished physical chemist and philosopher, Michael Polanyi, demonstrates that the scientist's personal participation in his knowledge, in ...
  • Dermatologist-level classification of skin cancer with deep neural networks. Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau & Sebastian Thrun - 2017 - Nature 542 (7639):115-118.
  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Cynthia Rudin - 2019 - Nature Machine Intelligence 1.
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5). doi:10.1136/medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Theory choice, non-epistemic values, and machine learning. Ravit Dotan - 2020 - Synthese (11):1-21.
    I use a theorem from machine learning, called the “No Free Lunch” theorem to support the claim that non-epistemic values are essential to theory choice. I argue that NFL entails that predictive accuracy is insufficient to favor a given theory over others, and that NFL challenges our ability to give a purely epistemic justification for using other traditional epistemic virtues in theory choice. In addition, I argue that the natural way to overcome NFL’s challenge is to use non-epistemic values. If (...)
  • Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism. Juan M. Durán & Nico Formanek - 2018 - Minds and Machines 28 (4):645-666.
    Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (Parker in Synthese 169(3):483–496, 2009; Morrison in Philos Stud 143:33–57, 2009), the nature of computer data (Barberousse and Vorms, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Epistemic risks in cancer screening: Implications for ethics and policy. Justin B. Biddle - 2020 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 79:101200.
  • Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Juan M. Durán - 2021 - Artificial Intelligence 297 (C):103498.
  • Responsibility beyond design: Physicians’ requirements for ethical medical AI. Martin Sand, Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Bioethics 36 (2):162-169.
  • Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Randomized Controlled Trials in Medical AI. Konstantin Genin & Thomas Grote - 2021 - Philosophy of Medicine 2 (1).
    Various publications claim that medical AI systems perform as well as, or better than, clinical experts. However, there have been very few controlled trials and the quality of existing studies has been called into question. There is growing concern that existing studies overestimate the clinical benefits of AI systems. This has led to calls for more, and higher-quality, randomized controlled trials of medical AI systems. While this is a welcome development, AI RCTs raise novel methodological challenges that have seen little discussion. We (...)