  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - forthcoming - Philosophy Compass.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning, as far as they are concerned with reliability.
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Integrating Artificial Intelligence in Scientific Practice: Explicable AI as an Interface. Emanuele Ratti - 2022 - Philosophy and Technology 35 (3):1-5.
    A recent article by Herzog provides a much-needed integration of ethical and epistemological arguments in favor of explicable AI in medicine. In this short piece, I suggest a way in which its epistemological intuition of XAI as “explanatory interface” can be further developed to delineate the relation between AI tools and scientific research.
  • Automating anticorruption? María Carolina Jiménez & Emanuela Ceva - 2022 - Ethics and Information Technology 24 (4):1-14.
    The paper explores some normative challenges concerning the integration of Machine Learning (ML) algorithms into anticorruption in public institutions. The challenges emerge from the tensions between an approach treating ML algorithms as allies to an exclusively legalistic conception of anticorruption and an approach seeing them within an institutional ethics of office accountability. We explore two main challenges. One concerns the variable opacity of some ML algorithms, which may affect public officeholders’ capacity to account for institutional processes relying upon ML techniques. (...)
  • The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)
  • Allure of Simplicity. Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and interpretability by design. Comparing the three strategies, I argue that (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)