Inductive Risk, Understanding, and Opaque Machine Learning Models

Philosophy of Science 89 (5):1065-1074 (2022)

Abstract

Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values shape the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of model-based science aimed at explaining and understanding phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity problem, where greater inductive risk demands greater transparency about the inferences the model makes.

Author's Profile

Emily Sullivan
Utrecht University
