Inductive Risk, Understanding, and Opaque Machine Learning Models

Philosophy of Science, 1-13 (forthcoming)
Abstract
Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this paper, I argue that non-epistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there are (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity problem, where greater inductive risk demands a higher level of transparency regarding the inferences the model makes.
Keywords
No keywords specified
PhilPapers/Archive ID
SULIRU
Upload history
Archival date: 2022-04-24
Added to PP index
2022-04-24
