Understanding from Machine Learning Models

Abstract
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black-box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
PhilPapers/Archive ID: SULUFM
Archival date: 2019-07-18