Using Computer Simulations for Hypothesis-Testing and Prediction: Epistemological Strategies

Abstract

This paper explores the epistemological challenges of using computer simulations for two distinct goals: explanation via hypothesis-testing and prediction. It argues that, because each goal faces different practical and conceptual constraints, each requires different strategies for justifying inferences drawn from simulation results. The paper identifies both unique and shared strategies that researchers employ to increase confidence in their inferences for each goal. For explanation via hypothesis-testing, researchers must address the challenges of underdetermination, interpretability, and attribution. For prediction, the emphasis falls on the model's ability to generalize across multiple domains. The shared strategies are empirical corroboration of theoretical assumptions and establishing the adequacy of computational operationalizations; the paper argues that these are necessary for explanation via hypothesis-testing but not for prediction. Given the diverse applications of computer simulation in scientific research, the paper emphasizes the need for a nuanced approach to the epistemology of computer simulation. Understanding these differences is crucial for both researchers and philosophers of science, as it informs the development of appropriate methodologies and criteria for assessing the trustworthiness of computer simulations.

Author's Profile

Tan Nguyen
Washington University in St. Louis
