Abstract
This paper explores the epistemological challenges of using computer simulations for two distinct goals: explanation via hypothesis-testing and prediction. It argues that each goal requires different strategies for justifying inferences drawn from simulation results, owing to differing practical and conceptual constraints. The paper identifies both unique and shared strategies that researchers employ to increase confidence in their inferences for each goal. For explanation via hypothesis-testing, researchers must address the challenges of underdetermination, interpretability, and attribution. For prediction, the emphasis falls on the model's ability to generalize across multiple domains. The shared strategies for increasing confidence in inferences are empirical corroboration of theoretical assumptions and adequacy of computational operationalizations; the paper argues that these are necessary for explanation via hypothesis-testing but not for prediction. Given the diverse applications of computer simulation in scientific research, the paper emphasizes the need for a nuanced approach to the epistemology of computer simulation. Understanding these differences is crucial for both researchers and philosophers of science, as it helps develop appropriate methodologies and criteria for assessing the trustworthiness of computer simulations.