Imagine This: Opaque DLMs are Reliable in the Context of Justification

Abstract

Artificial intelligence (AI) and machine learning (ML) models have undoubtedly become useful tools in science. In general, scientists and ML developers are optimistic, perhaps rightly so, about the potential these models have to facilitate scientific progress. The philosophy of AI literature carries a different mood. Philosophers' attention remains on potential epistemological issues that stem from the so-called "black box" features of ML models. For instance, Eamon Duede (2023) argues that opacity in deep learning models (DLMs) is epistemically problematic in the context of justification, though not in the context of discovery. In this paper, I aim to show that a similar epistemological concern is echoed in the epistemology of imagination literature: it is traditionally held that, given its black box features, reliance on the imagination is epistemically problematic in the context of justification, though not in the context of discovery. The constraints-based approach to the imagination answers this concern by giving an account of how constraints make the imagination reliable in the context of justification. I argue by analogy that a similar approach can be applied to the opaque DLM case. Ultimately, my goal is to explore just how far this analogy extends, and whether a constraints-based approach to opaque DLMs can answer the epistemological concern surrounding their black box features in the context of justification. (Note: this paper is in progress and unpublished.)

Author's Profile

Logan Carter
Florida State University
