Beyond Interpretability and Explainability: Systematic AI and the Function of Systematizing Thought

Abstract

Recent debates over artificial intelligence have focused on its perceived lack of interpretability and explainability. I argue that these notions fail to capture an important aspect of what end-users, as opposed to developers, need from these models: what is needed is systematicity, in a sense more demanding than the compositionality-related one that has dominated discussions of systematicity in the philosophy of language and cognitive science over the last thirty years. To recover this more demanding notion of systematicity, I distinguish between (i) the systematicity of thinkable contents, (ii) the systematicity of thinking, and (iii) the ideal of systematic thought. I deploy this distinction to critically evaluate Fodor’s systematicity-based argument for the language of thought hypothesis before recovering the notion of the systematicity of thought as a regulative ideal, one that has historically shaped our understanding of what it means for thought to be rational, authoritative, and scientific. To assess how much systematicity we need from AI models, I then argue, we must look to the functions of systematizing thought. To this end, I identify five functions served by systematization, and show how these yield a dynamic understanding of the need to systematize thought, one that tells us what kind of systematicity is called for and when.

Matthieu Queloz
University of Bern
