Abstract
Recent debates over artificial intelligence have focused on its perceived lack of interpretability and explainability. I argue that these notions fail to capture an important aspect of what end-users (as opposed to developers) need from these models: what is needed is systematicity, in a sense more demanding than the compositionality-related one that has dominated discussions of systematicity in the philosophy of language and cognitive science over the last thirty years. To recover this more demanding notion, I distinguish between (i) the systematicity of thinkable contents, (ii) the systematicity of thinking, and (iii) the ideal of systematic thought. I then deploy this distinction to critically evaluate Fodor’s systematicity-based argument for the language of thought hypothesis, before recovering the notion of the systematicity of thought as a regulative ideal that has historically shaped our understanding of what it means for thought to be rational, authoritative, and scientific. To assess how much systematicity we need from AI models, I then argue, we must look to the functions of systematizing thought. To this end, I identify five functions served by systematization and show how they can be used to arrive at a dynamic understanding of the need to systematize thought, one that tells us what kind of systematicity is called for and when.