Functional Analyses, Mechanistic Explanations, and Explanatory Tradeoffs
Journal of Cognitive Science 14:229-251 (2013)
Abstract
Recently, Piccinini and Craver have advanced three theses concerning the relations between functional analysis and mechanistic explanation in the cognitive sciences: No Distinctness: functional analysis and mechanistic explanation are explanations of the same kind; Integration: functional analysis is a kind of mechanistic explanation; and Subordination: functional analyses are unsatisfactory sketches of mechanisms. In this paper, I argue, first, that functional analysis and mechanistic explanation are sub-kinds of explanation by scientific (idealized) models. From that point of view, we must take into account the tradeoff between the representational/explanatory goals of generality and precision that governs the practice of model-building. In some modeling scenarios, it is rational to maximize explanatory generality at the expense of mechanistic precision. This tradeoff allows me to put forward a problem for the mechanist position. If mechanistic modeling endorses generality as a valuable goal, then Subordination should be rejected. If mechanists reject generality as a goal, then Integration is false. I suggest that mechanists should accept that functional analysis can offer acceptable explanations of cognitive phenomena.