Functional Analyses, Mechanistic Explanations, and Explanatory Tradeoffs

Journal of Cognitive Science 14:229-251 (2013)

Abstract

Recently, Piccinini and Craver have advanced three theses concerning the relation between functional analysis and mechanistic explanation in the cognitive sciences: No Distinctness, according to which functional analysis and mechanistic explanation are explanations of the same kind; Integration, according to which functional analysis is a kind of mechanistic explanation; and Subordination, according to which functional analyses are unsatisfactory sketches of mechanisms. In this paper, I argue, first, that functional analysis and mechanistic explanation are both sub-kinds of explanation by scientific (idealized) models. From that point of view, we must take into account the tradeoff between the representational and explanatory goals of generality and precision that governs the practice of model-building. In some modeling scenarios, it is rational to maximize explanatory generality at the expense of mechanistic precision. This tradeoff allows me to put forward a problem for the mechanist position: if mechanistic modeling endorses generality as a valuable goal, then Subordination should be rejected; if mechanists reject generality as a goal, then Integration is false. I suggest that mechanists should accept that functional analysis can offer acceptable explanations of cognitive phenomena.

Author's Profile

Sergio Daniel Barberis
Universidad de Buenos Aires (UBA)
