Evaluating Artificial Models of Cognition

Studies in Logic, Grammar and Rhetoric 40 (1):43-62 (2015)

Abstract

Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents or not; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.

Author's Profile

Marcin Miłkowski
Polish Academy of Sciences
