Evaluating Artificial Models of Cognition

Artificial models of cognition serve different purposes, and their use determines how they should be evaluated. Some models do not represent any particular biological agent, and it is controversial how such models should be assessed; at the same time, modelers do evaluate them as better or worse, and there is a widespread tendency to call for publicly available standards of replicability and benchmarking. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.
Archival date: 2015-04-27