Open Access | Apr 2015

Abstract

Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents or not; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.

DOI: https://doi.org/10.1515/slgr-2015-0003 | Journal eISSN: 2199-6059 | Journal ISSN: 0860-150X
Language: English
Page range: 43 - 62
Published on: Apr 10, 2015
Published by: University of Białystok, Department of Pedagogy and Psychology
In partnership with: Paradigm Publishing Services
Publication frequency: 4 times per year

© 2015 Marcin Miłkowski, published by University of Białystok, Department of Pedagogy and Psychology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.