Evaluating Architectures for Intelligence
Papers from the AAAI Workshop
Gal A. Kaminka and Catherina R. Burghart, Cochairs
Cognitive architectures form an integral part of robots and agents. Architectures structure and organize the knowledge agents use to select actions in dynamic environments, plan and solve problems, learn, and coordinate with others. Architectures serve to integrate the general capabilities expected of an intelligent agent (such as planning and learning), to implement and test theories of agent cognition, and to explore domain-independent mechanisms for intelligence.
As AI research has improved in formal and empirical rigor, traditional evaluation methodologies for architectures have sometimes proved insufficient. Formal analysis has often been elusive; we seem to lack the notation required for proving properties of architectures. Experiments that demonstrate generality are notoriously expensive to perform and often insufficiently informative. At a higher level, evaluation is difficult because the criteria themselves are not well defined: Is it generality? Ease of programmability? Compatibility with data from biology and psychology? There are no established evaluation methodologies and only a handful of established evaluation criteria.
Recognizing that scientific progress depends on the ability to conduct informative evaluation (by experiment or formal analysis), this workshop will address the methodologies needed for evaluating architectures. The focus is on methodology rather than on specific architectures. The workshop has two goals: to promote discussion and to propose evaluation criteria that the research community will accept as recognized evaluation guidelines.