Metacognition in Computation
Papers from the AAAI Spring Symposium
Mike Anderson and Tim Oates, Cochairs
The importance of metacognition in human thinking, learning, and problem solving is well established. Humans use metacognitive monitoring and control to choose goals, assess their own progress, and, if necessary, adopt new strategies for achieving those goals, or even abandon a goal entirely. For instance, students preparing for an examination judge the relative difficulty of the material and use those judgments to choose study strategies. Because the accuracy of such metacognitive judgments correlates with academic performance, understanding human metacognition has been an important part of work on automated tutoring systems, and has led to the use of computer assistants that help improve human metacognition.
However, there has also been growing interest in trying to create, and investigate the potential benefits of, intelligent systems which are themselves metacognitive. It is thought that systems that monitor themselves, and proactively respond to problems, can perform better, for longer, with less need for (expensive) human intervention. Thus IBM has widely publicized its “autonomic computing” initiative, aimed at developing computers which are (in its words) self-aware, self-configuring, self-optimizing, self-healing, self-protecting, and self-adapting. More ambitiously, it is hypothesized that metacognitive awareness may be one of the keys to developing truly intelligent artificial systems. DARPA’s recent Cognitive Information Processing Technology initiative, for instance, foregrounds reflection (along with reaction and deliberation) as one of the three pillars required for flexible, robust AI systems.
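The shared architectural idea behind such systems — an object level that does the work while a meta level monitors its progress and intervenes when a strategy stalls — can be sketched in a few lines. The sketch below is purely illustrative and not drawn from any of the symposium papers; all names (`solve`, the strategy dictionaries) are hypothetical.

```python
# Illustrative sketch of a metacognitive monitor/control loop.
# The object level runs a strategy step by step; the meta level
# watches progress toward the target and abandons a strategy that
# stops improving, moving on to the next one.

def solve(target, strategies):
    """Try each strategy in turn; drop any that stalls (meta-level control)."""
    for strategy in strategies:
        value = strategy["start"]
        best_distance = None
        for _ in range(strategy["budget"]):        # control: bounded effort
            value = strategy["step"](value)        # object level: do the work
            distance = abs(value - target)
            if distance < 1e-6:                    # monitoring: goal reached?
                return strategy["name"], value
            if best_distance is not None and distance >= best_distance:
                break                              # monitoring: no progress, switch
            best_distance = distance
    return None, None                              # every strategy abandoned
```

For example, given a diverging strategy (doubling) and a converging one (halving) aimed at a target of zero, the meta level abandons the first after two non-improving steps and succeeds with the second.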
On the other side of the coin, it has also been established that metacognition can actually interfere with performance. Metacognition is no panacea, and so one issue requiring further inquiry is the scope and limits of its usefulness.