Introspective Reasoning in a Case-Based Planner

Susan Fox, David Leake

Many current AI systems assume that the reasoning mechanisms used to manipulate their knowledge can be fixed in advance by the designer. This assumption may break down in complex domains. The focus of this research is developing a model of introspective reasoning and learning that enables a system to improve its own reasoning processes as well as its domain knowledge. Our model is based on the proposal of Birnbaum et al. (1991) to use a model of the ideal behavior of a case-based system to judge system performance and to refine its reasoning mechanisms; it also draws on the research of Ram and Cox (1994) on introspective failure-driven learning.
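The general scheme can be illustrated with a minimal sketch, assuming a simple trace-checking design; all names, the trace representation, and the similarity threshold below are hypothetical illustrations, not details from the paper. Expectations derived from a model of ideal case-based reasoning are checked against an actual reasoning trace, and violations are recorded as reasoning failures that would drive refinement of the reasoning mechanism.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Expectation:
    """One assertion from the model of ideal reasoning behavior."""
    name: str
    check: Callable[[Dict], bool]  # predicate over a reasoning trace

@dataclass
class IntrospectiveMonitor:
    """Compares actual reasoning traces against ideal-behavior expectations."""
    expectations: List[Expectation]
    failures: List[str] = field(default_factory=list)

    def monitor(self, trace: Dict) -> List[str]:
        # Record every expectation the trace violates as a reasoning failure.
        for exp in self.expectations:
            if not exp.check(trace):
                self.failures.append(exp.name)
        return self.failures

# Hypothetical expectation: case retrieval should return a case whose
# similarity to the new problem exceeds a threshold (0.5 is an assumption).
relevant = Expectation("retrieved_case_relevant",
                       lambda t: t["similarity"] >= 0.5)

monitor = IntrospectiveMonitor([relevant])
failures = monitor.monitor({"similarity": 0.3})
# A detected failure would then trigger repair of the reasoning
# mechanism itself, e.g., adjusting the retrieval criteria.
```

In a full system, each recorded failure would be diagnosed to locate the responsible reasoning component, and a learning step would modify that component rather than only the domain knowledge.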

Copyright © AAAI. All rights reserved.