Evaluating Explanations

David B. Leake

Explanation-based learning (EBL) is a powerful method for category formation. However, EBL systems are only effective if they start with good explanations. The problem of evaluating candidate explanations has received little attention: current research usually assumes that a single explanation will be available for any situation, and that this explanation will be appropriate. In the real world, many explanations can be generated for a given anomaly, only some of which are reasonable. Thus it is crucial to be able to distinguish between good and bad explanations. In people, the criteria for evaluating explanations are dynamic: they reflect context, the explainer's current knowledge, and his needs for specific information. I present a theory of how these factors affect the evaluation of explanations, and describe its implementation in ACCEPTER, a program that evaluates explanations for anomalies detected during story understanding.
