Raja Sooriamurthi, David Leake
In AI research on explanation, the mechanisms used to construct explanations have traditionally been neutral to the environment in which the explanations are sought. Our view is that the explanation process cannot be isolated from the situation in which it occurs. Without considering the intended use for an explanation, the explanation construction process cannot be properly focused; without considering the situation, the process cannot act effectively to gather corroborating information. This research views explanation as a means to an end, where the end is the successful functioning of the system requesting the explanation. We develop a model of explanation as a situated, utility-based, hierarchical, goal-driven process.