Papers from the AAAI Workshop
Thomas Roth-Berghofer, Stefan Schulz, Daniel Bahls, and David B. Leake, Cochairs
Explanation has been widely investigated in disciplines such as artificial intelligence, cognitive science, linguistics, philosophy of science, and education. Each of these disciplines considers different aspects of “explanation,” making it clear that there are many views of the nature of explanation. Both within AI systems and in interactive systems, the ability to explain reasoning processes and results can have substantial impact. Within the field of knowledge-based systems, explanations have been considered an important link between humans and machines, increasing users’ confidence in a system’s result by providing evidence of how it was derived. In mixed-initiative problem solving, explanations exchanged between human and software agents may play an important role in the communication between them. Additional research has focused on how computer systems can themselves use explanations, for example, to guide learning. This workshop aimed to draw on multiple perspectives on explanation, to examine how explanation can be applied to further the development of robust and dependable systems, and to illuminate system processes to increase user acceptance and sense of control.