AAAI Publications, Workshops at the Thirty-First AAAI Conference on Artificial Intelligence

"Why Did You Do That?" Explainable Intelligent Robots
Raymond Ka-Man Sheh

Last modified: 2017-03-21

Abstract


As autonomous intelligent systems become more widespread, society is beginning to ask: "What are the machines up to?" Various forms of artificial intelligence control our latest cars, load-balance components of our power grids, dictate much of the movement in our stock markets and help doctors diagnose and treat our ailments. As these systems become increasingly able to learn and model more complex phenomena, the ability of human users to understand the reasoning behind their decisions often decreases. This makes it very difficult to ensure that a robot will perform properly and to correct errors when it does not. In this paper, we outline a variety of techniques for generating the underlying knowledge required for explainable artificial intelligence, ranging from early work in expert systems through to systems based on Behavioural Cloning. These are techniques that may be used to build intelligent robots that explain their decisions and justify their actions. We then illustrate how decision trees are particularly well suited to generating these kinds of explanations. We also discuss how additional explanations can be obtained, beyond simply the structure of the tree, based on knowledge of how the training data was generated. Finally, we illustrate these capabilities in the context of a robot learning to drive over rough terrain, both in simulation and in reality.
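To illustrate the abstract's claim that decision trees lend themselves to explanation, the following is a minimal sketch of how the tests along a tree's decision path can be read off as an if-then justification for a prediction. The terrain features, data and class labels here are invented for illustration and are not taken from the paper; the sketch assumes scikit-learn's `DecisionTreeClassifier`.

```python
# A hedged sketch (not the paper's implementation): turning the decision
# path of a trained tree into a human-readable rule. Feature names,
# training data and action labels below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

features = ["roughness", "slope"]  # hypothetical terrain features

# Tiny invented dataset: terrain measurements -> driving action.
X = np.array([[0.1, 5.0], [0.2, 30.0], [0.8, 10.0], [0.9, 35.0]])
y = np.array(["fast", "slow", "slow", "slow"])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def explain(sample):
    """Return the conjunction of tests the tree applied to this sample."""
    path = tree.decision_path(sample.reshape(1, -1))
    leaf = tree.apply(sample.reshape(1, -1))[0]
    clauses = []
    for node in path.indices:
        if node == leaf:          # the leaf holds the decision, not a test
            continue
        f = tree.tree_.feature[node]
        t = tree.tree_.threshold[node]
        op = "<=" if sample[f] <= t else ">"
        clauses.append(f"{features[f]} {op} {t:.2f}")
    return " AND ".join(clauses)

sample = np.array([0.1, 5.0])
print(f"Action: {tree.predict(sample.reshape(1, -1))[0]}")
print(f"Because: {explain(sample)}")
```

Each internal node contributes one clause, so the explanation is exactly the sequence of tests the tree actually used, which is what makes tree-based controllers comparatively easy to justify to a human operator.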

Keywords


Explainable Artificial Intelligence; Behavioural Cloning; Machine Learning; Robot Behaviour; Human-Robot Interaction
