V. N. Papudesi, Y. Wang, M. Huber, and D. J. Cook
Making robot technology accessible to general end-users promises numerous benefits for all aspects of life. However, it also poses many challenges by requiring increasingly autonomous operation and the capability to interact with users who are generally not skilled robot operators. This paper presents an approach to variable autonomy that integrates user commands at varying levels of abstraction into an autonomous reinforcement learning component to permit faster policy acquisition and to modify robot behavior based on the preferences of the user. User commands serve here both as training input and as a means of modifying the reward structure of the learning component. Safety of the mechanism is ensured in the underlying control substrate as well as by an interface layer that suppresses inconsistent user commands. To illustrate the applicability of the presented approach, it is employed in a set of navigation experiments on a mobile and a walking robot in the context of the MavHome project.
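The abstract describes two uses of user commands: as training input (suggested actions) and as modifications to the learner's reward structure. The following is a minimal illustrative sketch, not the paper's actual algorithm: a tabular Q-learner on a hypothetical 1-D corridor, where a made-up `user_command` function occasionally supplies the action taken and also grants a small reward bonus to actions consistent with the user's preference. All names, states, and parameter values here are invented for illustration.

```python
import random

random.seed(0)

ACTIONS = ["left", "right"]
GOAL = 5  # hypothetical goal state on a 1-D corridor of states 0..5

def step(state, action):
    """Toy environment: move one cell; reward 1.0 on reaching the goal."""
    next_state = state + (1 if action == "right" else -1)
    next_state = max(0, min(GOAL, next_state))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def user_command(state):
    """Hypothetical user advice: always head right (toward the goal)."""
    return "right"

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, advice_bonus=0.2):
    Q = {}
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            # Use 1: user commands as training input -- sometimes the
            # executed action comes directly from the user's suggestion.
            if random.random() < 0.3:
                action = user_command(state)
            elif random.random() < epsilon:
                action = random.choice(ACTIONS)  # exploration
            else:
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
            next_state, reward = step(state, action)
            # Use 2: user commands reshape the reward -- a small bonus
            # for actions consistent with the user's stated preference.
            if action == user_command(state):
                reward += advice_bonus
            # Standard tabular Q-learning update.
            best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))
          for s in range(GOAL)}
```

Because user-consistent actions both receive a bonus and lead toward the goal, the learned greedy policy converges to the advised behavior faster than unassisted exploration would; the same structure allows contradictory advice to be filtered out by a safety layer before execution, as the abstract notes.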