AAAI Publications, Thirtieth AAAI Conference on Artificial Intelligence

Sparse Latent Space Policy Search
Kevin Sebastian Luck, Joni Pajarinen, Erik Berger, Ville Kyrki, Heni Ben Amor


Abstract


Computational agents often need to learn policies that involve many control variables, e.g., a robot needs to control several joints simultaneously. Learning a policy with a high number of parameters, however, usually requires a large number of training samples. We introduce a reinforcement learning method for sample-efficient policy search that exploits correlations between control variables. Such correlations are particularly frequent in motor skill learning tasks. The introduced method uses Variational Inference to estimate policy parameters, while at the same time uncovering a low-dimensional latent space of controls. Prior knowledge about the task and the structure of the learning agent can be provided by specifying groups of potentially correlated parameters. This information is then used to impose sparsity constraints on the mapping between the high-dimensional space of controls and a lower-dimensional latent space. In experiments with a simulated bi-manual manipulator, the new approach effectively identifies synergies between joints, performs efficient low-dimensional policy search, and outperforms state-of-the-art policy search methods.
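
To make the idea concrete, below is a minimal, hypothetical Python sketch of policy search in a group-sparse latent space. Exploration and the policy mean live in a low-dimensional latent space, a reward-weighted (EM-style) regression stands in for the paper's variational-inference step, and a block-wise shrinkage of the latent-to-control mapping stands in for its sparsity constraints. The dimensions, the two-arm grouping, the toy reward, and the update rules are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: d control variables (e.g., joint amplitudes) driven by a k-dim latent space.
d, k = 12, 3
n_samples, n_iters = 100, 200

# Hypothetical group structure: the first/second six controls belong to two "arms"
# that are assumed to share latent factors (illustrative, not the paper's setup).
groups = [np.arange(0, 6), np.arange(6, 12)]


def reward(u):
    """Toy quadratic reward around a target control vector (illustrative only)."""
    target = np.linspace(-1.0, 1.0, d)
    return -float(np.sum((u - target) ** 2))


# Policy: u = W z + eps, with z ~ N(m, sigma_z^2 I) and eps ~ N(0, sigma_u^2 I).
W = 0.1 * rng.standard_normal((d, k))
m = np.zeros(k)
sigma_z, sigma_u, lam = 0.3, 0.1, 0.02

for it in range(n_iters):
    # 1) Explore mostly in the low-dimensional latent space.
    Z = m + sigma_z * rng.standard_normal((n_samples, k))
    U = Z @ W.T + sigma_u * rng.standard_normal((n_samples, d))
    R = np.array([reward(u) for u in U])

    # 2) Reward-weighted (EM-style) updates of the latent mean and the mapping W,
    #    standing in for the variational-inference step described in the abstract.
    w = np.exp((R - R.max()) / (R.std() + 1e-8))
    w /= w.sum()
    m = w @ Z                                        # weighted latent mean
    Zw = Z * w[:, None]
    W = np.linalg.solve(Z.T @ Zw + 1e-6 * np.eye(k), Zw.T @ U).T

    # 3) Group sparsity: proximal-style shrinkage of each (group, latent-factor)
    #    block of W toward zero, mimicking the sparsity prior on the mapping.
    for g in groups:
        for j in range(k):
            norm = np.linalg.norm(W[g, j])
            W[g, j] *= max(0.0, 1.0 - lam / (norm + 1e-12))

print("mean-policy reward:", reward(W @ m))
```

Exploring through the k-dimensional latent variable rather than all d controls is what keeps the sample requirement low, and the block-wise shrinkage reflects the abstract's point that prior knowledge enters as groups of potentially correlated parameters.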

Keywords


Policy Search; Reinforcement Learning; Robotics; Dimensionality Reduction
