AAAI Publications, Tenth Symposium of Abstraction, Reformulation, and Approximation

Efficient Abstraction Selection in Reinforcement Learning (Extended Abstract)
Harm van Seijen, Shimon Whiteson, Leon Kester

Last modified: 2013-06-19


This paper introduces a novel approach to abstraction selection in reinforcement learning problems modelled as factored Markov decision processes (MDPs), in which a state is described by a set of state components. In abstraction selection, an agent must choose an abstraction from a set of candidate abstractions, each built from a different combination of state components.
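As an illustration of the setting, the following is a minimal sketch (not the paper's method) of a factored state and its candidate abstractions, assuming candidates are formed from subsets of the state components; the component names (`x`, `y`, `battery`) and the helper `abstract_state` are hypothetical.

```python
from itertools import combinations

# Hypothetical factored state: each component name maps to its value.
state = {"x": 3, "y": 1, "battery": 0.8}

# Candidate abstractions: here, every non-empty subset of components.
components = sorted(state)
candidates = [
    subset
    for r in range(1, len(components) + 1)
    for subset in combinations(components, r)
]

def abstract_state(state, abstraction):
    """Project the full factored state onto the chosen components."""
    return tuple(state[c] for c in abstraction)

print(len(candidates))                    # 7 candidates for 3 components
print(abstract_state(state, ("x", "y")))  # (3, 1)
```

The number of candidates grows exponentially in the number of components, which is what makes selecting a good abstraction efficiently a nontrivial problem.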


Keywords: reinforcement learning; model-free learning; structure learning; abstraction selection
