Alexander L. Strehl, Carlos Diuk, Michael L. Littman
We consider the problem of reinforcement learning in factored-state MDPs in the setting in which learning is conducted in one long trial, with no resets allowed. We show how to extend existing efficient algorithms that learn the conditional probability tables of dynamic Bayesian networks (DBNs) given their structure to the case in which the DBN structure is not known in advance. Our method learns the DBN structures as part of the reinforcement-learning process and, when combined with factored Rmax, provably yields an efficient learning algorithm.
Subjects: 12.1 Reinforcement Learning; 10. Knowledge Acquisition
Submitted: Apr 24, 2007
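To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm) of the structure-learning subproblem: for each state variable in a small factored MDP, choose a parent set in the DBN by scoring candidate parent sets on observed transitions. The dynamics, the entropy-plus-penalty score, and all names below are illustrative assumptions, not taken from the paper.

```python
import itertools
import math
import random

def conditional_entropy(samples, child_idx, parents):
    """Empirical conditional entropy (in nats) of next-state variable
    child_idx given the values of the chosen parent inputs.
    samples: list of (state_tuple, action, next_state_tuple)."""
    counts = {}
    for s, a, s2 in samples:
        inputs = s + (a,)  # candidate parents: state variables plus action
        key = tuple(inputs[i] for i in parents)
        counts.setdefault(key, {})
        counts[key][s2[child_idx]] = counts[key].get(s2[child_idx], 0) + 1
    n = len(samples)
    h = 0.0
    for dist in counts.values():
        tot = sum(dist.values())
        for c in dist.values():
            p = c / tot
            h -= (tot / n) * p * math.log(p)
    return h

def learn_parents(samples, child_idx, num_inputs, penalty=0.01):
    """Pick the parent set minimizing conditional entropy plus a
    per-parent complexity penalty (a crude stand-in for a principled
    model-selection criterion)."""
    best_score, best_parents = None, None
    for size in range(num_inputs + 1):
        for parents in itertools.combinations(range(num_inputs), size):
            score = conditional_entropy(samples, child_idx, parents)
            score += penalty * size
            if best_score is None or score < best_score - 1e-12:
                best_score, best_parents = score, parents
    return best_parents

# Toy deterministic factored dynamics over two binary state variables:
# x0' = x0 XOR action, x1' = x0. Inputs are indexed 0:x0, 1:x1, 2:action.
def step(s, a):
    return (s[0] ^ a, s[0])

random.seed(0)
samples = []
for _ in range(500):
    s = (random.randint(0, 1), random.randint(0, 1))
    a = random.randint(0, 1)
    samples.append((s, a, step(s, a)))

parents_x0 = learn_parents(samples, 0, 3)  # expect (0, 2): x0 and action
parents_x1 = learn_parents(samples, 1, 3)  # expect (0,): x0 alone
```

In a factored-Rmax-style agent, the learned parent sets would then define the conditional probability tables whose unknown entries are treated optimistically, but that exploration machinery is beyond this sketch.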