AAAI Publications, Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence

Recurrent Transition Hierarchies for Continual Learning: A General Overview
Mark Ring

Last modified: 2011-08-24


Continual learning is the unending process of learning new things on top of what has already been learned (Ring, 1994). Temporal Transition Hierarchies (TTHs) were developed to allow prediction of Markov-k sequences in a way that was consistent with the needs of a continual-learning agent (Ring, 1993). However, the algorithm could not learn arbitrary temporal contingencies. This paper describes Recurrent Transition Hierarchies (RTH), a learning method that combines several properties desirable for agents that must learn as they go. In particular, it learns online and incrementally, autonomously discovering new features as learning progresses. It requires no reset or episodes. It has a simple learning rule with update complexity linear in the number of parameters.
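To make the flavor of these properties concrete, the following is a minimal illustrative sketch, not the paper's RTH algorithm: an online sequence predictor that starts with order-1 transitions and, whenever a context proves ambiguous, introduces a new deeper "feature" (a longer context key) for that symbol only. Learning is online and incremental, needs no episode boundaries, and each update touches a constant number of table entries. All names here (`TransitionLearner`, `max_order`) are invented for illustration.

```python
# Hypothetical sketch of online feature discovery for sequence
# prediction; NOT the RTH algorithm from the paper.
from collections import defaultdict

class TransitionLearner:
    def __init__(self, max_order=4):
        self.max_order = max_order
        self.order = defaultdict(lambda: 1)  # context length used per symbol
        self.table = {}                      # context tuple -> predicted next symbol

    def step(self, history, actual):
        """Predict the symbol following `history`, then learn from `actual`."""
        if not history:
            return None
        k = self.order[history[-1]]
        ctx = tuple(history[-k:])
        pred = self.table.get(ctx)
        if pred is not None and pred != actual and k < self.max_order:
            # Prediction failed: this context is ambiguous, so deepen
            # it, i.e. "discover" a new higher-order feature online.
            self.order[history[-1]] = k + 1
        self.table[ctx] = actual             # constant-cost incremental update
        return pred
```

On a sequence like `"abacabac..."`, an order-1 predictor cannot know whether `b` or `c` follows `a`; after one mistaken prediction the sketch deepens the context for `a` to length 2 and thereafter predicts correctly, which is the general spirit (though not the mechanism) of building transition hierarchies only where the shorter context fails.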
