AAAI Publications, Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence

Scalable Inverse Reinforcement Learning via Instructed Feature Construction
Tomas Singliar, Dragos D. Margineantu

Last modified: 2012-07-15


Inverse reinforcement learning (IRL) techniques (Ng and Russell, 2000) provide a foundation for detecting abnormal agent behavior and predicting agent intent by estimating the agent's reward function. Unfortunately, IRL algorithms suffer from the high dimensionality of the reward function space. Meanwhile, most applications that can benefit from an IRL-based approach to assessing agent intent involve interaction with an analyst or domain expert. This paper proposes a procedure for scaling up IRL by eliciting good IRL basis functions from the domain expert. Further, we propose a new paradigm for modeling limited rationality. Unlike traditional models of limited rationality, which assume an agent making stochastic choices with the value function treated as if it were known, we propose that observed irrational behavior is actually due to uncertainty about the cost of future actions. This treatment normally leads to an unnecessarily complicated POMDP formulation; we show that adding a simple noise term to the value function approximation accomplishes the same at a much smaller cost.
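The noise-term idea can be sketched as follows. This is a minimal illustration under assumed details, not the authors' implementation: `noisy_greedy_action`, `noise_scale`, and the Gaussian noise model are hypothetical choices standing in for the paper's noise term on the value function approximation.

```python
import numpy as np

def noisy_greedy_action(q_values, noise_scale, rng):
    """Pick the action maximizing the value estimate plus Gaussian noise.

    The noise term stands in for the agent's uncertainty about the
    cost of future actions; larger noise_scale (a hypothetical
    parameter) makes the observed choices look more "irrational"
    without requiring a full POMDP formulation.
    """
    perturbed = q_values + rng.normal(scale=noise_scale, size=len(q_values))
    return int(np.argmax(perturbed))

rng = np.random.default_rng(0)
q = np.array([1.0, 0.8, 0.1])  # value estimates for three actions

# Tally which action is chosen over many noisy draws: the best action
# dominates, but suboptimal actions appear with nonzero frequency.
choices = [noisy_greedy_action(q, noise_scale=0.5, rng=rng) for _ in range(1000)]
counts = np.bincount(choices, minlength=3)
```

Under this model, the frequency with which the agent picks each action degrades gracefully with the gap in value estimates, which is the behavior a stochastic-choice model of limited rationality is meant to capture.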


Inverse Reinforcement Learning, Instructed Learning, Feature Construction
