Learning Models of Sequential Decision-Making with Partial Specification of Agent Behavior

Authors

  • Vaibhav V. Unhelkar, Massachusetts Institute of Technology
  • Julie A. Shah, Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.33012522

Abstract

Artificial agents that interact with other (human or artificial) agents require models in order to reason about those other agents’ behavior. Beyond the predictive utility of these models, maintaining a model that is aligned with an agent’s true generative model of behavior is critical for effective human-agent interaction. In applications wherein observations and a partial specification of the agent’s behavior are available, achieving model alignment is challenging for several reasons. For one, the agent’s decision factors are often not completely known; further, prior approaches that rely upon observations of the agent’s behavior alone can fail to recover the true model, since multiple models can explain the observed behavior equally well. To achieve better model alignment, we provide a novel approach capable of learning aligned models that conform to partial knowledge of the agent’s behavior. Central to our approach are a factored model of behavior (AMM), Bayesian nonparametric priors, and an inference procedure capable of incorporating partial specifications as constraints for model learning. We evaluate our approach in experiments and demonstrate improvements in metrics of model alignment.
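As a rough illustration of the idea in the abstract, the sketch below shows one way a factored behavior model and a partial specification could interact during learning: a candidate model couples a latent decision factor with the observable state, and candidates that violate the specified behavior are rejected. All names (sample_behavior_model, partial_spec), the state/mode/action sizes, and the rejection-sampling step are illustrative assumptions, not the paper's AMM definition or its Bayesian nonparametric inference procedure.

```python
import numpy as np

# Sketch only: an agent whose action depends on the observable world state s
# and a latent decision factor x. A partial specification is expressed as
# hard constraints on the policy (e.g., "in state s=2 the agent takes
# action a=1"), and candidate models violating it are rejected.

rng = np.random.default_rng(0)
N_STATES, N_MODES, N_ACTIONS = 4, 2, 3

def sample_behavior_model(rng):
    """Sample a candidate factored model: latent-factor dynamics and a policy."""
    # P(x_t | x_{t-1}, s_t): one distribution over next modes per (x, s) pair
    mode_dyn = rng.dirichlet(np.ones(N_MODES), size=(N_MODES, N_STATES))
    # pi(a | s, x): policy conditioned on state and latent factor
    policy = rng.dirichlet(np.ones(N_ACTIONS), size=(N_STATES, N_MODES))
    return mode_dyn, policy

# Partial specification: (state, required_action) pairs the learned policy
# must respect regardless of the latent factor's value.
partial_spec = [(2, 1)]

def satisfies_spec(policy, spec, tol=0.5):
    """Check that the policy places most of its mass on each specified action."""
    return all(policy[s, :, a].min() >= tol for s, a in spec)

# Rejection-style illustration: keep sampling candidate models until one
# conforms to the partial specification. (The paper instead places Bayesian
# nonparametric priors on the model and performs constrained inference.)
for _ in range(10_000):
    mode_dyn, policy = sample_behavior_model(rng)
    if satisfies_spec(policy, partial_spec):
        break

print("mode dynamics shape:", mode_dyn.shape)  # (N_MODES, N_STATES, N_MODES)
print("policy shape:", policy.shape)           # (N_STATES, N_MODES, N_ACTIONS)
print("P(a | s=2, x) for each x:\n", policy[2])
```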

Published

2019-07-17

How to Cite

Unhelkar, V. V., & Shah, J. A. (2019). Learning Models of Sequential Decision-Making with Partial Specification of Agent Behavior. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2522-2530. https://doi.org/10.1609/aaai.v33i01.33012522

Section

AAAI Technical Track: Human-AI Collaboration