New Challenges in Multi-Agent Intention Recognition

Gal Kaminka, Jan Wendler, and Galit Ronen

Recently, interest has grown in understanding how observations of agents' actions can serve as the basis for inferring the unobservable state of those agents, in order to improve the observer's ability to respond to them. In particular, there is increasing interest in the area of Agent Modeling, which investigates mechanisms allowing an agent to acquire, maintain, and infer knowledge of other agents. This area unites plan-, goal-, and intent-recognition under a single umbrella with user-modeling, behavior-recognition, belief ascription, agent tracking, etc. Traditionally, agent modeling researchers have explored techniques in which two agents are involved, e.g., (Kautz and Allen 1986; Charniak and Goldman 1993; Lesh, Rich, and Sidner 1999). In such techniques one agent observes the actions of another agent, and attempts to infer its unobservable state features, such as intent, goal, or plan. These techniques are successful in many cases, and new techniques are still being investigated, e.g., (Pynadath and Wellman 2000).

However, the transition from agent-modeling techniques, where an observing agent monitors the state of a single other agent, to multi-agent modeling, where the observing agent monitors the actions of more than one agent, presents new challenges that have not been previously addressed by agent modeling researchers. These include computational challenges, such as bandwidth and computational load, as well as conceptual challenges, such as reasoning about previously unseen behavior of teams of agents. This extended abstract outlines some of these key challenges in multi-agent modeling, and the steps we have begun to take to address them, specifically in the context of agents that are collaborating with each other.
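To make the single-agent setting concrete, the following is a minimal sketch of one common approach, plan recognition by plan-library matching: the observer maintains a library of known plans and, as actions are observed, retains only those plans consistent with the observation sequence. The plan names and actions here are purely hypothetical illustrations, not drawn from any of the cited systems.

```python
# Minimal sketch of plan recognition by plan-library matching.
# A plan is modeled as an ordered sequence of observable actions;
# all plan names and actions below are hypothetical examples.

PLAN_LIBRARY = {
    "make_coffee": ["enter_kitchen", "fill_kettle", "boil_water", "pour_cup"],
    "make_tea":    ["enter_kitchen", "fill_kettle", "boil_water", "steep_bag"],
    "wash_dishes": ["enter_kitchen", "fill_sink", "scrub"],
}

def consistent_plans(observations, library=PLAN_LIBRARY):
    """Return the plans whose action sequence begins with the observed prefix."""
    n = len(observations)
    return sorted(
        name for name, steps in library.items()
        if steps[:n] == list(observations)
    )

# Two observations leave two hypotheses; a more distinctive action disambiguates.
print(consistent_plans(["enter_kitchen", "fill_kettle"]))  # ['make_coffee', 'make_tea']
print(consistent_plans(["enter_kitchen", "fill_sink"]))    # ['wash_dishes']
```

Even this toy example hints at the multi-agent difficulty the abstract raises: with many observed agents, the observer must run such inference per agent (a load that grows with team size) and must somehow relate the hypotheses across agents to recognize joint, team-level behavior.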


Copyright © AAAI. All rights reserved.