Confirming Changes in Beliefs and Intentions

Rajah Annamalai Subramanian, Sanjeev Kumar, Philip Cohen

Today's spoken dialogue systems rely on mechanisms such as confidence scores and machine-learning approaches to track changes in users' beliefs and intentions, and they depend on these scores to decide when to confirm, implicitly or explicitly, a value obtained for a required slot. These approaches work reasonably well for small, domain-specific systems, but making them work requires writing many rules, and still more rules are needed to adjust the confidence in previously heard values whenever the user revises a belief he or she expressed earlier. We propose that a joint intention interpreter, integrated with a reasoner over beliefs and communicative acts to form the core of a dialogue engine, can handle such belief updates and changes of intention in a general, domain-independent manner. We show how confirmations and clarifications can be planned automatically as new or contradictory beliefs arrive. Furthermore, these changes in beliefs and intentions are cascaded to the other agents and humans in the team that are party to the joint commitment. No explicit rules or conditions need be written to handle these changes; the communication follows from the constructs of Joint Intentions Theory (JIT).
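The core idea, that confirmation and clarification acts fall out of belief revision rather than hand-written rules, can be illustrated with a toy sketch. All names here (`BeliefStore`, the act tuples) are hypothetical and not the authors' actual system; this merely shows how a contradictory slot value can itself generate the clarification act and the team notification.

```python
class BeliefStore:
    """Toy illustration (not the paper's implementation): tracks slot
    values heard from the user and derives communicative acts from the
    belief update itself, instead of from per-domain confirmation rules."""

    def __init__(self):
        self.beliefs = {}  # slot -> currently believed value

    def update(self, slot, value):
        """Adopt a belief about a slot; return the acts the update obligates."""
        acts = []
        if slot in self.beliefs and self.beliefs[slot] != value:
            # Contradictory belief: a clarification is planned automatically.
            acts.append(("clarify", slot, self.beliefs[slot], value))
        else:
            # New belief: plan an (implicit) confirmation.
            acts.append(("confirm", slot, value))
        self.beliefs[slot] = value
        # Under a joint commitment, the change is also cascaded to teammates.
        acts.append(("inform-team", slot, value))
        return acts


if __name__ == "__main__":
    store = BeliefStore()
    print(store.update("destination", "Boston"))
    print(store.update("destination", "Austin"))  # contradicts earlier value
```

In a JIT-based engine the "inform-team" step would not be an ad hoc append as above; it follows from the joint commitment itself, which obligates each agent to make changes of belief and intention mutually known.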

Subjects: 7.1 Multi-Agent Systems; 15.1 Belief Revision

Submitted: Jan 25, 2007
