Learning other Agents’ Preferences in Multiagent Negotiation

H. H. Bui, D. Kieronska, S. Venkatesh

In multiagent systems, an agent does not usually have complete information about the preferences and decision-making processes of other agents. This can prevent the agents from making well-coordinated choices, purely because they are ignorant of what others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module gives the agents the ability to learn about other agents’ preferences from past interactions. Over time, the agents can incrementally update their models of other agents’ preferences and use them to make better-coordinated decisions. Combining communication and learning, as two complementary knowledge-acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply undesirable (e.g. to preserve individual privacy).
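The incremental preference learning described above could, under simple assumptions, be sketched as follows. This is not the paper’s algorithm; it is a minimal illustration in which an agent tallies another agent’s accept/reject responses to offers and uses smoothed acceptance rates as preference estimates. All names (`PreferenceModel`, `observe`, `estimated_preference`) and the issue/option representation are hypothetical.

```python
from collections import defaultdict

class PreferenceModel:
    """Hypothetical incremental model of another agent's preferences.

    Each negotiation offer is a dict mapping an issue (e.g. "price")
    to a chosen option (e.g. "low"). The model counts, per issue, how
    often offers containing each option were accepted; these counts
    stand in for the 'past interactions' the abstract mentions.
    """

    def __init__(self):
        self.accepts = defaultdict(lambda: defaultdict(int))  # issue -> option -> accept count
        self.totals = defaultdict(int)                        # issue -> observations seen

    def observe(self, offer, accepted):
        """Incrementally update the model from one observed interaction."""
        for issue, option in offer.items():
            self.totals[issue] += 1
            if accepted:
                self.accepts[issue][option] += 1

    def estimated_preference(self, issue, option):
        """Laplace-smoothed acceptance rate as a crude preference estimate."""
        return (self.accepts[issue][option] + 1) / (self.totals[issue] + 2)

# Example: after a few interactions, the model starts to reflect that
# the other agent prefers a low price.
model = PreferenceModel()
model.observe({"price": "high", "delivery": "fast"}, accepted=False)
model.observe({"price": "low", "delivery": "fast"}, accepted=True)
model.observe({"price": "low", "delivery": "slow"}, accepted=True)
```

An agent using such a model could rank candidate offers by the other agent’s estimated preferences before proposing, asking a clarifying question only when the estimates are too uncertain, which is how learning can substitute for some communication.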


This page is copyrighted by AAAI. All rights reserved.