Kiran Lakkaraju, Les Gasser
Our general objective is to explain how norms can emerge in complex, ambiguous situations: settings with large and complex spaces of normative options over which populations may try to agree using only limited, indirect knowledge of each other's currently preferred options, possibly gained through limited interaction samples. We study this process using the concrete example of agents developing and using common languages. Language can be viewed as an inherently distributed information and representation system. Because of this, it serves as a model problem for studying central issues in many kinds of distributed information systems, and norms are one such issue. Language is inherently normative. First, communicative language requires agreement---conventions---on many language aspects (such as ontological units, grammar, lexicon, morphology, etc.). Second, accurate communication is valuable. The value of successful communication translates to value for the conventionalization of language; this value in turn creates the decentralized normative force that drives agents to obey linguistic conventions. In this way, linguistic conventions become normative constraints on possible communication options, since obeying them increases communicability and its resulting communication payoffs. None of this means that convergence to linguistic norms is easy; in fact it presents novel issues not yet well understood. We show how ambiguity arises in language convergence, and describe a variety of techniques to resolve that ambiguity. We focus on one particular technique, text-based learning. We show that it significantly reduces the amount of effort required for linguistic norms to emerge, and we show how it is an instance of a general norm-convergence technique.
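The decentralized convergence process described above can be illustrated with a minimal sketch. The following is an assumption-laden toy model in the style of a standard naming game, not the authors' text-based learning technique: agents hold candidate names for a single object, pairwise interactions succeed when speaker and hearer share a name, and success causes both to collapse to that name, so a single global convention eventually emerges from purely local interactions.

```python
import random

def naming_game(num_agents=20, max_rounds=20000, seed=0):
    """Toy naming-game sketch (illustrative only, not the paper's method).

    Returns the number of pairwise interactions until every agent holds
    exactly one, identical name, or None if no convention emerges in time.
    """
    rng = random.Random(seed)
    # Each agent's vocabulary: a set of candidate names for one object.
    vocab = [set() for _ in range(num_agents)]
    for step in range(max_rounds):
        speaker, hearer = rng.sample(range(num_agents), 2)
        if not vocab[speaker]:
            vocab[speaker].add(f"w{step}")  # invent a fresh name if needed
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Communicative success: both agents adopt the agreed name,
            # the payoff that drives conventionalization in the abstract.
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:
            # Failure: the hearer learns the name as a new candidate,
            # gaining only indirect, sampled knowledge of others' options.
            vocab[hearer].add(word)
        if all(len(v) == 1 and v == vocab[0] for v in vocab):
            return step + 1  # a global linguistic norm has emerged
    return None

rounds = naming_game()
```

The agent count, round budget, and invented-name scheme are arbitrary choices for the sketch; the point is only that local pairwise feedback suffices for a population-wide convention.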
Subjects: 7.1 Multi-Agent Systems; 7. Distributed AI
Submitted: May 5, 2008