Simon D. Levy
Semantic roles describe who did what to whom and as such are central to many subfields of AI and cognitive science. Each subfield or application tends to use its own flavor of roles. For analogy processing, logical deduction, and related tasks, roles are usually specific to each predicate: for LOVES there is a LOVER and a BELOVED, for EATS an EATER and an EATEN, etc. Language modeling, on the other hand, requires more general roles like AGENT and PATIENT in order to relate form to meaning in a parsimonious way. Commitment to a particular type of role makes it difficult to model processes of change, for example the change from specific to general roles that seems to take place in language learning. The use of semantic features helps solve this problem, but still limits the nature and number of changes that can take place. This paper presents a new model of semantic role change that addresses this problem. The model uses an existing technique, Holographic Reduced Representation (HRR), for representing roles and their fillers. Starting with specific roles, the model learns to generalize roles through exposure to language data. The learning mechanism is simple and efficient, and its scaling properties are well understood. The model is able to learn and exploit new representations without losing the information from existing ones. We present experimental data illustrating these principles, and conclude by discussing some implications of the model for the issue of changing representations as a whole.
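As a minimal sketch of the HRR machinery the abstract refers to (not the paper's actual implementation): in HRR, a role vector is bound to a filler vector by circular convolution, role-filler pairs are superposed by vector addition, and a filler is recovered by convolving the trace with the approximate inverse (involution) of its role. The vector names below (lover, beloved, john, mary) are illustrative, echoing the LOVES example.

```python
import numpy as np

def cconv(a, b):
    # circular convolution: the HRR binding operator
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # a* : approximate inverse of a under circular convolution
    return np.concatenate(([a[0]], a[:0:-1]))

n = 1024
rng = np.random.default_rng(0)
rand_vec = lambda: rng.normal(0.0, 1.0 / np.sqrt(n), n)

# random role and filler vectors (illustrative names)
lover, beloved = rand_vec(), rand_vec()
john, mary = rand_vec(), rand_vec()

# bind each role to its filler and superpose into one trace:
# "John loves Mary" as LOVER(*)john + BELOVED(*)mary
trace = cconv(lover, john) + cconv(beloved, mary)

# unbind: convolving the trace with the role's involution
# yields a noisy copy of the filler
probe = cconv(trace, involution(lover))

# the probe is most similar (by dot product) to john
sims = {name: float(np.dot(probe, v))
        for name, v in [("john", john), ("mary", mary)]}
print(max(sims, key=sims.get))
```

Decoding is approximate: the probe is a noisy version of the filler, so in practice it is cleaned up by comparing against a memory of known vectors, as the dot-product comparison above suggests.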
Subjects: 4. Cognitive Modeling; 14. Neural Networks
Submitted: Sep 11, 2007