The past 15 years have witnessed rapid growth in computational modeling of emotion and in cognitive-affective architectures. Architectures are being built both to elucidate the mechanisms of emotion and to enhance the believability and effectiveness of synthetic agents and robots. Yet in spite of the many emotion models developed to date, there is a lack of consistency and clarity regarding what exactly it means to model emotions. The purpose of this paper is to deconstruct the vague term "emotion modeling" by (1) suggesting that we view emotion models in terms of two fundamental categories of processes: emotion generation and emotion effects; and (2) identifying some of the fundamental computational tasks necessary to implement these processes. These model building blocks can then provide a basis for developing more systematic guidelines for the theoretical and data requirements, and the representational and reasoning alternatives, in emotion modeling. Identifying a set of generic computational tasks is also a good starting point for a systematic comparison of alternative approaches.
Subjects: 4. Cognitive Modeling; 9. Foundational Issues
Submitted: Jan 28, 2008