AAAI Publications, Thirty-Second AAAI Conference on Artificial Intelligence

Exploring Implicit Feedback for Open Domain Conversation Generation
Wei-Nan Zhang, Lingzhi Li, Dongyan Cao, Ting Liu


Abstract


User feedback can be an effective indicator of the success of a human-robot conversation. However, to avoid interrupting the online, real-time conversation process, explicit feedback is usually collected only at the end of a conversation. Alternatively, users' responses usually contain implicit feedback, such as stance, sentiment, and emotion, towards the conversation content or the interlocutors. Exploiting this implicit feedback is therefore a natural way to optimize the conversation generation process. In this paper, we propose a novel reward function that uses implicit feedback to optimize the future reward of a reinforcement learning based neural conversation model. A simulation strategy is applied to explore the state-action space during training and testing. Experimental results show that the proposed approach outperforms the Seq2Seq model and the state-of-the-art reinforcement learning model for conversation generation under both automatic and human evaluations on the OpenSubtitles and Twitter datasets.
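To make the idea concrete, the following is a minimal Python sketch, not the authors' implementation, of how an implicit-feedback reward and a discounted return over a simulated dialogue rollout could be computed. The lexicon-based sentiment_score, the implicit_feedback_reward weighting, and the agent/user_model stand-ins are all hypothetical placeholders for the trained feedback classifiers and neural conversation models the paper would use.

```python
import random

# Hypothetical lexicons standing in for trained sentiment/stance/emotion
# classifiers; the paper's actual feedback models are not specified here.
POSITIVE = {"great", "thanks", "nice", "love", "yes"}
NEGATIVE = {"boring", "no", "stop", "hate", "whatever"}

def sentiment_score(utterance):
    """Crude implicit-feedback signal in [-1, 1] from the user's reply."""
    tokens = utterance.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(pos + neg, 1)

def implicit_feedback_reward(user_reply, w_sent=1.0):
    """Per-turn reward derived from implicit feedback in the interlocutor's
    response; stance and emotion scorers would be combined the same way."""
    return w_sent * sentiment_score(user_reply)

def simulate_dialogue(agent, user_model, opening, turns=3, gamma=0.9):
    """Roll out a simulated conversation (exploring the state-action space)
    and return the discounted cumulative reward for the agent's utterances."""
    history = [opening]
    total, discount = 0.0, 1.0
    for _ in range(turns):
        action = agent(history)        # agent picks a response (action)
        history.append(action)
        reply = user_model(history)    # simulated user reacts (next state)
        history.append(reply)
        total += discount * implicit_feedback_reward(reply)
        discount *= gamma
    return total

if __name__ == "__main__":
    # Toy stand-ins: a random-choice agent and a user model that reacts
    # positively to questions and negatively otherwise.
    agent = lambda h: random.choice(["how are you?", "ok.", "tell me more"])
    user = lambda h: "great, thanks" if h[-1].endswith("?") else "boring, stop"
    print(simulate_dialogue(agent, user, "hello"))
```

In an actual RL setup, the discounted return produced by simulate_dialogue would serve as the future reward driving a policy-gradient update of the neural conversation model.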

Keywords


Conversation Generation; Implicit Feedback
