Hierarchically Structured Reinforcement Learning for Topically Coherent Visual Story Generation

Authors

  • Qiuyuan Huang, Microsoft Research AI
  • Zhe Gan, Microsoft
  • Asli Celikyilmaz, Microsoft Research
  • Dapeng Wu, University of Florida
  • Jianfeng Wang, Microsoft Research
  • Xiaodong He, JD AI Research

DOI:

https://doi.org/10.1609/aaai.v33i01.33018465

Abstract

We propose a hierarchically structured reinforcement learning approach to address the challenges of planning for generating coherent multi-sentence stories for the visual storytelling task. Within our framework, the task of generating a story given a sequence of images is divided across a two-level hierarchical decoder. The high-level decoder constructs a plan by generating a semantic concept (i.e., topic) for each image in sequence. The low-level decoder generates a sentence for each image using a semantic compositional network, which effectively grounds the sentence generation conditioned on the topic. The two decoders are jointly trained end-to-end using reinforcement learning. We evaluate our model on the visual storytelling (VIST) dataset. Empirical results from both automatic and human evaluations demonstrate that the proposed hierarchically structured reinforced training achieves significantly better performance compared to a strong flat deep reinforcement learning baseline.
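The two-level pipeline described in the abstract, a high-level planner that assigns a topic to each image and a low-level generator that produces a topic-grounded sentence, can be sketched as a toy example. Everything below (the vocabularies, the feature-to-topic rule, the function names) is an illustrative stand-in, not the paper's actual GRU decoders or semantic compositional network:

```python
# Toy sketch of the two-level hierarchical decoder from the abstract.
# The real model uses learned recurrent decoders trained with RL; here
# both levels are replaced by simple deterministic stand-ins.

TOPICS = ["wedding", "beach", "party"]

# Hypothetical per-topic vocabularies standing in for the semantic
# compositional network's topic-conditioned generation.
TOPIC_WORDS = {
    "wedding": ["the", "bride", "smiled", "at", "the", "guests"],
    "beach": ["we", "walked", "along", "the", "shore"],
    "party": ["everyone", "danced", "all", "night"],
}

def high_level_decoder(image_features):
    """Plan the story: emit one topic per image in sequence
    (stand-in for the high-level topic decoder)."""
    return [TOPICS[int(sum(f)) % len(TOPICS)] for f in image_features]

def low_level_decoder(topic):
    """Generate a sentence grounded in the given topic
    (stand-in for the low-level sentence decoder)."""
    return " ".join(TOPIC_WORDS[topic]) + "."

def generate_story(image_features):
    """Run the full hierarchy: plan topics, then realize sentences."""
    plan = high_level_decoder(image_features)
    return [low_level_decoder(topic) for topic in plan]
```

In the paper both levels are trained jointly with reinforcement learning, so the planner learns topic sequences that lead to high-reward (coherent) stories; the sketch above only illustrates the decomposition of the task.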

Published

2019-07-17

How to Cite

Huang, Q., Gan, Z., Celikyilmaz, A., Wu, D., Wang, J., & He, X. (2019). Hierarchically Structured Reinforcement Learning for Topically Coherent Visual Story Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8465-8472. https://doi.org/10.1609/aaai.v33i01.33018465

Section

AAAI Technical Track: Vision