Unsupervised Stylish Image Description Generation via Domain Layer Norm

  • Cheng-Kuan Chen National Tsing Hua University
  • Zhufeng Pan National Tsing Hua University
  • Ming-Yu Liu NVIDIA Corporation
  • Min Sun National Tsing Hua University

Abstract

Most existing work on image description focuses on generating expressive descriptions. The few works dedicated to generating stylish (e.g., romantic, lyrical) descriptions suffer from limited style variation and content digression. To address these limitations, we propose a controllable stylish image description generation model. It learns to generate stylish image descriptions that remain closely related to image content, and it can be trained on an arbitrary monolingual corpus without collecting new paired images and stylish descriptions. Moreover, it lets users generate descriptions in various styles by plugging style-specific parameters into the existing model to add new styles. We achieve this capability via a novel layer normalization design, which we refer to as the Domain Layer Norm (DLN). Extensive experimental validation and a user study on various stylish image description generation tasks demonstrate the competitive advantages of the proposed model.
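As a rough illustration of the idea (a minimal sketch, not the authors' implementation), a layer norm whose gain and bias parameters are selected per style might look like the following; all names here are illustrative assumptions:

```python
import numpy as np

class DomainLayerNorm:
    """Sketch of a Domain Layer Norm: standard layer normalization,
    except the affine (gain, bias) parameters are style-specific while
    the rest of the network would be shared across styles."""

    def __init__(self, dim, styles, eps=1e-5):
        self.eps = eps
        # One small (gain, bias) pair per style.
        self.params = {s: (np.ones(dim), np.zeros(dim)) for s in styles}

    def add_style(self, style, dim):
        # "Plugging in" a new style only adds one parameter pair;
        # no retraining of the shared layers is sketched here.
        self.params[style] = (np.ones(dim), np.zeros(dim))

    def __call__(self, x, style):
        gain, bias = self.params[style]
        mu = x.mean(axis=-1, keepdims=True)
        sigma = x.std(axis=-1, keepdims=True)
        return gain * (x - mu) / (sigma + self.eps) + bias
```

With default (identity) parameters each style normalizes its input to zero mean; training would fit a distinct gain/bias per style so the same shared features are rendered in different styles.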

Published: 2019-07-17

Section: AAAI Technical Track: Vision