Hierarchical Attention Network for Image Captioning

Authors

  • Weixuan Wang, Sun Yat-sen University
  • Zhihong Chen, Sun Yat-sen University
  • Haifeng Hu, Sun Yat-sen University

DOI:

https://doi.org/10.1609/aaai.v33i01.33018957

Abstract

Recently, attention mechanisms have been successfully applied to image captioning, but existing attention methods operate only on low-level spatial features or high-level text features, which limits the richness of the generated captions. In this paper, we propose a Hierarchical Attention Network (HAN) that computes attention over a pyramidal hierarchy of features synchronously. The pyramidal hierarchy consists of features at diverse semantic levels, which allows different words to be predicted from different features. Because these features come from different modalities, we further propose a Multivariate Residual Module (MRM) to learn joint representations from them. The MRM models projections and extracts relevant relations among the different features. In addition, we introduce a context gate to balance the contributions of the different features. Compared with existing methods, our approach exploits hierarchical features together with several multimodal integration strategies, which significantly improves performance. HAN is evaluated on the benchmark MSCOCO dataset, and the experimental results show that our model outperforms state-of-the-art methods, achieving a BLEU-1 score of 80.9 and a CIDEr score of 121.7 on Karpathy's test split.
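To make the idea of hierarchical attention with a context gate concrete, the sketch below shows one plausible reading of the abstract: attention is computed separately over a low-level spatial feature map and a high-level semantic feature set, and a learned sigmoid gate balances the two attended contexts. This is a minimal illustration only, not the authors' HAN or MRM; all module names, dimensions, the additive-attention scorer, and the gated-fusion strategy are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttentionSketch(nn.Module):
    """Hypothetical sketch: attend over two feature levels, then balance
    their contributions with a context gate (not the paper's exact model)."""

    def __init__(self, d_low, d_high, d_hidden):
        super().__init__()
        # One additive-attention scorer per feature level (assumed design).
        self.att_low = nn.Linear(d_low + d_hidden, 1)
        self.att_high = nn.Linear(d_high + d_hidden, 1)
        # Project both attended vectors into a shared space before fusion.
        self.proj_low = nn.Linear(d_low, d_hidden)
        self.proj_high = nn.Linear(d_high, d_hidden)
        # Context gate: a scalar in (0, 1) weighting low- vs. high-level context.
        self.gate = nn.Linear(2 * d_hidden, 1)

    def attend(self, feats, hidden, scorer):
        # feats: (batch, n_regions, d); hidden: decoder state (batch, d_hidden).
        h = hidden.unsqueeze(1).expand(-1, feats.size(1), -1)
        scores = scorer(torch.cat([feats, h], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)
        # Weighted sum over regions -> one context vector per example.
        return (weights.unsqueeze(-1) * feats).sum(dim=1)

    def forward(self, low_feats, high_feats, hidden):
        c_low = self.proj_low(self.attend(low_feats, hidden, self.att_low))
        c_high = self.proj_high(self.attend(high_feats, hidden, self.att_high))
        g = torch.sigmoid(self.gate(torch.cat([c_low, c_high], dim=-1)))
        return g * c_low + (1 - g) * c_high  # gated fusion of the two levels
```

In a captioning decoder, the fused context would be fed into the word-prediction step at each time step, letting the gate shift toward spatial features for visual words and toward semantic features for more abstract ones.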

Published

2019-07-17

How to Cite

Wang, W., Chen, Z., & Hu, H. (2019). Hierarchical Attention Network for Image Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8957-8964. https://doi.org/10.1609/aaai.v33i01.33018957

Section

AAAI Technical Track: Vision