Connecting Language to Images: A Progressive Attention-Guided Network for Simultaneous Image Captioning and Language Grounding

  • Lingyun Song Xi'an Jiaotong University
  • Jun Liu Xi'an Jiaotong University
  • Buyue Qian Xi'an Jiaotong University
  • Yihe Chen University of Toronto


Image captioning and visual language grounding are two important tasks for image understanding, but they are seldom considered together. In this paper, we propose a Progressive Attention-Guided Network (PAGNet), which simultaneously generates image captions and predicts bounding boxes for caption words. PAGNet has two distinctive properties: i) it progressively refines the predictions of image captioning by updating the attention map with the predicted bounding boxes; ii) it learns the bounding boxes of words using a weakly supervised strategy that combines the frameworks of Multiple Instance Learning (MIL) and the Markov Decision Process (MDP). By using the attention map generated during captioning, PAGNet significantly reduces the search space of the MDP. Experiments on benchmark datasets demonstrate the effectiveness of PAGNet, which achieves the best performance among the compared methods.
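The progressive refinement idea above can be illustrated with a minimal sketch: an attention map over image regions proposes a bounding box for a word, and the box in turn sharpens the attention map for the next decoding step. This is a hypothetical toy illustration, not the paper's actual architecture; the window-scan grounding step, the 0.1/0.9 re-weighting, and all function names are assumptions for exposition.

```python
import numpy as np

def predict_bbox(attn, box=2):
    """Hypothetical grounding step: pick the box x box window
    with the highest total attention mass."""
    H, W = attn.shape
    best, best_score = (0, 0), -1.0
    for r in range(H - box + 1):
        for c in range(W - box + 1):
            s = attn[r:r + box, c:c + box].sum()
            if s > best_score:
                best_score, best = s, (r, c)
    r, c = best
    return (r, c, r + box, c + box)  # (top, left, bottom, right)

def refine_attention(attn, bbox):
    """Progressive update: concentrate attention inside the
    predicted box (weights 0.1/0.9 are illustrative) and renormalise."""
    r0, c0, r1, c1 = bbox
    mask = np.zeros_like(attn)
    mask[r0:r1, c0:c1] = 1.0
    refined = attn * (0.1 + 0.9 * mask)
    return refined / refined.sum()

# Toy 4x4 attention map for one caption word.
rng = np.random.default_rng(0)
attn = rng.random((4, 4))
attn /= attn.sum()

bbox = predict_bbox(attn)
attn_refined = refine_attention(attn, bbox)
```

After the update, the attention mass inside the predicted box strictly increases, which is the sense in which each grounding step "refines" the captioning attention in this sketch.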

AAAI Technical Track: Vision