To Find Where You Talk: Temporal Sentence Localization in Video with Attention Based Location Regression

Authors

  • Yitian Yuan, Tsinghua University
  • Tao Mei, JD.com
  • Wenwu Zhu, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v33i01.33019159

Abstract

We have witnessed tremendous growth of videos on the Internet, most of which are paired with abundant sentence descriptions such as titles, captions, and comments. It has therefore become increasingly important to associate specific video segments with their corresponding textual descriptions for a deeper understanding of video content. This motivates us to explore a problem that has been overlooked by the research community: temporal sentence localization in video, which aims to automatically determine the start and end points of a given sentence within a paired video. Solving this problem raises three critical challenges: (1) preserving the intrinsic temporal structure and global context of the video so that accurate positions can be located over the entire sequence; (2) fully exploiting the sentence semantics to provide clear guidance for localization; and (3) keeping the localization method efficient enough to handle long videos. To address these issues, we propose a novel Attention Based Location Regression (ABLR) approach that localizes sentence descriptions in videos in an efficient, end-to-end manner. Specifically, to preserve context information, ABLR first encodes both the video and the sentence with bidirectional LSTM networks. A multi-modal co-attention mechanism then generates both video and sentence attentions: the former reflects the global video structure, while the latter highlights the sentence details relevant to temporal localization. Finally, a novel attention-based location prediction network regresses the temporal coordinates of the sentence from these attentions. We evaluate the proposed ABLR approach on two public datasets, ActivityNet Captions and TACoS. Experimental results show that ABLR significantly outperforms existing approaches in both effectiveness and efficiency.
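The abstract describes the ABLR pipeline only at a high level. As a rough illustration, the sketch below wires together the three stages it mentions (bidirectional LSTM encoding, multi-modal co-attention, and attention-based location regression) in PyTorch. All module names, dimensions, and the simplified single-pass co-attention here are assumptions made for this sketch; the paper's actual co-attention and regression designs differ in detail, and this is not the authors' released implementation.

```python
# Minimal PyTorch-style sketch of an ABLR-like pipeline (illustrative only).
# Module names, dimensions, and the single-pass co-attention are assumptions
# made for this sketch; they do not reproduce the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ABLRSketch(nn.Module):
    def __init__(self, video_dim=500, word_dim=300, hidden=256):
        super().__init__()
        # Bidirectional LSTM encoders preserve temporal context in both modalities.
        self.video_rnn = nn.LSTM(video_dim, hidden, batch_first=True, bidirectional=True)
        self.sent_rnn = nn.LSTM(word_dim, hidden, batch_first=True, bidirectional=True)
        # Regression head: predicts normalized (start, end) coordinates.
        self.regressor = nn.Sequential(
            nn.Linear(4 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, video_feats, word_embs):
        # video_feats: (B, T, video_dim), word_embs: (B, L, word_dim)
        v, _ = self.video_rnn(video_feats)  # (B, T, 2*hidden)
        s, _ = self.sent_rnn(word_embs)     # (B, L, 2*hidden)

        # Simplified co-attention: cross-modal similarity between every video
        # clip and every word, reduced to one attention vector per modality.
        scores = torch.bmm(v, s.transpose(1, 2))                 # (B, T, L)
        video_attn = F.softmax(scores.max(dim=2).values, dim=1)  # (B, T)
        sent_attn = F.softmax(scores.max(dim=1).values, dim=1)   # (B, L)

        # Attention-weighted summaries of each modality.
        v_ctx = torch.bmm(video_attn.unsqueeze(1), v).squeeze(1)  # (B, 2*hidden)
        s_ctx = torch.bmm(sent_attn.unsqueeze(1), s).squeeze(1)   # (B, 2*hidden)

        # Regress normalized start/end coordinates from the attended features.
        coords = torch.sigmoid(self.regressor(torch.cat([v_ctx, s_ctx], dim=1)))
        return coords, video_attn


if __name__ == "__main__":
    model = ABLRSketch()
    coords, attn = model(torch.randn(2, 64, 500), torch.randn(2, 12, 300))
    print(coords.shape)  # torch.Size([2, 2]) -> (start, end) in [0, 1]
```

One appeal of direct coordinate regression, as the abstract notes, is efficiency: a single forward pass over the whole video yields the predicted segment, which is what allows this style of method to scale to long videos.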

Published

2019-07-17

How to Cite

Yuan, Y., Mei, T., & Zhu, W. (2019). To Find Where You Talk: Temporal Sentence Localization in Video with Attention Based Location Regression. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9159-9166. https://doi.org/10.1609/aaai.v33i01.33019159

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Vision