Learning Transferable Self-Attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision

Authors

  • Xiao-Yu Zhang, Chinese Academy of Sciences
  • Haichao Shi, Chinese Academy of Sciences
  • Changsheng Li, University of Electronic Science and Technology of China
  • Kai Zheng, University of Electronic Science and Technology of China
  • Xiaobin Zhu, Beijing Technology and Business University
  • Lixin Duan, University of Electronic Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v33i01.33019227

Abstract

Action recognition in videos has attracted considerable attention over the past decade. To learn robust models, previous methods usually assume that videos have been trimmed into short sequences and require ground-truth annotations for each video frame/sequence, which is costly and time-consuming to obtain. In this paper, given only video-level annotations, we propose a novel weakly supervised framework that simultaneously locates action frames and recognizes actions in untrimmed videos. Our proposed framework consists of two major components. First, for action frame localization, we take advantage of the self-attention mechanism to weight each frame, such that the influence of background frames can be effectively eliminated. Second, since publicly available trimmed videos contain useful information that can be leveraged, we present an additional module that transfers knowledge from trimmed videos to improve classification performance on untrimmed ones. Extensive experiments on two benchmark datasets (i.e., THUMOS14 and ActivityNet1.3) clearly corroborate the efficacy of our method.
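As a rough illustration of the self-attentive frame weighting the abstract describes, below is a minimal PyTorch sketch of attention-based temporal pooling trained with video-level labels only. All module names, feature dimensions, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfAttentiveVideoClassifier(nn.Module):
    """Sketch: score each frame with a learned attention weight, pool the
    weighted frame features, and classify at the video level. Trained with
    video-level labels only (weak supervision); the per-frame attention
    weights can then serve to localize action frames."""

    def __init__(self, feat_dim=1024, num_classes=20):
        super().__init__()
        # Small MLP producing one attention score per frame (assumed sizes).
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.Tanh(),
            nn.Linear(256, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, num_frames, feat_dim) pre-extracted frame features
        scores = self.attention(frames)             # (B, T, 1)
        weights = torch.softmax(scores, dim=1)      # attention over time;
        # background frames should receive near-zero weight
        video_feat = (weights * frames).sum(dim=1)  # (B, feat_dim)
        logits = self.classifier(video_feat)        # video-level prediction
        return logits, weights.squeeze(-1)

# Usage: a video-level cross-entropy loss drives both recognition and,
# implicitly, frame localization via the learned attention weights.
model = SelfAttentiveVideoClassifier()
feats = torch.randn(2, 400, 1024)                  # 2 videos, 400 frames each
logits, frame_weights = model(feats)
loss = nn.functional.cross_entropy(logits, torch.tensor([3, 7]))
```

The paper's second component, transferring knowledge from trimmed videos, is not reflected in this sketch.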

Published

2019-07-17

How to Cite

Zhang, X.-Y., Shi, H., Li, C., Zheng, K., Zhu, X., & Duan, L. (2019). Learning Transferable Self-Attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9227-9234. https://doi.org/10.1609/aaai.v33i01.33019227

Section

AAAI Technical Track: Vision