Semantic Adversarial Network with Multi-Scale Pyramid Attention for Video Classification

Authors

  • De Xie, Xidian University
  • Cheng Deng, Xidian University
  • Hao Wang, Xidian University
  • Chao Li, Xidian University
  • Dapeng Tao, Yunnan University

DOI:

https://doi.org/10.1609/aaai.v33i01.33019030

Abstract

Two-stream architectures have shown strong performance on video classification tasks. The key idea is to learn spatiotemporal features by fusing convolutional networks spatially and temporally. However, such architectures have several problems. First, they rely on optical flow to model temporal information, which is often expensive to compute and store. Second, they have limited ability to capture details and local context information in video data. Third, they lack explicit semantic guidance, which greatly decreases classification performance. In this paper, we propose a new two-stream-based deep framework for video classification that discovers spatial and temporal information solely from RGB frames; moreover, a multi-scale pyramid attention (MPA) layer and a semantic adversarial learning (SAL) module are introduced and integrated into our framework. The MPA layer enables the network to capture both global and local features and generate a comprehensive video representation, while the SAL module makes this representation gradually approximate the real video semantics in an adversarial manner. Experimental results on two standard public benchmarks demonstrate that our proposed method achieves state-of-the-art performance.
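As a rough illustration of the multi-scale pyramid attention idea described in the abstract, the following is a minimal PyTorch sketch: a convolutional feature map is pooled at several pyramid scales, a per-scale attention map re-weights the original features, and the attended branches are fused into one representation. The class name, scale set, sigmoid attention, and 1x1-convolution fusion are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScalePyramidAttention(nn.Module):
    """Hypothetical MPA layer: pools features at several pyramid scales,
    computes one attention map per scale, and fuses the attended branches."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One 1x1 conv per scale produces a single-channel attention map.
        self.att_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in scales
        )
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W)
        h, w = x.shape[-2:]
        branches = []
        for s, conv in zip(self.scales, self.att_convs):
            # Coarser scales summarize global context; scale 1 keeps local detail.
            pooled = F.adaptive_avg_pool2d(x, (max(h // s, 1), max(w // s, 1)))
            att = torch.sigmoid(conv(pooled))  # per-location attention weights
            att = F.interpolate(att, size=(h, w), mode="bilinear",
                                align_corners=False)
            branches.append(x * att)           # re-weight the original features
        return self.fuse(torch.cat(branches, dim=1))

# Usage: attend over a frame-level feature map from the RGB stream.
feats = torch.randn(8, 256, 14, 14)
mpa = MultiScalePyramidAttention(256)
out = mpa(feats)                               # (8, 256, 14, 14)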

Published

2019-07-17

How to Cite

Xie, D., Deng, C., Wang, H., Li, C., & Tao, D. (2019). Semantic Adversarial Network with Multi-Scale Pyramid Attention for Video Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9030-9037. https://doi.org/10.1609/aaai.v33i01.33019030

Section

AAAI Technical Track: Vision