Action Knowledge Transfer for Action Prediction with Partial Videos

  • Yijun Cai Sun Yat-sen University
  • Haoxin Li Sun Yat-sen University
  • Jian-Fang Hu Sun Yat-sen University
  • Wei-Shi Zheng Sun Yat-sen University

Abstract

Predicting the action class from a partially observed video, a task known as action prediction, is important in the field of computer vision and has many applications. The main challenge in action prediction is the lack of discriminative action information in partially observed videos. To tackle this challenge, we propose to transfer action knowledge learned from fully observed videos to improve prediction on partially observed videos. Specifically, we develop a two-stage learning framework for action knowledge transfer. In the first stage, we learn feature embeddings and a discriminative action classifier from full videos. In the second stage, the knowledge captured in the learned embeddings and classifier is transferred to the partial videos. Our experiments on the UCF-101 and HMDB-51 datasets show that the proposed action knowledge transfer method significantly improves action prediction performance, especially at small observation ratios (e.g., 10%). We also show experimentally that our method outperforms state-of-the-art action prediction systems.
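The abstract does not give implementation details, but the two-stage idea (train a classifier on full videos, then use its outputs to supervise a model for partial videos) can be illustrated with a minimal, hypothetical distillation-style sketch. Everything here is an assumption for illustration: the linear classifiers, the toy features standing in for video embeddings, and the mixing weight `alpha` are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_linear(X, y_soft, lr=0.5, steps=300):
    # Plain gradient descent on softmax cross-entropy with (possibly
    # soft) target distributions y_soft.
    W = np.zeros((X.shape[1], y_soft.shape[1]))
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - y_soft) / len(X)
    return W

# Toy stand-in data (hypothetical): full-video features are clean;
# partial-video features are noisier views of the same clips.
n, d, c = 200, 16, 4
y = rng.integers(0, c, n)
centers = rng.normal(size=(c, d))
X_full = centers[y] + 0.3 * rng.normal(size=(n, d))
X_part = X_full + 1.0 * rng.normal(size=(n, d))  # degraded observation
Y_hard = np.eye(c)[y]

# Stage 1: "teacher" classifier trained on full-video features.
W_teacher = train_linear(X_full, Y_hard)
soft = softmax(X_full @ W_teacher)  # teacher's soft predictions

# Stage 2: "student" for partial videos; its targets mix the ground-truth
# labels with the teacher's soft outputs, transferring full-video knowledge.
alpha = 0.5  # assumed mixing weight
W_student = train_linear(X_part, alpha * Y_hard + (1 - alpha) * soft)

acc = (np.argmax(X_part @ W_student, axis=1) == y).mean()
print(f"student accuracy on partial features: {acc:.2f}")
```

This sketch only conveys the flavor of knowledge transfer (soft-target supervision from a full-video model); the actual method also transfers knowledge through learned feature embeddings, which are not modeled here.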

Published
2019-07-17
Section
AAAI Technical Track: Vision