Video Inpainting by Jointly Learning Temporal Structure and Spatial Details

Authors

  • Chuan Wang, Face++
  • Haibin Huang, Face++
  • Xiaoguang Han, The Chinese University of Hong Kong
  • Jue Wang, Face++

DOI:

https://doi.org/10.1609/aaai.v33i01.33015232

Abstract

We present a new data-driven video inpainting method for recovering missing regions of video frames. A novel deep learning architecture is proposed, which contains two sub-networks: a temporal structure inference network and a spatial detail recovering network. The temporal structure inference network is built upon a 3D fully convolutional architecture; owing to the high computational cost of 3D convolution, it learns to complete only a low-resolution video volume. The low-resolution result provides temporal guidance to the spatial detail recovering network, which performs image-based inpainting with a 2D fully convolutional network to produce recovered video frames at their original resolution. This two-step design ensures both the spatial quality of each frame and the temporal coherence across frames. Our method jointly trains both sub-networks in an end-to-end manner. We provide qualitative and quantitative evaluations on three datasets, demonstrating that our method outperforms previous learning-based video inpainting methods.
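The paper itself provides no source code; the following is a minimal, hypothetical PyTorch sketch of the two-step design described above: a 3D fully convolutional sub-network completes a heavily downsampled video volume, and its upsampled output guides a 2D fully convolutional sub-network that inpaints each frame at the original resolution. All layer counts, channel widths, and the 4x spatial downsampling factor are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalStructureNet(nn.Module):
    """3D fully convolutional sub-network: completes a low-resolution
    video volume to infer temporal structure (illustrative depth/width)."""
    def __init__(self, in_ch=4, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, 3, 3, padding=1),
        )

    def forward(self, x):  # x: (B, 4, T, H/4, W/4), masked RGB + mask
        return self.net(x)

class SpatialDetailNet(nn.Module):
    """2D fully convolutional sub-network: inpaints one frame at full
    resolution, conditioned on the upsampled low-resolution completion."""
    def __init__(self, in_ch=7, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, frame, guide):  # (B, 4, H, W) + (B, 3, H, W)
        return self.net(torch.cat([frame, guide], dim=1))

class TwoStepInpainter(nn.Module):
    """Joint pipeline: coarse 3D temporal completion, then guided 2D
    per-frame detail recovery, trainable end-to-end."""
    def __init__(self):
        super().__init__()
        self.temporal = TemporalStructureNet()
        self.spatial = SpatialDetailNet()

    def forward(self, frames, masks):
        # frames: (B, 3, T, H, W); masks: (B, 1, T, H, W), 1 = missing.
        B, _, T, H, W = frames.shape
        x = torch.cat([frames * (1 - masks), masks], dim=1)
        low = F.interpolate(x, scale_factor=(1, 0.25, 0.25),
                            mode='trilinear', align_corners=False)
        coarse = self.temporal(low)               # low-res completion
        guide = F.interpolate(coarse, size=(T, H, W),
                              mode='trilinear', align_corners=False)
        out = []
        for t in range(T):                        # 2D inpainting per frame
            inp = torch.cat([frames[:, :, t] * (1 - masks[:, :, t]),
                             masks[:, :, t]], dim=1)
            out.append(self.spatial(inp, guide[:, :, t]))
        return torch.stack(out, dim=2)            # (B, 3, T, H, W)
```

Because the trilinear upsampling between the two sub-networks is differentiable, a reconstruction loss on the full-resolution output back-propagates through both sub-networks, consistent with the joint end-to-end training the abstract describes.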

Published

2019-07-17

How to Cite

Wang, C., Huang, H., Han, X., & Wang, J. (2019). Video Inpainting by Jointly Learning Temporal Structure and Spatial Details. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5232-5239. https://doi.org/10.1609/aaai.v33i01.33015232

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Machine Learning