Learning Dynamic Generator Model by Alternating Back-Propagation through Time

Authors

  • Jianwen Xie, Hikvision
  • Ruiqi Gao, University of California, Los Angeles
  • Zilong Zheng, University of California, Los Angeles
  • Song-Chun Zhu, University of California, Los Angeles
  • Ying Nian Wu, University of California, Los Angeles

DOI:

https://doi.org/10.1609/aaai.v33i01.33015498

Abstract

This paper studies the dynamic generator model for spatial-temporal processes such as dynamic textures and action sequences in video data. In this model, each frame of the video sequence is generated by a generator model that applies a non-linear transformation to a latent state vector, where the transformation is parametrized by a top-down neural network. The sequence of latent state vectors follows a non-linear auto-regressive model, in which the state vector of the next frame is a non-linear transformation of the state vector of the current frame together with an independent noise vector that provides randomness in the transition. This transition transformation can be parametrized by a feedforward neural network. We show that the model can be learned by an alternating back-propagation through time algorithm that iteratively samples the noise vectors and updates the parameters of the transition model and the generator model. We show that our training method learns realistic models for dynamic textures and action patterns.
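
The abstract describes the two networks and the learning algorithm only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of the transition model, the emission (generator) model, and one iteration of alternating back-propagation through time; it is not the authors' code, and all module names, layer sizes, and hyper-parameters (state_dim, noise_dim, sigma, langevin_steps, delta, etc.) are illustrative assumptions rather than values from the paper.

```python
# A minimal sketch (not the authors' code) of the dynamic generator model,
# assuming a PyTorch implementation. All sizes and hyper-parameters below
# are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class TransitionModel(nn.Module):
    """Non-linear auto-regressive transition: s_t = f(s_{t-1}, xi_t)."""
    def __init__(self, state_dim=3, noise_dim=3, hidden_dim=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, s, xi):
        return self.net(torch.cat([s, xi], dim=-1))


class EmissionModel(nn.Module):
    """Top-down generator: frame x_t = g(s_t)."""
    def __init__(self, state_dim=3, frame_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_dim), nn.Tanh(),
        )

    def forward(self, s):
        return self.net(s)


def unroll(trans, emit, s0, xi):
    """Run the latent states forward in time and emit one frame per step."""
    s, frames = s0, []
    for t in range(xi.shape[0]):
        s = trans(s, xi[t])
        frames.append(emit(s))
    return torch.stack(frames)


def abpt_step(trans, emit, optimizer, video, xi, s0,
              sigma=0.5, langevin_steps=20, delta=0.03):
    """One iteration of alternating back-propagation through time (sketch).

    video: (T, frame_dim) observed frames; xi: (T, noise_dim) current noise
    vectors; s0: (state_dim,) initial state, kept fixed here for simplicity.
    """
    # 1) Inferential back-propagation: Langevin sampling of the noise vectors
    #    from their posterior given the observed video.
    xi = xi.clone().detach().requires_grad_(True)
    for _ in range(langevin_steps):
        recon = unroll(trans, emit, s0, xi)
        neg_log_post = ((video - recon) ** 2).sum() / (2 * sigma ** 2) \
            + (xi ** 2).sum() / 2
        grad, = torch.autograd.grad(neg_log_post, xi)
        with torch.no_grad():
            xi -= 0.5 * delta ** 2 * grad       # gradient part of Langevin
            xi += delta * torch.randn_like(xi)  # noise part of Langevin

    # 2) Learning back-propagation through time: update the transition and
    #    generator parameters given the sampled noise vectors.
    optimizer.zero_grad()
    recon = unroll(trans, emit, s0, xi.detach())
    loss = ((video - recon) ** 2).sum() / (2 * sigma ** 2)
    loss.backward()
    optimizer.step()
    return xi.detach(), loss.item()
```

Under these assumptions, a training loop would call abpt_step repeatedly for each training sequence while carrying its noise vectors xi across iterations, and synthesis would simply draw fresh xi from N(0, I) and run unroll to generate a new video.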

Published

2019-07-17

How to Cite

Xie, J., Gao, R., Zheng, Z., Zhu, S.-C., & Wu, Y. N. (2019). Learning Dynamic Generator Model by Alternating Back-Propagation through Time. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5498-5507. https://doi.org/10.1609/aaai.v33i01.33015498

Section

AAAI Technical Track: Machine Learning