Talking Face Generation by Adversarially Disentangled Audio-Visual Representation

Authors

  • Hang Zhou The Chinese University of Hong Kong
  • Yu Liu The Chinese University of Hong Kong
  • Ziwei Liu The Chinese University of Hong Kong
  • Ping Luo The Chinese University of Hong Kong
  • Xiaogang Wang The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v33i01.33019299

Abstract

Talking face generation aims to synthesize a sequence of face images that correspond to a clip of speech. This is a challenging task because face appearance variations and the semantics of speech are coupled together in the subtle movements of the talking face regions. Existing works either construct appearance models for specific subjects or model the transformation between lip motion and speech. In this work, we integrate both aspects and enable arbitrary-subject talking face generation by learning disentangled audio-visual representations. We find that a talking face sequence is actually a composition of subject-related information and speech-related information. These two information spaces are then explicitly disentangled through a novel associative-and-adversarial training process. An advantage of this disentangled representation is that both audio and video can serve as inputs for generation. Extensive experiments show that the proposed approach generates realistic talking face sequences on arbitrary subjects with much clearer lip motion patterns than previous works. We also demonstrate that the learned audio-visual representation is extremely useful for the tasks of automatic lip reading and audio-video retrieval.
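The sketch below illustrates the high-level idea described in the abstract, not the authors' released implementation: a subject-identity encoder and two speech-content encoders (one for video, one for audio), an adversarial classifier that discourages identity information from leaking into the content code, an associative term tying the audio- and video-derived content codes together, and a decoder that can generate a frame from either content source. All module names, dimensions, and the gradient-reversal trick are assumptions made for illustration (PyTorch assumed).

```python
# Minimal sketch of adversarially disentangled audio-visual representations.
# Hypothetical shapes and modules; not the paper's actual architecture.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class Encoder(nn.Module):
    """Tiny MLP standing in for the visual / audio encoders."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Generates a flattened face frame from identity + content codes."""
    def __init__(self, id_dim, content_dim, frame_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(id_dim + content_dim, 512), nn.ReLU(),
                                 nn.Linear(512, frame_dim))

    def forward(self, id_code, content_code):
        return self.net(torch.cat([id_code, content_code], dim=-1))


# Hypothetical dimensions for the sketch.
FRAME_DIM, AUDIO_DIM, ID_DIM, CONTENT_DIM, NUM_SUBJECTS = 4096, 128, 64, 64, 100

id_enc = Encoder(FRAME_DIM, ID_DIM)                 # subject-related space
vid_content_enc = Encoder(FRAME_DIM, CONTENT_DIM)   # speech content from video
aud_content_enc = Encoder(AUDIO_DIM, CONTENT_DIM)   # speech content from audio
decoder = Decoder(ID_DIM, CONTENT_DIM, FRAME_DIM)
id_adversary = nn.Linear(CONTENT_DIM, NUM_SUBJECTS)  # tries to read identity off content

frame = torch.randn(8, FRAME_DIM)                   # reference frames (batch of 8)
audio = torch.randn(8, AUDIO_DIM)                   # aligned audio features
subject = torch.randint(0, NUM_SUBJECTS, (8,))      # subject labels

id_code = id_enc(frame)
content_v = vid_content_enc(frame)
content_a = aud_content_enc(audio)

# Associative term: audio- and video-derived content codes should agree.
assoc_loss = nn.functional.mse_loss(content_v, content_a)

# Adversarial term: the classifier learns to predict the subject from the
# content code, while the reversed gradient pushes the encoder to remove it.
adv_loss = nn.functional.cross_entropy(id_adversary(grad_reverse(content_v)), subject)

# Reconstruction: either content code can drive generation for this identity.
recon = decoder(id_code, content_a)
recon_loss = nn.functional.mse_loss(recon, frame)

total_loss = recon_loss + assoc_loss + adv_loss
total_loss.backward()
```

Because the content code is shared between modalities, generation at test time can be driven by audio alone for an unseen subject, which is the arbitrary-subject setting the abstract describes.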


Published

2019-07-17

How to Cite

Zhou, H., Liu, Y., Liu, Z., Luo, P., & Wang, X. (2019). Talking Face Generation by Adversarially Disentangled Audio-Visual Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9299-9306. https://doi.org/10.1609/aaai.v33i01.33019299

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Vision