Cross-Domain Visual Representations via Unsupervised Graph Alignment

Authors

  • Baoyao Yang, Hong Kong Baptist University
  • Pong C. Yuen, Hong Kong Baptist University

DOI:

https://doi.org/10.1609/aaai.v33i01.33015613

Abstract

In unsupervised domain adaptation, the distributions of visual representations are mismatched across domains, which causes a source-trained model's performance to drop in the target domain. Distribution alignment methods have therefore been proposed to learn cross-domain visual representations. However, most alignment methods do not account for the difference in distribution structures across domains, so adaptation can suffer from insufficiently aligned cross-domain representations. To avoid the misclassification/misidentification caused by this structural difference, this paper proposes a novel unsupervised graph alignment method that aligns both data representations and distribution structures across the source and target domains. An adversarial network is developed for unsupervised graph alignment, mapping both source and target data into a feature space where the data are distributed under unified structure criteria. Experimental results show that the graph-aligned visual representations achieve good performance on both cross-dataset recognition and cross-modal re-identification.
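To make the adversarial-alignment idea in the abstract concrete, the sketch below shows a generic adversarial feature-alignment loop in the DANN style: a shared mapper transforms source and target data while a domain discriminator tries to tell them apart. This is a toy NumPy illustration of the general principle only; it is not the authors' graph-alignment method (it omits the distribution-structure/graph component entirely), and all variable names, dimensions, and learning rates are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy data: the target distribution is shifted relative to the source.
Xs = rng.normal(0.0, 1.0, size=(200, 4))   # source-domain samples
Xt = rng.normal(1.5, 1.0, size=(200, 4))   # target-domain samples

W = np.eye(4)        # shared linear feature mapper (stand-in for a network)
w_d = np.zeros(4)    # logistic domain discriminator
lr = 0.05

for step in range(100):
    Fs, Ft = Xs @ W, Xt @ W
    F = np.vstack([Fs, Ft])
    y = np.r_[np.zeros(len(Fs)), np.ones(len(Ft))]   # 0 = source, 1 = target
    p = sigmoid(F @ w_d)
    # Discriminator step: learn to separate the two domains in feature space.
    w_d += lr * F.T @ (y - p) / len(y)
    # Mapper step (simplified: target branch only): update W so mapped
    # target features look like "source" to the current discriminator.
    pt = sigmoid(Ft @ w_d)
    W -= lr * Xt.T @ (pt[:, None] * w_d) / len(Ft)

# Compare the cross-domain mean gap before and after mapping.
gap_before = abs(Xt.mean() - Xs.mean())
gap_after = abs((Xt @ W).mean() - (Xs @ W).mean())
```

The alternating updates mirror the usual two-player setup: the discriminator improves its domain classifier while the mapper moves target features against the discriminator's decision boundary, pushing the two domains toward a shared feature distribution.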

Published

2019-07-17

How to Cite

Yang, B., & Yuen, P. C. (2019). Cross-Domain Visual Representations via Unsupervised Graph Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5613-5620. https://doi.org/10.1609/aaai.v33i01.33015613

Section

AAAI Technical Track: Machine Learning