Aligning Domain-Specific Distribution and Classifier for Cross-Domain Classification from Multiple Sources

Authors

  • Yongchun Zhu, Chinese Academy of Sciences
  • Fuzhen Zhuang, Chinese Academy of Sciences
  • Deqing Wang, Beihang University

DOI:

https://doi.org/10.1609/aaai.v33i01.33015989

Abstract

While Unsupervised Domain Adaptation (UDA) algorithms, i.e., algorithms for the setting where labeled data are available only in the source domains, have been actively studied in recent years, most algorithms and theoretical results focus on Single-source Unsupervised Domain Adaptation (SUDA). In practice, however, labeled data can typically be collected from multiple diverse sources, which may differ not only from the target domain but also from each other. Thus, adaptation from multiple sources should not be modeled in the same way for every source. Recent deep learning based Multi-source Unsupervised Domain Adaptation (MUDA) algorithms focus on extracting common domain-invariant representations for all domains by aligning the distributions of all pairs of source and target domains in a single common feature space. However, it is often very hard to extract the same domain-invariant representations for all domains in MUDA. In addition, these methods match distributions without considering the domain-specific decision boundaries between classes. To address these problems, we propose a new framework with two alignment stages for MUDA, which not only aligns the distribution of each pair of source and target domains in its own specific feature space, but also aligns the outputs of the classifiers by utilizing the domain-specific decision boundaries. Extensive experiments demonstrate that our method achieves remarkable results on popular benchmark datasets for image classification.
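To make the two alignment stages more concrete, below is a minimal PyTorch-style sketch of the idea, not the authors' implementation. It assumes a shared backbone, one domain-specific feature extractor and classifier per source, a simple MMD term for the per-pair distribution alignment, and an L1 discrepancy between the classifiers' predictions on target data; all names (TwoStageMUDA, SpecificBranch, mmd) are illustrative.

# Sketch of the two-stage alignment idea for MUDA (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mmd(x, y):
    # Simple linear-kernel MMD between two feature batches (illustrative choice).
    return (x.mean(0) - y.mean(0)).pow(2).sum()

class SpecificBranch(nn.Module):
    # Domain-specific feature extractor and classifier for one source domain.
    def __init__(self, in_dim, feat_dim, n_classes):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        f = self.extractor(x)
        return f, self.classifier(f)

class TwoStageMUDA(nn.Module):
    def __init__(self, in_dim=256, feat_dim=128, n_classes=10, n_sources=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, in_dim), nn.ReLU())
        self.branches = nn.ModuleList(
            SpecificBranch(in_dim, feat_dim, n_classes) for _ in range(n_sources)
        )

    def forward(self, sources, source_labels, target):
        # sources/source_labels: lists of labeled source batches; target: unlabeled batch.
        cls_loss, mmd_loss = 0.0, 0.0
        tgt_probs = []
        h_t = self.backbone(target)
        for branch, x_s, y_s in zip(self.branches, sources, source_labels):
            f_s, logit_s = branch(self.backbone(x_s))
            f_t, logit_t = branch(h_t)
            cls_loss += F.cross_entropy(logit_s, y_s)   # supervised loss per source
            mmd_loss += mmd(f_s, f_t)                    # stage 1: per-pair alignment
            tgt_probs.append(F.softmax(logit_t, dim=1))
        # Stage 2: align the domain-specific classifiers' outputs on target data.
        disc_loss = 0.0
        for i in range(len(tgt_probs)):
            for j in range(i + 1, len(tgt_probs)):
                disc_loss += (tgt_probs[i] - tgt_probs[j]).abs().mean()
        return cls_loss + mmd_loss + disc_loss

At test time, the target prediction could then be taken, for example, as the average of the domain-specific classifiers' softmax outputs.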

Published

2019-07-17

How to Cite

Zhu, Y., Zhuang, F., & Wang, D. (2019). Aligning Domain-Specific Distribution and Classifier for Cross-Domain Classification from Multiple Sources. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5989-5996. https://doi.org/10.1609/aaai.v33i01.33015989

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Machine Learning