Backbone Cannot Be Trained at Once: Rolling Back to Pre-Trained Network for Person Re-Identification

  • Youngmin Ro Seoul National University
  • Jongwon Choi Samsung SDS
  • Dae Ung Jo Seoul National University
  • Byeongho Heo Seoul National University
  • Jongin Lim Seoul National University
  • Jin Young Choi Seoul National University

Abstract

In the person re-identification (ReID) task, because of the shortage of training data, it is common to fine-tune a classification network pre-trained on a large dataset. However, it is relatively difficult to sufficiently fine-tune the low-level layers of the network due to the vanishing gradient problem. In this work, we propose a novel fine-tuning strategy that allows the low-level layers to be sufficiently trained by rolling back the weights of the high-level layers to their initial pre-trained values. Our strategy alleviates vanishing gradients in the low-level layers and robustly adapts them to the ReID dataset, thereby increasing performance on ReID tasks. The improvement brought by the proposed strategy is validated via several experiments. Furthermore, without any add-ons such as pose estimation or segmentation, our strategy achieves state-of-the-art performance using only a vanilla deep convolutional neural network architecture.
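The rolling-back idea described above can be illustrated with a minimal sketch. This is not the authors' code: the toy "network" is a plain dict of layer weights, `train` is a hypothetical stand-in for one fine-tuning phase, and the split into low-/high-level layers is an assumption for illustration. The point is the core operation: after a fine-tuning phase, the high-level layers are reset to their pre-trained weights while the fine-tuned low-level layers are kept, so the next phase re-trains the high-level layers and drives stronger gradients into the low levels.

```python
# Toy model: layer name -> weight (a float stands in for a weight tensor).
# Low-level layers come first, the high-level layer last.
pretrained = {"conv1": 1.0, "conv2": 2.0, "fc": 3.0}

def train(params, step=1.0):
    """Hypothetical stand-in for one fine-tuning phase: nudges every weight."""
    return {name: w + step for name, w in params.items()}

def rollback(params, pretrained, high_level):
    """Roll the high-level layers back to their pre-trained weights while
    keeping the fine-tuned low-level layers (the paper's core idea)."""
    rolled = dict(params)
    for name in high_level:
        rolled[name] = pretrained[name]
    return rolled

# One rolling-back cycle: fine-tune everything, then reset the high level.
params = train(dict(pretrained))              # conv1=2.0, conv2=3.0, fc=4.0
params = rollback(params, pretrained, ["fc"])  # fc restored to 3.0
```

In a real implementation the same loop would run for several cycles, snapshotting the pre-trained state once at the start and restoring progressively fewer (or fixed) top blocks each cycle, per whatever schedule the training recipe specifies.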

Published
2019-07-17
Section
AAAI Technical Track: Vision