Multiple Saliency and Channel Sensitivity Network for Aggregated Convolutional Feature

Authors

  • Xuanlu Xiang Beijing University of Posts and Telecommunications
  • Zhipeng Wang Beijing University of Posts and Telecommunications
  • Zhicheng Zhao Beijing University of Posts and Telecommunications
  • Fei Su Beijing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v33i01.33019013

Abstract

In this paper, aiming at two key problems of instance-level image retrieval, i.e., the distinctiveness of image representation and the generalization ability of the model, we propose a novel deep architecture - Multiple Saliency and Channel Sensitivity Network (MSCNet). Specifically, to obtain distinctive global descriptors, an attention-based multiple saliency learning module is first presented to highlight important details of the image, and then a simple but effective channel sensitivity module based on the Gram matrix is designed to boost channel discrimination and suppress redundant information. Additionally, in contrast to most existing feature aggregation methods, which employ pre-trained deep networks, MSCNet can be trained in two modes: the first is an unsupervised manner with an instance loss, and the second is a supervised manner that combines classification and ranking losses and relies on only very limited training data. Experimental results on several public benchmark datasets, i.e., Oxford buildings, Paris buildings and Holidays, indicate that the proposed MSCNet outperforms the state-of-the-art unsupervised and supervised methods.
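The abstract's channel sensitivity idea, down-weighting channels that carry redundant information via Gram-matrix correlations, can be sketched as follows. This is a minimal illustration, not the paper's actual module: the function name `channel_sensitivity` and the specific weighting heuristic (inverse mean absolute cosine correlation) are assumptions chosen for clarity.

```python
import math

def channel_sensitivity(features):
    """Illustrative Gram-matrix channel reweighting (not the paper's exact module).

    features: C channels, each a flat list of N activations.
    Channels highly correlated with many others are treated as
    redundant and suppressed.
    """
    # L2-normalize each channel so the Gram matrix holds cosine similarities.
    normed = []
    for ch in features:
        norm = math.sqrt(sum(x * x for x in ch)) or 1.0
        normed.append([x / norm for x in ch])
    C = len(normed)
    # Gram matrix of pairwise channel correlations.
    gram = [[sum(a * b for a, b in zip(normed[i], normed[j])) for j in range(C)]
            for i in range(C)]
    # Assumed heuristic: weight each channel by the inverse of its mean
    # absolute correlation with all channels, then normalize to sum to 1.
    weights = [C / sum(abs(g) for g in row) for row in gram]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Reweight the original channels.
    reweighted = [[w * x for x in ch] for w, ch in zip(weights, features)]
    return weights, reweighted
```

With two duplicate channels and one distinct channel, the distinct channel receives the largest weight, which matches the stated goal of suppressing redundancy while boosting discriminative channels.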

Published

2019-07-17

How to Cite

Xiang, X., Wang, Z., Zhao, Z., & Su, F. (2019). Multiple Saliency and Channel Sensitivity Network for Aggregated Convolutional Feature. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9013-9020. https://doi.org/10.1609/aaai.v33i01.33019013

Section

AAAI Technical Track: Vision