Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search

Authors

  • Xinyan Dai, The Chinese University of Hong Kong
  • Xiao Yan, The Chinese University of Hong Kong
  • Kelvin K. W. Ng, The Chinese University of Hong Kong
  • Jie Liu, The Chinese University of Hong Kong
  • James Cheng, The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v34i01.5333

Abstract

Vector quantization (VQ) techniques are widely used in similarity search for purposes such as data compression and computation acceleration. Originally designed for Euclidean distance, existing VQ techniques (e.g., PQ, AQ) explicitly or implicitly minimize the quantization error. In this paper, we present a new angle on the quantization error: we decompose it into a norm error and a direction error. We show that quantization errors in norm have a much greater influence on inner products than quantization errors in direction, and that a small quantization error does not necessarily lead to good performance in maximum inner product search (MIPS). Based on this observation, we propose norm-explicit quantization (NEQ), a general paradigm that improves existing VQ techniques for MIPS. NEQ quantizes the norms of the items in a dataset explicitly to reduce the norm errors that are crucial for MIPS. For the direction vectors, NEQ can simply reuse an existing VQ technique without modification. Extensive experiments on a variety of datasets and parameter configurations show that NEQ improves the performance of various VQ techniques for MIPS, including PQ, OPQ, RQ, and AQ.
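To make the paradigm concrete, the sketch below illustrates the idea in NumPy under toy assumptions: item vectors are split into norms and unit-norm directions, the norms are quantized explicitly with a small 1-D k-means codebook, and a plain single-codebook k-means VQ stands in for the direction quantizer (PQ, OPQ, RQ, or AQ in the paper). All names, codebook sizes, and data here are illustrative, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def kmeans(data, k, n_iters=20):
        """Plain k-means; returns (codebook, assignments)."""
        codebook = data[rng.choice(len(data), k, replace=False)].copy()
        for _ in range(n_iters):
            # assign each point to its nearest codeword, then recenter
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            assign = d2.argmin(1)
            for j in range(k):
                members = data[assign == j]
                if len(members):
                    codebook[j] = members.mean(0)
        # final assignments against the trained codebook
        assign = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
        return codebook, assign

    # --- NEQ-style encoding (toy data) --------------------------------
    items = rng.standard_normal((1000, 32))
    norms = np.linalg.norm(items, axis=1)
    directions = items / norms[:, None]

    # 1. quantize the norms explicitly (1-D k-means as the norm quantizer)
    norm_codebook, norm_codes = kmeans(norms[:, None], k=16)

    # 2. quantize the direction vectors with any existing VQ technique;
    #    a single-codebook k-means VQ stands in for PQ/OPQ/RQ/AQ here
    dir_codebook, dir_codes = kmeans(directions, k=64)

    # --- approximate MIPS ----------------------------------------------
    # <q, x> is approximated as quantized_norm(x) * <q, quantized_direction(x)>
    query = rng.standard_normal(32)
    approx_ip = norm_codebook[norm_codes, 0] * (dir_codebook[dir_codes] @ query)
    top10 = np.argsort(-approx_ip)[:10]
    print(top10)

The factorization <q, x> = ||x|| * <q, x/||x||> is why the split helps: the norm error enters the inner-product estimate multiplicatively, so spending a few bits on an explicit norm codebook tightens the estimate without changing the direction quantizer.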

Published

2020-04-03

How to Cite

Dai, X., Yan, X., Ng, K. K. W., Liu, J., & Cheng, J. (2020). Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 51-58. https://doi.org/10.1609/aaai.v34i01.5333

Issue

Vol. 34 No. 01 (2020)

Section

AAAI Technical Track: AI and the Web