Unsupervised Post-Processing of Word Vectors via Conceptor Negation

Authors

  • Tianlin Liu, Jacobs University Bremen
  • Lyle Ungar, University of Pennsylvania
  • João Sedoc, University of Pennsylvania

DOI:

https://doi.org/10.1609/aaai.v33i01.33016778

Abstract

Word vectors are at the core of many natural language processing tasks. Recently, there has been interest in post-processing word vectors to enrich their semantic information. In this paper, we introduce a novel word vector post-processing technique based on matrix conceptors (Jaeger 2014), a family of regularized identity maps. More concretely, we propose to use conceptors to suppress the latent features of word vectors that have high variance. The proposed method is purely unsupervised: it does not rely on any corpus or external linguistic database. We evaluate the post-processed word vectors on a battery of intrinsic lexical evaluation tasks, showing that the proposed method consistently outperforms existing state-of-the-art alternatives. We also show that the post-processed word vectors can be used for the downstream natural language processing task of dialogue state tracking, yielding improved results across different dialogue domains.
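As a rough illustration of the recipe the abstract describes, the sketch below builds a conceptor C = R(R + α⁻²I)⁻¹ from the correlation matrix R of the embeddings (following the conceptor definition in Jaeger 2014) and applies the negated conceptor I − C to damp the high-variance directions. The aperture value α = 2, the function name, and the toy data are illustrative assumptions, not the paper's exact settings.

import numpy as np

def conceptor_negation(word_vectors: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Suppress high-variance latent directions of `word_vectors`.

    word_vectors: array of shape (num_words, dim), one embedding per row.
    alpha: conceptor aperture (assumed value; larger alpha suppresses more).
    """
    n, dim = word_vectors.shape
    # Uncentered correlation matrix of the embeddings (dim x dim).
    R = word_vectors.T @ word_vectors / n
    # Conceptor C = R (R + alpha^{-2} I)^{-1}: a regularized identity map
    # whose large eigenvalues align with high-variance directions.
    C = R @ np.linalg.inv(R + alpha ** (-2) * np.eye(dim))
    # The negated conceptor (I - C) damps exactly those directions.
    return word_vectors @ (np.eye(dim) - C)

# Toy usage with random vectors standing in for pretrained embeddings.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(1000, 50))
    processed = conceptor_negation(vecs, alpha=2.0)
    print(processed.shape)  # (1000, 50)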

Published

2019-07-17

How to Cite

Liu, T., Ungar, L., & Sedoc, J. (2019). Unsupervised Post-Processing of Word Vectors via Conceptor Negation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6778-6785. https://doi.org/10.1609/aaai.v33i01.33016778

Section

AAAI Technical Track: Natural Language Processing