Learning Semantic Representations for Novel Words: Leveraging Both Form and Context

Authors

  • Timo Schick, Ludwig-Maximilians-Universität München
  • Hinrich Schütze, Ludwig-Maximilians-Universität München

DOI

https://doi.org/10.1609/aaai.v33i01.33016965

Abstract

Word embeddings are a key component of high-performing natural language processing (NLP) systems, but it remains a challenge to learn good representations for novel words on the fly, i.e., for words that did not occur in the training data. The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space. Currently, two approaches for learning embeddings of novel words exist: (i) learning an embedding from the novel word’s surface-form (e.g., subword n-grams) and (ii) learning an embedding from the context in which it occurs. In this paper, we propose an architecture that leverages both sources of information – surface-form and context – and show that it results in large increases in embedding quality. Our architecture obtains state-of-the-art results on the Definitional Nonce and Contextual Rare Words datasets. As input, we only require an embedding set and an unlabeled corpus for training our architecture to produce embeddings appropriate for the induced embedding space. Thus, our model can easily be integrated into any existing NLP system and enhance its capability to handle novel words.
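To make the core idea concrete, the following is a minimal sketch of how the two signals can be combined: a form embedding (here, an average of character n-gram vectors) and a context embedding (an average of pre-trained vectors for the surrounding words) are interpolated via a learned gate. The gating mechanism, the names FormContextModel and DIM, and the random stand-in vectors are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

DIM = 300  # embedding dimensionality (assumed)


def ngrams(word: str, n: int = 3) -> list:
    # Character n-grams of the padded surface form, following common subword practice.
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]


def average(vectors: list) -> torch.Tensor:
    # Mean of a list of embedding vectors; zero vector if none are known.
    return torch.stack(vectors).mean(dim=0) if vectors else torch.zeros(DIM)


class FormContextModel(nn.Module):
    """Illustrative gated combination of form and context embeddings."""

    def __init__(self, dim: int = DIM):
        super().__init__()
        # A learned gate over the concatenation of both source vectors.
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, v_form: torch.Tensor, v_context: torch.Tensor) -> torch.Tensor:
        # alpha in (0, 1) decides how much to trust each source.
        alpha = torch.sigmoid(self.gate(torch.cat([v_form, v_context], dim=-1)))
        return alpha * v_form + (1.0 - alpha) * v_context


# Usage: embed the novel word "flurgle" from one observed sentence.
# Random vectors stand in for trained n-gram vectors and the given embedding set.
ngram_vecs = {g: torch.randn(DIM) for g in ngrams("flurgle")}
word_vecs = {w: torch.randn(DIM) for w in "she poured the into a glass".split()}

model = FormContextModel()
v_novel = model(average(list(ngram_vecs.values())),
                average(list(word_vecs.values())))
```

In this sketch the gate lets the model lean on the surface form when the context is uninformative and on the context when the word's spelling gives no cue, which is the intuition behind leveraging both sources.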

Published

2019-07-17

How to Cite

Schick, T., & Schütze, H. (2019). Learning Semantic Representations for Novel Words: Leveraging Both Form and Context. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6965-6973. https://doi.org/10.1609/aaai.v33i01.33016965

Section

AAAI Technical Track: Natural Language Processing