Antonym-Synonym Classification Based on New Sub-Space Embeddings

Authors

  • Muhammad Asif Ali, University of New South Wales
  • Yifang Sun, University of New South Wales
  • Xiaoling Zhou, University of New South Wales
  • Wei Wang, University of New South Wales
  • Xiang Zhao, National University of Defence Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.33016204

Abstract

Distinguishing antonyms from synonyms is a key challenge for many NLP applications focused on lexical-semantic relation extraction. Existing solutions relying on large-scale corpora yield low performance because of the large contextual overlap between antonym and synonym pairs. We propose a novel approach based entirely on pre-trained embeddings. We hypothesize that pre-trained embeddings encode a blend of lexical-semantic information, and that the task-specific information can be distilled from them using Distiller, a model proposed in this paper. A classifier is then trained on features constructed from the distilled sub-spaces, together with word-level features, to distinguish antonyms from synonyms. Experimental results show that the proposed model outperforms existing work on antonym-synonym distinction in both speed and classification performance.
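The abstract outlines a pipeline of three stages: project pre-trained word embeddings into task-specific sub-spaces, build pair features from the projected vectors, and train a classifier on those features. The sketch below illustrates this flow only under simplifying assumptions; the random projection matrix, the concatenated difference/product features, and the logistic-regression classifier are placeholders for illustration and are not the paper's actual Distiller model or feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def distill(embedding, W):
    """Map a pre-trained embedding into a lower-dimensional sub-space via projection W
    (a stand-in for a learned, task-specific projection)."""
    return embedding @ W

def pair_features(e1, e2, W):
    """Build simple pair features from the sub-space representations of two words."""
    d1, d2 = distill(e1, W), distill(e2, W)
    return np.concatenate([np.abs(d1 - d2), d1 * d2])

# Toy data: random 300-d "pre-trained" embeddings for word pairs,
# labels 1 = antonym, 0 = synonym (purely synthetic).
rng = np.random.default_rng(0)
dim, sub_dim, n_pairs = 300, 50, 200
W = rng.normal(size=(dim, sub_dim))        # placeholder projection matrix
pairs = rng.normal(size=(n_pairs, 2, dim))
labels = rng.integers(0, 2, size=n_pairs)

X = np.stack([pair_features(p[0], p[1], W) for p in pairs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In practice the projection would be learned so that antonym and synonym pairs separate in the distilled sub-spaces, and the pair features would be combined with word-level features before classification, as described in the abstract.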

Published

2019-07-17

How to Cite

Ali, M. A., Sun, Y., Zhou, X., Wang, W., & Zhao, X. (2019). Antonym-Synonym Classification Based on New Sub-Space Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6204-6211. https://doi.org/10.1609/aaai.v33i01.33016204

Section

AAAI Technical Track: Natural Language Processing