Learning Cross-Lingual Word Embeddings from Twitter via Distant Supervision

  • Jose Camacho-Collados Cardiff University
  • Yerai Doval Universidade de Vigo
  • Eugenio Martínez-Cámara Universidad de Granada
  • Luis Espinosa-Anke Cardiff University
  • Francesco Barbieri Snap Inc.
  • Steven Schockaert Cardiff University

Abstract

Cross-lingual embeddings represent the meaning of words from different languages in the same vector space. Recent work has shown that it is possible to construct such representations by aligning independently learned monolingual embedding spaces, and that accurate alignments can be obtained even without external bilingual data. In this paper we explore a research direction that has been surprisingly neglected in the literature: leveraging noisy user-generated text to learn cross-lingual embeddings particularly tailored towards social media applications. While the noisiness and informal nature of the social media genre pose additional challenges to cross-lingual embedding methods, we find that they also provide key opportunities due to the abundance of code-switching and the existence of a shared vocabulary of emoji and named entities. Our contribution consists of a very simple post-processing step that exploits these phenomena to significantly improve the performance of state-of-the-art alignment methods.
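The alignment of independently learned monolingual spaces mentioned in the abstract is commonly solved with orthogonal Procrustes: given vectors for a set of shared anchor words (here, for instance, emoji or named entities), find the rotation mapping one space onto the other. The following is a minimal toy sketch of that baseline technique, not the paper's specific post-processing step; the matrices and dimensions are illustrative assumptions.

```python
import numpy as np

# Toy setup: 5 anchor words shared across languages, 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                         # source-language vectors
R_true = np.linalg.qr(rng.normal(size=(4, 4)))[0]   # hidden orthogonal map
Y = X @ R_true                                      # target-language vectors

# Orthogonal Procrustes: W = argmin ||XW - Y||_F  s.t.  W orthogonal,
# solved in closed form from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(X @ W, Y))  # True: the rotation is recovered exactly here
```

In practice the anchor pairs are noisy, so the recovered map is approximate rather than exact; the shared emoji and entity vocabulary on Twitter simply supplies such anchors without external bilingual data.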

Published
2020-05-26
How to Cite
Camacho-Collados, J., Doval, Y., Martínez-Cámara, E., Espinosa-Anke, L., Barbieri, F., & Schockaert, S. (2020). Learning Cross-Lingual Word Embeddings from Twitter via Distant Supervision. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 72-82. Retrieved from https://www.aaai.org/ojs/index.php/ICWSM/article/view/7280