SepNE: Bringing Separability to Network Embedding

Authors

  • Ziyao Li, Peking University
  • Liang Zhang, Peking University
  • Guojie Song, Peking University

DOI:

https://doi.org/10.1609/aaai.v33i01.33014261

Abstract

Many successful methods have been proposed for learning low-dimensional representations of large-scale networks, yet almost all of them are designed as inseparable processes, learning embeddings for entire networks even when only a small proportion of nodes are of interest. This causes great inconvenience, especially on super-large or dynamic networks, where such methods become nearly infeasible to apply. In this paper, we formalize the problem of separated matrix factorization, based on which we derive a novel objective function that preserves both local and global information. We further propose SepNE, a simple and flexible network embedding algorithm that independently learns representations for different subsets of nodes in separated processes. By implementing separability, our algorithm avoids the redundant effort of embedding irrelevant nodes, yielding scalability to super-large networks, a natural distributed implementation, and room for further adaptations. We demonstrate the effectiveness of this approach on several real-world networks of different scales and subjects. With comparable accuracy, our approach significantly outperforms state-of-the-art baselines in running time on large networks.
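To make the core idea concrete, the following is a minimal sketch of separable embedding: factorize only the proximity sub-block involving the nodes of interest rather than the whole network. The proximity choice (row-normalized adjacency), the function name `embed_subset`, and all parameters are illustrative assumptions, not the paper's exact objective or implementation.

```python
import numpy as np

def embed_subset(adj, subset, dim=2):
    """Embed only `subset` (node indices) from adjacency matrix `adj`.

    Illustrative sketch: nodes outside `subset` are never assigned
    embeddings, only used as context columns of the proximity matrix.
    """
    # First-order proximity: row-normalized adjacency (an illustrative choice).
    deg = adj.sum(axis=1, keepdims=True)
    prox = adj / np.maximum(deg, 1.0)

    # Keep only the rows of the nodes we care about; columns still span all
    # nodes, so local context with the rest of the network is preserved.
    sub = prox[subset, :]

    # Low-rank factorization of the sub-block via truncated SVD.
    u, s, _ = np.linalg.svd(sub, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])

if __name__ == "__main__":
    # Toy 6-node graph: two triangles joined by a single edge.
    adj = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]:
        adj[i, j] = adj[j, i] = 1.0

    # Only nodes 0-2 are of interest; nodes 3-5 are never embedded.
    emb = embed_subset(adj, [0, 1, 2], dim=2)
    print(emb.shape)  # (3, 2)
```

Because each subset is factorized independently, different subsets can be embedded in separate processes or on separate machines, which is the separability property the abstract refers to.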

Published

2019-07-17

How to Cite

Li, Z., Zhang, L., & Song, G. (2019). SepNE: Bringing Separability to Network Embedding. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4261-4268. https://doi.org/10.1609/aaai.v33i01.33014261

Section

AAAI Technical Track: Machine Learning