AAAI Publications, Thirtieth AAAI Conference on Artificial Intelligence

Co-Regularized PLSA for Multi-Modal Learning
Xin Wang, MingChing Chang, Yiming Ying, Siwei Lyu

Last modified: 2016-03-02


Many learning problems in real-world applications involve rich datasets comprising multiple information modalities. In this work, we study co-regularized PLSA (coPLSA) as an efficient solution to probabilistic topic analysis of multi-modal data. In coPLSA, the similarity between the topic compositions of a data entity across different modalities is measured with divergences between discrete probability distributions, which are incorporated as a co-regularizer that augments the individual PLSA model over each modality. We derive efficient iterative learning algorithms for coPLSA with the symmetric KL, L2, and L1 divergences as co-regularizers; in each case, the essential optimization problem admits a simple numerical solution that entails only matrix arithmetic and the numerical solution of 1D nonlinear equations. We evaluate the coPLSA algorithms on text/image cross-modal retrieval tasks, where they show performance competitive with state-of-the-art methods.
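To make the co-regularization idea concrete, the sketch below computes the three divergences the abstract names (symmetric KL, L2, and L1) between the topic compositions of one data entity in two modalities. The distribution values, function names, and the smoothing constant `eps` are illustrative assumptions, not the paper's implementation; in coPLSA such a divergence would be weighted by a coupling parameter and added to each modality's PLSA objective.

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between discrete distributions p and q.

    eps is an assumed smoothing constant to avoid log(0); the paper's
    exact handling of zero probabilities may differ.
    """
    return 0.5 * sum(
        pi * math.log((pi + eps) / (qi + eps)) +
        qi * math.log((qi + eps) / (pi + eps))
        for pi, qi in zip(p, q)
    )

def l2_div(p, q):
    """Squared L2 distance between two discrete distributions."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def l1_div(p, q):
    """L1 (total variation style) distance between two distributions."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

# Hypothetical topic compositions of one entity in two modalities.
p_text = [0.7, 0.2, 0.1]
p_image = [0.5, 0.3, 0.2]

# Any of these could serve as the co-regularization penalty coupling
# the two per-modality PLSA models for this entity.
penalty_kl = sym_kl(p_text, p_image)
penalty_l2 = l2_div(p_text, p_image)
penalty_l1 = l1_div(p_text, p_image)
```

Each divergence is zero when the two topic compositions agree and grows as they diverge, which is what lets the penalty pull the per-modality topic models toward consistent representations of the same entity.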


PLSA, Topic Model, Multi-Modal Learning
