AAAI Publications, Twenty-Sixth AAAI Conference on Artificial Intelligence

Efficient Online Learning for Large-Scale Sparse Kernel Logistic Regression
Lijun Zhang, Rong Jin, Chun Chen, Jiajun Bu, Xiaofei He

Last modified: 2012-07-14


In this paper, we study the problem of large-scale Kernel Logistic Regression (KLR). A straightforward approach is to apply stochastic approximation to KLR. We refer to this approach as a non-conservative online learning algorithm because it updates the kernel classifier after every received training example, leading to a dense classifier. To improve the sparsity of the KLR classifier, we propose two conservative online learning algorithms that update the classifier in a stochastic manner and generate sparse solutions. With appropriately designed updating strategies, our analysis shows that the two conservative algorithms enjoy a theoretical guarantee similar to that of the non-conservative algorithm. Empirical studies on several benchmark data sets demonstrate that, compared to batch-mode algorithms for KLR, the proposed conservative online learning algorithms are able to produce sparse KLR classifiers and achieve similar classification accuracy with significantly shorter training time. Furthermore, both the sparsity and the classification accuracy of our methods are comparable to those of the online kernel SVM.
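The contrast between the non-conservative and conservative schemes can be illustrated with a small sketch. The snippet below is an assumption-laden rendition, not the paper's exact algorithm: it maintains a kernel expansion f(x) = Σ_i α_i K(x_i, x), and on each example either always adds a support vector with a gradient-scaled weight (non-conservative, dense), or adds one with probability equal to the loss gradient's magnitude p_t (an illustrative Bernoulli gating, unbiased in expectation), so most examples are skipped and the classifier stays sparse. The RBF kernel, step size, and gating rule are all placeholder choices.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian RBF kernel (a placeholder choice of kernel)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def predict(sv, alpha, x, gamma=1.0):
    # Kernel expansion: f(x) = sum_i alpha_i * K(sv_i, x)
    return sum(a * rbf(s, x, gamma) for s, a in zip(sv, alpha))

def online_klr(X, y, eta=0.5, gamma=1.0, conservative=True, seed=0):
    """Online KLR sketch; labels y_t in {-1, +1}.

    Non-conservative: every example becomes a support vector with
    weight eta * y_t * p_t, where p_t = 1 / (1 + exp(y_t f(x_t))) is
    the magnitude of the logistic-loss gradient.

    Conservative (illustrative, an assumption about the scheme): add
    the example with probability p_t and weight eta * y_t, so the
    expected update matches the non-conservative one but most
    examples are skipped, yielding a sparse classifier.
    """
    rng = np.random.default_rng(seed)
    sv, alpha = [], []
    for x_t, y_t in zip(X, y):
        p_t = 1.0 / (1.0 + np.exp(y_t * predict(sv, alpha, x_t, gamma)))
        if conservative:
            if rng.random() < p_t:           # stochastic, sparse update
                sv.append(x_t)
                alpha.append(eta * y_t)
        else:                                 # deterministic, dense update
            sv.append(x_t)
            alpha.append(eta * y_t * p_t)
    return sv, alpha
```

Run on any stream of labeled points, the non-conservative variant stores one support vector per example, while the conservative variant's support set grows only when the Bernoulli gate fires, which happens rarely once the classifier fits the data and p_t is small.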


Keywords: sparse kernel logistic regression
