Quasi-Newton method for Lp multiple kernel learning
Abstract
Multiple kernel learning offers advantages over single-kernel learning in both model interpretability and generalization performance. Existing multiple kernel learning methods usually solve the SVM in the dual, which is equivalent to the primal optimization; research shows that solving in the primal achieves a faster convergence rate than solving in the dual. This paper presents a novel Lp-norm (p > 1) constrained non-sparse multiple kernel learning method that optimizes the objective function in the primal. A subgradient and quasi-Newton approach is used to solve the standard SVM; it possesses a superlinear convergence property and approximates the inverse Hessian without computing second derivatives, leading to a preferable convergence speed. An alternating optimization method is used to solve the SVM and to learn the base kernel weights. Experiments show that the proposed algorithm converges rapidly and that its efficiency compares favorably with other multiple kernel learning algorithms.
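The alternating scheme described above can be sketched in code. The following is a minimal illustration, not the paper's actual implementation: it assumes a smooth squared-hinge primal SVM objective so a quasi-Newton solver (here SciPy's L-BFGS-B, a common quasi-Newton method that builds an inverse-Hessian approximation without second derivatives) applies directly, and it uses the well-known closed-form Lp-normalized kernel-weight update d_m ∝ ‖w_m‖^{2/(p+1)}. The function name `lp_mkl_primal`, the regularization constant `lam`, and the iteration count `n_outer` are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, gamma):
    """Gaussian RBF kernel matrix for rows of X."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def lp_mkl_primal(kernels, y, p=2.0, lam=1e-2, n_outer=10):
    """Alternating-optimization sketch for Lp-norm (p > 1) MKL in the primal.

    Inner step: quasi-Newton (L-BFGS) on the primal SVM with squared hinge
    loss, expanded via the representer theorem as f = K @ beta.
    Outer step: closed-form kernel-weight update projected onto ||d||_p = 1.
    """
    M, n = len(kernels), len(y)
    d = np.full(M, M ** (-1.0 / p))        # uniform start with ||d||_p = 1
    beta = np.zeros(n)

    for _ in range(n_outer):
        K = sum(dm * Km for dm, Km in zip(d, kernels))

        def obj(b):
            f = K @ b
            margin = np.maximum(0.0, 1.0 - y * f)
            value = lam * b @ f + np.sum(margin ** 2)
            grad = 2 * lam * f - 2 * K @ (y * margin)
            return value, grad

        beta = minimize(obj, beta, jac=True, method="L-BFGS-B").x

        # ||w_m||^2 = d_m^2 * beta^T K_m beta; update d_m ∝ ||w_m||^{2/(p+1)}
        norms = np.array([d[m] ** 2 * beta @ (Km @ beta)
                          for m, Km in enumerate(kernels)]) + 1e-12
        d = norms ** (1.0 / (p + 1))
        d /= np.sum(d ** p) ** (1.0 / p)   # project back onto ||d||_p = 1
    return d, beta
```

With p close to 1 the update drives most weights toward zero (near-sparse solutions), while larger p spreads weight more uniformly across the base kernels, which is the non-sparsity the abstract refers to.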