Abstract
For a class of large-scale optimization problems in machine learning, under convex and nonsmooth assumptions, this paper proposes a new proximal stochastic L-BFGS method that is highly scalable and robust. The linear convergence of the method is analyzed, and numerical examples are given; the results verify the method's effectiveness and convergence.
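Since the abstract only names the method, a minimal sketch may help fix ideas. The Python sketch below applies a plain proximal stochastic gradient step to a composite problem of the kind the abstract describes, an l1-regularized logistic regression; it deliberately omits the L-BFGS Hessian approximation that the paper's method maintains, and the names (prox_sgd, soft_threshold), data, and parameter values are illustrative assumptions, not the paper's implementation.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(X, y, lam=1e-3, step=0.1, epochs=10, seed=0):
    # Minimize (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)) + lam * ||w||_1
    # by a stochastic gradient step on the smooth part followed by the l1 prox.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            z = np.clip(y[i] * X[i].dot(w), -30.0, 30.0)  # avoid exp overflow
            g = -y[i] * X[i] / (1.0 + np.exp(z))          # gradient of sample i's loss
            w = soft_threshold(w - step * g, step * lam)  # gradient step, then prox
    return w

# Toy usage on synthetic linearly separable data with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = np.sign(X.dot(rng.standard_normal(10)))
w_hat = prox_sgd(X, y)

Because the scaling of the gradient step here is a scalar, the l1 proximal step reduces to exact soft-thresholding; with a full L-BFGS metric, as in the paper's method, the proximal subproblem no longer has this closed form and must be handled with more care.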