New analysis and results for the Frank–Wolfe method
  • Authors: Robert M. Freund; Paul Grigas
  • Keywords: 90C06; 90C25; 65K05
  • Journal: Mathematical Programming
  • Publication date: January 2016
  • Volume: 155
  • Issue: 1-2
  • Pages: 199-230
  • Full text size: 668 KB
  • Author affiliations:
    1. Robert M. Freund: MIT Sloan School of Management, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
    2. Paul Grigas: MIT Operations Research Center, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
  • Journal category: Mathematics and Statistics
  • Journal subjects: Mathematics
    Calculus of Variations and Optimal Control
    Mathematics of Computing
    Numerical Analysis
    Combinatorics
    Mathematical and Computational Physics
    Mathematical Methods in Physics
  • Publisher: Springer Berlin / Heidelberg
  • ISSN:1436-4646
Abstract
We present new results for the Frank–Wolfe method (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-called FW gaps. Lastly, we present complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions.

Mathematics Subject Classification: 90C06; 90C25; 65K05
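To make the method the abstract refers to concrete, here is a minimal Frank–Wolfe sketch (not code from the paper): the objective, the unit-simplex feasible set, and the classical 2/(k+2) step-size rule are all illustrative assumptions. Each iteration calls a linear-minimization oracle instead of a projection, and the quantity `grad @ (x - s)` is the "FW gap" mentioned in the abstract.

```python
import numpy as np

def frank_wolfe(A, b, iters=200):
    """Minimize f(x) = 0.5*||Ax - b||^2 over the unit simplex (illustrative)."""
    n = A.shape[1]
    x = np.ones(n) / n                 # feasible start: simplex barycenter
    best_gap = np.inf
    for k in range(iters):
        grad = A.T @ (A @ x - b)       # gradient of f at x
        i = int(np.argmin(grad))       # LMO over the simplex: best vertex e_i
        s = np.zeros(n)
        s[i] = 1.0
        best_gap = min(best_gap, float(grad @ (x - s)))  # FW gap at x
        step = 2.0 / (k + 2.0)         # classical open-loop step-size rule
        x = (1.0 - step) * x + step * s  # convex combination stays feasible
    return x, best_gap

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
x, gap = frank_wolfe(A, b)
```

Because each update is a convex combination of feasible points, the iterates stay in the simplex without any projection step; the FW gap is nonnegative for convex objectives and upper-bounds the optimality gap, which is why it serves as a natural stopping criterion.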
