Research on a Class of Memoryless Non-Quasi-Newton Algorithms for Nonlinear Optimization Problems
Abstract
Finding fast and effective algorithms for nonlinear optimization problems has long been one of the most active research directions among optimization specialists. Theory and practice have established quasi-Newton methods and conjugate gradient methods as two of the most effective classes of algorithms for unconstrained optimization. The former have many advantages: each iteration requires only first derivatives, never the Hessian matrix; when H_k is positive definite, every direction the algorithm generates is a descent direction; and these algorithms possess quadratic termination. Under suitable conditions, [11,25,26,27,28] and others establish superlinear convergence of the Broyden family (excluding the DFP method), together with an n-step second-order convergence rate. The drawback of quasi-Newton algorithms is their large storage requirement, so for large-scale problems storage difficulties may arise. The basic idea of the conjugate gradient method is to combine conjugacy with steepest descent; it uses little memory and has quadratic termination and good numerical performance. However, when the objective is a general nonlinear function, convergence of the various conjugate gradient methods is hard to guarantee even under exact line search. Weighing the strengths and weaknesses of these two approaches, [3] proposed a class of memoryless quasi-Newton algorithms for unconstrained optimization. Compared with conjugate gradient methods for unconstrained optimization, the memoryless quasi-Newton method adds little in either memory or per-iteration cost, yet its computational performance is much better. Based on the non-quasi-Newton equation, and combining it with the memoryless quasi-Newton method of [3], this thesis presents a new class of algorithms for unconstrained nonlinear optimization. Numerical experiments show that these algorithms are computationally efficient and particularly well suited to solving large-scale optimization problems.
     Chapter 1 briefly introduces the formulation of optimization problems and the optimality conditions commonly used to identify optimal solutions, and reviews several classes of derivative-based descent algorithms for unconstrained optimization.
     Chapter 2 studies whether the memoryless quasi-Newton family is globally convergent for unconstrained optimization under a nonmonotone line search, and proves that the memoryless quasi-Newton algorithm is globally convergent when the objective function is convex.
     Chapter 3 derives the memoryless non-quasi-Newton formula and studies its global convergence under inexact line search when the objective function is uniformly convex. It then combines the memoryless non-quasi-Newton formula with the Perry-Shanno memoryless quasi-Newton formula to obtain a hybrid memoryless non-quasi-Newton update, and proves that, for convex objectives, this update is globally convergent under the Wolfe line search.
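The nonmonotone line search studied in Chapter 2 goes back to Grippo, Lampariello and Lucidi [10]: a step is accepted once it sufficiently decreases not the current function value but the maximum over the last M values, so individual iterations may go uphill. A minimal Python sketch of this acceptance rule, applied here to plain steepest descent purely for illustration (the function names and parameter values are ours, not the thesis's):

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, f_ref, delta=1e-4, beta=0.5, max_back=50):
    """Accept alpha when f(x + alpha*d) <= f_ref + delta*alpha*g'd,
    where f_ref is the max of the last M function values."""
    alpha, gd = 1.0, g @ d
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + delta * alpha * gd:
            return alpha
        alpha *= beta              # backtrack
    return alpha

def descent_with_nonmonotone_search(f, grad, x0, M=5, tol=1e-6, max_iter=1000):
    x = x0.astype(float)
    history = deque([f(x)], maxlen=M)   # last M function values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                          # steepest-descent direction for illustration
        alpha = nonmonotone_armijo(f, x, d, g, max(history))
        x = x + alpha * d
        history.append(f(x))
    return x
```

Because the reference value is a running maximum over the history, the method tolerates occasional increases of f, which is what permits longer steps than a monotone Armijo rule; with M = 1 the test reduces to the classical Armijo condition.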
Seeking algorithms with fast theoretical convergence and good practical performance for unconstrained optimization has long been a topic of great interest to optimization specialists and engineers. Favorable numerical experience and theory have established quasi-Newton methods and conjugate gradient methods as two of the most efficient approaches to unconstrained optimization. Quasi-Newton methods have clear advantages: they require only gradients, never the Hessian matrix, and they produce descent directions whenever the approximate Hessian is positive definite. It has been shown that the iterates generated by the Broyden class, except for the DFP method, converge superlinearly [11,26,27,28,29], with an n-step second-order rate. Their disadvantage is the large memory requirement, so for large-scale problems storage difficulties may arise. The basic idea of conjugate gradient methods is to combine conjugacy with steepest descent; they use little memory and have quadratic termination and good numerical performance. However, when the objective is a general nonlinear function, convergence of conjugate gradient methods can hardly be guaranteed even under exact line search. Weighing the advantages and disadvantages of the two approaches, [3] gives a class of memoryless quasi-Newton algorithms for unconstrained optimization. Compared with conjugate gradient methods for unconstrained optimization, the memoryless quasi-Newton algorithm adds little in either memory or per-iteration cost, yet its numerical performance is much better. Based on the non-quasi-Newton equation, and combining it with the memoryless quasi-Newton algorithm of [3], we give a new class of algorithms for unconstrained optimization. Numerical experiments indicate that these algorithms are very effective, particularly for solving large-scale optimization problems.
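The memoryless quasi-Newton idea of [3] can be made concrete: apply the BFGS inverse update to the identity matrix at every step, so the new direction is computed from the current gradient and the latest step pair (s, y) alone, with no n-by-n matrix stored. A minimal sketch, assuming a smooth objective and a simple bisection-type Wolfe line search (the function names are ours, not the thesis's):

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Bracketing/bisection search for a step satisfying the weak Wolfe conditions."""
    f0, g0d = f(x), grad(x) @ d
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + c1 * alpha * g0d:   # sufficient decrease fails
            hi = alpha
        elif grad(x + alpha * d) @ d < c2 * g0d:       # curvature condition fails
            lo = alpha
        else:
            return alpha
        alpha = 0.5 * (lo + hi) if hi < np.inf else 2.0 * lo
    return alpha

def memoryless_bfgs(f, grad, x0, tol=1e-6, max_iter=500):
    """Memoryless BFGS: each step applies the BFGS inverse update to H = I,
    so only the latest pair (s, y) is used -- no n x n matrix is stored."""
    x, g = x0.astype(float), grad(x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_line_search(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        rho = y @ s
        if rho > 1e-12:   # curvature condition keeps the update positive definite
            # d = -H g_new with H = (I - s y'/rho)(I - y s'/rho) + s s'/rho,
            # expanded into inner products:
            d = (-g_new
                 + (s @ g_new / rho) * y
                 + ((y @ g_new) / rho
                    - (s @ g_new) * (y @ y) / rho**2
                    - (s @ g_new) / rho) * s)
        else:
            d = -g_new    # fall back to steepest descent
        x, g = x_new, g_new
    return x
```

Expanding the update of the identity in this way costs only a few inner products per iteration, which is the memory and work profile of a conjugate gradient method; when rho = y's > 0, the updated H is positive definite, so d = -Hg is a descent direction.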
    In Chapter 1, we first introduce the formulation of optimization problems and the optimality conditions commonly used to identify an optimal solution, and review several classes of derivative-based descent algorithms for unconstrained optimization.
References
[1] J. M. Perry. A class of conjugate gradient algorithms with a two-step variable metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University, 1977.
    [2] D. F. Shanno. On the convergence of a new conjugate gradient algorithm. SIAM J. Numer. Anal., 15 (1978), pp. 1247-1257.
    [3] D. F. Shanno. Conjugate gradient methods with inexact searches. Mathematics of Operations Research, 3 (1978), pp. 244-256.
    [4] M. J. D. Powell. Global convergence of conjugate gradient algorithms. Math. Prog., 33 (1985), pp. 61-67.
    [5] M. J. D. Powell. Restart procedures for the conjugate gradient method. Math. Prog., 12 (1977), pp. 241-254.
    [6] J. Han, G. Liu, H. Yin. Convergence of Perry and Shanno's memoryless quasi-Newton method for nonconvex optimization problems. OR Transactions, 1997, 1(1): 22.
    [7] Y. Yuan. Numerical Methods for Nonlinear Programming. Shanghai Scientific & Technical Publishers, 1993 (in Chinese).
    [8] J. Y. Han, G. H. Liu. Global convergence analysis of a new nonmonotone BFGS algorithm on convex objective functions. Computational Optimization and Applications, 7 (1997), pp. 277-289.
    [9] Ph. L. Toint. Global convergence of the partitioned BFGS algorithm for convex partially separable optimization. Mathematical Programming, 77 (1997), pp. 69-94.
    [10] L. Grippo, F. Lampariello, S. Lucidi. A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis, 23 (1986), pp. 707-716.
    [11] S. Xi. Nonlinear Optimization Methods. Higher Education Press, 1992 (in Chinese).
    [12] N. I. Djuranovic-Milicic. On a modification of a stepsize algorithm. European J. of O.R., 31 (1987), pp. 66-70.
    [13] J. Ortega, W. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, 1970.
    [14] D. C. Xu. Global convergence of the Broyden class of quasi-Newton methods with nonmonotone line search. Acta Mathematica Sinica, English Series, 19 (2003), pp. 1924.
    [15] Y. Yuan. A modified BFGS algorithm for unconstrained optimization. IMA Journal of Numerical Analysis, 11 (1991), pp. 325-332.
    [16] Y. Zhao, Y. Duan. Derivation and properties of the pseudo-Newton-B family. Communication on Applied Mathematics and Computation, 10(1) (1996), pp. 82-90 (in Chinese).
    [17] Y. Zhao, Z. Yi. Derivation and global convergence of the pseudo-Newton-δ family. Journal on Numerical Methods and Computer Applications, 3(1) (1995), pp. 53-62 (in Chinese).
    [18] L. Chen, B. Jiao. A class of non-quasi-Newton methods and its convergence. Comm. on Appl. and Comput., 11(2) (1997), pp. 9-17 (in Chinese).
    [19] J. Yu, B. Jiao. Convergence of the Perry-Shanno memoryless quasi-Newton method under nonmonotone line search. To appear (in Chinese).
    [20] S. S. Oren. Self-scaling variable metric algorithm, Part II. Management Sci., 20 (1979), pp. 845-862.
    [21] R. Fletcher. Practical Methods of Optimization, 2nd ed. John Wiley & Sons, Chichester, 1987.
    [22] E. Polak. Optimization: Algorithms and Consistent Approximations. Springer-Verlag New York Inc., 1997.
    [23] L. Grippo, S. Lucidi. A globally convergent version of the Polak-Ribière conjugate gradient method. Mathematical Programming, 78 (1997), pp. 375-391.
    [24] G. H. Liu, J. Y. Han, H. D. Qi, Z. L. Xu. Convergence analysis on a class of conjugate gradient methods. Acta Math. Scientia, 18 (1998), pp. 11-16.
    [25] G. Liu, J. Han. Global convergence of the Broyden family with a class of inexact line searches. Mathematica Numerica Sinica, 1996, no. 3, pp. 233-240 (in Chinese).
    [26] J. Han, G. Liu. A general model of line search for unconstrained optimization and global convergence of the BFGS method. Acta Mathematicae Applicatae Sinica, 18(1) (1995), pp. 112-122 (in Chinese).
    [27] R. Byrd, J. Nocedal, Y. Yuan. Global convergence of a class of quasi-Newton methods on convex problems. SIAM J. Numer. Anal., 24 (1987), pp. 1171-1190.
    [28] R. Byrd, J. Nocedal. A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal., 26 (1989), pp. 727-739.
    [29] L. Chen, B. Jiao. Convergence properties of the preconvex part of the non-quasi-Newton family. Mathematica Numerica Sinica, 22(3) (2000), pp. 369-378 (in Chinese).
    [30] J. J. Moré, B. S. Garbow, K. E. Hillstrom. Testing unconstrained optimization software. ACM Trans. Math. Software, 7 (1981), pp. 17-41.
    [31] M. Al-Baali. Descent property and global convergence of the Fletcher-Reeves method with inexact line searches. IMA Journal of Numerical Analysis, 5(1) (1985), pp. 121-124.
    [32] M. J. D. Powell. Nonconvex minimization calculations and the conjugate gradient method. Report DAMTP 1983/NA14, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England, 1983.
    [33] E. Polak. Optimization: Algorithms and Consistent Approximations. Springer-Verlag New York Inc., 1997.
