Research on Super-Memory Gradient Algorithms and GLP Gradient Projection Algorithms for Nonlinear Optimization
Abstract
Optimization methods form an important part of operations research and are widely applied in natural science, social science, industrial production, engineering design, and modern management. Many practical problems can be reduced to optimization problems, and a core issue in studying them is the design of efficient algorithms.
    This PhD thesis studies super-memory gradient methods for nonlinear optimization. First, a new class of three-term memory gradient methods for unconstrained optimization is presented, its global convergence properties are analyzed, and numerical experiments confirm its efficiency. Second, the new super-memory gradient methods are extended to constrained optimization by means of projection techniques, including the Rosen projection matrix, the generalized projection matrix, and the GLP (Goldstein-Levitin-Polyak) projection; the convergence of the resulting methods is analyzed and they are tested numerically.
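    For orientation, one common form of the three-term (super-)memory gradient iteration in this literature is

\[
  x_{k+1} = x_k + \alpha_k d_k, \qquad
  d_k =
  \begin{cases}
    -g_k, & k = 0,\\
    -g_k + \beta_k d_{k-1} + \gamma_k g_{k-1}, & k \ge 1,
  \end{cases}
\]

where $g_k = \nabla f(x_k)$, $\alpha_k$ is the step size, and the scalars $\beta_k, \gamma_k$ are restricted so that $d_k$ remains a sufficient descent direction, i.e. $g_k^{\mathsf{T}} d_k \le -c\,\|g_k\|^2$ for some $c > 0$. This is a generic form from the memory gradient literature; the thesis derives its own exact parameter ranges.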
    The thesis makes six main contributions.
    1. A new class of three-term memory gradient methods with an Armijo-like step-size rule for unconstrained optimization is presented, together with ranges for the direction parameters that guarantee a sufficient descent direction of the objective function. Global convergence is proved without assuming that the sequence of iterates is bounded, and stronger convergence results are established when the objective function is convex, pseudo-convex, or quasi-convex. By combining the quasi-Newton equation with the new method, a modified quasi-Newton algorithm is obtained that requires only vector storage and is globally convergent. Numerical results show that the new algorithms are efficient. A minimal code sketch of the scheme is given after this list of contributions.
    2. By using the Rosen projection matrix, a new three-term memory gradient projection method is presented for nonlinear programming with linear or nonlinear inequality constraints, and its global convergence is proved. Combining the FR, PR, and HS conjugate gradient parameters with the new method yields three further classes of three-term memory gradient projection methods, thereby extending the classical conjugate gradient methods to constrained programming. Numerical results illustrate that the new methods are effective. The standard form of the Rosen projection is recalled after the list.
    3. By using the generalized projection matrix, a three-term memory gradient generalized projection method is presented for nonlinear programming with nonlinear equality and inequality constraints, and its global convergence is proved under weak conditions. Numerical results illustrate that the method is effective.
    4. By combining the generalized projection matrix with a technique for handling arbitrary initial points, a three-term memory gradient generalized projection method with an arbitrary initial point is presented for nonlinear programming with nonlinear inequality constraints, and its global convergence is proved under weak conditions. Numerical results illustrate that the method is effective. A schematic form of such a direction is sketched after the list.
    5. A modified GLP gradient projection descent method is constructed for nonlinear programming over a closed convex set, using the Goldstein-Levitin-Polyak projection. Global convergence is proved under an exact one-dimensional line search, without assuming that the sequence of iterates is bounded, and stronger convergence results are established. Numerical results show that the algorithm is efficient.
    6. A modified GLP gradient projection descent method is constructed for nonlinear programming over a closed convex set under a generalized Armijo line search. Global convergence is proved without assuming that the sequence of iterates is bounded, and stronger convergence results are established. Numerical results illustrate that the method is effective. A minimal sketch of the underlying GLP iteration follows the list.
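    Regarding point 1: the following minimal Python sketch illustrates the general three-term memory gradient scheme with Armijo backtracking. The memory weights and the descent safeguard are illustrative placeholders, not the parameter rules derived in the thesis.

# Minimal sketch of a three-term memory gradient method with an
# Armijo-type backtracking line search (illustrative parameters).
import numpy as np

def three_term_memory_gradient(f, grad, x0, tol=1e-6, max_iter=1000,
                               rho=0.5, sigma=1e-4):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d_prev, g_prev = -g, g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        beta, gamma = 0.1, 0.1            # placeholder memory weights
        d = -g + beta * d_prev + gamma * (-g_prev)
        if g @ d >= 0:                    # safeguard: keep d a descent direction
            d = -g
        a = 1.0                           # Armijo backtracking on the step size
        while f(x + a * d) > f(x) + sigma * a * (g @ d):
            a *= rho
        d_prev, g_prev = d, g
        x = x + a * d
        g = grad(x)
    return x

# Example: minimize the strictly convex quadratic f(x) = 0.5 ||x||^2.
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
print(three_term_memory_gradient(f, grad, np.array([3.0, -4.0])))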
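    Regarding point 2: the Rosen projection has the standard form

\[
  P_k = I - A_k^{\mathsf{T}} \bigl( A_k A_k^{\mathsf{T}} \bigr)^{-1} A_k,
  \qquad d_k = -P_k g_k,
\]

where the rows of $A_k$ are the gradients of the constraints active at $x_k$ (assumed linearly independent), so that $-P_k g_k$ is the steepest descent direction restricted to the tangent space of the active constraints; the memory gradient terms of the new methods are built on top of this projection.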
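    Regarding points 3 and 4: one common schematic shape for a generalized projection direction that also tolerates an infeasible starting point is

\[
  P_k = I - N_k \bigl( N_k^{\mathsf{T}} N_k \bigr)^{-1} N_k^{\mathsf{T}},
  \qquad
  d_k = -P_k g_k - N_k \bigl( N_k^{\mathsf{T}} N_k \bigr)^{-1} \tilde{g}(x_k),
\]

where the columns of $N_k$ are the gradients of the ($\varepsilon$-)active constraints at $x_k$ and $\tilde{g}(x_k)$ collects their values; the second term pushes an infeasible iterate back toward the feasible set. This is a sketch from the generalized projection literature, not the thesis's exact construction.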
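    Regarding points 5 and 6: the underlying GLP iteration is $x_{k+1} = P_{\Omega}(x_k - \alpha_k g_k)$, where $P_{\Omega}$ is the Euclidean projection onto the closed convex set $\Omega$. The sketch below specializes it to box constraints with a backtracking search along the projection arc; the thesis's modified algorithms add memory gradient terms and are analyzed under weaker assumptions.

# Minimal GLP (Goldstein-Levitin-Polyak) projected gradient sketch for
# min f(x) over a box Omega = [lo, hi]^n (illustrative only).
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

def glp_projected_gradient(f, grad, x0, lo, hi, tol=1e-8,
                           max_iter=1000, rho=0.5, sigma=1e-4):
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        a = 1.0
        x_new = project_box(x - a * g, lo, hi)
        # x is stationary iff it is a fixed point of the projected step.
        if np.linalg.norm(x_new - x) < tol:
            break
        # Backtrack along the projection arc until sufficient decrease holds.
        while f(x_new) > f(x) - (sigma / a) * np.linalg.norm(x_new - x) ** 2:
            a *= rho
            x_new = project_box(x - a * g, lo, hi)
        x = x_new
    return x

# Example: minimizing f(x) = ||x - c||^2 over [0, 1]^2 with c = (2, -1)
# computes the projection of c onto the box, namely (1, 0).
c = np.array([2.0, -1.0])
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2.0 * (x - c)
print(glp_projected_gradient(f, grad, np.zeros(2), 0.0, 1.0))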
