Exact Penalty Function Methods for Solving Nonlinear Constrained Optimization Problems
Abstract
The exact penalty function method is an important approach to solving nonlinear constrained optimization problems. In theory, an exact penalty method only needs to solve the penalty problem for some finite value of the penalty parameter in order to obtain a solution of the constrained problem, thereby avoiding the ill-conditioning that arises when the penalty parameter tends to infinity. Exact penalty functions fall into two classes: nondifferentiable and continuously differentiable. In general, a simple exact penalty function is necessarily nondifferentiable, which prevents fast local convergence in some rapidly convergent algorithms and gives rise to the "Maratos effect". Continuously differentiable exact penalty functions overcome this drawback and therefore have better properties; the augmented Lagrangian function is one such special continuously differentiable exact penalty function.
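For concreteness (this standard formula is not quoted from the thesis), the classical l1 penalty for the problem min f(x) subject to g_i(x) ≤ 0, i = 1,…,m, and h_j(x) = 0, j = 1,…,p, is a typical simple exact penalty function:

```latex
% Classical l1 exact penalty function (a "simple" exact penalty):
F_{\sigma}(x) \;=\; f(x) \;+\; \sigma\Bigl(\sum_{i=1}^{m}\max\{0,\,g_i(x)\}
                \;+\; \sum_{j=1}^{p}\lvert h_j(x)\rvert\Bigr).
```

Under standard constraint qualifications, minimizing F_σ for a single finite σ above a threshold (roughly, the size of the Lagrange multipliers) recovers the constrained minimizers, which is the exactness property described above; its nondifferentiability, at the kinks of max{0,·} and |·|, is what can force such a merit function to reject steps that would otherwise give fast local convergence, producing the Maratos effect.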
First, for a general nonlinear constrained optimization model, this thesis proposes a new nonlinear Lagrangian function, discusses its properties at KKT points, proves that under suitable conditions the sequence of iterates generated by the dual algorithm based on this function is locally convergent, and then gives an error estimate for the solution in terms of the penalty parameter. This provides another new approach to solving nonlinear constrained optimization problems.
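The proposed nonlinear Lagrangian itself is not reproduced in this abstract. As a hedged sketch of the general shape of such a dual (multiplier) algorithm, the classical first-order multiplier iteration for equality constraints, with the Hestenes-Powell augmented Lagrangian standing in for the new function, reads:

```latex
% Illustrative dual iteration (Hestenes--Powell augmented Lagrangian as a stand-in):
L_{c}(x,\lambda) \;=\; f(x) + \lambda^{\top}h(x) + \tfrac{c}{2}\,\lVert h(x)\rVert^{2},
\qquad
x^{k+1} \in \operatorname*{arg\,min}_{x} L_{c}(x,\lambda^{k}),
\qquad
\lambda^{k+1} \;=\; \lambda^{k} + c\,h(x^{k+1}).
```

Local convergence results of the type stated above typically require the penalty parameter to be large enough and the starting pair (x^0, λ^0) to be close enough to a KKT pair, and they bound the distance from the iterates to the solution in terms of the penalty parameter.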
Next, a twice-differentiable smooth approximation is constructed for the nonsmooth penalty function of the above model, and error estimates are given among the optimal values of the original optimization problem, the corresponding nonsmooth penalty problem, and the smoothed penalty problem. An algorithm based on the smoothed penalty function is then designed and proved to be globally convergent under suitable conditions; finally, numerical experiments illustrate the effectiveness of the algorithm.
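The particular smoothing used in the thesis is not stated in the abstract; one standard infinitely (hence twice) differentiable smoothing of t ↦ max{0, t} that yields error estimates of the kind described is the log-exponential approximation:

```latex
% Log-exponential (softplus) smoothing of max{0,t}:
p_{\varepsilon}(t) \;=\; \varepsilon\,\ln\!\bigl(1 + e^{t/\varepsilon}\bigr),
\qquad
0 \;<\; p_{\varepsilon}(t) - \max\{0,t\} \;\le\; \varepsilon\ln 2
\quad\text{for all } t\in\mathbb{R},\ \varepsilon>0 .
```

Replacing max{0, g_i(x)} by p_ε(g_i(x)) in an l1-type penalty with m inequality constraints perturbs the penalty function by at most mσε ln 2 uniformly in x, so the optimal values of the smoothed and nonsmooth penalty problems differ by at most the same amount; letting ε → 0 then recovers the nonsmooth penalty value, which mirrors the structure of the error estimates mentioned above.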
Finally, for cone optimization problems, an iterative algorithm is given that uses the augmented Lagrangian function, a special exact penalty function. The algorithm is shown to enjoy a weaker form of global convergence: the notion of an ε-global optimal solution is introduced, at each iteration k a corresponding εk-global optimal solution is obtained, and this sequence converges to an ε-global optimal solution of the original problem, which establishes the ε-global convergence of the algorithm.
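The abstract gives no formal definitions, so the following is only one natural reading of the terminology. Writing the k-th augmented Lagrangian subproblem as min over x in X of L_{c_k}(x, λ^k):

```latex
% epsilon_k-global optimality of the iterate x^k (assumed reading):
L_{c_k}(x^{k},\lambda^{k}) \;\le\; \inf_{x\in X} L_{c_k}(x,\lambda^{k}) \;+\; \varepsilon_k ,
% and epsilon-global optimality of a limit point x^* for the original cone-constrained problem:
x^{*}\ \text{feasible}, \qquad
f(x^{*}) \;\le\; \inf\{\,f(x) : x\ \text{feasible}\,\} \;+\; \varepsilon .
```

In this reading, ε-global convergence of the algorithm means that the sequence of εk-global solutions of the subproblems converges to such an ε-global optimal point as k → ∞.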