Research on Applications of Computational Intelligence in Control, Optimization, and Decision-Making
Abstract
After more than half a century of development, optimization theory and the associated optimization algorithms have matured. Built on operational research and mathematical programming tools, these algorithms are theoretically complete, efficient, and stable, and are therefore widely used wherever optimization is required. However, they are essentially gradient-based; some even require the objective function to be twice continuously differentiable, and in general they cannot handle discrete variables (mixed-integer programming aside), which severely limits their range of application. Moreover, gradient-based algorithms depend heavily on the choice of initial point and cannot guarantee convergence to the global optimum, or even to an approximate global optimum. As society and technology advance, these algorithms run into difficulty on more complex problems, in particular the multilevel optimization problems that arise in the study of uncertain systems. In such cases one often turns to computational intelligence.
     The goal of this thesis is to explore applications of computational intelligence in optimization, covering not only conventional single-level optimization but also multilevel optimization as represented by minimax problems, and considering both the effectiveness and the stability of the algorithms. For general nonlinear optimization with equality and inequality constraints, a neural-dynamics solution model is proposed; for multimodal nonlinear optimization, the particle swarm algorithm is improved to obtain better approximate global optima. Subsequently, both neural networks and evolutionary computation are used to design algorithms for the minimax bilevel optimization problem, yielding two algorithms with different characteristics. Building on the ability to solve minimax problems, related application areas are explored. For example, in uncertain systems, the minimax algorithm is used to compute regret, and interval-number optimization problems are transformed into equivalent multi-objective optimization problems for solution. Another application example is robust PID design: the controller parameters are tuned by minimax computation so that system performance remains satisfactory even under the worst operating conditions.
     The main research work and contributions of this thesis are as follows:
     1. The history and current state of applications of computational intelligence to optimization are analyzed and surveyed in detail, and the idea of extending optimization to uncertain systems is proposed. Particular attention is given to an important branch of parameter-uncertain systems, namely the optimization of systems whose uncertain parameters are described by interval numbers.
     2. Neural-network dynamics are applied to single-objective optimization. General nonlinear optimization with both equality and inequality constraints is a difficult case of single-level optimization. The augmented Lagrange multiplier method avoids the drawback of an unboundedly growing penalty parameter, but it introduces a subproblem that is itself hard to solve. Using a Lyapunov function and LaSalle's invariance principle, this thesis designs a neural network model based on a differential dynamical system for the subproblem of the augmented Lagrange multiplier method, so that general nonlinear optimization problems with first-order differentiable objective and constraint functions can be solved.
     3. Obtaining the global optimum, or an approximate global optimum, is a difficult issue in single-level optimization. Evolutionary algorithms are popular because they can largely avoid being trapped in local optima; the particle swarm algorithm is a bio-inspired algorithm developed in recent years. However, the parameter settings of evolutionary algorithms are usually obtained from comparative experiments and lack theoretical justification. This thesis analyzes the trajectory of an individual particle and the stability of the whole swarm, and on this basis proposes a new parameter-setting method. The new method is theoretically justified, and tests on a standard benchmark suite show that the new algorithm is effective. Stability analyses of this kind are rare in the particle swarm literature.
     4. Modeling and solving uncertain systems ultimately depends on solving minimax problems. For minimax optimization, a nearly unexplored class of problems, this thesis designs several algorithms. The first is a neural network algorithm: neural networks compute quickly and lend themselves to circuit implementation, and the stability of the algorithm is proved. The second is based on evolutionary computation: it combines the strengths of genetic algorithms and the simplex method, and is mainly applied to global minimax optimization with non-differentiable objective functions.
     5. For one class of problems in uncertain systems, namely interval-number programming problems whose parameters are modeled as interval numbers, this thesis, inspired by the "Wuli-Shili-Renli (WSR)" [26] systems-science methodology of researcher Gu Jifa, originally proposes a tri-objective robust optimization problem that combines the expectation, the uncertainty degree, and the regret of the objective function; this problem can serve as a surrogate for the original uncertain-system optimization problem. In addition, a new transformation criterion is proposed for handling inequality constraints containing interval-number parameters.
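The regret computation at the heart of this transformation can be sketched on a toy instance. The intervals, candidate set, and vertex-enumeration shortcut below are illustrative assumptions, not the thesis's algorithm (which computes regret by minimax optimization rather than enumeration):

```python
from itertools import product

# Sketch: minimax-regret choice among a finite set of candidate solutions for
# a linear cost c.x whose coefficients are only known to lie in intervals.
# For a linear cost, regret(x, c) = c.x - min_z c.z is convex in c, so its
# maximum over a coefficient box is attained at a vertex; small instances can
# therefore be solved by vertex enumeration. All numbers are illustrative.

intervals = [(1.0, 3.0), (2.0, 4.0)]                 # c1 in [1,3], c2 in [2,4]
candidates = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]    # feasible solutions

def cost(c, x):
    return sum(ci * xi for ci, xi in zip(c, x))

def max_regret(x):
    worst = 0.0
    for c in product(*intervals):                    # vertices of the box
        best = min(cost(c, z) for z in candidates)   # best response under c
        worst = max(worst, cost(c, x) - best)
    return worst

robust = min(candidates, key=max_regret)             # minimax-regret solution
print(robust, max_regret(robust))
```

Here the first candidate is the robust choice: its cost can exceed the best achievable cost by at most 1 over the whole coefficient box.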
     6. Because of parameter uncertainty and time delay, a well-tuned controller may fail when the operating environment fluctuates. For such industrial processes, this thesis studies robust PID controller design for uncertain frequency-domain systems with time delay, based on an LQR performance index (which is closely related to the damping coefficient and the natural frequency). A controller designed in this way performs about as well as other controllers under normal working conditions, and still controls the process smoothly when conditions change to the point where other controllers lose control.
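The minimax tuning idea can be sketched as a worst-case grid search. The first-order plant, the sampled gain set, the candidate PI gains, and the integrated-squared-error cost below are illustrative stand-ins for the thesis's LQR-based frequency-domain design:

```python
# Sketch of minimax controller tuning: choose PI gains that minimize the
# worst-case integrated squared error (ISE) over an interval of uncertain
# plant gains, here sampled on a small grid. Toy plant and numbers only.

def ise(kp, ki, k_plant, tau=1.0, dt=0.01, t_end=10.0):
    # Euler simulation of a first-order plant  tau*y' = -y + k_plant*u
    # under PI control tracking a unit step; returns integral of e^2 dt.
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + k_plant * u) / tau
        cost += e * e * dt
    return cost

gain_samples = [0.5, 1.0, 2.0]   # uncertain plant gain sampled in its interval
designs = [(kp, ki) for kp in (0.5, 1.0, 2.0, 4.0) for ki in (0.5, 1.0, 2.0)]

def worst_case(d):
    return max(ise(d[0], d[1], k) for k in gain_samples)

kp_r, ki_r = min(designs, key=worst_case)   # minimax (robust) design
print((kp_r, ki_r), worst_case((kp_r, ki_r)))
```

The selected gains are those whose performance degrades least in the most unfavorable sampled plant, which is the essence of the worst-case guarantee described above.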
After half a century of development, the conventional optimization algorithms based on operational research theory and mathematical programming tools have matured. Such algorithms are widely used in many fields thanks to their high efficiency and robustness. However, they usually require the objective function to be continuous or even high-order differentiable. Moreover, they depend heavily on the initial search point and cannot guarantee a global solution. For difficult problems, especially those arising in uncertain-system optimization, one often resorts to computational intelligence.
    The objective of this thesis is the application of computational intelligence to optimization: not only single-level optimization but also multilevel optimization, with both the efficiency and the stability of the algorithms taken into account. For general nonlinear optimization with constraints, a neural network model and a revised PSO algorithm are proposed. For the minimax bilevel optimization problem, neural network and genetic algorithm solutions are provided, respectively. The minimax optimization is then applied to uncertain-system analysis: the regret index is computed with the minimax algorithm, and the interval program is translated into an equivalent multi-objective program. Another application of minimax optimization is robust PID controller design, by which control quality is guaranteed even under the worst working conditions.
    The main contributions of this thesis are as follows:
    1. The history and current research progress of computational intelligence are surveyed, especially algorithms applied to optimization with interval-number parameters, an important branch of uncertain systems.
    2. A neural network solver for the augmented Lagrange multiplier (ALM) method is provided, which applies widely to constrained nonlinear optimization problems. A dynamic approach to the minimization subproblem of the ALM method is discussed, and a neural network iterative algorithm is then proposed for general constrained nonlinear optimization.
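As a minimal sketch of this idea (a toy problem with illustrative step sizes and penalty value, not the thesis's network), the inner ALM subproblem can be driven by a gradient-flow dynamic, the continuous-time analogue of a neural circuit, while an outer loop applies the first-order multiplier update:

```python
# Augmented-Lagrange scheme whose inner minimization subproblem is solved by
# Euler-integrating the gradient flow dx/dt = -grad_x L(x, lam; c), shown on
# min f(x)=x1^2+x2^2 subject to h(x)=x1+x2-1=0.
# The true minimizer is x = (0.5, 0.5) with multiplier lam = -1.

def grad_f(x):
    return [2 * x[0], 2 * x[1]]

def h(x):
    return x[0] + x[1] - 1.0

def alm_gradient_flow(x, lam=0.0, c=10.0, dt=0.01, outer=20, inner=500):
    for _ in range(outer):
        # Inner loop: gradient flow of L = f + lam*h + (c/2)*h^2.
        for _ in range(inner):
            gf = grad_f(x)
            gh = [1.0, 1.0]                 # gradient of h
            s = lam + c * h(x)
            x = [xi - dt * (gfi + s * ghi)
                 for xi, gfi, ghi in zip(x, gf, gh)]
        # Outer loop: first-order multiplier update (Hestenes/Powell rule).
        lam = lam + c * h(x)
    return x, lam

x_star, lam_star = alm_gradient_flow([2.0, -1.0])
print(x_star, lam_star)   # close to [0.5, 0.5] and -1
```

Because the multiplier is updated rather than the penalty inflated, the penalty parameter c can stay fixed, which is the advantage of the ALM route noted above.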
    
    
    
    3. A PSO with an increasing inertia weight, as distinct from the widely used PSO with a decreasing inertia weight, is proposed for single-level optimization problems. Rather than drawing conclusions from purely empirical studies or rules of thumb, the algorithm is derived from a particle-trajectory study and a convergence analysis. Four standard test functions with asymmetric initial range settings are used to confirm its validity. The experiments show that the PSO with an increasing inertia weight outperforms the one with a decreasing inertia weight in both convergence speed and solution precision, with no additional computational load.
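A minimal PSO with a time-increasing inertia weight can be sketched as follows; the schedule endpoints, acceleration coefficients, swarm size, and asymmetric initial range are illustrative defaults, not the settings derived in the thesis:

```python
import random

# Minimal PSO sketch with a linearly increasing inertia weight, minimizing
# the 2-D sphere function f(x) = sum(x_i^2). Particles start in the
# asymmetric range [5, 10]^2, away from the optimum at the origin.

def sphere(x):
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n=20, iters=200, w0=0.4, w1=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(5.0, 10.0) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = w0 + (w1 - w0) * t / (iters - 1)   # inertia weight grows in time
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest

best = pso(sphere)
print(best, sphere(best))
```

Swapping the schedule direction (w1 < w0) reproduces the conventional decreasing-weight variant, so the two strategies can be compared within the same code.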
    4. The minimax problem is a significant topic in signal processing and process control, relevant to robustness, parameter uncertainty, signal noise, etc. However, efficient algorithms are scarce, especially for the general minimax problem with nonlinear equality and inequality constraints. A novel neural network for the general minimax problem is constructed based on a penalty-function approach; the only requirement on the objective and constraint functions is first-order differentiability, and a Lyapunov function is established for the global stability analysis. The SGA (simplex-genetic algorithm), an improved genetic algorithm for finding the Stackelberg-Nash equilibrium, is also proposed for minimax optimization.
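The continuous-time flavor of such a minimax network can be illustrated with a simple descent-ascent dynamic on a convex-concave toy function; this is an illustrative sketch, not the thesis's penalty-based construction:

```python
# Euler-integrated saddle-point dynamic for min_x max_y f(x, y), the kind of
# flow a minimax neural network realizes in continuous time. Here
# f(x, y) = x^2 + x*y - y^2 is strongly convex in x and strongly concave
# in y, with unique saddle point at (0, 0).

def saddle_flow(x=3.0, y=-2.0, dt=0.01, steps=5000):
    for _ in range(steps):
        gx = 2 * x + y        # df/dx: descend in x
        gy = x - 2 * y        # df/dy: ascend in y
        x, y = x - dt * gx, y + dt * gy
    return x, y

x_s, y_s = saddle_flow()
print(x_s, y_s)   # both near 0
```

For this convex-concave case the flow spirals into the saddle point; proving such convergence in general is exactly what the Lyapunov analysis mentioned above provides.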
    5. For uncertain optimization with interval coefficients in the objective function, a robust optimization framework incorporating the concept of "regret" is proposed. The framework is inspired by the "Wuli-Shili-Renli" (WSR) methodology [26] of J. Gu. Through this method an uncertain optimization problem may be transformed into a tri-objective robust optimization problem combining the expectation, the uncertainty degree, and the regret of the objective; new transformation criteria are also given for inequality constraints with interval coefficients.
    6. For industrial processes with parameter uncertainty and time delay, robust PID controller design based on an LQR performance index is studied; the controller tuned by minimax computation keeps satisfactory performance even in the worst working conditions.
References
[1]陈士俊,孙永广,吴宗鑫,顾阿伦.求解线性不等式组的仿射梯度算法.系统工程学报,17(2):155-160,2002.
[2]张青富,焦李成,保铮.一种新的求解线性规划的神经网络.电子学报,20(10):44-49,1992.
    [3]蔡国昌,何怡刚,吴杰.一种求解线性规划的神经网络.湖南大学学报,23(3):87-91,1996.
    [4]叶仲泉,张邦礼,曹长修.Minimax神经网络收敛性分析.信息与控制,26(1):1-6,1997.
    [5]陶卿,王常波,方廷健.一种求解闭凸集上二次规划问题的神经网络模型.模式识别与人工智能,11(1):7-11,1998.
    [6]黄远灿,孙圣和,韩京清.基于lagrange乘子法的非线性规划神经网络.电子学报,26(1):24-28,1998.
[7]曾孝平,周家启,杨士中.Hopfield模型的二次规划神经网络动态性能的研究.电路与系统学报,4(1):42-50,1999.
    [8]邹锐,郭怀成,刘磊.不确定性条件下经济开发区环境规划方法与应用研究(i)(ii).北京大学学报(自然科学版),35:794-808,1999.
    [9]陶卿,曹进德,方廷健.非线性规划神经网络模型.电子科学学刊,22(3):429-433,2000.
    [10]陶卿,刘欣,方廷健.一类求解约束非线性规划问题的神经网络模型.生物数学学报,15(1):1-7,2000.
    [11]李绍军,王惠,姚平经.求解全局优化的遗传(ga)-alopex算法的研究.信息与控制,29(4),2000.
    [12]王伟,张晶涛,柴天佑.Pid参数先进整定方法综述.自动化学报,26(3):347-355,2000.
    [13]殷春霞,胡铁松,郭元裕.多目标线性规划的t-h网络方法及其应用.武汉水利电力大学学报,33(3):98-103,2000.
    [14]张国平,王正欧,袁国林.一个新的线性规划通用神经网络模型.天津大学学报,34(4):459-462,2001.
    [15]郑泳凌,马龙华,钱积新.鲁棒pid控制器参数整定方法.化工自动化及仪表,28(5):14-16,2001.
    [16]李晓磊,邵之江,钱积新.一种基于动物自治体的寻优模式:鱼群算法.系统工程理论与实践,22(11):32-38,2002.
    
    
    [17]陶卿,任富兴,孙德敏.求解混合约束非线性规划的神经网络模型.软件学报,13(2):304-310,2002.
    [18]祝世京,罗云峰,王书宁,陈珽.具有不确定参数多目标决策的一类鲁棒有效解.自动化学报,24(3):394-399,1998.
    [19]文福拴,韩祯祥.一类非线性规划人工神经网络模型.浙江大学学报(自然科学版),26(5):506-512,1992.
    [20]夏又生,吴新余.解线性及二次型规划问题增广的神经网络.电子学报,23(1):67-72,1995.
    [21]夏又生,叶大振.求解线性等式与不等式组的神经网络模型.南京邮电学院学报,16(4):101-103,1996.
[22]杨若黎,吴沧浦.一种新的非线性规划神经网络模型.自动化学报,22(3):293-300,1996.
    [23]黄西士,吴沧浦.精确求解一类动态规划问题的神经网络.控制与决策,11(2):261-265,1996.
    [24]胡铁松,郭元裕.一种求解线性规划问题的神经网络方法.武汉水利电力大学学报,30(5):80-82,1997.
    [25]谭营,何振亚.求解一类特殊优化问题的模拟人工神经网络方法.电路与系统学报,2(1):15-19,1997.
    [26]顾基发,高飞.从管理科学角度谈物理—事理—人理系统方法论.系统工程理论与实践,18(8):1-5,1998.
    [27]刘宝碇,赵瑞清.随机规划和模糊规划.清华大学出版社,1998.
    [28]宋玉阶,吴怀宇.非线性优化问题的一种神经网络模型.湖北工学院学报,13(3):68-72,1998.
    [29]陈士华,陆君安.混沌动力学初步.武汉水利电力大学出版社,1998.
    [30]王知人,侯培国.求解线性规划问题的神经网络模型.燕山大学学报,23(3):249-251,1999.
    [31]田大钢,费奇.一种新的线性规划问题的神经网络解法.自动化学报,25(5):709-712,1999.
    [32]胡铁松,郭元裕.多目标动态规划的神经网络方法.电子学报,27(10):70-73,1999.
    
    
    [33]侯增广,吴沧浦.一种基于动态规划策略的离散动态大系统递阶优化神经网络.自动化学报,25(1):45-51,1999.
    [34]刘新旺,达庆利.一种区间数线性规划的满意解.系统工程学报,14(2),1999.
    [35]樊治平,张全.一种不确定性多属性决策模型的改进.系统工程的理论与实践,19(12),1999.
    [36]杜丽莉,高兴宝.线性多目标规划的神经网络方法.陕西师范大学学报,28(4):15-18,2000.
    [37]阎平凡,张长水.人工神经网络与模拟进化计算.清华大学出版社,2000.
    [38]陶卿,方廷健.求解约束minimax问题的神经网络模型.控制理论与应用,17(1):82-84,2000.
    [39]张菊亮,章祥荪.一个新的解线性规划的神经网络.运筹学学报,5(2):46-54,2001.
    [40]宋荣方,毕光国.线性约束凸规划神经网络.信号处理17(2):104-109,2001.
    [41]郭立山,沈祖诒.约束非线性规划的神经网络算法.运筹与管理,10(3):51-54,2001.
    [42]盛昭瀚,马军海.非线性动力系统引论.科学出版社,2001.
    [43]马威,王正欧.神经网络融合信赖域求解非线性规划的新方法.天津大学学报,35(6):705-709,2002.
    [44]王先甲等.分离样本与复合样本统计证据推断的一致性.控制与决策,11(6):662-666,1996.
    [45]陈挺.决策分析.科学出版社,1987.
    [46]宣家骥.多目标决策.湖南科学技术出版社,1988.
    [47]陈宝林.最优化理论与算法.清华大学出版社,1989.
    [48]钱学森.一个科学的新领域—开放的复杂巨系统及其方法论,chapter科学决策与系统工程,pages 1-8.中国科学技术出版社,1990.
    [49]袁亚湘.非线性规划数值方法.上海科学技术出版社,1992.
    [50]夏又生.解一般线性规划问题的神经网络.电子学报,23(12):64-70,1995.
    [51]牛明洁.解无约束极大极小问题的非对称神经网络算法.电子学报,23(12):111-114,1995.
    
    
    [52] 陶卿.基于约束区域的神经网络模型及其应用.模式识别与人工智能,11(4):474-478,1998.
    [53] 宋荣方.求解任意凸规划问题的神经网络模型.电路与系统学报,3(1):14-18,1998.
    [54] 周春晖.化工过程控制原理.化学工业出版社,1998.
[55] 盛昭瀚.主从递阶决策—Stackelberg问题.科学出版社,1998.
    [56] 高协平.神经网络方法求解线性规划问题研究.长沙电力学院学报,14(2):112-114,1999.
    [57] 徐光辉.运筹学基础手册.科学出版社,1999.
    [58] 王小平,曹立民.遗传算法-理论、应用与软件实现.西安交通大学出版社,2000.
    [59] 梁昔明.大规模优化理论及算法研究.Technical report,浙江大学博士后科研工作报告,2000.
    [60] 黄润生.混沌及其应用.武汉大学出版社,2000.
    [61] 吴沧浦.最优控制的理论与方法.国防工业出版社,2000.
    [62] 韩京清.系统科学与工程研究,chapter控制系统的鲁棒性与歌德尔不完备性定理,pages 211-223.上海科技教育出版社,2000.
    [63] 宋玉阶.求解二次规划问题的神经网络方法.武汉大学学报,47(3):347-350,2001.
    [64] 高兴宝.线性约束非线性规划的神经网络方法.陕西师范大学学报(自然科学版),29(2):20-23,2001.
    [65] 黄远灿.一种新型的lagrange非线性规划神经网络.电子学报,30(1):27-29,2002.
    [66] O. Aberth. The solution of linear interval equations by a linear programming method. Linear Algebra and its Applications, 259:271-279, 1997.
    [67] E. Aiyoshi and K. Shimizu. A solution method for the static constrained stackelberg problem via penalty method. IEEE Trans. on Auto. Contr., 29(12): 1111-1114, 1984.
    [68] R. Alefeld and J. Herzberger. Introduction of interval computations. Academic Press, New York, 1983.
    [69] G. Anandalingam and T. L. Friesz. Hierarchical optimization: An introduction. Annals of Oper. Res., 34:1-11, 1992.
    
    
    [70] P. J. Angeline. Evolutionary optimization versus particle swarm optimization: philosophy and performance difference. In Proc. of the 7th Annual Conf. on Evolutionary Programming, pages 601-610, 1998.
    [71] P. J. Angeline. Using selection to improve particle swarm optimization. In Proc. IEEE Int. Conf. on Evolutionary Computation, pages 84-89, 1998.
    [72] P. J. Angeline. Using selection to improve particle swarm optimization. In Proc. IEEE Int. Conf. on Evolutionary Computation, pages 84-89, 1998.
    [73] S. W. Annie. Non-coding DNA and floating building blocks for the genetic algorithm. PhD thesis, The University of Michigan, 1996.
[74] J. E. Baker. Reducing bias and inefficiency in the selection algorithm. In Proceedings of the 2nd International Conference on Genetic Algorithm, 1987.
    [75] J. F. Bard. An algorithm for solving the general bilevel programming problem. Oper Res., 8(12):260-272, 1983.
[76] J. F. Bard. An efficient point algorithm for a linear two-stage optimization problem. Oper. Res., 31(4):670-684, 1983.
    [77] J. F. Bard. Investigation of the three level linear programming problem. IEEE Transactions on Systems, Man and Cybernetics, 14(5):711-717, 1984.
    [78] J. F. Bard. Optimality conditions for the bilevel programming problem. Naval Research Logistics Quarterly, 31(1): 15-27, 1984.
    [79] J. F. Bard and J. E. Falk. An explicit solution to the multilevel programming problem. Computers and Operations Research, 9(1):77-100, 1982.
    [80] J. F. Bard and J. T. Moore. A branch and bound algorithm for the bilevel programming problem. SIAM J. Sci. Statist. Comp., 11:281-292, 1990.
    [81] A. Bemporad and M. Morari. Control of systems integrating logic, dynamics, and constraints. Automatica, 35:407-427, 1999.
    [82] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769-805, 1998.
[83] F. P. Bernardo and P. M. Saraiva. Robust optimization framework for process parameter and tolerance design. AIChE Journal, 26(4):569-580, 1978.
[84] F. P. Bernardo and P. M. Saraiva. Robust optimization framework for process parameter and tolerance design. AIChE Journal, 44:2007-2016, 1998.
    
    
    [85] Q. Bi. Advanced controller auto-tuning and its application in hvac systems. Control Engineering Practice, 8:633-644, 2000.
    [86] T. Blickle and L. Thiele. A mathematical analysis of tournament selection. In Proceedings of the 6th International Conference on Genetic Algorithm, 1995.
    [87] J. Bracken and J. M. McGill. Mathematical programs with optimization problems in the constraints. Oper. Res., 21:37-44, 1973.
[88] J. Bracken and J. M. McGill. A method for solving mathematical programs with nonlinear programs in the constraints. Oper. Res., 22:1097-1101, 1974.
    [89] N. Bryson and A. Mobolurin. An action learning evaluation procedure for multiple criteria decision making problems. European Journal of Operational Research, 96:379-386, 1996.
[90] A. J. Chipperfield and P. Fleming. Parallel and Distributed Computing Handbook, chapter Parallel genetic algorithms, pages 1034-1059. McGraw-Hill, New York, 1995.
    [91] L. O. Chua and G. N. Lin. Nonlinear programming without computation. IEEE Trans. on Circuits and Systems, 31 (2): 182-188, 1984.
    [92] A. Cichocki and R. Unbehauen. Neural networks for optimization and signal processing. Wiley, Chichester, 1993.
    [93] M. Clerc. The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In Proc. of the Congress on Evolutionary Computation, pages 1951-1957, 1999.
[94] C. A. C. Coello. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowledge and Information Systems: An International Journal, 1(3):269-308, 1999.
    [95] G.D. Cohen and G.A Coon. Theoretical consideration for retarded controllers. Trans. ASME, 75:827, 1953.
[96] A. Colorni, M. Dorigo, and V. Maniezzo. An investigation of some properties of an ant algorithm. In Parallel Problem Solving from Nature Conference, pages 509-520. Elsevier Publishing, 1992.
    [97] J. Darlington, C. C. Pantelides, B. Rustem, and B.A.Tanyi. An algorithm for constrained nonlinear optimization under uncertainty. Automatica, 35:317-228, 1999.
    [98] K. Deb. Multi-objective optimization using evolutionary algorithms. Wiley, Chichester, 2001.
    
    
    [99] F. Van den Bergh and A. Engelbrecht. Cooperative learning in neural networks using swarm optimizers. South African Computer Journal, pages 84-90, 2000.
    [100] A. S. Drud. Conopt-a large-scale grg code. ORSA Journal on Computing, 6(2):207-216, 1994.
    [101] R. Eberhart and Y. Shi. Tracking and optimizing dynamic system with swarms. In Proc. of the Congress on Evolutionary Computation, 2001.
    [102] C.A. Floudas. Global opitmization in design and control of chemical process systems. Journal of Process Control, 10:125-134, 2000.
    [103] L.J. Fogel, A.J. Owens, and M.J. Walsh. Artificial intelligence through simulated evolution. John Wiley, New York, 1966.
    [104] C.M. Fonseca and P. J. Fleming. Genetic algorithm for multiobjective optimization: Formulation, discussion and generalization. In Proceedings of the 5th International Conference on Genetic Algorithm, pages 416-423, 1993.
    [105] C. M. Fonseca and P. J. Fleming. An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 3(1):1-16, 1995.
[106] C. M. Fonseca and P. J. Fleming. Multiobjective optimization and multiple constraint handling with evolutionary algorithms - part i: A unified formulation and part ii: Application example. IEEE Trans. on System, Man, and Cybernetics, 28(1):26-37,38-47, 1998.
    [107] K. Fukushima. Cognitron: A self-organizing multilayer neural network. Biological Cybernetics, 20:121-136, 1975.
    [108] M. Gen and R. Cheng. Optimal design of system reliability under uncertainty using interval programming and genetic algorithm. Technical report, Ashikaga Institute of Technology, Ashikaga, Japan, 1994.
    [109] D. E. Goldberg. Genetic algorithm in search, optimization, and machine learning. AddisonWesley, Reading, MA, 1989.
[110] D. Gong, M. Gen, and G. Yamazaki. A modified ANN for convex programming with linear constraints. In IEEE International Conference on Neural Networks-Conference Proceedings, volume 1, 1996.
    [111] J.J. Grefenstette. Optimization of control parameters for genetic algorithms. IEEE Transactions on Systems, Man and Cybernetics, 16(1): 122-128, 1986.
[112] J. B. He, Q. G. Wang, and T. H. Lee. PI/PID controller tuning via LQR approach. Chemical Engineering Science, 55:2429-2439, 1999.
    
    
    [113] D. O. Hebb. The organization of behavior. A neuropsychological theory. Wiley, New York, 1949.
    [114] S. J. Henkind and M. C. Harisan. An analysis of four uncertainty. IEEE Transactions on Systems, Man and Cybernetics, 18(5):700-714, 1988.
    [115] J.W. Herrmann. A genetic algorithm for minimax optimization problems. In Proceedings of the 1999 Congress on Evolutionary Computation, volume 2, pages 1099-1103, 1999.
[116] M. R. Hestenes. Multiplier and gradient methods. J. Opt. Theory Appl., 4:303-320, 1969.
    [117] W. Hillis. The connection machine. The MIT Press, Cambridge, MA, 1989.
[118] G. E. Hinton, T. J. Sejnowski, and D. H. Ackley. Boltzmann machines: Constraint satisfaction networks that learn. Technical report, Carnegie Mellon University, Pittsburgh, PA, 1984.
    [119] J. H. Holland. Adaptation in natural and artificial systems. The University of Michigan Press, Ann Arbor, 1975.
[120] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. In Proc. Natl. Acad. Sci., volume 79, pages 2554-2558, 1982.
    [121] G. H. Huang. A hybrid inexact-stochastic water management model. European Journal of Operational Research, 107:137-158, 1998.
[122] M. Inuiguchi and M. Sakawa. Minimax regret solution to linear programming problems with an interval objective function. European Journal of Operational Research, 86:526-536, 1995.
[123] H. Ishibuchi and H. Tanaka. Formulation and analysis of linear programming problem with interval coefficients. Journal of Japan Industrial Management Association, 40(5):320-329, 1989.
[124] H. Ishibuchi and H. Tanaka. Multiobjective programming in optimization of the interval objective function. European Journal of Operational Research, 48:219-225, 1990.
    [125] R. Islam, M. P. Biswal, and S. S. Alam. Preference programming and inconsistent interval judgements. European Journal of Operational Research, 97:53-62, 1997.
    [126] J. Arabas, Z. Michalewicz and J. Mulawka. A genetic algorithm with varying population size. In The 1st IEEE International Conference on Evolutionary Computation, 1994.
    [127] H. A. Jensen and S. Maturana. A possibilistic decision support system for imprecise mathematical programming problems. International Journal of Production Economics, 77:145-158, 2002.
    
    
    [128] K. A. De Jong. An Analysis of the behavior of a class of genetic adaptive systems. PhD thesis, University of Michigan, 1975.
    [129] K. A. De Jong and W.M. Spears. Using genetic algorithms to solve np-complete problems. In Proc. 3rd Int'nl Conf. Genet. Algorithms, pages 124-132, 1989.
    [130] I. Karafyllis and A. Kokossis. On a new measure for the integration of process design and control: the disturbance resiliency index. Chemical Engineering Science, 57:873-886, 2002.
    [131] J. Kennedy. The particle swarm: social adaptation of knowledge. In Proc. IEEE Int. Conf on Evolutionary Computation, pages 303-308, 1997.
    [132] J. Kennedy. The behavior of particles. In Proc. 7th Annual Conf. on Evolutionary Programming, pages 581-591, 1998.
    [133] J. Kennedy. Stereotyping: improving particle swarm performance with cluster analysis. In Proc. IEEE Int. Conf. on Evolutionary Computation, pages 1507-1512, 2000.
    [134] J. Kennedy and R. Eberhart. Particle swarm optimization. In the 4th IEEE International Conference on Neural Networks, pages 1942-1948, 1995.
    [135] J. Kennedy, R. Eberhart, and Y. Shi. Swarm intelligence. Morgan Kaufmann, San Francisco, 2001.
[136] M. P. Kennedy and L. O. Chua. Unifying the Tank and Hopfield linear programming circuit and the canonical nonlinear programming circuit of Chua and Lin. IEEE Trans. on Circuits and Systems, 34(2):210-214, 1987.
    [137] M. P. Kennedy and L. O. Chua. Neural networks for nonlinear programming. IEEE Trans. on Circuits and Systems, 35(5):554-562, 1988.
    [138] T. Kohonen. Self-organisation and associative memory. Springer Verlag, Berlin, 3rd edition, 1989.
    [139] B. J. Korhonen. Multiple criteria decision support: A review. European Journal of Operational Research, 63:361-375, 1992.
[140] J. R. Koza. Genetic programming II: Automatic discovery of reusable programs. The MIT Press, Cambridge, MA.
[141] J. R. Koza. Genetic programming: On the programming of computers by means of natural selection. The MIT Press, Cambridge, MA.
[142] P. Kouvelis and G. Yu. Robust discrete optimization and its applications. Kluwer Academic Publishers, Boston, 1997.
    
    
    [143] J.P. LaSalle. The stability of dynamical systems. SIAM, Philadelphia, PA, 1976.
    [144] F. Lewis and V. L. Syrmos. Optimal Control. Wiley, New York, 1995.
    [145] W. E. Lillo, M. H. Hui, and S. H. Zak. On solving constrained optimization problems with neural networks: A penalty method approach. IEEE Trans. on Neural Networks, 4(6):931-940, 1993.
    [146] B. Liu. Stackelberg-nash equilibrium for multilevel programming with multiple followers using genetic algorithms. Computers Math. Applic., 36(7):79-89, 1998.
    [147] B. Liu and A. O. Esogbue. Cluster validity for fuzzy criterion clustering. Proceedings of the IEEE International Conference on Systems ,Man and Cybernetics, 5:4702-4705, 1995.
    [148] Y. Liu and K. Passino. Biomimicry of social foraging bacteria for distributed optimization: Models, principles, and emergent behaviors. Journal of Optimization Theory and Application, 115(3):603-628, 2002.
[149] G. Loomes and R. Sugden. Testing for regret and disappointment in choice under uncertainty. The Economic Journal, 92:805-824, 1982.
    [150] M. K. Luhandjula. Fuzzy optimization: An appraisal. Fuzzy Sets and Systems, 30:257-282, 1989.
    [151] C. Y. Maa and M. A. Shanblatt. A two phase optimization neural network. IEEE Trans on Neural Networks, 3(6):1003-1009, 1992.
    [152] J. E. Marshall. Control of time-delay systems. Peter Peregrinus LTD, London, 1979.
    [153] H. E. Mausser and M. Laguna. A heuristic to minimax absolute regret for linear programs with interval objectives function coeffi cients. European Journal of Operational Research, 117:157-174, 1999.
    [154] H.E. Mausser and M. Laguna. A new mixed integer formulation for the maximum regret problem. 5(5):389-403, 1998.
    [155] W. S. McCulloch and W. Pitts. A logical calculus of ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133, 1943.
    [156] Z. Michalewica and C. Janikow. Handling constraints in genetic algorithms. In Proceedings of the 4th International Conference on Genetic Algorithm, pages 151-157, 1991.
    [157] Z. Michalewicz. Genetic algorithms + data structures = evolution programs. Springer, Berlin, 1996.
    
    
[158] Z. Michalewicz and D. B. Fogel. How to solve it: Modern heuristics. Springer, Berlin, 2000.
    [159] J. Miele, L. Tietze, and A. V. Levy. Minimization Algorithms, Mathematical Theories and Computer Results, chapter A computer algorithm for constrained minimization. Academic Press, New York, 1972.
    [160] K. Miettinen. Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston, 1999.
    [161] M. Minsky and S. Papert. Perceptrons: An introduction to computational geometry. The MIT Press, Cambridge, MA, 1969.
    [162] R. E. Moore. Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ, 1966.
[163] H. Muhlenbein. Parallel Problem Solving from Nature 2, chapter How genetic algorithms really work: mutation and hillclimbing. North-Holland, 1992.
    [164] J.M. Mulvey, R. J. Vanderbei, and S. A. Zenios. Robust optimization of large scale systems. Operations Research, 43:264-281, 1995.
    [165] B. A. Murtagh and M. A. Saunder. Minos 5.0 users guide. Technical report, Systems Optimization Laboratory, Dept. of Operations Research, Stanford University, 1998.
[166] M. Muselli and S. Ridella. Global optimization of functions with the genetic algorithm. Complex Systems, 6:193-212, 1992.
    [167] K. Nakano. Associatron: a model of associative memory. IEEE Transactions on Systems, Man and Cybernetics, 2:381-388, 1972.
[168] J. P. Norton. Identification and application of bounded-parameter models. Automatica, 23:497-507, 1987.
    [169] S. Osowski. Neural network for nonlinear programming with linear equality constraints. Inter. J. of Circuit Theo. and App., 20:93-98, 1992.
    [170] E. Ozcan and C. Mohan. Analysis of a simple particle swarm optimization system. Intelligent Engineering Systems Through Artificial Neural Networks, pages 253-258, 1998.
[171] E. Ozcan and C. Mohan. Particle swarm optimization: surfing the waves. In Proceedings of the Congress on Evolutionary Computation, pages 1939-1944, 1999.
    [172] K. Parsopoulos and et al. Stretching technique for obtaining global minimizers through particle swarm optimization. In Proceedings of the Particle Swarm Optimization Workshop, pages 22-29, 2001.
    
    
    [173] J. C. Pinto. On the costs of parameter uncertainties, effects of parameter uncertainties during optimization and design of experiments. Chemical Engineering Science, 53:2029-2040, 1998.
[174] R. Polyak. Modified barrier functions (theory and methods). Mathematical Programming, 54:177-222, 1992.
    [175] H. Raiffa. Decision Analysis. Addison-Wesley, 1968.
    [176] T. Ray and K. M. Liew. A swarm with an effective information sharing mechanism for unconstrained and constrained single objective optimization problems. In Proc. IEEE Int. Conf. on Evolutionary Computation, pages 75-80, 2001.
    [177] I. Rechenberg. Evolutionsstrategie: Optimierung technischer systeme nach prinzipien der biologischen evolution. Frommann-Holzboog Verlag, Stuttgart, Germany, 1973.
    [178] B. Ricceri and S. Simons. Minimax theory and application. Kluwer Academic Publishers, 1998.
    [179] A. Rodriguez-Vazquez and R. Dominguez-Castro. Nonlinear switched-capacitor neural net- works for optimization problems. IEEE Trans. on Circuits and Systems, 37(3):384-397, 1990.
    [180] H. Rommelfanger. Linear programming with fuzzy objective. Fuzzy Sets and Systems, 29:31-48, 1989.
[181] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-408, 1958.
    [182] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by backpropagating errors. Nature, 323:533-536, 1986.
[183] D. E. Rumelhart and J. L. McClelland. Parallel distributed processing: Explorations in the microstructure of cognition. The MIT Press, Cambridge, MA, 1986.
    [184] R. Salomon. Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions: A survey of some theoretical and practical aspects of genetic algorithms. Biosystems, 39(3):263-278, 1996.
    [185] N. J. Samsatli, L. G. Papageorgiou, and N. Shah. Robustness metrics for dynamic optimization models under parameter uncertainty. AIChE Journal, 44(9): 1993-2006, 1998.
    [186] G. Savard and J. Gauvin. The steepest descent direction for nonlinear bilevel programming problem. Operations Research Letters, 15:265-272, 1994.
    
    
    [187] Y. Sawaragi and Y. Q. Zhou. Shinayakana systems approach. In Proceedings of the 2nd International Conference of Systems Science and Systems Engineering, pages 24-29, 1993.
    [188] J. Schaffer, R. Caruana, L. Eshelman, and K. Das. A study of control parameters affecting online performance of genetic algorithm for function optimization. In Proceedings of the Third International Conference on Genetic Algorithm, 1989.
[189] H. Scherm. Simulating uncertainty in climate-pest models with fuzzy numbers. Environmental Pollution, 108:373-379, 2000.
[190] A. Sengupta, T. K. Pal, and D. Chakraborty. Interpretation of inequality constraints involving interval coefficients and a solution to interval linear programming. Fuzzy Sets and Systems, 119:129-138, 2001.
    [191] G. Shafer. The mathematical theory of evidence. Princeton University Press, 1976.
[192] Y. Shi and R. Eberhart. A modified particle swarm optimizer. In Proc. IEEE Int. Conf. on Evolutionary Computation, pages 69-73, 1998.
    [193] Y. Shi and R. Eberhart. Parameter selection in particle swarm optimization. In Proc. of the 7th Annual Conf. on Evolutionary Programming, pages 591-600, 1998.
    [194] Y. Shi and R. Eberhart. Empirical study of particle swarm optimization. In Proc. of the Congress on Evolutionary Computation, pages 1945-1950, 1999.
    [195] Y. Shi and R. Eberhart. Fuzzy adaptive particle swarm optimization. In Proc. IEEE Int. Conf. on Evolutionary Computation, pages 101-106, 2001.
    [196] K. Shimizu and Y. Ishizuka. Optimality conditions and algorithms for parameter design problems with two-level structure. IEEE Trans. on Auto. Contr., 30:986-993, 1985.
    [197] S. Smith and L. Lasdon. Solving large sparse nonlinear programming using grg. ORSA Journal on Computing, 4(1):2-15, 1992.
    [198] D.G. Sotiropoulos, E. C. Stavropoulos, and M. N. Vrahatis. A new hybrid genetic algorithm for global optimization. Nonlinear Analysis, Theory, Methods and Application, 30(7):4529-4538, 1997.
    [199] W. Spears and K. De Jong. Foundations of Genetic Algorithms, chapter An analysis of multipoint crossover, pages 301-315. Morgan Kaufman.
    [200] P. N. Suganthan. Particle swarm optimiser with neighbourhood operator. In Proc. of the Congress on Evolutionary Computation, pages 1958-1962, 1999.
    
    
    [201] T. Takeaki and Y. Takao. Optimal design problem of system reliability with interval coefficient using improved genetic algorithm. Computers and Industrial Engineering, 37:145-149, 1999.
    [202] H. Tanaka. On fuzzy mathematical programming. Journal of Cybernetics, 3(4):37-46, 1984.
[203] D. W. Tank and J. J. Hopfield. Simple neural optimization networks: An A/D converter, signal decision circuit, and linear programming circuit. IEEE Trans. on Circuit, 33(5):533-541, 1986.
    [204] D. J. Ternet and L. T. Biegler. Recent improvements to a multiplier-free reduced hessian successive quadratic programming algorithm. Computers chem. Engng., 22(7-8):963-978, 1998.
[205] A. L. Tits and J. L. Zhou. A simple quadratically convergent interior-point strategies. Journal of Optimization Theory and Applications, 98(2):399-429, 1998.
    [206] S. Tong. Interval number and fuzzy number linear programming. Fuzzy Sets and Systems, 66:301-306, 1994.
[207] V. Kreinovich, C. Quintana, and O. Fuentes. Genetic algorithms: what fitness scaling is optimal. Cybernetics and Systems, 24(1):9-26, 1993.
    [208] S. Vasantharajan and L. T. Biegler. Reduced successive quadratic programming: Computing implementation for large-scale optimization problems with smaller degrees of freedom. Computers chem. Engng., 14(8):907-915, 1990.
[209] V. S. Vassiliadis and S. A. Brooks. Application of the modified barrier method in large-scale quadratic programming problems. Computers chem. Engng., 22(9):1197-1205, 1998.
[210] V. S. Vassiliadis and C. A. Floudas. The modified barrier function approach for large-scale optimization. Computers chem. Engng., 21(8):855-874, 1997.
    [211] D. A. Veldhuizen and G. B. Lamont. Multiobjective evolutionary algorithms: Analyzing the state-of-art. Evolutionary Computation, 8(2): 125-147, 2000.
    [212] V. V. Vinod, S. Ghose, and P. P. Chakrabarti. Resultant projection neural networks for optimization under inequality constrains. IEEE Trans. on System, Man and Cybernetics, 26(4):509-521, 1996.
    [213] B. W. Wah, T. Wang, Y. Shang, and W. Zhe. Improving the performance of weighted lagrange-multiplier methods for nonlinear constrained optimization. Information Sciences, 124:241-272, 2000.
    
    
[214] D. Whitley. Foundations of genetic algorithms, chapter Fundamental Principles of Deception in Genetic Search. Morgan Kaufmann, 1989.
[215] D. Whitley. The genitor algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In Proceedings of the 3rd International Conference on Genetic Algorithm, 1989.
    [216] B. Widrow. Selforganizing Systems, chapter Generalization and information storage in networks of ADALINE neurons, pages 435-461. Spartan Books, Washington DC, 1962.
    [217] Y. Xia and J. Wang. A general methodology for designing globally convergent optimization neural networks. IEEE Trans on Neural Networks, 9:1311-1343, 1998.
    [218] X. Xie, W. Zhang, and Z. Yang. Hybrid particle swarm optimizer with mass extinction. In Proceedings of International Conference on Communication, Circuits and Systems, pages 1170-1174, 2002.
[219] Y. Nakahara, M. Sasaki, and M. Gen. On the linear programming with interval coefficients. International Journal of Computers and Engineering, 23:301-304, 1992.
    [220] J. Yan, J. S. Tsai, and F. Kung. Robust stability analysis of interval systems with multiple time-varying delays: Evolutionary programming approach. Journal of Franklin Institute, 336:711-720, 1999.
    [221] H. Yoshida, K. Kawata, Y. Fukuyama, S. Takayama, and Y. Nakanishi. A particle swarm optimization for reactive power and voltage control considering voltage security assessment. Transactions of the Institute of Electrical Engineers of Japan, 119-B(12): 1462-1469, 1999.
    [222] H. Yue and W. Jiang. New probabilistic robust optimization method. In Proceedings of the IEEE International Conference on System, Man and Cybernetics, volume 3, pages 2205-2209, 1996.
    [223] L. A. Zadeh. Fuzzy sets. Information and Control, 8:338-353, 1965.
    [224] S. Zhang and A. G. Constantinides. Lagrange programming neural networks. IEEE Trans. on Circuits and Systems, 39(7):441-452, 1990.
    [225] S. Zhang, X. Zhu, and L. H. Zou. Second-order neural nets for constrained optimization. IEEE, 3(6):1021-1024, 1992.
    [226] M. Zhuang and D. P. Atherton. Automatic tuning of optimum pid controllers. In Proceedings of IEE, volume 140, pages 216-224, 1993.
    [227] W. Ziarko. Variable precision rough set model. Journal of Computer and System Sciences, 40:39-59, 1993.
    
    
    [228] J.G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Trans. ASME, 64:759-768, 1942.
    [229] H. J. Zimmermann. Fuzzy set theory and its applications. Kluwer Nijhof, Boston, 1985.
