Research on the Performance and Search Space of Intelligent Optimization Algorithms
Abstract
As an important branch of science, optimization has long received widespread attention. It has profoundly influenced many disciplines, has been rapidly promoted and applied in numerous engineering fields, and has become an indispensable tool for a great deal of work across different areas.
An optimization algorithm, built on some underlying idea and mechanism, obtains solutions that satisfy the user's requirements through a particular search procedure. The theory for solving linear programming, nonlinear programming, stochastic programming, geometric programming, integer programming and other optimization problems is developing rapidly; new methods keep emerging and practical applications grow ever wider. Driven by computer technology, optimization theory and methods have been widely applied in economic planning, engineering design, production management, transportation and other fields, and optimization has become a very active discipline.
Many optimization problems, such as the traveling salesman problem, facility layout and production scheduling, have been proved to be NP-complete. No efficient polynomial-time algorithm is known for them, and the computation time required by traditional optimization methods grows exponentially with problem size. Researchers have therefore settled for less and developed many stochastic search algorithms, such as Monte Carlo simulation, simulated annealing, genetic algorithms, tabu search and their hybrid optimization strategies, hoping to obtain suboptimal or near-optimal solutions within a limited time.
Intelligent optimization algorithms are an emerging class of search algorithms. They generally refer to optimization algorithms, such as genetic algorithms, particle swarm optimization and ant colony optimization, that have gradually developed by exploiting similarities between natural biological systems and optimization processes: they obtain the next set of solutions by operating on a set of solutions in the search space according to certain probabilistic rules. The search mechanism of an algorithm therefore determines its global search ability and local refinement ability over the space, and hence its search performance, that is, whether it can converge to the global optimum quickly. At the same time, variable domains and complicated constraints cause feasible and infeasible solutions to be interleaved in the search space, giving it a highly irregular structure; intelligent optimization algorithms consequently waste much of their effort on infeasible solutions, which greatly slows the search and prevents good convergence to the optimum. This thesis therefore centers on the spatial search capability of intelligent optimization algorithms, studying ways to strengthen their global search ability from two perspectives, the search mechanism and the structure of the search space, while also attending to local refinement, so as to keep the algorithms from being trapped in local optima and to improve their search performance.
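To make the shared skeleton of such population-based methods concrete, the following is a minimal, illustrative sketch rather than any specific algorithm from this thesis: a set of candidate solutions is repeatedly varied and re-selected under probabilistic rules. The names `fitness` and `mutate`, and the keep-the-better-half selection rule, are placeholder assumptions.

```python
def evolve(init_pop, fitness, mutate, generations=100):
    """Generic population-based search: probabilistic variation + selection.

    init_pop : list of candidate solutions
    fitness  : callable, larger is better (placeholder objective)
    mutate   : callable producing a randomly perturbed copy of a solution
    """
    pop = list(init_pop)
    for _ in range(generations):
        # Variation: each parent produces one offspring via a random operator.
        offspring = [mutate(p) for p in pop]
        # Selection: keep the better half of parents + offspring (one of many rules).
        pool = pop + offspring
        pool.sort(key=fitness, reverse=True)
        pop = pool[:len(init_pop)]
    return max(pop, key=fitness)
```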
Earlier intelligent optimization algorithms keep reproducing good solutions and retaining only the fittest individuals during the search. Useful components are thereby lost and population diversity declines, so the algorithms lose the ability to explore new regions of the space and are easily trapped in local optima. In the later stages of a run, their local search ability is weak, convergence is too slow, and the resulting solution quality is poor. Only by balancing an algorithm's exploration and exploitation capabilities, that is, its wide-ranging global search and small-scale local refinement, and by effectively controlling the search as the algorithm runs, can good search performance be achieved and the convergence speed and solution quality be improved.
1. To address the loss of exploration ability caused by shrinking population diversity, inferior solutions are accepted in a reasonable manner during the iterations so as to increase diversity. This keeps the algorithm from being trapped in local optima and thereby improves its search performance.
Mutation is the fundamental source of population diversity, while crossover (including information exchange or transfer) acts as an accelerator. We therefore take evolutionary programming, which contains only mutation, and introduce a selection scheme based on an annealing probability: in each iteration inferior solutions are accepted with an annealing probability, yielding an annealed evolutionary programming algorithm. The algorithm is proved to be globally convergent; it effectively avoids local optima and reaches the optimum faster. By estimating the rate at which the population comes to contain the optimum with probability 1 and the maximum time to convergence, we find that the annealing factor affects the convergence speed, and choosing a suitable annealing factor can accelerate convergence.
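The acceptance rule can be pictured as follows. This is a hedged sketch of the general idea, a Metropolis-style acceptance with a temperature shrunk by an annealing factor, not the exact formulation proved convergent in the thesis; `fitness`, `mutate`, the initial temperature and the factor `alpha` are placeholder assumptions.

```python
import math
import random

def annealed_ep_step(pop, fitness, mutate, T, alpha=0.95):
    """One generation of annealed evolutionary programming (illustrative).

    Each parent is mutated; a worse offspring may still replace its parent
    with probability exp(-delta / T), which preserves diversity early on.
    T is multiplied by the annealing factor alpha after every generation.
    """
    new_pop = []
    for parent in pop:
        child = mutate(parent)
        delta = fitness(parent) - fitness(child)   # > 0 means the child is worse
        if delta <= 0 or random.random() < math.exp(-delta / T):
            new_pop.append(child)    # accept better, or worse with annealing prob.
        else:
            new_pop.append(parent)
    return new_pop, alpha * T        # cool down: smaller T accepts fewer bad moves
```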
2. Continually generating new solutions from the information currently available during the search can also extend the search scope, at some cost in running time. Using the concept of the solution backbone (the shared pattern of solutions) in combinatorial optimization, this thesis compares the common patterns contained in the solutions of the population and generates new solutions during the iterations, so as to improve the global search ability and the solution quality.
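As an illustration of the backbone idea, a hedged sketch rather than the thesis's implementation, the shared pattern of a set of good solutions can be taken as the components on which they all agree:

```python
def backbone(solutions):
    """Return {index: value} for components shared by all given solutions.

    Solutions are assumed to be equal-length sequences (e.g. 0-1 vectors or
    position-encoded schedules); this is one simple way to define a shared
    pattern, not necessarily the definition used in the thesis.
    """
    first = solutions[0]
    return {
        i: v
        for i, v in enumerate(first)
        if all(s[i] == v for s in solutions[1:])
    }

# Example: three 0-1 vectors agree only on positions 0 and 3.
print(backbone([[1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]]))  # {0: 1, 3: 0}
```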
(1) We first modify the variable fitness definition used by the extremal operation on an individual solution so that it is related to the objective function. Experiments confirm that the new objective-based fitness considerably improves the search ability of an individual. Applying the new variable fitness definition, we design a backbone-based extremal evolutionary algorithm. Exploiting backbone information strengthens the algorithm's local search ability and clearly improves solution quality. Finally, the convergence of the algorithm is proved.
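An extremal-style update can be sketched as below: every component of a solution carries a fitness tied to the objective, and the worst unprotected component is re-drawn while backbone positions stay fixed. This is an illustrative reconstruction under stated assumptions, not the thesis's exact algorithm; `variable_fitness` and `resample` are hypothetical helpers.

```python
def extremal_step(solution, variable_fitness, resample, protected=frozenset()):
    """One extremal-optimization-style move (illustrative sketch).

    variable_fitness(solution, i) -> contribution of component i to the
    objective (larger = better); the worst unprotected component is changed.
    `protected` may hold backbone positions that should stay fixed.
    """
    candidates = [i for i in range(len(solution)) if i not in protected]
    worst = min(candidates, key=lambda i: variable_fitness(solution, i))
    new_solution = list(solution)
    new_solution[worst] = resample(solution, worst)   # re-draw the worst component
    return new_solution
```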
(2) Based on the new objective-related fitness, we analyze how the new solutions produced during the iterations influence the search. We then design a network topology evolving optimization algorithm with a variable population size, whose structure follows the biological evolution model described by topologically evolving networks. Comparisons show that the algorithm clearly extends the search scope. Because it does not prescribe how solutions are mutated, it can adopt a variety of existing mutation methods and combines readily with them. The algorithm is intended mainly for discrete optimization problems, but by encoding continuous variables as binary strings, with bits of equal value representing the identical parts of solutions, it can also be applied to continuous function optimization.
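A minimal sketch of the binary encoding mentioned above, assuming a fixed-precision mapping of a real interval onto k bits; the interval, bit width and rounding choices here are illustrative rather than those used in the thesis.

```python
def encode(x, lo, hi, bits=16):
    """Map a real x in [lo, hi] to a fixed-length list of 0/1 bits."""
    n = round((x - lo) / (hi - lo) * (2**bits - 1))
    return [(n >> i) & 1 for i in reversed(range(bits))]

def decode(b, lo, hi):
    """Inverse of encode, up to quantization error."""
    n = int("".join(map(str, b)), 2)
    return lo + n / (2**len(b) - 1) * (hi - lo)

bits = encode(0.37, 0.0, 1.0)
print(decode(bits, 0.0, 1.0))   # ~0.37; equal bit positions can then act as a backbone
```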
The huge search space of an optimization problem, and the tension between the randomness of the search and the search speed, also strongly affect algorithm performance. Most intelligent optimization algorithms are global search algorithms that operate over the entire space of the problem, and the search space expands rapidly as the problem grows. Earlier intelligent optimization algorithms optimize without any clear picture of the spatial structure, so their search is largely blind, and the search time often grows exponentially with the size of the problem. The deterministic models studied at present generally have a definite spatial structure, yet during the search an algorithm does not know how the optimal solutions relate to that structure. To improve algorithm performance, therefore, studying the search space of the problem is as important as designing a more rational search mechanism.
1. Intelligent optimization algorithms find the optimal solutions of a problem by searching its space, so understanding the structure of the search space helps to gauge how hard the problem is to solve and to design suitable search strategies that raise solving efficiency.
(1) Traditional intelligent optimization algorithms such as the genetic algorithm make essentially no use of knowledge about the search space, which limits their search efficiency. The solutions already visited reflect partial information about the space, and this information can be used to guide the algorithm and realize self-adaptive search. Based on this idea, we design an annealed evolutionary programming algorithm with information guidance. Simulation comparisons show that the algorithm converges faster; for larger-scale optimization problems, the percentage error of the solutions can still be kept within 3%.
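One plausible way to let visited solutions guide the search is sketched below, where per-dimension statistics of an elite archive bias the mutation step. The whole mechanism here is an assumption for illustration only, not the guidance rule actually derived in the thesis; `archive` and `scale` are hypothetical.

```python
import random
import statistics

def guided_mutation(parent, archive, scale=1.0):
    """Mutate `parent` using information from already-found good solutions.

    archive : list of elite solutions (equal-length real vectors).
    Each coordinate is pulled toward the archive mean, with a step size
    proportional to the archive spread in that dimension (an assumed rule).
    """
    child = []
    for i, x in enumerate(parent):
        values = [s[i] for s in archive]
        mean = statistics.fmean(values)
        spread = statistics.pstdev(values) or 1e-9   # avoid a zero step size
        child.append(x + random.gauss(mean - x, scale * spread))
    return child
```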
(2) Although intelligent optimization algorithms share the advantage of depending little on the particular problem, they are all stochastic and all search the entire problem space. Reducing the search space can improve both search speed and solution quality. Previous space contraction and partition methods lack specificity, handle discrete optimization problems poorly, and require the number of subspaces to be set by hand. Here, an intelligent optimization algorithm is run only partially, without full evolution, to gather good-solution information and contract the space; the solution backbones of the combinatorial problem then partition the search space automatically into one or more optimal-solution domains, within which local optimization is carried out. Simulations on job-shop scheduling problems verify that this method can quickly obtain several high-quality solutions.
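The contraction-and-partition step can be pictured roughly as follows, a hedged sketch assuming discretely encoded solutions and assuming that the elites have already been grouped (e.g. by similarity); each group then defines one reduced domain in which only the non-shared components are searched.

```python
def optimal_domains(elite_groups):
    """Turn groups of elite solutions into reduced subspaces (illustrative).

    Each group of similar elites yields one domain: the components on which
    all members agree are frozen, and only the remaining positions are left
    free for the subsequent local optimization.
    """
    domains = []
    for group in elite_groups:
        length = len(group[0])
        fixed = {i: group[0][i] for i in range(length)
                 if all(s[i] == group[0][i] for s in group)}
        free = [i for i in range(length) if i not in fixed]
        domains.append({"fixed": fixed, "free": free})
    return domains
```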
2. For a specific optimization problem, the spatial structure or certain properties of the problem are known in advance. Making full use of them is bound to improve algorithm efficiency, reduce the uncertainty of the search, and raise solution quality.
The space of a mixed integer programming (MIP) model is naturally partitioned by its integer variables into many separate subspaces, and restricting the search to the subspaces containing feasible solutions lowers the search workload of an optimization algorithm. MIP models of production scheduling contain competition constraints among the pieces of equipment on which tasks are processed; blending this property into the algorithm allows the space to be reduced further, confining the search to the feasible subspaces. By analyzing the relationships among the 0-1 variables, and between them and the real variables, in the 0-1 MIP model of batch-process production scheduling, we design a decomposition algorithm based on space partitioning and analyze the problem sizes for which it is suitable. The results show that when the complexity O(f(·)) of the solver grows faster than 2^n, and the number of continuous variables m and the number of 0-1 variables n both exceed a threshold ξ, the decomposition algorithm greatly reduces the computational effort.
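One hedged way to read this condition, an illustrative cost comparison under a simple assumed cost model rather than the derivation given in the thesis: suppose the solver costs roughly f(k) on a subproblem with k free variables. Solving the undecomposed model involves m + n variables, while decomposing along the n 0-1 variables solves at most 2^n continuous subproblems of m variables each,

$$ f(m+n) \quad \text{vs.} \quad 2^{n}\, f(m). $$

If f grows faster than 2^n, for example f(k) = c^k with c > 2, the ratio f(m+n) / (2^n f(m)) = (c/2)^n grows without bound, so the decomposed search is asymptotically far cheaper; in practice the competition constraints prune most of the 2^n assignments, which makes the gain larger still.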
In summary, building on an analysis of the search mechanisms and search spaces of intelligent optimization algorithms, this thesis systematically studies ways to improve solving performance from both aspects. On one hand, accepting inferior solutions or generating new solutions in a reasonable manner during the iterations extends the search scope, strengthens the global search ability, and yields high-quality solutions. On the other hand, using information about the search space to guide the search or to estimate the spatial structure improves efficiency and solution quality, and reducing the space on the basis of the model's known spatial structure and the properties of the problem makes it possible to design more suitable solving strategies and lower the complexity of solving. Combining intelligent optimization algorithms with the spatial structure of the problem is a promising research direction that is attracting growing attention from optimization researchers.