Research on Multi-Method Ensemble Based Evolutionary Algorithms
Abstract
As an emerging class of computational theories and methods, evolutionary algorithms (EAs) have demonstrated superior performance in many engineering and scientific applications. Compared with traditional optimization algorithms, an EA can search with very few parameter settings and little prior knowledge of the problem; for the multiple constraints found in engineering optimization problems, an EA requires no complicated normalization. Moreover, EAs show clear performance advantages on multi-modal problems and on problems with correlated variables.
     Over the past two decades, research on EAs has made many important advances. Yet when applied to complex engineering problems, previous EA variants still leave many issues open, of which three shortcomings have drawn the most attention: (1) the universality of EAs still needs improvement; (2) their scalability is limited, with performance degrading rapidly as the problem size (number of variables) grows; (3) their application to engineering optimization problems remains narrow. Targeting the first two shortcomings, this dissertation introduces the idea of multi-method ensemble and proposes more robust EA frameworks to improve both scalability and universality. The proposed algorithms are then applied to several engineering problems with good results.
     On the algorithm-design side, this dissertation centers on the multi-method ensemble idea. The contributions fall into three parts:
     1. To address the poor scalability of previous estimation of distribution algorithms (EDAs), this dissertation applies the multi-method ensemble idea to design a self-adaptive mixed-distribution sampling operator, yielding an EDA for large-scale optimization: the self-adaptive mixed-distribution based univariate EDA (MUEDA). The advantages of MUEDA over traditional EDAs and classical large-scale EAs are comprehensively verified in function-optimization experiments from 30 to 1500 dimensions.
     2. To address the limited universality of previous particle swarm optimization (PSO) variants, this dissertation designs a self-adaptive learning framework that executes multiple PSO offspring-creation strategies in parallel, yielding the self-adaptive learning PSO (SLPSO). According to the characteristics of each problem, and even the changing demands of a single problem across optimization stages, SLPSO assigns more of the computational budget to the currently best-performing strategy, which markedly improves its universality. The effectiveness of this adaptation is strongly supported by function-optimization experiments and by economic load dispatch (ELD) experiments on power systems.
     3. Earlier multi-method ensemble EAs all adopt a parallel-execution framework, whose universality is essentially determined by how well the framework learns from optimization feedback. Consequently, parallel frameworks remain unsatisfactory on ill-conditioned, deceptive, and extremely multi-modal problems. To remedy this, this dissertation proposes a two-stage serial multi-method ensemble framework (TSEA), whose central idea is to adaptively divide the optimization procedure, according to the characteristics of the problem, into two relatively independent stages: global convergence and exhaustive search. The effectiveness of the framework is fully verified on a number of complex optimization problems.
     Building on the two-stage serial multi-method ensemble framework, this dissertation designs a series of concrete algorithms for general single-objective optimization, large-scale single-objective optimization, multi-objective optimization, and dynamic multi-objective optimization. Besides being validated on various complex function-optimization problems, these algorithms clearly outperform previous optimizers on real-world engineering applications, including:
     1. Large-scale optimization problems (on the order of 10² variables or more): MUEDA and TSEA achieve substantial breakthroughs here. On regular problems, the efficiency of the proposed algorithms degrades only approximately linearly as the number of variables increases; on harder problems, they show clear advantages in both search efficiency and effectiveness over several recently proposed large-scale optimizers. In the large-scale optimization competitions organized at the IEEE World Congress on Computational Intelligence (WCCI) in 2008 and 2010, MUEDA and TSEA both ranked second overall.
     2. Large-scale economic load dispatch (ELD): ELD is an important yet still hard-to-solve optimization problem in power systems, and the performance of previous algorithms decays quickly as the problem size grows. To address this, this dissertation designs a self-adaptive large-scale ELD optimizer, ED-DE, which combines EDA and differential evolution (DE) within the TSEA framework. Compared with the previous best ELD optimizers, ED-DE finds better dispatch schemes at lower cost; in particular, it sets new best-solution records on all known classic ELD problems. SLPSO is also applied to large-scale ELD, likewise with very good results.
     3. Digital IIR filter design: digital IIR filters play an important role in digital signal processing, and evolutionary algorithms are among the main methods for this design problem. Previous EA-based solutions have two shortcomings: (1) the problem scale (filter order) they can handle is limited; (2) the resulting designs are generally expressed in floating-point numbers. In practice this causes two difficulties: larger problem scales place higher demands on scalability, while the use of fixed-point numbers degrades the solution space and loses search information, demanding greater robustness. Based on the serial multi-method ensemble idea, this dissertation designs a new two-stage memetic algorithm (MA), TSMA, which performs very well on digital IIR filter design. On high-order fixed-point IIR design problems where the previously most effective optimizers all fail, TSMA still delivers reliable performance.
     The serial multi-method ensemble idea is further applied to the emerging problem of dynamic multi-objective optimization, again with strong results. Taken together, the two-stage serial multi-method ensemble framework shows great vitality in improving efficiency, effectiveness, robustness, and universality, and is well suited to large-scale complex optimization problems.
As their performance has risen, evolutionary algorithms (EAs) have become more and more important in the optimization domain. Compared with classical optimization methods, EAs require fewer parameter settings and less prior knowledge. For engineering applications with many constraints, EAs can work without a complicated normalization procedure. Besides, EAs have shown especially good performance on multi-modal and linked problems.
     Although EAs have achieved significant performance previously, several aspects of EAs still need to be improved. Generally speaking, there are three demerits of EAs: (1) their universality, i.e., the ability to solve diverse kinds of problems, is still relatively poor; (2) their scalability, chiefly the capability of solving high-dimensional problems, needs to be improved; (3) their application to engineering optimization problems is still limited. For the first two drawbacks, we design a more robust and effective EA framework using the multi-method ensemble idea. Furthermore, the proposed EAs are applied to real-world engineering applications and scientific problems, and their performance mostly surpasses that of the classical EAs.
     This dissertation focuses on designing more effective and efficient multi-method ensemble based EAs, and then applying them to difficult engineering and scientific optimization problems. The important findings are as follows:
     1. In order to improve the scalability of the estimation of distribution algorithm (EDA), we design a new self-adaptive mixed-distribution based sampling operator and then propose a self-adaptive mixed-distribution based univariate EDA (MUEDA). The advantages of MUEDA over the classical EDAs and state-of-the-art EAs are verified in function-optimization experiments scaling from 30 to 1500 dimensions.
     2. In order to improve the universality of particle swarm optimization (PSO), we propose a self-adaptive learning framework that extracts the strengths of different PSO offspring-creation strategies, resulting in a new PSO variant, the self-adaptive learning PSO (SLPSO). Generally, SLPSO assigns more of the computational budget to suitable strategies based on feedback from the preceding optimization procedure, so its universality is remarkably improved compared with other PSOs. This is confirmed by function-optimization and economic load dispatch experiments on power systems.
     3. In previous research, the parallel execution of multiple offspring-creation strategies has been the usual form of multi-method ensemble algorithms. In this kind of framework, the learning mechanism is crucial to extending the application area; consequently, such frameworks struggle with ill-conditioned and deceptive problems. To solve this, we propose a new two-stage serial multi-method ensemble EA (TSEA) framework, whose main idea is to divide the optimization procedure into two stages: global convergence and exhaustive search.
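The adaptive allocation of computational budget among competing strategies, common to the ensemble methods above, can be sketched as follows. This is a minimal illustration, not the dissertation's exact formulation: the Laplace-smoothed success-rate update and the probability floor are assumptions chosen only to make the idea concrete.

```python
import random

def select_strategy(probs):
    """Roulette-wheel selection of a strategy index by probability."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off

def update_probs(successes, trials, floor=0.05):
    """Recompute selection probabilities from per-strategy success rates,
    keeping a small floor so no strategy is ever starved of evaluations
    (floor value is an illustrative assumption)."""
    # Laplace smoothing avoids zero rates for untried strategies.
    rates = [(s + 1.0) / (t + 2.0) for s, t in zip(successes, trials)]
    total = sum(rates)
    probs = [max(r / total, floor) for r in rates]
    norm = sum(probs)
    return [p / norm for p in probs]
```

A strategy that produces improving offspring more often thus receives a larger share of future evaluations, while the floor preserves the chance to rediscover a strategy that becomes useful in a later optimization stage.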
     Based on the TSEA framework, we propose a series of algorithm variants to handle general function optimization, large-scale optimization, multi-objective optimization, and dynamic multi-objective optimization problems. Besides the function-optimization experiments, TSEA shows significant advantages on real-world engineering and scientific optimization applications:
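The two-stage serial control flow can be sketched as below. Only the structure is taken from the text; the stage optimizers here (uniform random exploration, then Gaussian perturbation around the incumbent) are deliberately simple placeholders, not the components actually used inside TSEA.

```python
import random

def two_stage_minimize(f, dim, budget, split=0.5, bounds=(-5.0, 5.0)):
    """Serial two-stage minimization: a global-convergence stage runs
    first, then its best solution seeds a deeper local-search stage."""
    lo, hi = bounds
    best = [random.uniform(lo, hi) for _ in range(dim)]
    best_f = f(best)
    stage1 = int(budget * split)
    # Stage 1: coarse global exploration over the whole search space.
    for _ in range(stage1):
        cand = [random.uniform(lo, hi) for _ in range(dim)]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
    # Stage 2: exhaustive local search around the stage-1 incumbent.
    step = (hi - lo) * 0.01
    for _ in range(budget - stage1):
        cand = [x + random.gauss(0.0, step) for x in best]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f
```

In the dissertation's framework the split between the two stages is decided adaptively from the problem's behavior; the fixed `split` parameter here is a simplification.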
     1. Large-scale global optimization (with more than 10² variables): MUEDA and TSEA provide much better performance than the current state-of-the-art large-scale global optimization algorithms. As the difficulty level rises, the advantages of the proposed algorithms in effectiveness and efficiency become clearer. In the IEEE CEC 2008 and 2010 large-scale global optimization competitions, MUEDA and TSEA were among the best candidates.
     2. Large-scale ELD optimization: as an important and difficult optimization task in power systems, ELD has attracted wide attention from the research community; however, the performance of previous algorithms is still limited. To solve this, we apply SLPSO and a TSEA variant, ED-DE, to large-scale ELD problems. Compared with the current best ELD optimization algorithms, ED-DE breaks all best-solution records at lower computational cost.
     3. Digital IIR filter design: many methods have previously been applied to this optimization task, but they share two demerits: (1) they can only be applied to low-order filter design; (2) the filter is represented by floating-point numbers. These two aspects call for more effective design methods. This dissertation adopts a two-stage multi-method ensemble memetic algorithm (TSMA) to solve this problem and achieves remarkable progress. It is worth noting that TSMA shows especially reliable performance on the hard tasks, on which all other algorithms fail.
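To make the IIR design task concrete, the kind of objective an EA would minimize can be sketched as a magnitude-response error against a desired template. This is an illustrative assumption, not the dissertation's exact formulation: the coefficient layout (`b` numerator, `a` denominator with `a[0] = 1`), the ideal low-pass template, and the frequency grid are all choices made here for the sketch.

```python
import cmath
import math

def magnitude_response(b, a, w):
    """|H(e^{jw})| for H(z) = B(z)/A(z) at angular frequency w."""
    z = cmath.exp(1j * w)
    num = sum(bk * z ** (-k) for k, bk in enumerate(b))
    den = sum(ak * z ** (-k) for k, ak in enumerate(a))
    return abs(num / den)

def lowpass_error(b, a, cutoff=math.pi / 2, n_grid=64):
    """Sum of squared deviations from an ideal low-pass response,
    sampled on a uniform grid over [0, pi]."""
    err = 0.0
    for i in range(n_grid):
        w = math.pi * i / (n_grid - 1)
        desired = 1.0 if w <= cutoff else 0.0
        err += (magnitude_response(b, a, w) - desired) ** 2
    return err
```

An EA would evolve the coefficient vectors `b` and `a` to minimize `lowpass_error`; the fixed-point variant discussed above would additionally quantize the coefficients before evaluation, which is what degrades the search space.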
     Besides, the TSEA-based algorithm is applied to the dynamic multi-objective optimization domain, which previously attracted little attention, and makes significant progress. In summary, the TSEA framework shows promising performance in terms of effectiveness, efficiency, robustness, and universality, and is especially suitable for hard optimization tasks.
引文
[1] Thomas Weise. Global Optimization Algorithms– Theory and Application. http://www.it-weise.de/, Chem-nitz, Germany, 2009.
    [2] K. Deb. Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, New York, 2001.
    [3] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii.IEEE Transaction on Evolutionary Computation, 6(2):182–197, 2002.
    [4] N. Metropolis and S. Ulam. The monte carlo method. Journal of the American Statistical Association (Amer-ican Statistical Association), 44(247):93–100, 1949.
    [5] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400–407, 1951.
    [6] C. Blum and A. Roli. Metaheuristics in combinatorial optimization: Overview and conceptual comparison.ACM Computing Surveys, 35(3):268–308, 2003.
    [7] W. W. Bledsoe and I. Browning. Pattern recognition and reading by machine. In Eastern Joint ComputerConference (EJCC), pages 225–232, 1959.
    [8] H. J. Bremermann. Optimization through evolution and recombination. Self-Organizing systems, pages93–100, 1962.
    [9] R. M. Friedberg. A learning machine: Part i. IBM Journal of Research and Development, 2:93–100, 1958.
    [10] J. H. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor,1975.
    [11] F. Glover. Tabu search—part i. ORSA Journal on Computing, 1(3):190–206, 1989.
    [12] F. Glover. Tabu search—part ii. ORSA Journal on Computing, 2(1):4–32, 1990.
    [13] S. Kirkpatrick, C.D. Gelett, and M.P. Vecchi. Optimization by simulated annealing. Science, 220:621–630,1983.
    [14] J. Kennedy and R. C. Eberhart. Particle swarm optimization. In IEEE International Conference on NeuralNetworks, number 4, pages 1942–1948, 1995.
    [15] Charles Darwin. On the Origin of Species. John Murray, 1895.
    [16] George E. P. Box. Evolutionary operation: A method for increasing industrial productivity. Applied Statistics,6(2):81–101, 1957.
    [17] George E. P. Box and Norman R. Draper. Evolutionary operation. A statistical method for process improve-ment. Wiley Publication in Applied Statistics, 1969.
    [18] W. Spendley, G. R. Hext, and F. R. Himsworth. Sequential application of simplex designs in optimisation andevolutionary operation. Technometrics, 4(4):441–461, 1962.
    [19] John Ashworth Nelder and Roger A. Mead. A simplex method for function minimization. Computer Journal,7:308–313, 1965.
    [20] Kenneth Alan De Jong, David B. Fogel, and Hans-Paul Schwefel. A history of evolutionary computation.Institute of Physics Publishing (IOP), 2000.
    [21] L. J. Fogel, A. J. Owens, and M. J. Walsh. Artifical Intelligence through Simulated Evolution. New York:Wiley, 1966.
    [22] D. B.Fogel. SystemIdentification throughSimultated Evolution: A Machine Learning ApproachtoModeling.Needham Height: Ginn Press, 1991.
    [23] H. P. Schwefel. Kybernetische Evolution als Strategie der experimentellen Forschung derStromungstechnik.Diploma thesis, Technical University of Berlin, 1965.
    [24] Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prizipien der biologischen Evolu-tion. Stuttgart: Frommann-Holzboog, 1973.
    [25] H. P. Schwefel. Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie. Interdis-ciplinary systems research.
    [26] K. De Jong. An analysis of the behaviour of a class of genetic adaptive systems. Ph. D thesis, The Universityof Michigan, 1975.
    [27] J. J. Grefenstette. Optimization of control parmeters for genetic algorithms. IEEE Transactions on SystemsMan and Cybernetics, 16(1):122–128, 1986.
    [28] D. E. Goldberg. Simple genetic algorithms and the minimal, deceptive problem. Genetic algorithms andsimulated annealing.
    [29] D. E. Goldberg. Genetic algorithms in search, optimizationand machine learning. MA: Addison Wesley,1989.
    [30] R. Storn and K. Price. Differential evolution - a simple and efficient heuristic strategy for global optimizationover continuous spaces. Journal of Global Optimization, 11:341–359, 1997.
    [31] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer. Self-adapting control parameters in differentialevolution: A comparative study on numerical benchmark problems. IEEE Transaction Evolutionary Compu-tation, 10(6):646–657, 2006.
    [32] A. K. Qin, V. L. Huang, and P. N. Suganthan. Differential evolution algorithm with strategy adaptation forglobal numerical optimization. IEEE Transaction Evolutionary Computation, 13(2):398–417, 2009.
    [33] P. Larranaga and J. A. Lozano (Eds.). Estimation of Distribution Algorithms: A New Tool for EvolutionaryComputation. Kluwer, 2002.
    [34] Yu Wang, Bin Li, and T. Weise. Estimation of distribution and differential evolution cooperation for largescale economic load dispatch optimization of power systems. Information Sciences, 180:2405–2420, 2010.
    [35] Yu Wang, Bin Li, and Yunbi Chen. Digital iir filter design using multi-objective optimization evolutionaryalgorithm. Applied Soft Computing, 11:1851–1857, 2011.
    [36] Yu Wang, Bin Li, Thomas Weise, Jianyu Wang, Bo Yuan, and Qiongjie Tian. Self-adaptive learning basedparticle swarm optimization. Information Sciences, accepted in press.
    [37] Yu Wang and Bin Li. A self-adaptive mixed distribution based uni-variate estimation of distribution algorithmforlargescaleglobaloptimization. InR.Chiong, editor, Nature-InspiredAlgorithmsforOptimization, volume193 of Studies in Computational Intelligence, pages 171–198. Springer, 2009.
    [38] X. Yao and Y. Liu. Fast evolutionary programming. In the Fifth Annual Conference on Evolutionary Pro-gramming (EP96), pages 451–460, 1996.
    [39] X. Yao, Y. Liu, and G. Lin. Evolutionary programming made faster. IEEE Transaction on EvolutionaryComputation, 3(2):82–102, 1999.
    [40] X. Yao and Y. Liu. Fast evolution strategies. Control and Cybernetics, 26(3):467–496, 1997.
    [41] J. Horn, N. Nafpliotis, and D.E. Goldberg. A niched pareto genetic algorithm for multiobjective optimization.In Proceedings of the IEEE Congress on Evolutionary Computation (CEC1994), volume 1, pages 82–87,1994.
    [42] Yu Wang and Bin Li. Multi-strategy ensemble evolutionary algorithm for dynamic multi-objective optimiza-tion. Memetic Computing, 2(1):3–24, 2009.
    [43] David H. Wolpert and William G. Macready. No free lunch theorems for optimization. IEEE Transaction onEvolutionary Computation, 1(1):67–82, 1997.
    [44] B. A. Huberman, R. M. Lukose, and T. Hogg. An economics approach to hard computational problems.Science, 275(5296):51–54, 1997.
    [45] C. P. Gomes and B. Selmon. Algorithm portfolios. Artificial Intelligence, 126(1-2):43–62, 2001.
    [46] A. S. Fukunaga. Genetic algorithm portfolios. In Proceedings of the IEEE Congress on Evolutionary Com-putation (CEC2000), pages 16–19, 2000.
    [47] J. M. Pena, V. Robles, P. Larranaga, V. Herves, F. Rosales, and M. S. Perez. Ga-eda: Hybrid evolutionary al-gorithm using genetic and estimation of distribution algorithms. In Proceedings of Lecture Notes in ComputerScience, pages 361–371, 2004.
    [48] J. Sun, Q. F. Zhang, and E.Tsang. De/eda: A new evolutionary algorithm for global optimization. InformationSciences, 169:249–262, 2005.
    [49] Fei Peng, Ke Tang, Guoliang Chen, and Xin Yao. Population-based algorithm portfolios for numerical opti-mization. IEEE Transaction on Evolutionary Computation, accepted in press.
    [50] A. K. Qin and P. N. Suganthan. Self-adaptive differential evolution algorithm for numerical optimization. InProceedings of Congress on Evolutionary Computation (CEC2005), pages 1785–1791, 2005.
    [51] Jasper A. Vrugt, Bruce A. Robinson, and James M. Hyman. Self-adaptive multimethod search for globaloptimizationinreal-parameterspaces. IEEETransactiononEvolutionaryComputation,13(2):243–259,2009.
    [52] P. Moscato. Genetic algorithms and martial arts: Towards memetic algorithms. In Publication Report 790,1989.
    [53] C. Houck, J. Joines, and M. Kay. Utilizing lamarckian evolution and the baldwin effect in hybrid genetic algo-rithms. In NCSU-IE Technical Report 96- 01, Meta-Heuristic Research and Applications Group, Departmentof Industrial Engineering, North Carolina State University, 1996.
    [54] A.ViciniandD.Quagliarella. Airfoilandwingdesignusinghybridoptimizationstrategies. AmericanInstituteof Aeronautics and Astronautics Journal, 37(5):634–641, 1999.
    [55] Y. S. Ong and A. J. Keane. Meta-lamarckian learning in memetic algorithms. IEEE Transaction on Evolu-tionary Computation, 8(2):99–110, 2004.
    [56] N.KrasnogorandJ.Smith. Atutorialforcompetentmemeticalgorithms: model, taxonomy, anddesignissues.IEEE Transaction on Evolutionary Computation, 9(5):474–488, 2005.
    [57] A. Caponio, G. L. Cascella, F. Neri, N. Salvatore, and M. Sumne. A fast adaptive memetic algorithm foronline and offline control design of pmsm drives. IEEE Trans. Systems, Man, and Cybernetics, Part B, 37(1):28–41, 2007.
    [58] M.TangandX.Yao. Amemeticalgorithmforvlsifloorplanning. IEEETrans.Systems,Man,andCybernetics,Part B, 37(1):62–69, 2007.
    [59] S. Hasan, R. Sarker, D. Essam, and D. Cornforth. Memetic algorithms for solving job-shop scheduling prob-lems. Memetic Computing, 1(1):69–83, 2009.
    [60] W.E.Hart. AdaptiveGlobalOptimizationwithLocalSearch. PhDthesis,UniversityofCalifornia,SanDiego?1994.
    [61] N. K. Bambha, S. S. Bhattacharyya, J. Teich, and E. Zitzler. Systematic integration of parameterized localsearch in evolutionary algorithm. IEEE Transaction on Evolutionary Computation, 8(2):137–155, 2004.
    [62] Quang Huy Nguyen, Yew-Soon Ong, and Meng Hiot Lim. A probabilistic memetic framework. IEEE Trans-action on Evolutionary Computation, 13(3):604–623, 2009.
    [63] Y. S. Ong, N. Krasnogor, and H. Ishibuchi. Special issue on memetic algorithm. IEEE Trans. Systems, Man,and Cybernetics, Part B, 37(1):2–5, 2007.
    [64] Y. S. Ong, M. H. Lim, F. Neri, and H. Ishibuchi. Special issue on memetic algorithm. Special issue onemerging trends in soft computing: memetic algorithms, Soft Computing-A Fusion of Foundations, 13(8-9):1–2, 2009.
    [65] Y.S.Ong,M.H.Lim,N.Zhu,andK.W.Wong. Classificationofadaptivememeticalgorithms: Acomparativestudy. IEEE Trans. Systems, Man, and Cybernetics, Part B, 36(1):141–152, 2006.
    [66] Y. S. Ong, M. H. Lim, and X. S. Chen. Research frontier: Memetic computation - past, present & future. IEEEComputational Intelligence Magazine, 5(2):24–36, 2010.
    [67] P. Pardalos. Large-Scale Nonlinear Optimization. Springer, Netherlands, 2006.
    [68] K. Tang, X. Yao, P. N. Suganthan, C. MacNish, Y. P. Chen, C. M. Chen, and Z. Y. Yang. Benchmark functionsfor the cec2008 special session and competition on large scale global optimization. In Technical Report forIEEE Congress of Evolutionary Computation (CEC) special issue, 2008.
    [69] J.Brest,A.Zamuda,B.Boskovic,M.S.Maucec,andV.Zumer. High-dimensionalreal-parameteroptimizationusing self-adaptive differential evolution algorithm with population size reduction. In Proceding of IEEECongress on Evolutionary Computation (CEC2008), pages 2032–2039, 2008.
    [70] S. Hsieh, T. Sun, C. Liu, and S. Tsai. Solving large scale global optimization using improved particle swarmoptimizer. InProcedingofIEEECongressonEvolutionaryComputation(CEC2008),pages1777–1784,2008.
    [71] Y. Liu, X. Yao, Q. Zhao, and T. Higuchi. Scaling up fast evolutionary porgramming with cooperative coevo-lution. In Proceding of IEEE Congress on Evolutionary Computation (CEC2001), pages 1101–1108, 2001.
    [72] C. MaCnish and X. Yao. Direction matters in high-dimensional optimisation. In Proceding of IEEE Congresson Evolutionary Computation (CEC2008), pages 2377–2384, 2008.
    [73] A. M. Potter and K. A. D. Jong. Cooperative coevolution: An architecture for evolving coadapted subcom-ponents. Evolutionary Computation, 8(1):1–29, 2000.
    [74] Y. Shi, H. Teng, and Z. Li. Cooperative co-evolutionary differential evolution for function optimization. InProceding of IEEE Congress on Evolutionary Computation (CEC2005), pages 1080–1088, 2005.
    [75] Z.Yang,K.Tang,andX.Yao. Differentialevolutionforhigh-dimensionalfunctionoptimization. InProcedingof IEEE Congress on Evolutionary Computation (CEC2007), pages 3523–3530, 2007.
    [76] Z. Yang, K. Tang, and X. Yao. Multilevel cooperative coevolution for large scale optimization. In Procedingof IEEE Congress on Evolutionary Computation (CEC2008), pages 1663–1670, 2008.
    [77] Z. Yang, K. Tang, and X. Yao. Large scale evolutionary optimization using cooperative coevolution. Infor-mation Sciences, 178(15):2985–2999, 2008.
    [78] S. Zhao, J. J. Liang, P. N. Suganthan, and M. F. Tasgetiren. Dynamic multi-swarm particle swarm optimizerwith local search for large scale global optimization. In Proceding of IEEE Congress on Evolutionary Com-putation (CEC2008), pages 3845–3852, 2008.
    [79] L. Tseng and C. Chen. Multiple trajectory search for large scale global optimization. In Proceding of IEEECongress on Evolutionary Computation (CEC2008), pages 3052–3059, 2008.
    [80] Y. Wang and B. Li. A restart uni-variable estimation of distribution algorithms: Sampling under mixedgaussian and levy probability distribution. In Proceding of IEEE Congress on Evolutionary Computation(CEC2008), pages 3218–3925, 2008.
    [81] H. Szu and R. Hartley. Fast simulated annealing. Phys. Lett. A, 122(3,4):157–162, 1987.
    [82] M. Sebag and A. Ducoulombier. Extending population-based incremental learning to continuous searchspaces. In Proceding of Parallel Problem Solving from Nature (PPSN VI), pages 418–427, 1998.
    [83] S. Rudlof and M. Koppen. Stochastic hill climbing by vectors of normal distributions. In 1st OnlineWorkshopon Soft Computing (WSC1), pages 60–70, 1996.
    [84] S. Rudlof and M. Koppen. Optimization in continuous domains by learning and simulation of gaussian net-works. In Genetic and Evolutionary Computation Conf. Workshop Program, pages 201–204, 2000.
    [85] Q. Lu and X. Yao. Clustering and learning gaussian distribution for continuous optimization. IEEE Trans.Systems, Man and Cybernetics, 35(2):195–204, 2005.
    [86] P. A. N. Bosman. Design and application of iterated density-estimation evolutionary algorithms. Ph.D.dissertation of Utrecht Univ. TB Utrecht, Netherlands, 2003.
    [87] C.Y. Lee and X. Yao. Evolutionary programming using mutations based on the l′evy probability distribution.IEEE Transaction on Evolutionary Computation, 8(1):1–13, 2004.
    [88] P. L′evy. Theorie de l’Addition des Veriables Aleatoires. Paris, France, Gauthier-Villars, 1937.
    [89] T. Smith, P. Husbands, P. Layzell, and M. OShea. Fitness landscapes and evolvability. Evolutionary Compu-tation, 10(1):1–34, 2002.
    [90] A. Zamuda, J. Brest, B. Boskovic, and V. Zumer. Large scale global optimization using differential evo-lution with self adaptation and cooperative co-evolution. In Proceding of IEEE Congress on EvolutionaryComputation (CEC2008), pages 3719–3726, 2008.
    [91] Yu Wang, Bin Li, and Xuexiao Lai. Variance priority based cooperative co-evolution differential evolution forlarge scale global optimization. In Proceding of IEEE Congress on Evolutionary Computation (CEC2009),pages 1232–1239, 2009.
    [92] H. M¨uhlenbein, M. Schomisch, and J. Born. The parallel genetic algorithm as function optimizer. ParallelComputation, 17:25–49, 1991.
    [93] H. M¨uhlenbein and D. Schlierkamp-Voosen. Predictive models for breeder genetic algorithm. EvolutionaryComputation, 1(1):25–49, 1993.
    [94] S. Nema, J. Goulermas, G. Sparrow, and P. Cook. A hybrid particle swarm branch-and-bound (hpb) optimizerfor mixed discrete nonlinear programming. IEEE Transactions on Systems, Man, and Cybernetics, 38(6):1411–1427, 2008.
    [95] A.Cervantes, I.M.Galvan, andP.Isasi. Anewparticleswarmmethodfornearestneighborhoodclassification.IEEE Transactions on Systems, Man, and Cybernetics, 39(5):1082–1091, 2008.
    [96] H. W. Liu and J. Li. A particle swarm optimization-based multiuser detection for receive-diversity-aided stbcsystems. IEEE Sig. Proce. Let., 15:29–32, 2008.
    [97] F.-C. Chang and H.-C. Huang. A refactoring method for cache-efficient swarm intelligence algorithms. In-formation Sciences. doi: doi:10.1016/j.ins.2010.02.025.
    [98] A. Ratnaweera, S. Halgamuge, and H. Watson. Self-organizing hierarchical particle swarm optimizer withtime varying accelerating coefficients. IEEE Transaction on Evolutionary Computation, 8(3):240–255, 2004.
    [99] J. Kennedy and R. C. Eberhart. Swarm Intelligence. Morgan Kaufmann, 2001.
    [100] R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: Simpler, maybe better. IEEETransaction on Evolutionary Computation, 8(3):204–210, 2004.
    [101] T. Peram, K. Veeramachaneni, and C. K. Mohan. Fitness-distance-ratio based particle swarm optimization. InProceding of Swarm Intelligence Symp., pages 174–181, 2003.
    [102] F. van den Bergh and A. P. Engelbrecht. A cooperative approach to particle swarm optimization. IEEETransaction on Evolutionary Computation, 8(3):225–239, 2004.
    [103] Sheng-Ta Hsieh, Tsung-Ying Sun, Chan-Cheng Liu, and Shang-Jeng Tsai. Efficient population utilizationstrategy for particle swarm optimizer. IEEE Transaction on Systems, Man, and Cybernetics, 39(2):444–456,2009.
    [104] J. J. Liang, A.K. Qin, P. N. Suganthan, and S. Baskar. Comprehensive learning particle swarm optimizerfor global optimization of multimodal functions. IEEE Transaction on Evolutionary Computation, 10(3):281–295, 2006.
    [105] W. L. Du and B. Lin. Multi-strategy ensemble particle swarm optimization for dynamic optimization. Infor-mation Sciences, 178:3096–3109, 2008.
    [106] R. Mallipeddi, S. Mallipeddi, and P. N. Suganthan. Ensemble strategies with adaptive evolutionary program-ming. Information Sciences, 180:1571–1581, 2010.
    [107] R. Mallipeddi and P. N. Suganthan. Differential evolution algorithm with ensemble of populations for globalnumerical optimization. Opsearch, 46(2):184–213, 2009.
    [108] V. Robles, Jose M. Pe na, Pedro Larra naga, and Mar′?a S. P′?rez. Ga-eda: A new hybrid cooperative searchevolutionary algorithm. In I. Inza J.A. Lozano, P. Larra?naga and E. Bengoetxea, editors, Towards a NewEvolutionary Computation, volume 192 of Studies in Computational Intelligence, pages 187–219. Springer,2006.
    [109] Y.ShiandR.C.Eberhart. Amodifiedparticleswarmoptimizer. InProcedingof7thConf.Evol.Programming,pages 591–600, 1998.
    [110] F. van den Bergh and A. P. Engelbrecht. A study of particle swarm optimization particle trajectories. Infor-mation Sciences, 176:937–971, 2006.
    [111] Qiang Luo and Dongyun Yi. A co-evolving framework for robust particle swarm optimization. AppliedMathematics and Computation, 199:611–622, 2008.
    [112] PraveenKumarTripathi,SanghamitraBandyopadhyay,andSankarKumarPal. Multi-objectiveparticleswarmoptimization with time variant inertia and acceleration coefficients. Information Sciences, 177:5033–5049,2007.
    [113] Y. Shi and R. C. Eberhart. Particle swarm optimization with fuzzy adaptive inertia weight. In Proceding ofWorkshop Particle Swarm Optimization, pages 101–106, 2001.
    [114] K. E. Parsopoulos and M. N. Vrahatis. Parameter selection and adaptation in unified particle swarm optimiza-tion. Mathematical and Computer Modelling, 46:198–213, 2007.
    [115] K. E. Parsopoulos and M. N. Vrahatis. Upso—a unified particle swarm optimization scheme. In Procedingof Lecture Series on Computational Sciences, pages 868–873, 2004.
    [116] DeBaoChenandChunXiaZhao. Particleswarmoptimizationwithadaptivepopulationsizeanditsapplication.Applied Soft Computing, 9:39–48, 2009.
    [117] J.C.Culberson. Onthefutilityofblindsearchanalgorithmicviewofnofreelunch. EvolutionaryComputation,6(2):109–127, 1998.
    [118] Yuxin Zhao, Wei Zub, and Haitao Zeng. A modified particle swarm optimization via particle visual modelinganalysis. Computers and Mathematics with Applications, 57:2022–2029, 2009.
    [119] Xinchao Zhao. A perturbed particle swarm algorithm for numerical optimization. Applied Soft Computing,10:119–124, 2010.
    [120] M. Clerc and J. Kennedy. The particle swarm-explosion, stability, and convergence in a multidimensionalcomplex space. IEEE Transaction on Evolutionary Computation, 6(1):58–73, 2002.
    [121] J.Kennedy. Smallworldsandmega-minds: Effectsofneighborhoodtopologyonparticleswarmperformance.In Proceding of Congress on Evolutionary Computation, pages 1931–1938, 1999.
    [122] J. Kennedy and R. Mendes. Population structure and particle swarm performance. In Proceding of IEEECongress on Evolutionary Computation (CEC2002), pages 1671–1676, 2002.
    [123] P. N. Suganthan. Particle swarm optimizer with neighborhood operator. In Proceding of Congress on Evolu-tionary Computation, pages 1958–1962, 1999.
    [124] M. Lovbjerg and T. Krink. Extending particle swarm optimizers with self-organized criticality. In Procedingof IEEE Congress on Evolutionary Computation (CEC2002), pages 1588–1593, 2002.
    [125] C. K. Goh, K.C. Tan, D.S. Liu, and S.C. Chiam. A competitive and cooperative co-evolutionary approach tomulti-objective particle swarm optimization algorithm design. European Journal of Operational Research,202(1):42–54, 2010.
    [126] Qiang Luo and Dongyun Yi. A co-evolving framework for robust particle swarm optimization. AppliedMathematics and Computation, 199:42–54, 2008.
    [127] T. M. Blackwell and P. J. Bentley. Don’t push me! collision-avoiding swarms. In Proceding of IEEECongress on Evolutionary Computation (CEC2002), pages 1691–1696, 2002.
    [128] P. J. Angeline. Using selection to improve particle swarm optimization. In Proceding of IEEE Congress onEvolutionary Computation (CEC1998), pages 84–89, 1998.
    [129] V. Miranda and N. Fonseca. New evolutionary particle swarm algorithm (epso) applied to voltage/var control.In Proceding of 14th Power Syst. Comput. Conf., 2002.
    [130] Chia-Feng Juang and Chi-Yen Wang. A self-generating fuzzy system with ant and particle swarm cooperativeoptimization. Expert Systems with Applications, 36:5362–5370, 2009.
    [131] A. Kaveh and S. Talatahari. Particle swarm optimizer, ant colony strategy and harmony search scheme hy-bridized for optimization of truss structures. Computers and Structures, 87:267–283, 2009.
    [132] P. S. Shelokar, Patrick Siarry, V.K. Jayaraman, and B.D. Kulkarni. Particle swarm and ant colony algorithmshybridized for improved continuous optimization. Applied Mathematics and Computation, 188:129–142,2007.
    [133] Peng-YengYin, FredGlover, ManuelLaguna, andJia-XianZhu. Cyberswarmalgorithms-improvingparticleswarm optimization using adaptive memory strategies. European Journal of Operational Research, 201:377–389, 2010.
    [134] E. K. Burke and M. Hyde and G. Kendall and G. Ochoa and E. Ozcan and R. Qu. A survey of hyper-heuristics.Tech. Rep. NOTTCS-TR-SUB-0906241418-2747, School of Computer Science and Information Technology,University of Nottingham, 2009.
    [135] Y. Shi and R. C. Eberhart. A modified particle swarm optimizer. In Proceding of IEEE Congress on Evolu-tionary Computation (CEC1998), pages 69–73, 1998.
    [136] P.S. Manoharan, P. S. Kannan, S. Baskar, and M. W. Iruthayarajan. Penalty parameter-less constraint handlingscheme based evolutionary algorithm solutions to economic dispatch. IET Generation, Transmission andDistribution, 2(4):478–490, 2008.
    [137] C. Chiang. Improved genetic algorithm for power economic dispatch of units with valve-point effects andmultiple fuels. IEEE Transaction on Power Systems, 20(4):1690–1699, 2005.
    [138] N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. EvolutionaryComputation, 9(2):159–195, 2001.
    [139] A. Auger and N. Hansen. A restart cma evolution strategy with increasing population size. In IEEE Congresson Evolutionary Computation (CEC2005), pages 1769–1776, 2005.
    [140] A. Auger and N. Hansen. Performance evaluation of an advanced local search evolutionary algorithm. InIEEE Congress on Evolutionary Computation (CEC2005), pages 1777–1784, 2005.
    [141] J. J. Liang and P. N. Suganthan. Dynamic multi-swarm particle swarm optimizer with local search. In IEEECongress on Evolutionary Computation (CEC2005), pages 522–528, 2005.
    [142] D. Molina, F. Herrera, and M. Lozano. Adaptive local search parameters for real-coded memetic algorithms.In IEEE Congress on Evolutionary Computation (CEC2005), pages 888–895, 2005.
    [143] YuWang,BinLi,andZhengDongLi. Adaptivecooperativeco-evolutionforlargescaleglobaloptimization. In2010 IEEE Youth Conference on Information Computing and Telecommunications (YC-ICT), pages 178–181,2010.
    [144] Yu Wang and Bin Li. Two-stage based ensemble optimization for large-scale global optimization. In IEEE Congress on Evolutionary Computation (CEC10), pages 4488–4495, 2010.
    [145] Nasimul Noman and Hitoshi Iba. Accelerating differential evolution using an adaptive local search. IEEE Transactions on Evolutionary Computation, 12(1):107–125, 2008.
    [146] V. D. Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley. Particle swarm optimization: Basic concepts, variants and applications in power systems. IEEE Transactions on Evolutionary Computation, 12(2):171–195, 2008.
    [147] T. Aruldoss, A. Victoire, and A. Jeyakumar. Reserve constrained dynamic dispatch of units with valve-point effects. IEEE Transactions on Power Systems, 20(3):1273–1282, 2005.
    [148] D. C. Walter and G. B. Sheble. Genetic algorithm solution of economic dispatch with valve point loading. IEEE Transactions on Power Systems, 8:1325–1332, 1993.
    [149] J. H. Park, Y. S. Kim, I. K. Eom, and K. Y. Lee. Economic load dispatch for piecewise quadratic cost function using Hopfield neural network. IEEE Transactions on Power Systems, 8:1030–1038, 1993.
    [150] P. Attaviriyanupap, H. Kita, E. Tanaka, and J. Hasegawa. A hybrid EP and SQP for dynamic economic dispatch with nonsmooth fuel cost function. IEEE Transactions on Power Systems, 17(2):411–416, 2002.
    [151] Z. Gaing. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Transactions on Power Systems, 18(3):1187–1195, 2003.
    [152] A. El-Gallad, M. El-Hawary, A. Sallam, and A. Kalas. Particle swarm optimizer for constrained economic dispatch with prohibited operating zones. In 2002 IEEE Canadian Conf. Elect. Comput. Eng., pages 78–81, 2002.
    [153] A. Kumar, K. Dhanushkodi, J. Kumar, and C. Paul. Particle swarm optimization solution to emission and economic dispatch problem. In Conf. Convergent Technologies for Asia-Pacific Region (TENCON-2003), pages 435–439, 2003.
    [154] L. Lai, T. Nieh, Y. Ma, Y. Lu, Y. Yang, and H. Braun. Particle swarm optimization for economic dispatch of units with non-smooth input-output characteristic functions. In IEEE Intell. Syst. Appl. Power Conf., pages 499–503, 2005.
    [155] R. Pancholi and K. Swarup. Particle swarm optimization for security constrained economic dispatch. In Int. Conf. Intell. Sensing Inf., pages 7–12, 2004.
    [156] J. B. Park, K. S. Lee, J. R. Shin, and K. Y. Lee. A particle swarm optimization for economic dispatch with nonsmooth cost functions. IEEE Transactions on Power Systems, 20(1):34–42, 2005.
    [157] L. S. Coelho and V. C. Mariani. Combining of chaotic differential evolution and quadratic programming for economic dispatch optimization with valve-point effect. IEEE Transactions on Power Systems, 21(2):989–996, 2006.
    [158] K. T. Chaturvedi, M. Pandit, and Laxmi Srivastava. Self-organizing hierarchical particle swarm optimization for nonconvex economic dispatch. IEEE Transactions on Power Systems, 23(3):1079–1087, 2008.
    [159] N. Sinha, R. Chakrabati, and P. K. Chattopadhyay. Evolutionary programming techniques for economic load dispatch. IEEE Transactions on Evolutionary Computation, 7(1):83–94, 2003.
    [160] T. A. A. Victoire and A. E. Jeyakumar. Hybrid PSO-SQP for economic dispatch with valve-point effect. Elect. Power Syst. Res., 71:51–59, 2004.
    [161] D. M. Etter, M. J. Hicks, and K. H. Cho. Recursive adaptive filter design using an adaptive genetic algorithm. In IEEE Int. Conf. ASSP, pages 635–638, 1982.
    [162] C. T. Chen. One-Dimensional Digital Signal Processing. New York: Marcel Dekker, 1979.
    [163] Jinn-Tsong Tsai, Jyh-Horng Chou, and Tung-Kuan Liu. Optimal design of digital IIR filters by using hybrid Taguchi genetic algorithm. IEEE Transactions on Industrial Electronics, 53(3):867–879, 2006.
    [164] Hunsoo Choo, Khurram Muhammad, and Kaushik Roy. Complexity reduction of digital filters using shift inclusive differential coefficients. IEEE Transactions on Signal Processing, 52(6):1760–1772, 2004.
    [165] Yu Yang and Xinjie Yu. Cooperative coevolutionary genetic algorithm for digital IIR filter design. IEEE Transactions on Industrial Electronics, 54(3):1811–1819, 2007.
    [166] Miki Haseyama and Daiki Matsuura. A filter coefficient quantization method with genetic algorithm, including simulated annealing. IEEE Signal Process. Lett., 13(4):189–192, 2006.
    [167] A. Tarczynski, E. Hermanowicz, G. D. Cain, and M. Rojewski. A WISE method for designing IIR filters. IEEE Transactions on Signal Processing, 49(7):1421–1432, 2001.
    [168] S. P. Harris and E. C. Ifeachor. Automatic design of frequency sampling filters by hybrid genetic algorithm techniques. IEEE Transactions on Signal Processing, 46(12):3304–3314, 1998.
    [169] K. S. Tang, K. F. Man, S. Kwong, and Z. F. Liu. Design and optimization of IIR filter structure using hierarchical genetic algorithms. IEEE Transactions on Industrial Electronics, 45(3):481–487, 1998.
    [170] H. Y. F. Lam. Analog and Digital Filters: Design and Realization. NJ: Prentice-Hall, 1979.
    [171] N. Karaboga, A. Kalinli, and D. Karaboga. Designing IIR filters using ant colony optimisation algorithm. J. Eng. Appl. Artif. Intell., 17(3):301–309, 2004.
    [172] Chaohua Dai, Weirong Chen, and Yunfang Zhu. Seeker optimization algorithm for digital IIR filter design. IEEE Transactions on Industrial Electronics, 57(5):1710–1718, 2010.
    [173] A. Kalinli and N. Karaboga. Artificial immune algorithm for IIR filter design. J. Eng. Appl. Artif. Intell., 18(5):919–929, 2005.
    [174] A. Kalinli and N. Karaboga. A new method for adaptive IIR filter design based on tabu search algorithm. Int. J. Electron. Commun., 59(2):111–117, 2005.
    [175] D. J. Krusienski and W. K. Jenkins. Design and performance of adaptive systems based on structured stochastic optimization. IEEE Circuits Syst. Mag., 5(1):8–20, 2005.
    [176] K. Deb. http://www.iitk.ac.in/kangal/code/new_nsga/nsga2code.tar.
    [177] P. Amato and M. Farina. An alife-inspired evolutionary algorithm for dynamic multiobjective optimization problems. Adv Soft Comput., 1:113–125, 2005.
    [178] M. Annunziato, I. Bertini, A. Pannicelli, and S. Pizzuti. Evolutionary control and optimization: an industrial application for combustion processes. In EUROGEN, pages 367–372, 2001.
    [179] M. Farina, P. Amato, and K. Deb. Dynamic multi-objective optimization problems: test cases, approximations and applications. IEEE Transactions on Evolutionary Computation, 8(5):425–442, 2004.
    [180] Y. Jin and J. Branke. Evolutionary optimization in uncertain environments—a survey. IEEE Transactions on Evolutionary Computation, 9(3):303–317, 2005.
    [181] M. Bhattacharya and G. Lu. A dynamic approximate fitness-based hybrid EA for optimization problems. In IEEE Congress on Evolutionary Computation (CEC03), pages 1879–1886, 2003.
    [182] T. Blackwell and J. Branke. Multi-swarm optimization in dynamic environments. Appl Evol Comput, 3005:489–500, 2004.
    [183] T. Blackwell and J. Branke. Multiswarms, exclusion, and anti-convergence in dynamic environments. IEEE Transactions on Evolutionary Computation, 10(4):459–472, 2006.
    [184] C. Darwin. On the Origin of Species. John Murray, 1859.
    [185] J. Branke and H. Schmeck. Designing evolutionary algorithms for dynamic optimization problems. In theoryand applications of evolutionary computation, recent trends, pages 239–262. Springer, Heidelberg, 2002.
    [186] J. Branke. Memory enhanced evolutionary algorithms for changing optimization problems. In IEEE Congress on Evolutionary Computation (CEC99), pages 1875–1882, 1999.
    [187] S. Yang. Population-based incremental learning with memory scheme for changing environments. In Genet Evol Comput Conf, pages 711–718, 2005.
    [188] S. Yang and X. Yao. Population-based incremental learning with associative memory for dynamic environments. IEEE Transactions on Evolutionary Computation, 12(5):542–561, 2008.
    [189] J. D. Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In 1st International Conference on Genetic Algorithms, pages 93–100, 1985.
    [190] C. Coello, D. Van Veldhuizen, and G. Lamont. Evolutionary algorithms for solving multi-objective problems. Kluwer, Norwell, 2002.
    [191] V. L. Huang, A. K. Qin, P. N. Suganthan, and M. F. Tasgetiren. Multi-objective optimization based on self-adaptive differential evolution. In IEEE Conference on Evolutionary Computation (CEC08), pages 25–28, 2007.
    [192] E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: improving the strength Pareto evolutionary algorithm. Technical Report 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH), 2001.
    [193] J. Knowles and D. Corne. The Pareto archived evolution strategy: a new baseline algorithm for multiobjective optimization. In IEEE Conference on Evolutionary Computation (CEC99), pages 98–105, 1999.
    [194] K. Deb, M. Mohan, and S. Mishra. Evaluating the epsilon-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evolutionary Computation, 13(4):501–525, 2005.
    [195] M. Emmerich, N. Beume, and B. Naujoks. An EMO algorithm using the hypervolume measure as selection criterion. In Third International Conference on Evolutionary Multi-Criterion Optimization (EMO 2005), pages 62–76, 2005.
    [196] Q. F. Zhang, A. M. Zhou, and Y. C. Jin. RM-MEDA: a regularity model-based multiobjective estimation of distribution algorithm. IEEE Transactions on Evolutionary Computation, 12(1):41–63, 2008.
    [197] Q. F. Zhang and H. Li. MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6):712–731, 2007.
    [198] Y. Wang and B. Li. FH-MOEA: multi-objective evolutionary algorithm based on fast hyper-volume contribution approach. J Univ Sci Technol China, 38(7):802–809, 2008.
    [199] H. Ishibuchi, T. Yoshida, and T. Murata. Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Transactions on Evolutionary Computation, 7(2):204–223, 2003.
    [200] S. Kukkonen and J. Lampinen. Performance assessment of generalized differential evolution 3 (GDE3) with a given set of problems. In IEEE Conference on Evolutionary Computation (CEC07), pages 25–28, 2007.
    [201] E. Zitzler, L. Thiele, M. Laumanns, and C. M. Fonseca. Performance assessment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary Computation, 7(2):117–132, 2003.
    [202] P. N. Suganthan. Performance assessment on multi-objective optimization algorithms. In IEEE Conference on Evolutionary Computation special session, competition on performance assessment of multi-objective optimization algorithms, 2007.
