Research on Multi-source Image Fusion Methods in the Transform Domain
Abstract
Multi-source image fusion combines multiple images of the same scene or target into a single new image in order to obtain a more accurate and complete description of that scene or target. The fusion process exploits the redundant and complementary information contained in the different source images, so that the fused result offers higher reliability, less blur, and better intelligibility. As a result, the fused image is better suited to human visual perception and to computer processing tasks such as detection, classification, and recognition.
     Multi-source image fusion methods fall into two broad categories: spatial-domain fusion and transform-domain fusion. Transform-domain methods exploit the strength of multi-scale, multi-resolution analysis in representing the local characteristics of a signal, and thereby compensate for the limited detail representation of spatial-domain methods. However, because these methods modify the transform coefficients, poorly chosen fusion rules can cause loss of source-image information.
     This thesis focuses on multi-source image fusion in the transform domain. With scene understanding and target recognition as the guiding goals, it analyzes the theory and state-of-the-art methods of transform-domain image fusion, and seeks new methods that retain more of the useful information in the source images while effectively improving the quality of the fused image. The main contents are as follows:
     (1) Starting from the hierarchical structure of image fusion, the basic procedure and methods of multi-source image fusion are discussed, and the main evaluation metrics of fusion quality, together with the principles for selecting them, are summarized. Transform-domain fusion techniques based on multi-resolution analysis are then analyzed, and the fusion results of several typical transform-domain methods are compared experimentally.
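Two of the objective quality metrics commonly used in fusion evaluation of the kind surveyed above are information entropy and average gradient. The following is a minimal sketch of both for 8-bit grayscale images; it is illustrative only and does not reproduce the thesis's full metric set or selection principles.

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image; higher values generally
    indicate that the fused image carries more information."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean gradient magnitude: a simple proxy for the amount of
    detail and edge sharpness preserved by fusion."""
    g = img.astype(float)
    gx = np.diff(g, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(g, axis=0)[:, :-1]   # vertical differences
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2.0).mean())
```

A fused image that scores higher on both metrics than either source image is, by these criteria, carrying more information and more detail.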
     (2) Building on the theory of multi-source image fusion and the nonsubsampled Contourlet transform (NSCT), a multi-focus image fusion method based on regional characteristics is proposed. To capture the edge and detail information of the source images effectively, the images are first decomposed at multiple scales with the NSCT; fusion rules are then applied to the resulting subband coefficients according to their regional characteristics and degree of closeness. The fusion results of this method are better than those of traditional spatial-domain methods and of pixel-level transform-domain methods.
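The core idea of a region-characteristic fusion rule can be sketched as follows, using local variance as the regional feature and choosing, per position, the coefficient from the locally sharper source. This is a simplified illustration operating on two plain coefficient arrays; the thesis's actual method works on NSCT subbands and also uses a closeness measure, neither of which is reproduced here.

```python
import numpy as np

def local_variance(coeff, radius=1):
    """Variance over a (2*radius+1)^2 neighbourhood of each position,
    computed with reflective padding at the borders."""
    c = coeff.astype(float)
    pad = np.pad(c, radius, mode='reflect')
    h, w = c.shape
    k = 2 * radius + 1
    win = np.stack([pad[i:i + h, j:j + w]
                    for i in range(k) for j in range(k)])
    return win.var(axis=0)

def fuse_by_region_variance(a, b, radius=1):
    """Region-feature rule: at each position, keep the coefficient
    whose neighbourhood has the larger variance, i.e. the source that
    is locally richer in detail (better focused)."""
    va, vb = local_variance(a, radius), local_variance(b, radius)
    return np.where(va >= vb, a, b)
```

In a multi-focus setting, the in-focus source produces higher local variance around edges and texture, so its coefficients win the selection there.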
     (3) Transform-domain multi-source image fusion methods are studied in combination with the pulse coupled neural network (PCNN). For multi-focus image fusion, the NSCT is used to capture the feature information of the source images, and fusion rules for the NSCT subband coefficients are derived from the neighbourhood closeness of the PCNN firing maps. For infrared and visible image fusion, the neuron linking strength is defined by a self-constrained restrictive function, which is introduced into the PCNN iteration; the NSCT subband coefficients are then fused on this basis. The proposed methods effectively improve the quality of the fused images for both multi-focus and infrared-visible image pairs.
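The firing map driving the fusion rules above can be illustrated with a simplified PCNN. In the sketch below each neuron is fed its (normalized) stimulus, linked to neighbours that fired on the previous step, and compared against a decaying dynamic threshold; the cumulative firing count is the firing map. The parameter values and this particular simplified model are illustrative assumptions, not the thesis's settings, and the sketch runs on a raw intensity array rather than NSCT subbands.

```python
import numpy as np

def neighbour_sum(y):
    """Sum of each pixel's 8-neighbourhood (zero padding at borders)."""
    p = np.pad(y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] +                p[1:-1, 2:] +
            p[2:, :-2] +  p[2:, 1:-1] +  p[2:, 2:])

def pcnn_firing_map(stimulus, beta=0.2, v_theta=20.0, a_theta=0.2, iters=10):
    """Simplified PCNN: feeding input = normalized stimulus; linking
    input = neighbours' previous pulses; a neuron fires when its
    internal activity exceeds a dynamic threshold, which then recharges.
    Returns the cumulative firing count (the firing map)."""
    s = stimulus.astype(float)
    s = s / (s.max() + 1e-12)
    y = np.zeros_like(s)            # pulse output
    theta = np.ones_like(s)         # dynamic threshold
    fire_count = np.zeros_like(s)
    for _ in range(iters):
        link = neighbour_sum(y)
        u = s * (1.0 + beta * link)                      # internal activity
        y = (u > theta).astype(float)                    # pulse generation
        theta = np.exp(-a_theta) * theta + v_theta * y   # decay + recharge
        fire_count += y
    return fire_count
```

Brighter or better-focused regions fire earlier and more often, so comparing the firing maps of two sources in a neighbourhood gives a natural way to select subband coefficients.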
     (4) The multi-objective optimization problem in transform-domain multi-source image fusion is studied. Based on an analysis of multi-objective optimization theory and algorithms, an adaptive differential evolution algorithm is proposed. Using an adaptive mutation factor, a dynamic crossover probability function, and an elitist sorting strategy, the algorithm achieves both good search capability and good convergence. Applied to the multi-objective optimization of NSCT-based multi-source image fusion, it provides an effective solution to the comprehensive-evaluation problem of the fusion process.
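A minimal sketch of differential evolution with generation-dependent control parameters is given below (DE/rand/1/bin with greedy selection). The linear schedules for the mutation factor and crossover probability are assumed illustrative forms, not the thesis's adaptive functions, and a simple scalar objective stands in for the composite fusion-quality objective.

```python
import numpy as np

def adaptive_de(objective, bounds, pop_size=20, gens=100, seed=0):
    """DE/rand/1/bin minimizer. The mutation factor decays over the
    generations (explore early, refine late) and the crossover
    probability grows -- a simple stand-in for the adaptive schemes
    discussed in the text. Greedy one-to-one selection keeps elites."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for g in range(gens):
        f = 0.9 - 0.5 * g / gens     # adaptive mutation factor
        cr = 0.5 + 0.4 * g / gens    # dynamic crossover probability
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + f * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True  # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            tf = objective(trial)
            if tf <= fit[i]:                 # greedy elitist selection
                pop[i], fit[i] = trial, tf
    best = int(fit.argmin())
    return pop[best], float(fit[best])
```

In the fusion setting, each candidate vector would encode fusion-rule parameters and the objective would aggregate several quality metrics, so that the evolved optimum balances them.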
