Research on Multi-Sensor Image Fusion Algorithms Based on Multiscale Decomposition
Abstract
Image fusion is an important branch of multi-sensor information fusion and a key technique in image understanding and computer vision. It is the process of combining multiple images of the same scene into a single image that describes the scene more completely and accurately than any individual source image. The fused image can provide more effective information for further processing such as image segmentation, object detection and recognition, and battle damage assessment. Image fusion has been widely applied in remote sensing, military applications, robotics, medical imaging, and other fields.
     This dissertation studies multi-sensor image fusion algorithms based on multiscale decomposition. To address the fact that most existing fusion algorithms do not take the intrinsic characteristics of the source images into account, prior information such as the imaging mechanism of the sensors and the imaging characteristics of the source images is analyzed in depth, and several fusion algorithms adapted to these characteristics are proposed on the basis of multiscale geometric analysis tools such as the redundant wavelet transform and the nonsubsampled contourlet transform (NSCT).
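All of the proposed algorithms follow the same multiscale fusion pipeline: decompose each source image, merge the coefficients of every subband with a rule suited to that subband, and invert the transform. The sketch below illustrates this pipeline with a simplified à-trous-style redundant decomposition standing in for the redundant wavelet and nonsubsampled contourlet transforms studied in the dissertation; the helper names and example rules are illustrative, not the exact formulations from the thesis.

```python
# Minimal sketch of the shared multiscale fusion pipeline (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, levels=3):
    """Redundant (undecimated) decomposition: detail planes plus a final approximation."""
    details, approx = [], img.astype(float)
    for k in range(levels):
        smooth = gaussian_filter(approx, sigma=2 ** k)  # coarser smoothing at each level
        details.append(approx - smooth)                 # high-frequency plane at level k
        approx = smooth
    return details, approx                              # sum(details) + approx == img

def fuse(img_a, img_b, low_rule, high_rule, levels=3):
    """Decompose both sources, merge each subband with the given rule, reconstruct."""
    det_a, app_a = decompose(img_a, levels)
    det_b, app_b = decompose(img_b, levels)
    fused_details = [high_rule(da, db) for da, db in zip(det_a, det_b)]
    fused_approx = low_rule(app_a, app_b)
    return fused_approx + np.sum(fused_details, axis=0)

# Example rules: average the approximations, keep the larger-magnitude detail coefficient.
average_rule = lambda a, b: 0.5 * (a + b)
abs_max_rule = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
# fused = fuse(img_a, img_b, average_rule, abs_max_rule)
```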
     The main contributions of this dissertation are summarized as follows:
     1. To address the ringing artifacts caused by the shift variance of the orthogonal discrete wavelet transform, a grey-scale multifocus image fusion algorithm based on the redundant wavelet transform is proposed. Because a defocused optical system acts as a low-pass filter, the focused and defocused regions of the source images can be distinguished by their high-frequency detail. On this basis, a region vector norm and a local contrast measure are introduced in the redundant wavelet transform domain and used as the selection criteria for the low-frequency and high-frequency subband coefficients, respectively (both measures are sketched after this list). The algorithm preserves the useful information of the source images, suppresses the ringing artifacts produced by orthogonal-wavelet-based fusion, and yields a fused image that is in focus throughout.
     2. Exploiting the multiscale, multidirectional, and shift-invariant properties of the nonsubsampled contourlet transform, an NSCT-based image fusion framework is proposed, together with two NSCT-based algorithms for fusing infrared and visible images that take the imaging characteristics of the two modalities into account. The first is window-based: the low-frequency subband coefficients are selected according to local energy and local variance, and the high-frequency directional subband coefficients according to local directional contrast (sketched after this list); it combines the thermal target information of the infrared image with the rich spectral information of the visible image. The second is based on region segmentation: two measures, the region energy ratio and the region sharpness ratio, characterize the salience of each region and guide the selection of the fusion coefficients in the NSCT domain. Because correlated pixels are treated as a whole during fusion, this algorithm achieves better fusion performance than pixel-based and window-based algorithms.
     3. After analyzing the spectral distortion that arises in fused remote sensing images, a fusion algorithm for multispectral and panchromatic images based on a region correlation coefficient in the NSCT domain is proposed. Following the idea of region-based fusion, a region correlation coefficient is defined; the source images are first partitioned into regions with different spatial characteristics, and different fusion rules are applied according to the degree of correlation between the multispectral and panchromatic images in each region (sketched after this list). The algorithm achieves a good balance between spectral fidelity and spatial resolution: the fused multispectral image shows reduced spectral distortion and improved spatial detail while preserving the salient features of the original multispectral image.
     4. For the fusion of SAR and panchromatic images, a fusion algorithm based on the imaging characteristics of the SAR image is presented. Using region information entropy and the region mean ratio as a joint measure in the NSCT domain, the SAR image is divided into rough regions, smooth regions, and bright point-target regions, and a different fusion rule is applied to each type of region (sketched after this list). The fused image incorporates SAR target information that is difficult to identify in the panchromatic image while preserving the spatial resolution of the panchromatic image.
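For contribution 1, the two activity measures can be read as a windowed L2 norm for the low-frequency coefficients and a detail-to-background contrast for the high-frequency coefficients. The sketch below reflects that reading and is not necessarily the exact formulation used in the dissertation.

```python
# Illustrative versions of the two measures from contribution 1.
import numpy as np
from scipy.ndimage import uniform_filter

def region_vector_norm(coeff, size=3):
    """L2 norm of the coefficients inside each size-by-size window."""
    return np.sqrt(uniform_filter(coeff.astype(float) ** 2, size=size) * size ** 2)

def local_contrast(detail, approx, size=3, eps=1e-6):
    """High-frequency magnitude relative to the local low-frequency background."""
    background = uniform_filter(np.abs(approx.astype(float)), size=size)
    return np.abs(detail) / (background + eps)

def select_by_measure(coeff_a, coeff_b, measure_a, measure_b):
    """Keep, pixel by pixel, the coefficient whose activity measure is larger."""
    return np.where(measure_a >= measure_b, coeff_a, coeff_b)
```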
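For contribution 2, a plausible form of the window-based NSCT fusion rules is sketched below; how local energy and local variance are combined, and the window sizes, are assumptions rather than the thesis's exact scheme.

```python
# Illustrative window-based rules for the infrared/visible fusion of contribution 2.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(coeff, size=3):
    """Mean squared coefficient value over a size-by-size window."""
    return uniform_filter(coeff.astype(float) ** 2, size=size)

def local_variance(coeff, size=3):
    """Windowed variance: E[x^2] - (E[x])^2."""
    c = coeff.astype(float)
    return uniform_filter(c ** 2, size=size) - uniform_filter(c, size=size) ** 2

def local_directional_contrast(band, lowpass, size=3, eps=1e-6):
    """Directional detail magnitude relative to the local low-frequency mean."""
    return (uniform_filter(np.abs(band.astype(float)), size=size) /
            (uniform_filter(np.abs(lowpass.astype(float)), size=size) + eps))

def fuse_lowpass(low_ir, low_vis, size=3):
    """One plausible combination: keep the coefficient with the larger energy-plus-variance salience."""
    sal_ir = local_energy(low_ir, size) + local_variance(low_ir, size)
    sal_vis = local_energy(low_vis, size) + local_variance(low_vis, size)
    return np.where(sal_ir >= sal_vis, low_ir, low_vis)

def fuse_directional(band_ir, band_vis, low_ir, low_vis, size=3):
    """Keep the directional coefficient with the larger local directional contrast."""
    c_ir = local_directional_contrast(band_ir, low_ir, size)
    c_vis = local_directional_contrast(band_vis, low_vis, size)
    return np.where(c_ir >= c_vis, band_ir, band_vis)
```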
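For contribution 3, the region correlation coefficient can be sketched as a per-region Pearson correlation between the multispectral intensity component and the panchromatic image; the threshold that switches between fusion rules is illustrative.

```python
# Illustrative region correlation measure for contribution 3.
import numpy as np

def region_correlation(ms_intensity, pan, labels):
    """Pearson correlation between the two images inside each labelled region."""
    corr = {}
    for region_id in np.unique(labels):
        a = ms_intensity[labels == region_id].astype(float)
        b = pan[labels == region_id].astype(float)
        if a.std() == 0 or b.std() == 0:        # degenerate, perfectly flat region
            corr[region_id] = 0.0
        else:
            corr[region_id] = float(np.corrcoef(a, b)[0, 1])
    return corr

def rule_for_region(corr_value, high=0.8):
    """Illustrative threshold only: inject panchromatic detail where the region is
    strongly correlated with the panchromatic image, otherwise favour spectral fidelity."""
    return "inject_pan_detail" if corr_value >= high else "preserve_spectral"
```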
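For contribution 4, the region classification can be sketched with region information entropy separating rough (textured) from smooth regions and the region-to-global mean ratio flagging bright point targets; the thresholds below are illustrative, not the values used in the thesis.

```python
# Illustrative SAR region classification for contribution 4.
import numpy as np

def region_entropy(pixels, bins=64):
    """Shannon entropy (bits) of the grey-level histogram inside one region."""
    hist, _ = np.histogram(pixels, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def classify_sar_region(sar, labels, region_id,
                        entropy_thresh=4.0, mean_ratio_thresh=2.0):
    """Label one segmented region of the SAR image. Thresholds are illustrative."""
    pixels = sar[labels == region_id].astype(float)
    mean_ratio = pixels.mean() / (sar.mean() + 1e-6)   # region mean vs. global mean
    if mean_ratio > mean_ratio_thresh:
        return "point_target"   # bright, strongly scattering target
    if region_entropy(pixels) > entropy_thresh:
        return "rough"          # textured region carrying SAR-specific detail
    return "smooth"             # homogeneous region
```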
