Research on Pixel-Level Multi-Sensor Image Fusion Methods
Abstract
With the rapid development of image sensors and image processing technology in recent years, the practical value of image fusion has grown steadily, and its use has gradually expanded from defense into civilian applications. Multi-sensor and multi-spectral imagery is now used increasingly in remote sensing, situational awareness, reconnaissance, all-weather surveillance, medical diagnosis, weapon systems, robotics, and many other fields. Image fusion therefore faces ever broader application prospects, which underscores the significance and urgency of research in this area.
     This dissertation addresses the fusion of infrared, visible-light, and other multi-sensor images for applications such as scene surveillance and target detection and recognition. Starting from the goals of improving scene understanding and enabling fast, accurate target inspection and identification, it investigates pixel-level fusion in depth in order to obtain effective processing and analysis methods. The two principal objectives are to design new fusion methods that effectively enhance target features in the source images while producing good visual quality, and to devise fast fusion algorithms that satisfy the real-time requirements of image fusion systems. The main research content is as follows:
     1. The multi-scale transforms commonly used in image fusion are surveyed comprehensively, and multi-scale analysis methods, together with their advantages and limitations for image fusion, are examined from the perspective of sparse signal representation. Focusing on how the redundancy and shift invariance of a multi-scale transform affect fusion quality, 16 fusion algorithms built on different types of multi-scale transforms are compared quantitatively in experiments, and guidelines for applying multi-scale transforms in fusion algorithm design are summarized.
     2. Existing multi-scale fusion algorithms devote careful fusion rules to the detail coefficients but usually combine the approximation coefficients with a simple mean or weighted average. Because the approximation coefficients carry the main energy distribution of the source images, such simple rules reduce the brightness and contrast of targets in the fused image and allow the higher-intensity source image to suppress or drown out the target features and texture details of the other, ultimately degrading the visual quality of the fused image and the detectability of its targets. To address this problem, an approximation-coefficient fusion rule based on brightness remapping is proposed. Experimental results show that by jointly considering the intensity and contrast of the source images' approximation coefficients, the rule effectively strengthens the contribution of target features and texture details from the weaker source image and clearly increases the dynamic range of the fused image and the intensity of its target features.
     3. Limited by sensor physics or affected by natural conditions, source images often exhibit low contrast, a narrow gray-level range, and blurred visual quality, which degrades the fused image. To address this, mathematical morphology is combined with scale-space theory to construct a multi-scale top-hat transform, and an enhancement-oriented fusion algorithm based on it is proposed. The algorithm extracts bright and dark detail features from the source images with the multi-scale top-hat transform and flexibly combines the features at each scale according to the application's needs to form the fused image. Experimental results show that the algorithm enhances targets and details simultaneously during fusion, so that target-background contrast and texture detail in the fused image surpass those of the source images, and that fused images with different degrees of enhancement can be obtained as the application requires.
     4. To meet the needs of real-time systems, a fast mutual-modulation fusion algorithm combining weighted sums and products is proposed. Each of the two source images is amplified by a coefficient determined by the ratio of corresponding pixel energies, an offset term derived from the images' statistical parameters is added to each, and the two results are then multiplied and normalized to obtain the fused image. Experiments show that the algorithm combines the merits of additive and multiplicative modulation; it is simple, fast, well suited to real-time use, and its parameters are adaptive, forming a nonlinear mutual-modulation fusion process. Its fusion quality and efficiency are superior to wavelet- and pyramid-based algorithms, and it is applicable to multi-sensor imagery such as infrared and visible image fusion and medical image fusion.
     5. The past 15 years of research on color fusion (colorization) of multi-sensor images are reviewed and a general framework for colorization fusion algorithms is given. On this basis, a colorization fusion algorithm for night-vision low-light and infrared images in the YCbCr color space is proposed. The algorithm constructs the Y component with the mutual-modulation fusion method and builds the Cb and Cr components directly from the source images, quickly producing a pseudo-color image with rich colors and strong contrast; after applying color transfer, a false-color image is obtained that is rich in detail, has high target-background contrast, and matches the natural color distribution of the scene. The colorization process combines the pseudo-color and false-color stages and can therefore serve different application needs. Because it uses the fast mutual-modulation fusion and constructs the color components directly, the algorithm is efficient, its parameters are adaptive, and it meets real-time requirements.
     The research on fusion algorithms in this dissertation centers on two main goals: target feature enhancement and real-time performance. A multi-scale top-hat transform is constructed and applied to image fusion, achieving simultaneous enhancement during the fusion process; the proposed fast mutual-modulation fusion algorithm suits applications with demanding real-time requirements; and a fast colorization fusion algorithm built on mutual-modulation fusion combines pseudo-color and false-color stages to serve different application needs. These results have significant theoretical and practical value for research and applications of multi-sensor image fusion in situational awareness, night-vision surveillance, and target detection and tracking.
With the development of sensor and image processing technology in recent years, the practical applicability of image fusion has steadily increased, and the technique has spread from defense applications to civilian use. In application areas such as remote sensing, situational awareness, intelligence gathering, all-weather surveillance, medical diagnostics, military systems, and robotics, the widespread use of multi-sensor and multi-spectral imagery has raised the importance of image fusion, which now shows ever broader application prospects.
     This dissertation focuses on the theory and algorithms of multi-sensor image fusion, in particular infrared and visible imagery, which is widely used in situational awareness, surveillance, and target detection and tracking. Drawing on progress in image analysis and image understanding, it investigates pixel-level fusion in depth in order to obtain effective processing and analysis methods. The aim is to find better ways to fuse multi-sensor images that enhance target features during the fusion process and yield fused images with good visual quality, as well as to meet the needs of real-time fusion systems. The main research work and achievements are as follows.
     1. Multi-scale transforms commonly used in image fusion are reviewed and analyzed comprehensively, and their advantages and disadvantages are examined from the perspective of sparse signal representation. The shift dependency of various multi-scale transforms and its effect on fusion performance are then investigated quantitatively and qualitatively. Experiments combine 8 popular multi-scale transforms, including pyramid, wavelet, and multi-scale geometric analysis methods, with two popular fusion rules. By comparing the experimental results, the dissertation offers guidance for designing multi-scale-based fusion schemes.
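     As a concrete illustration of the kind of scheme compared in this study, the following is a minimal sketch of a generic discrete-wavelet-transform fusion in Python, assuming the PyWavelets package; the average rule for approximation coefficients, the maximum-absolute-value rule for detail coefficients, the db2 wavelet, and the three-level decomposition are illustrative choices, not the dissertation's exact configuration.

```python
# Minimal sketch of a generic DWT-based fusion scheme (illustrative only):
# approximation coefficients are averaged, detail coefficients are selected
# by maximum absolute value. Wavelet and level are arbitrary choices.
import numpy as np
import pywt

def dwt_fusion(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)

    fused = [(ca[0] + cb[0]) / 2.0]               # approximation: simple average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip((ha, va, da), (hb, vb, db))))
    return pywt.waverec2(fused, wavelet)
```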
     2. Most multi-scale fusion algorithms devote elaborate fusion rules to the detail coefficients but combine the approximation coefficients with simple rules such as the mean or a weighted average. Because the approximation coefficients represent the energy distribution of the source images, such simple rules reduce the brightness and contrast of the fused image, allowing the higher-intensity source image to suppress or drown out the other's target characteristics and texture detail, and thus degrading the visual quality of the fused image and the detectability of its targets. To solve this problem, the dissertation presents an approximation-coefficient fusion rule based on brightness remapping, with the curvelet transform as the multi-scale method. Experiments show that, because the rule takes into account the intensity and contrast characteristics of the source images, it effectively strengthens the target characteristics and texture detail contributed by the weaker source image and significantly improves the fused image's dynamic range and target feature intensity.
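     The abstract does not spell out the remapping itself, so the following is only a rough sketch of the underlying idea, under the assumption that each source's approximation band is stretched to a shared dynamic range and then weighted by its contrast before combination; the linear stretch and standard-deviation weights are assumptions, not the dissertation's rule.

```python
# Rough sketch (assumed form) of a brightness-remapping rule for the
# approximation coefficients of two source images.
import numpy as np

def remap_and_fuse_approx(a_coef, b_coef, eps=1e-12):
    lo = min(a_coef.min(), b_coef.min())
    hi = max(a_coef.max(), b_coef.max())

    def stretch(c):  # remap band to the shared range [lo, hi]
        return lo + (c - c.min()) * (hi - lo) / (c.max() - c.min() + eps)

    a_r, b_r = stretch(a_coef), stretch(b_coef)
    wa, wb = a_r.std(), b_r.std()                 # contrast-based weights (assumed)
    return (wa * a_r + wb * b_r) / (wa + wb + eps)
```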
     3. Limited by sensor physics or affected by natural conditions, source images often present low contrast, a narrow intensity range, or blurred visual quality, which in turn reduces the quality of the fused image. To enhance fused images within the fusion process itself, the dissertation proposes a novel fusion algorithm using a multi-scale top-hat transform. Multi-scale bright and dark salient features of the source images are extracted iteratively by top-hat transforms with structuring elements of the same shape and increasing sizes; these features are then combined by a fusion rule, and the enhanced fused image is obtained by weighting the bright and dark features according to the application's requirements. Experiments on infrared and visible images and other multi-sensor imagery from different applications verify that the algorithm fuses and enhances the salient features of the source images efficiently and simultaneously, producing better visual quality and better target detection and identification capability. In addition, different degrees of enhancement can be produced to suit different application requirements.
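     A minimal sketch of this idea follows, assuming SciPy's grayscale morphology routines; the per-pixel maximum selection across scales and sources, the averaged base image, and the weights w_bright and w_dark are assumptions for illustration rather than the dissertation's exact fusion rule.

```python
# Illustrative sketch of multi-scale top-hat feature extraction and
# enhancement fusion (assumed fusion rule, not the dissertation's exact one).
import numpy as np
from scipy import ndimage

def tophat_enhanced_fusion(img_a, img_b, sizes=(3, 7, 11),
                           w_bright=1.0, w_dark=1.0):
    a, b = img_a.astype(float), img_b.astype(float)
    bright = np.zeros_like(a)
    dark = np.zeros_like(a)
    for s in sizes:                                   # same shape, increasing size
        bright = np.maximum(bright,
                            np.maximum(ndimage.white_tophat(a, size=s),
                                       ndimage.white_tophat(b, size=s)))
        dark = np.maximum(dark,
                          np.maximum(ndimage.black_tophat(a, size=s),
                                     ndimage.black_tophat(b, size=s)))
    base = (a + b) / 2.0                              # coarse background estimate
    return base + w_bright * bright - w_dark * dark   # enhance bright, suppress dark
```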
     4. To meet the requirements of real-time fusion systems, the dissertation proposes a fast mutual modulation fusion (FMMF) algorithm for multi-sensor images. First, each of the two source images is scaled by a factor derived from the ratio of the corresponding pixel energies; an offset obtained from the statistical parameters of the source images is then added to each; finally, the two results are multiplied and normalized to give the fused image. The process combines addition and multiplication and is therefore a nonlinear mutual-modulation fusion. Experimental results show that FMMF is simple and fast, and that its performance and efficiency are superior to pyramid- and wavelet-based fusion.
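     The sketch below illustrates the mutual-modulation idea described above: per-pixel gains from an energy ratio, offsets from image statistics, then a product normalized back to display range. The exact definitions of the energy, gain, and offset terms are not given in the abstract, so those used here (squared intensity, ratio-based gains, image means) are assumptions.

```python
# Rough sketch of a mutual-modulation fusion step (assumed gain/offset terms).
import numpy as np

def fmmf_sketch(img_a, img_b, eps=1e-6):
    a, b = img_a.astype(float), img_b.astype(float)
    ea, eb = a ** 2, b ** 2                      # per-pixel "energy" (assumed)
    gain_a = ea / (ea + eb + eps)                # ratio-based modulation factors
    gain_b = eb / (ea + eb + eps)
    offset_a, offset_b = a.mean(), b.mean()      # offsets from image statistics
    fused = (gain_a * a + offset_a) * (gain_b * b + offset_b)
    fused -= fused.min()                         # normalize to [0, 255]
    return 255.0 * fused / (fused.max() + eps)
```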
     5. The dissertation reviews the past 15 years of research on night-vision multi-sensor image coloration (rendering night-vision imagery in color) and gives a general coloration framework. On this basis, a new coloration method using fast mutual modulation fusion (FMMF) and color transfer is designed for low-light and infrared image pairs, working in the YCbCr color space. First, the image fused by FMMF, which merges the information of the source images, is assigned to the Y channel; the Cb and Cr channels are then built using Toet's method, which extracts the common component of the source images. Finally, a false-color image is obtained by applying color transfer to this pseudo-color YCbCr image. Experiments show that the results carry more salient information, higher color contrast, and a more natural color appearance than those of other methods. Because FMMF is used and the parameters are adaptive, the coloration process is efficient and meets real-time requirements.
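     A simplified sketch of this pipeline follows, reusing the hypothetical fmmf_sketch function from the previous sketch for the Y channel. The difference-based Cb/Cr construction and the Reinhard-style mean/standard-deviation color transfer to a daytime reference image are assumptions; the dissertation's exact channel definitions (Toet's common-component method) may differ.

```python
# Sketch of the YCbCr coloration pipeline (assumed chroma mapping and
# Reinhard-style color transfer); reuses fmmf_sketch from the earlier sketch.
import numpy as np

def colorize_sketch(lowlight, ir, reference_ycbcr):
    y = fmmf_sketch(lowlight, ir)                        # luminance from FMMF
    ll, tir = lowlight.astype(float), ir.astype(float)
    cb = 128.0 + (tir - ll) / 2.0                        # assumed chroma mapping
    cr = 128.0 + (ll - tir) / 2.0
    pseudo = np.stack([y, cb, cr], axis=-1)              # pseudo-color YCbCr image

    # Color transfer: match channel-wise mean and std of a natural daytime
    # reference image given in YCbCr, yielding the false-color result.
    out = np.empty_like(pseudo)
    for c in range(3):
        src, ref = pseudo[..., c], reference_ycbcr[..., c].astype(float)
        out[..., c] = (src - src.mean()) / (src.std() + 1e-6) * ref.std() + ref.mean()
    return out
```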
     The research in this dissertation aims to enhance the fused image and to meet the needs of real-time fusion systems. It proposes a fusion algorithm based on the multi-scale top-hat transform that enhances target features during the fusion process; the fast mutual modulation fusion (FMMF) algorithm suits real-time systems; and, after establishing a general coloration framework, it proposes a new coloration method that combines FMMF and color transfer for low-light and infrared image pairs. These methods have important theoretical and practical value in research and application areas such as situational awareness, all-weather surveillance, and target detection and tracking.
