Research on Image Fusion Algorithms Based on Multiscale Empirical Mode Decomposition
Abstract
With the wide use of diverse image sensors in military and civilian applications, fusion techniques that combine multiple images into a single image are of growing research importance. The variety of sensor modalities, the large volume of image data, and the complexity of image features pose many difficulties and challenges for image fusion. This thesis analyzes three aspects of the image fusion problem in depth: multiscale decomposition algorithms, synthesis (fusion) rules for the decomposed representations, and quality assessment of fused images. The main contributions are as follows:
     (1) An adaptive, coordinated empirical mode decomposition algorithm (AC-EMD) is proposed. AC-EMD is a fully data-driven decomposition that addresses the poor adaptivity of the multiscale decompositions currently used in image fusion. During decomposition, the source images to be fused are decomposed, adaptively and in mutual coordination according to their content, into a series of physically meaningful intrinsic mode function (IMF) images and a group of trend (residual) images. The AC-EMD multiscale representation characterizes images better than pyramid and wavelet decompositions.
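     To make the sifting step concrete, the following is a minimal sketch of one EMD level applied to a grayscale NumPy image. The morphological envelope estimate, the window size, and the number of sifting passes are illustrative assumptions, and the mutual coordination between several source images that distinguishes AC-EMD is not reproduced here.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, uniform_filter

def emd_level(image, window=7, n_sift=5):
    """Split a grayscale image into one IMF (detail) image and a residue (trend) image."""
    original = np.asarray(image, dtype=float)
    h = original.copy()
    for _ in range(n_sift):
        upper = uniform_filter(grey_dilation(h, size=window), size=window)  # smoothed upper envelope
        lower = uniform_filter(grey_erosion(h, size=window), size=window)   # smoothed lower envelope
        h -= 0.5 * (upper + lower)        # one sifting pass: remove the local mean surface
    return h, original - h                # (IMF, residue); repeat on the residue for further scales
```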
     (2) Two pyramidal empirical mode decomposition algorithms are proposed. The first, PEMD, has a pyramid structure: it combines the AC-EMD decomposition with the Laplacian pyramid and thereby effectively reduces the redundancy of the image decomposition. The second, EMD-CT, combines AC-EMD with the contourlet decomposition; it brings together the adaptivity of AC-EMD, the data structure of a pyramidal decomposition, and the multidirectional analysis of the contourlet transform, so that the resulting representation is both less redundant and directional.
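     The pyramidal data structure that cuts the redundancy can be sketched as below. A Gaussian blur stands in for the AC-EMD sifting at each level, so this is only the Laplacian-pyramid skeleton under that assumption, not the PEMD algorithm itself, and the contourlet directional filtering of EMD-CT is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pyramid_levels(image, levels=3, sigma=1.0):
    """Decompose an image into band-pass detail layers plus a small coarse trend image."""
    details, current = [], np.asarray(image, dtype=float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)[::2, ::2]                      # reduce: blur + decimate
        expanded = zoom(low, 2, order=1)[:current.shape[0], :current.shape[1]]
        details.append(current - expanded)                                   # IMF-like band-pass layer
        current = low                                                        # recurse on the decimated image
    return details, current
```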
     (3) A fusion rule combining principal component analysis (PCA) with consistency checking is proposed; it copes with source images that carry unequal amounts of information and suppresses local inconsistencies in the fused image. A region-segmentation-based fusion rule is also proposed; by exploiting the properties of target regions in the image, it resolves the inconsistency and discontinuity of the fused result more thoroughly. The former rule is more efficient to run, while the latter yields fused images of higher quality.
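     A hedged sketch of the first rule is given below, assuming two registered layers from the same decomposition level: PCA of the two inputs supplies the weights for the trend images, and a 3x3 median vote acts as the consistency check on the choose-max decision map for the IMF layers. The exact weighting and neighbourhood test used in the thesis may differ.

```python
import numpy as np
from scipy.ndimage import median_filter

def fuse_trend_pca(a, b):
    """PCA-weighted average of the two low-frequency trend images."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])          # leading eigenvector -> relative weights
    w = w / w.sum()
    return w[0] * a + w[1] * b

def fuse_detail_consistent(a, b):
    """Choose-max on detail (IMF) layers, followed by a 3x3 consistency check."""
    pick_a = (np.abs(a) >= np.abs(b)).astype(float)
    pick_a = median_filter(pick_a, size=3)        # flip isolated decisions to agree with neighbours
    return np.where(pick_a > 0.5, a, b)
```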
     (4) Two quality assessment metrics for fused images that require no reference image are proposed: one based on Renyi entropy and one based on structural similarity. The former exploits the advantages of Renyi entropy when computing the mutual information between the fused image and the input images, and accounts for the effect of overlapping mutual information on the metric; the latter considers not only the structural similarity between the fused image and the input images, but also the effect of the structural similarity among the input images themselves.
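     The structural-similarity-style metric can be illustrated as follows. The global Wang-Bovik quality index stands in for the windowed structural similarity, and how the source-to-source term q_ab enters the final score is deliberately left open, since the thesis formula is not reproduced here.

```python
import numpy as np

def quality_index(x, y):
    """Global Wang-Bovik universal image quality index between two images."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2) + 1e-12)

def fusion_quality(a, b, fused, lam=0.5):
    """Source-to-fused similarities plus the source-to-source term; combining them into
    one score depends on the specific metric."""
    q_af = quality_index(a, fused)
    q_bf = quality_index(b, fused)
    q_ab = quality_index(a, b)                    # similarity among the inputs themselves
    return lam * q_af + (1 - lam) * q_bf, q_ab
```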
