Image Denoising and Fusion Based on Local Features in the Wavelet Domain
Abstract
This thesis analyzes and develops local image features in the spatial and transform domains and applies them to wavelet-domain image denoising and image fusion. The work addresses several problems: detail blurring in denoised images caused by misalignment of wavelet coefficients across scales; the inefficiency of the bottom-up brute-force search for regular image geometry in the Bandlet transform; the bleaching effect that appears when a low-light visual image is enhanced by an exponential function of a registered infrared image; and the relationships among the available definitions of contrast (saliency measures) in the wavelet domain and their influence on the performance of fusion algorithms. Solutions to these problems are presented, including new concepts, definitions, and algorithms. Experimental results on simulated and real images verify the performance of the proposed methods and compare them with existing methods of the same kind. The main contributions are as follows:
     1. As image quality indexes, both UIQI and SSIM ignore local background lightness when computing contrast similarity, so their objective quality scores can deviate from the human visual perception of real contrast. To fix this, a comprehensive image quality index is proposed that emphasizes agreement between subjective and objective evaluation and accounts for the sensitivity of the human visual system (HVS) to contrast changes and the perception of details. The resulting index agrees with human visual perception more closely than the original UIQI and SSIM.
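As an illustrative sketch only (the thesis's actual index is not reproduced here), a contrast-similarity term that normalizes the local standard deviation by the local background luminance, Weber-style, could look like the following; the function name `weber_contrast_similarity`, the window size, and the stabilizing constant `c` are all assumptions made for the sketch:

```python
import numpy as np

def local_stats(img, win=8):
    """Mean and std over non-overlapping win x win blocks."""
    h, w = img.shape
    img = img[: h - h % win, : w - w % win]
    blocks = img.reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return blocks.mean(axis=(2, 3)), blocks.std(axis=(2, 3))

def weber_contrast_similarity(x, y, win=8, c=1e-3):
    """Compare contrasts normalized by local background luminance
    (Weber-style), instead of the raw std used by UIQI/SSIM."""
    mx, sx = local_stats(x, win)
    my, sy = local_stats(y, win)
    cx = sx / (mx + c)            # local Weber contrast of x
    cy = sy / (my + c)            # local Weber contrast of y
    sim = (2 * cx * cy + c) / (cx**2 + cy**2 + c)
    return float(sim.mean())
```

Because the normalization divides by the local mean, the same absolute intensity variation counts as stronger contrast over a dark background than over a bright one, which is the behavior the abstract attributes to the HVS.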
     2. The original second-generation Bandlet transform performs a bottom-up brute-force search for regular image geometry, which wastes computation on regions that contain no geometry. Exploiting the relation between the total variation of an image, viewed as a two-dimensional function, and the length of object boundaries, a top-down search strategy is devised that avoids unnecessary partitioning and geometry searching. The key observation concerns the total variation of a region: if it is zero, the region contains no geometry, so neither segmentation nor a geometry search is needed, and homogeneous or constant regions gain nothing from further partitioning. Because wavelet coefficients give a sparse representation of images, such homogeneous and zero regions are guaranteed to exist, so the time complexity of the Bandlet transform is reduced.
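A minimal sketch of the top-down pruning idea, assuming a dyadic square image and a simple anisotropic total-variation test (the function names and the `tv_tol` tolerance are invented for illustration, and the actual Bandlet geometry search is not shown):

```python
import numpy as np

def total_variation(block):
    """Discrete (anisotropic) total variation of a block."""
    dx = np.abs(np.diff(block, axis=0)).sum()
    dy = np.abs(np.diff(block, axis=1)).sum()
    return dx + dy

def top_down_partition(img, min_size=4, tv_tol=0.0):
    """Return leaf blocks (y, x, size) that still need a geometry search.
    Blocks whose TV is ~0 (constant / empty regions) are pruned early,
    so no bottom-up brute-force search is spent on them."""
    leaves = []
    def recurse(y, x, size):
        block = img[y:y + size, x:x + size]
        if total_variation(block) <= tv_tol:
            return                       # flat region: no geometry inside
        if size <= min_size:
            leaves.append((y, x, size))  # candidate for a geometry search
            return
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                recurse(y + dy, x + dx, h)
    recurse(0, 0, img.shape[0])
    return leaves
```

In this toy version, quadrants whose total variation is zero are discarded in a single test, so only blocks that actually cross an edge survive to the expensive geometry-search stage.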
     3. When images are denoised by wavelet (multiresolution) methods, the inter- and intra-scale dependencies of the wavelet coefficients are usually captured by statistical models such as Markov chains, Markov trees, and Gaussian mixture distributions. A non-statistical alternative, based on pointwise interscale prediction and intrascale interpolation, sidesteps the statistical properties of the coefficients entirely: by predicting the positions of large coefficients across scales, SURE-based threshold denoising reduces to solving a deterministic linear system. This interscale prediction, however, has at least two flaws. First, because of successive downsampling, a large coefficient at a fine scale may disappear at a coarse scale. Second, owing to the Gibbs effect, large coefficients at different scales need not occupy the same positions: a large fine-scale coefficient need not correspond to a large coarse-scale coefficient at the same location, and vice versa. In either case, pixel-wise prediction blurs image features after thresholding, and the direct consequence is the loss of weak texture details during denoising. Borrowing the idea of local geometric flow from the Bandlet transform, this thesis instead predicts coefficients across scales along directional geometric flows. Even when an individual coefficient vanishes under downsampling or loses its positional correspondence across scales, the geometric element to which its pixel belongs persists, and the interscale correspondence of that element does not change with scale. In other words, the scale-invariant correspondence between directional geometric flows resolves the blurring of textures and other fine details after denoising.
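A toy one-dimensional sketch of flow-based interscale prediction, under the assumption that a precomputed `flow` index map tells each child-scale coefficient which parent position its geometric element maps to (the thesis works with 2-D directional flows and a SURE-optimized threshold; everything here is simplified for illustration):

```python
import numpy as np

def flow_predicted_threshold(child, parent, flow, t):
    """Soft-threshold child-scale coefficients, but protect those whose
    flow-predicted parent is large. `flow` maps each child index to the
    parent index of the geometric element it lies on (toy 1-D version)."""
    parent_pred = np.abs(parent[flow])      # predicted parent magnitude
    keep = parent_pred > t                  # feature survives across scales
    out = np.sign(child) * np.maximum(np.abs(child) - t, 0.0)
    out[keep] = child[keep]                 # don't shrink flow-aligned detail
    return out
```

The point of the sketch is the `flow` lookup: a weak fine-scale coefficient whose geometric element has a strong parent elsewhere (shifted by downsampling or the Gibbs effect) is preserved, whereas a same-position pixel-wise prediction would shrink it to zero.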
     4. An intensity transformation function for infrared images is presented and used for context enhancement of visual images, upon which a new image fusion method in the shift-invariant wavelet domain is developed. The function behaves like a sigmoid: guided by the local characteristics of the infrared histogram, it stretches the differences among dark pixels while compressing those among bright pixels. This adjustment prevents both the artificial bright pixels introduced when the visual image is later enhanced and the bleaching effect in the final fused image, which are caused by mapping very dark infrared pixels through the exponential enhancement function. The key observation behind the algorithm is that the bleaching effect originates from the dark pixels that cold objects produce in the infrared image; the resulting fused images retain the appearance of the optical domain, which suits human visual perception, without the bleaching that obscures targets and their boundaries.
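A hedged sketch of such a dark-stretching, sigmoid-like remap on a normalized IR image in [0, 1]; the `pivot` and `gain` parameters are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def stretch_dark_ir(ir, pivot=0.25, gain=8.0):
    """Sigmoid-like remap of a normalized IR image in [0, 1]:
    expands differences among dark pixels (cold objects) and compresses
    the bright range, before the IR image is used to enhance the visual
    image -- intended to avoid the bleaching that very dark IR pixels
    cause through an exponential enhancement map."""
    s = 1.0 / (1.0 + np.exp(-gain * (ir - pivot)))
    s0 = 1.0 / (1.0 + np.exp(gain * pivot))         # value at ir = 0
    s1 = 1.0 / (1.0 + np.exp(-gain * (1 - pivot)))  # value at ir = 1
    return (s - s0) / (s1 - s0)                      # rescale back to [0, 1]
```

Placing the sigmoid's steep region at the dark end of the range (`pivot` well below 0.5) is what gives dark pixels a gain above 1 and bright pixels a gain below 1.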
     5. Fusing images by selecting transform coefficients according to a contrast (saliency) measure within a multiresolution analysis has become a popular research topic. However, among the known definitions of contrast, some ignore the local background lightness of image features, some do not discuss how the size of the region window affects the computed contrast, and some overlook the fact that fusing the low-pass approximation coefficients also requires a contrast criterion rather than simple averaging of the source coefficients. These omissions lead to fused images with reduced brightness, dominance of infrared information, loss of texture details, and blurred edges. After a comprehensive comparison and analysis of the available definitions, a contrast based on the statistics of transform coefficients within a region window is defined in the shift-invariant wavelet domain and used to build two new fusion schemes that let the stronger-contrast features of the source images enter the fused image. Experimental results confirm the quality and stability of the images fused by the proposed schemes.
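A simplified sketch of window-based, background-normalized contrast selection for one detail subband (the thesis's actual contrast definition and its two fusion schemes are more elaborate; the helper `box_mean`, the window radius `r`, and `eps` are assumptions made here):

```python
import numpy as np

def box_mean(a, r=1):
    """Mean over a (2r+1)^2 window via zero-padded summation."""
    p = np.pad(a, r)
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def fuse_detail(d1, d2, a1, a2, r=1, eps=1e-6):
    """Select detail coefficients by a window-based contrast: local
    detail energy normalized by the local background (approximation)
    luminance, so features over dark backgrounds still compete fairly."""
    c1 = box_mean(np.abs(d1), r) / (box_mean(np.abs(a1), r) + eps)
    c2 = box_mean(np.abs(d2), r) / (box_mean(np.abs(a2), r) + eps)
    return np.where(c1 >= c2, d1, d2)
```

Computing the measure over a window rather than per coefficient, and dividing by the approximation-band luminance, addresses the two omissions the abstract criticizes: window-size sensitivity and missing local background lightness.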