Research on Infrared and Visible Image Fusion Algorithms
Abstract
With the development of image fusion technology, infrared and visible-light sensors are being used in an ever wider range of applications, which in turn places higher demands on fusion algorithms for infrared and visible images. First, the images obtained from sensors keep growing in size, which creates substantial storage and computation challenges for pixel-level fusion; compressed sensing theory offers one direction for addressing this problem. Second, image acquisition is inevitably disturbed by external factors, so the captured images may suffer from reduced contrast and degraded visual quality; a fusion algorithm is therefore expected not only to merge complementary information and remove redundant information, but also to improve the contrast and visual quality of the fused image. Third, quality assessment of fused images not only judges their visual quality but also measures the performance of fusion algorithms, and so provides guidance and reference for improving and refining them.
     The thesis studies these three topics in detail. The main results are as follows:
     1. Multiscale analysis and compressed sensing theory are studied, and a compressed-sensing fusion method for infrared and visible images based on discrete undersampling is proposed. The method combines the wavelet transform with the Contourlet transform to obtain a sparser representation of the images; guided by the distribution of the decomposition coefficients, it designs a discrete dual-radial sampling pattern and appropriate fusion rules, and finally reconstructs the fused coefficients with nonlinear conjugate gradient compressed-sensing recovery, which improves the quality of the fused image. (A simplified code sketch of this pipeline is given after the abstract.)
     2. To address the reduced contrast and degraded visual quality that can arise during image acquisition, a contrast-enhancing fusion method for infrared and visible images is studied. The method uses bilateral filtering to remove the halo artifacts produced by the subband-decomposed multiscale Retinex method during image enhancement; applying the improved method to infrared and visible image fusion completes the fusion while raising the contrast of the fused image and enhancing its detail. (A sketch of the bilateral-filter Retinex step follows the abstract.)
     3. To address the mismatch between the scores of traditional objective quality metrics for fused images and subjective visual perception, an objective fusion-quality metric consistent with human visual characteristics is proposed. The metric introduces contrast sensitivity as a weight in the summation of the visual information fidelity computation, which removes the tendency of visual information fidelity to assign overly high scores to enhanced images. Experiments show that the metric agrees with subjective evaluation and is simple to compute, so it has practical value. (A sketch of the contrast-sensitivity weighting idea follows the abstract.)
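The following Python sketch illustrates the measure-fuse-reconstruct structure described in contribution 1. It is a minimal stand-in, not the thesis implementation: a Haar wavelet, random per-scale masks and a zero-filled inverse transform take the place of the combined wavelet-Contourlet representation, the discrete dual-radial sampling pattern and the nonlinear conjugate gradient recovery used in the thesis.

# Illustrative stand-in only: Haar DWT, random per-scale masks and a zero-filled
# inverse instead of the thesis's wavelet-Contourlet dictionary, dual-radial
# sampling pattern and nonlinear conjugate gradient reconstruction.
import numpy as np
import pywt

def measure(img, wavelet="haar", level=3, keep=(1.0, 0.5, 0.3, 0.2), seed=0):
    """Decompose the image and keep only a subset of coefficients per scale."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    rng = np.random.default_rng(seed)           # same seed -> same mask for both inputs
    mask = np.zeros(arr.shape, dtype=bool)
    mask[slices[0]] = True                      # keep all approximation coefficients
    for lvl, bands in enumerate(slices[1:], start=1):
        r = keep[min(lvl, len(keep) - 1)]       # fewer samples at finer scales
        for sl in bands.values():               # the detail sub-bands at this level
            mask[sl] = rng.random(arr[sl].shape) < r
    return arr * mask, slices

def fuse_cs(ir, vis, wavelet="haar"):
    """Fuse two registered images from their sub-sampled transform coefficients."""
    y_ir, slices = measure(ir, wavelet)
    y_vis, _ = measure(vis, wavelet)
    fused = np.where(np.abs(y_ir) >= np.abs(y_vis), y_ir, y_vis)   # max-abs fusion rule
    coeffs = pywt.array_to_coeffs(fused, slices, output_format="wavedec2")
    # the thesis reconstructs with nonlinear conjugate gradient CS recovery;
    # here the unsampled coefficients are simply left at zero before inversion
    return pywt.waverec2(coeffs, wavelet)

Sharing one sampling mask between the two source images keeps their retained coefficients aligned, so the fusion rule can compare them position by position.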
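The next sketch shows the enhancement idea in contribution 2: a multiscale Retinex whose illumination estimate comes from an edge-preserving bilateral filter rather than a Gaussian surround, which is what suppresses halo artifacts around strong edges. OpenCV's bilateralFilter and the scale settings below are illustrative assumptions, not the thesis's exact subband-decomposed pipeline.

# Illustrative parameters; the thesis applies the bilateral filter inside a
# subband-decomposed multiscale Retinex before fusion.
import numpy as np
import cv2

def bilateral_retinex(img, sigmas_space=(5, 20, 60), sigma_color=25.0, eps=1.0):
    """img: single-channel image with values in [0, 255]."""
    img = img.astype(np.float32)
    log_i = np.log(img + eps)
    out = np.zeros_like(img)
    for sigma_space in sigmas_space:
        # edge-preserving illumination estimate at this surround scale
        # (d=-1 lets OpenCV derive the window size from sigma_space)
        illum = cv2.bilateralFilter(img, -1, sigma_color, sigma_space)
        out += log_i - np.log(illum + eps)      # reflectance component at this scale
    out /= len(sigmas_space)
    # stretch back to an 8-bit display range
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255.0 * out).astype(np.uint8)

# the enhanced infrared and visible images would then be passed to the fusion
# stage (for example, the fuse_cs sketch above)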
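The last sketch outlines the weighting idea behind the metric proposed in contribution 3: per-scale fidelity terms are weighted by a contrast sensitivity function before summation, so frequency bands the eye is less sensitive to contribute less to the score. The simplified pixel-domain visual-information-fidelity term, the Mannos-Sakrison CSF model and the per-scale centre frequencies used here are assumptions standing in for the thesis's exact formulation.

# Simplified VIF-style terms and assumed centre frequencies; not the thesis formulas.
import numpy as np
from scipy.ndimage import gaussian_filter

def csf(f):
    """Mannos-Sakrison contrast sensitivity at spatial frequency f (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weighted_vif(ref, dist, levels=4, sigma_n2=2.0, f0=16.0):
    """CSF-weighted, pixel-domain approximation of visual information fidelity."""
    ref, dist = ref.astype(float), dist.astype(float)
    num = den = 0.0
    for k in range(levels):
        win = 2.0                                # Gaussian window for local statistics
        mu_r, mu_d = gaussian_filter(ref, win), gaussian_filter(dist, win)
        var_r = np.maximum(gaussian_filter(ref * ref, win) - mu_r ** 2, 0)
        var_d = np.maximum(gaussian_filter(dist * dist, win) - mu_d ** 2, 0)
        cov = gaussian_filter(ref * dist, win) - mu_r * mu_d
        g = cov / (var_r + 1e-10)                # local gain of the distortion model
        sv = np.maximum(var_d - g * cov, 1e-10)  # residual (distortion) variance
        band_num = np.sum(np.log10(1 + g ** 2 * var_r / (sv + sigma_n2)))
        band_den = np.sum(np.log10(1 + var_r / sigma_n2))
        w = csf(f0 / 2 ** k)                     # weight the band by the CSF at its centre frequency
        num += w * band_num
        den += w * band_den
        # move to the next (coarser) scale
        ref = gaussian_filter(ref, 1.0)[::2, ::2]
        dist = gaussian_filter(dist, 1.0)[::2, ::2]
    return num / (den + 1e-12)

# for fusion assessment the score would be computed between the fused image and
# each source (infrared and visible) and the two values combined, e.g. averaged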
