Infrared and visible image fusion algorithm using low rank representation and dictionary learning
Details
  • Title (English): Infrared and visible image fusion scheme using low rank representation and dictionary learning
  • Authors: 刘琰煜; 周冬明; 聂仁灿; 侯瑞超; 丁斋生
  • Authors (English): LIU Yan-yu; ZHOU Dong-ming; NIE Ren-can; HOU Rui-chao; DING Zhai-sheng (School of Information Science and Engineering, Yunnan University)
  • Keywords: low rank representation; adaptive dictionary; sparse matrix; OMP algorithm; image fusion
  • Journal code: YNDZ
  • Journal (English): Journal of Yunnan University (Natural Sciences Edition)
  • Affiliation: School of Information Science and Engineering, Yunnan University
  • Publication date: 2019-07-10
  • Published in: 云南大学学报(自然科学版) (Journal of Yunnan University, Natural Sciences Edition)
  • Year: 2019
  • Volume/Issue: v.41; No.202
  • Funding: National Natural Science Foundation of China (61463052, 61365001)
  • Language: Chinese
  • Article ID: YNDZ201904007
  • Page count: 10
  • Issue: 04
  • CN: 53-1045/N
  • Pages: 51-60
Abstract
Current infrared and visible image fusion algorithms cannot efficiently extract the target information in the infrared image while retaining the background information of the visible image. To address this problem, an infrared and visible image fusion algorithm based on low rank representation and dictionary learning is proposed to enhance contrast and preserve edges in the source images. First, low rank decomposition is applied to the source images to obtain their low rank and sparse components, where the sparse component captures the edge and detail features of the source images well. Second, sparse representation with the OMP algorithm over a learned adaptive dictionary is used to compute the low rank and sparse coefficients; combining the common low rank coefficients with the maximum-norm rule on the sparse coefficients extracts the target information while recovering the image background. The two fused components are then combined, and the fused image is finally reconstructed from the fused sparse coefficients and the adaptive dictionary. Experimental results demonstrate that the proposed algorithm highlights the infrared targets while preserving the background information of the visible image, achieving good visual quality.
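The overall pipeline the abstract describes — split each source image into low rank and sparse layers, fuse the layers, then reconstruct — can be sketched in simplified form. The sketch below is an illustration under stated assumptions, not the paper's implementation: it performs the low-rank/sparse split with robust PCA via the inexact augmented Lagrange multiplier method (reference [16]), fuses the sparse layers directly with a max-absolute-value rule and averages the low rank layers, and omits the dictionary-learning and OMP coding stage; the function names `rpca` and `fuse` are hypothetical.

```python
import numpy as np

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Split M into low-rank L and sparse S (inexact ALM robust PCA sketch)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # standard RPCA weight
    norm_M = np.linalg.norm(M)
    spec = np.linalg.norm(M, 2)                    # spectral norm of M
    Y = M / max(spec, np.abs(M).max() / lam)       # dual variable init
    mu, mu_max, rho = 1.25 / spec, 1.25 / spec * 1e7, 1.5
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # sparse update: element-wise soft thresholding (shrinkage)
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Z = M - L - S                              # residual
        Y = Y + mu * Z
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(Z) / norm_M < tol:
            break
    return L, S

def fuse(ir, vis):
    """Fuse two registered grayscale images (float arrays in [0, 1])."""
    L1, S1 = rpca(ir)
    L2, S2 = rpca(vis)
    # max-absolute rule: keep the strongest detail/target response per pixel
    S = np.where(np.abs(S1) >= np.abs(S2), S1, S2)
    Lf = 0.5 * (L1 + L2)                           # average smooth backgrounds
    return np.clip(Lf + S, 0.0, 1.0)
```

In the paper's full method, the fusion would instead act on OMP sparse coefficients over the adaptive dictionary rather than directly on pixel-domain sparse layers; the max-rule step above plays the role of the maximum-norm rule on those coefficients.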
References
[1]Goshtasby A A,Nikolov S.Image fusion:advances in the state of the art[J].Information Fusion,2007:114-118.DOI:10.1016/j.inffus.2006.04.001.
    [2]Hou R C,Zhou D M,Nie R C,et al.Infrared and visible image fusion using visual saliency and DualPCNN[J].Computer Science,2018,45(S1):175-179.
    [3]Hou R C,Zhou D M,Nie R C,et al.Infrared and visible images fusion using visual saliency and optimized spiking cortical model in non-subsampled shearlet transform domain[J].Multimedia Tools and Applications,2018:1-24.DOI:10.1007/s11042-018-6099-x.
    [4]Mitianoudis N,Stathaki T.Pixel-based and region-based image fusion schemes using ICA bases[J].Information Fusion,2007:131-142.DOI:10.1016/j.inffus.2005.09.001.
    [5]Liu Y,Liu S P,Wang Z F.A general framework for image fusion based on multi-scale transform and sparse representation[J].Information Fusion,2015,24:147-164.DOI:10.1016/j.inffus.2014.09.004.
    [6]Li H,Manjunath B,Mitra S.Multisensor image fusion using the wavelet transform[J].Graph Models Image Process,1995,57(3):235-245.DOI:10.1006/gmip.1995.1022
    [7]Liu D,Zhou D M,Nie R C,et al.Multi-focus image fusion using cross bilateral filter in NSCT domain[C].International Conference on Multimedia and Image Processing,2018.DOI:10.1145/3195588.3195607.
    [8]Easley G,Labate D,Lim W Q.Sparse directional image representations using the discrete shearlet transform[J].Applied and Computational Harmonic Analysis,2008,25(1):25-46.DOI:10.1016/j.acha.2007.09.003.
    [9]He K J,Zhou D M,Zhang X J,et al.Multi-focus image fusion combining focus-region-level partition and pulse-coupled neural network[J].Soft Computing,2018(4):1-15.
    [10]Hou R C,Zhou D M,Nie R C,et al.Multi-focus color image fusion using HSI transform and dual channel spiking cortical model[J].Journal of Yunnan University:Natural Sciences Edition,2019,41(2):245-252.DOI:10.7540/j.ynu.20180191.
    [11]Zhang Y X,Chen L,Zhao Z H,et al.Multi-focus image fusion based on robust principal component analysis and pulse-coupled neural network[J].Opt Int J Light Electron Opt,2014,125:5002-5006.DOI:10.1016/j.ijleo.2014.04.002.
    [12]Yang B,Li S T.Multi-focus image fusion and restoration with sparse representation[J].IEEE Transactions on Instrumentation and Measurement,2010,59(4):884-892.DOI:10.1109/TIM.2009.2026612.
    [13]Engan K,Aase S O,Husoy J H.Multi-frame compression:theory and design[J].Signal Processing,2000,80(10):2121-2140.DOI:10.1016/s0165-1684(00)00072-4.
    [14]Aharon M,Elad M,Bruckstein A.K-SVD:an algorithm for designing overcomplete dictionaries for sparse representation[J].IEEE Trans Signal Process,2006,54(11):4311-4322.DOI:10.1109/TSP.2006.881199.
    [15]Li S,Yang B,Hu J.Performance comparison of different multi-resolution transforms for image fusion[J].Information Fusion,2011,12(2):74-84.DOI:10.1016/j.inffus.2010.03.002.
    [16]Lin Z C,Chen M M,Ma Y.The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices[J].arXiv preprint arXiv:1009.5055,2010.
    [17]Mallat S,Zhang Z.Matching pursuits with time-frequency dictionaries[J].IEEE Trans Signal Process,1993,41(12):3397-3415.DOI:10.1109/78.258082.
    [18]Toet A.Image fusion by a ratio of low pass pyramid[J].Pattern Recognition Letters,1989,9(4):245-253.DOI:10.1016/0167-8655(89)90003-2.
    [19]Lewis J,Ocallaghan R,Nikolov S,et al.Pixel- and region-based image fusion with complex wavelets[J].Information Fusion,2007,8(2):119-130.DOI:10.1016/j.inffus.2005.09.006.
    [20]Candès E J,Donoho D L.Curvelets and curvilinear integrals[J].Journal of Approximation Theory,2001,113(1):59-90.DOI:10.1006/jath.2001.3624.
    [21]Shreyamsha Kumar B K.Image fusion based on pixel significance using cross bilateral filter[J].Signal,Image and Video Processing,2013:1-12.DOI:10.1007/s11760-013-0556-9.
    [22]Wang Z,Bovik A C,Sheikh H R,et al.Image quality assessment:from error visibility to structural similarity[J].IEEE Transactions on Image Processing,2004,13(4):600-612.DOI:10.1109/TIP.2003.819861.
    [23]Qu G H,Zhang D L,Yan P F.Information measure for performance of image fusion[J].Electronics Letters,2002,38(7):313-315.DOI:10.1049/el:20020212.
    [24]Xydeas C S,Petrovic V.Objective image fusion performance measure[J].Electronics Letters,2000,36(4):308-309.
