Infrared and visible fusion algorithm based on LatLRR and PCNN
  • Author: XIE Yan-xin (College of Electrical and Information Engineering, Jilin Agricultural Science and Technology University)
  • Keywords: latent low-rank representation (LatLRR); image fusion; dual-channel PCNN; non-subsampled shearlet transform (NSST)
  • Journal: 液晶与显示 (Chinese Journal of Liquid Crystals and Displays), journal code YJYS
  • Publication date: 2019-04-15
  • Year: 2019
  • Volume: 34
  • Issue: 04
  • Pages: 100-106 (7 pages)
  • Article ID: YJYS201904014
  • CN: 22-1259/O4
  • Language: Chinese
Abstract
To fuse infrared and visible images with large spectral differences, this paper proposes a multi-scale fusion model based on latent low-rank representation (LatLRR) and the pulse-coupled neural network (PCNN). First, the algorithm applies the non-subsampled shearlet transform (NSST) to decompose each source image into low- and high-frequency components. Because the low-frequency component largely determines the final fusion quality, LatLRR is used to mine the intrinsic salient features of the source images, and these features drive an adaptive weighted fusion of the low-frequency components. For the high-frequency components, which carry the details of the fused image, a dual-channel PCNN model serves as the fusion rule: the average gradient operator (AVG) and the sum of directional gradients operator (SDG) act as the PCNN's external stimulus and linking strength, respectively, which better characterize the texture of the image. Through these fusion rules, the salient features contained in the infrared image are combined with the gradient features of the visible image, yielding a fused image with good visual quality. Three different scenes are used to test the fusion performance of the proposed method. Compared with other typical fusion methods, the proposed algorithm achieves better visual effects, and its objective evaluation metrics improve by about 2% to 5%.
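For reference, one common formulation of the dual-channel PCNN found in the fusion literature is shown below. This is the generic model, not necessarily the exact parameterization of the paper, in which AVG supplies the stimulus S and SDG the linking strength β:

$$
\begin{aligned}
L_{ij}[n] &= V^{L}\sum_{k,l} W_{ij,kl}\,Y_{kl}[n-1],\\
U_{ij}[n] &= \max\Bigl\{S^{A}_{ij}\bigl(1+\beta^{A}_{ij}L_{ij}[n]\bigr),\; S^{B}_{ij}\bigl(1+\beta^{B}_{ij}L_{ij}[n]\bigr)\Bigr\},\\
Y_{ij}[n] &= \begin{cases}1, & U_{ij}[n] > \theta_{ij}[n-1],\\ 0, & \text{otherwise},\end{cases}\\
\theta_{ij}[n] &= e^{-\alpha_{\theta}}\,\theta_{ij}[n-1] + V^{\theta}\,Y_{ij}[n],
\end{aligned}
$$

where $S^{A}$ and $S^{B}$ are the external stimuli of the two channels, $\beta^{A}$ and $\beta^{B}$ the linking strengths, $W$ the linking weight matrix, and the channel that drives neuron $(i,j)$ to fire determines which high-frequency coefficient is selected.

To make the overall pipeline concrete, the sketch below mirrors its structure under loud simplifications: a single-level Gaussian low/high-pass split stands in for the NSST decomposition, a local-energy saliency map stands in for the LatLRR salient features, and a choose-max rule on an AVG-style activity measure replaces the dual-channel PCNN firing decision. All function names and parameters here are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def split_bands(img, sigma=2.0):
    """Single-level low/high-pass split (stand-in for the NSST decomposition)."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def saliency_weights(low_ir, low_vis, sigma=5.0):
    """Local-energy saliency map (stand-in for LatLRR salient features),
    normalized into adaptive low-frequency fusion weights."""
    e_ir = gaussian_filter(low_ir ** 2, sigma)
    e_vis = gaussian_filter(low_vis ** 2, sigma)
    return e_ir / (e_ir + e_vis + 1e-12)

def avg_activity(band):
    """AVG-style activity: smoothed gradient magnitude of a high-frequency band."""
    gx = sobel(band, axis=0)
    gy = sobel(band, axis=1)
    return gaussian_filter(np.hypot(gx, gy), 1.0)

def fuse(ir, vis):
    """Fuse two registered, float-valued grayscale images of equal shape."""
    low_ir, high_ir = split_bands(ir)
    low_vis, high_vis = split_bands(vis)

    # Low frequency: saliency-driven adaptive weighted average.
    w = saliency_weights(low_ir, low_vis)
    low_f = w * low_ir + (1.0 - w) * low_vis

    # High frequency: choose-max on the activity measure, a crude
    # surrogate for the dual-channel PCNN firing decision.
    high_f = np.where(avg_activity(high_ir) >= avg_activity(high_vis),
                      high_ir, high_vis)
    return low_f + high_f
```

For registered float images ir and vis (e.g., scaled to [0, 1]), fuse(ir, vis) returns the fused result; since low_f + high_f inverts the band split exactly, any deviation from the sources comes only from the fusion choices, not from the decomposition.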
