Super-resolution reconstruction based on feature reuse model
  • Chinese title: 基于特征重用模型的超分辨率重建方法
  • Authors: OUYANG Ning (欧阳宁); HUANG Huiling (黄慧玲); LIN Leping (林乐平)
  • Keywords: feature reuse; feature extraction module design; nonlinear mapping; information fusion
  • Journal code: GLDZ
  • Journal title (English): Journal of Guilin University of Electronic Technology
  • Affiliations: Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education, Guilin University of Electronic Technology; School of Information and Communication, Guilin University of Electronic Technology
  • Publication date: 2019-06-14 15:37
  • Published in: 桂林电子科技大学学报 (Journal of Guilin University of Electronic Technology)
  • Year: 2019
  • Volume/Issue: Vol. 39, No. 160
  • Funding: National Natural Science Foundation of China (61661017); China Postdoctoral Science Foundation (2016M602923XB); Key Laboratory of Cognitive Radio and Information Processing Foundation (CRKL160104, CRKL150103, 2011KF11); Guangxi Natural Science Foundation (2017GXNSFBA198212, 2014GXNSFDA118035, 2016GXNSFAA38014); Graduate Education Innovation Program of Guilin University of Electronic Technology (2016YJCXB02); Guangxi Science and Technology Innovation Capability and Condition Construction Program (桂科能1598025-21); Guilin Science and Technology Development Project (20150103-6)
  • Language: Chinese
  • Record number: GLDZ201901012
  • Page count: 4
  • Issue: 01
  • CN: 45-1351/TN
  • Pages: 71-74
Abstract
To address the low utilization of features between layers in deep networks and the weak correlation among the feature information obtained at each layer, a super-resolution reconstruction method based on a feature reuse model is proposed to strengthen the connections between layers. The method builds a feature extraction module and cascades it to form the network, so that the low-dimensional features of the image are fully exploited, the information flow entering each module is enriched, and the diversity of the features is increased. The model uses multiple 1×1 convolutional layers, which improves reconstruction accuracy while reducing the number of parameters and the amount of computation. Experimental results show that the proposed method improves image reconstruction quality while reducing computational cost.
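The abstract describes cascaded feature extraction modules that reuse features from earlier layers and fuse them with 1×1 convolutions. The listing below is a minimal PyTorch sketch of one such feature-reuse block, assuming a DenseNet-style concatenation of earlier feature maps; the class name (FeatureReuseBlock), channel sizes, and layer count are illustrative assumptions, not the exact module design of the paper.

import torch
import torch.nn as nn


class FeatureReuseBlock(nn.Module):
    """Hypothetical feature-reuse block: every 3x3 conv sees the block input
    plus all previous layer outputs, and a 1x1 conv fuses the concatenation."""

    def __init__(self, channels: int = 64, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Layer i receives the block input plus i earlier feature maps.
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels * (i + 1), channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # 1x1 convolution fuses the reused features and keeps the parameter count low.
        self.fuse = nn.Conv2d(channels * (num_layers + 1), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    block = FeatureReuseBlock(channels=64, num_layers=3)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])

With these illustrative sizes, the 1×1 fusion layer maps a 256-channel concatenation back to 64 channels with 256×64×1×1 ≈ 16K weights, whereas a 3×3 convolution over the same tensor would need 256×64×3×3 ≈ 147K, which is why 1×1 layers reduce parameters and computation while still mixing the reused features.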
References
[1] DONG C,LOY C C,HE K,et al.Learning a deep convolutional network for image super-resolution[C]//European Conference on Computer Vision,2014:184-199.
    [2] SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[EB/OL].(2014-09-04)[2018-03-25].https://arxiv.org/abs/1409.1556.
    [3] KIM J,KWON LEE J,MU LEE K.Accurate image super-resolution using very deep convolutional networks[C]//IEEE Conference on Computer Vision and Pattern Recognition,2016:1646-1654.
    [4] HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition,2016:770-778.
    [5] TAI Y,YANG J,LIU X.Image super-resolution via deep recursive residual network[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR),2017:1-4.
    [6] SZEGEDY C,LIU W,JIA Y,et al.Going deeper with convolutions[C]//IEEE Conference on Computer Vision and Pattern Recognition,2015:1-9.
    [7] WANG Y,WANG L,WANG H,et al.End-to-end image super-resolution via deep and shallow convolutional networks[EB/OL].(2016-07-26)[2018-03-25].https://arxiv.org/abs/1607.07680.
    [8] KIM Y,HWANG I,CHO N I.A new convolutional network-in-network structure and its applications in skin detection,semantic segmentation,and artifact reduction[EB/OL].(2017-01-22)[2018-03-25].https://arxiv.org/abs/1701.06190.
    [9] HUANG G,LIU Z,WEINBERGER K Q,et al.Densely connected convolutional networks[C]//IEEE Conference on Computer Vision and Pattern Recognition,2017:1-3.
    [10] DONG C,LOY C C,TANG X.Accelerating the super-resolution convolutional neural network[C]//European Conference on Computer Vision,2016:391-407.
    [11] SHI W,CABALLERO J,HUSZÁR F,et al.Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]//IEEE Conference on Computer Vision and Pattern Recognition,2016:1874-1883.
