Digital Image Restoration Technology Based on Generative Adversarial Networks
  • Authors: Li Xuejin; Li Xin; Xu Yanjie (School of Mechatronics Engineering and Automation, Shanghai University)
  • Keywords: image restoration; deep learning; generative adversarial networks; generative model; loss function
  • Journal: Journal of Electronic Measurement and Instrumentation (电子测量与仪器学报)
  • Journal code (CNKI): DZIY
  • Publication date: 2019-01-15
  • Year: 2019
  • Issue: v.33; No.217
  • Language: Chinese
  • Pages: 45-51 (7 pages)
  • CN: 11-2488/TN
  • Record ID: DZIY201901006
Abstract
        For images with large damaged regions, existing image restoration methods often produce distorted structures or blurred textures that are inconsistent with the surrounding area. Building on advances in deep learning, this paper uses a generative adversarial network to generate the missing content from the available data. For a given data set, the samples are first modeled as points drawn from a probability distribution; the generative adversarial network then rapidly produces a large number of candidate images, the latent code whose generated image is closest to the damaged image is searched for, and that code is passed through the generative model to infer the missing content. On this basis, the method combines a semantic loss function with a perceptual loss function, and enlarges the unsaturated region of the sigmoid activation function to alleviate the vanishing-gradient problem. Experiments show that the method successfully predicts the content of large missing regions in an image, achieves photorealistic restoration, and produces clearer and more coherent results than previous methods.
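The latent-code search described in the abstract can be sketched as minimizing a combined loss over the code z: a semantic (contextual) term that matches the generated image to the damaged image on its known pixels, plus a perceptual term from the discriminator. Below is a minimal NumPy sketch; the generator `G` and discriminator `D` are toy stand-ins, and the sigmoid scale `a` and weight `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def modified_sigmoid(x, a=4.0):
    # Stretching the sigmoid by a factor a > 1 enlarges its
    # unsaturated region, so the gradient stays usefully large for
    # bigger |x| (the vanishing-gradient fix mentioned in the
    # abstract). a = 4.0 is an illustrative choice.
    return 1.0 / (1.0 + np.exp(-x / a))

def inpainting_loss(G, D, z, y, mask, lam=0.1):
    """Loss minimized over the latent code z.

    G(z)  -- image produced by the (pre-trained) generator
    D(.)  -- discriminator score in (0, 1), higher = more realistic
    y     -- the damaged image
    mask  -- 1 on known (undamaged) pixels, 0 on missing pixels
    """
    g = G(z)
    # Semantic (contextual) term: match the known pixels only.
    semantic = np.sum(mask * np.abs(g - y))
    # Perceptual term: small when the discriminator finds G(z) realistic.
    perceptual = np.log(1.0 - D(g) + 1e-8)
    return semantic + lam * perceptual
```

Once a minimizing code z* is found (e.g. by gradient descent on this loss), the restored image is assembled as `mask * y + (1 - mask) * G(z*)`, i.e. known pixels are kept and only the missing region is filled from the generator.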
