Method of image style transfer based on new style loss function
  • Original title (Chinese): 基于新的风格损失函数的图像风格转换方法
  • Authors: Qian Yanfang; Wang Min
  • Affiliation: College of Computer and Information, Hohai University (河海大学计算机与信息学院)
  • Keywords: deep learning; image style transfer; Gram matrix; transformed Gramian matrix
  • Journal: Electronic Measurement Technology (电子测量技术), CNKI code DZCL
  • Publication date: 2019-02-23
  • Year: 2019
  • Issue: Vol.42, No.312
  • Language: Chinese
  • Pages: 76-79 (4 pages)
  • CN: 11-2175/TN
  • Record ID: DZCL201904032
Abstract
Although image style transfer based on deep learning has made great progress, existing methods do not account for the distortion of lines in the generated image. This paper therefore proposes a method combining a histogram loss with a transformed Gramian matrix. The histogram of an image is an indicator of its quality; using a histogram loss in style transfer not only enhances the image but also makes the generated result more stable. The transformed Gramian matrix is similar to the Gram matrix, but it extracts more complete texture information and also takes the spatial arrangement of the image into account. Experimental results show that combining the two techniques both removes line distortion from the generated image and reduces the number of iterations needed for generation.
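The paper itself is not reproduced here, so as a rough illustration of the three statistics the abstract names, the sketch below shows one common way they are computed from a layer of CNN feature maps: the standard Gram matrix, a spatially shifted Gramian that additionally encodes spatial arrangement (one plausible reading of "transformed Gramian matrix"), and a squared-difference loss between activation histograms. All function names, shapes, and the shift-based formulation are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def gram_matrix(feat):
    """Standard Gram matrix of one layer's feature maps, shape (C, H, W).

    Flattens each channel and takes channel-wise inner products,
    normalized by the number of spatial positions.
    """
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (h * w)

def shifted_gram(feat, dx=1):
    """A 'transformed' Gramian: correlate features with a copy shifted
    horizontally by dx pixels, so the statistic also captures how
    textures are spatially arranged, not just which channels co-occur.
    """
    c, h, w = feat.shape
    a = feat[:, :, :-dx].reshape(c, -1)   # original positions
    b = feat[:, :, dx:].reshape(c, -1)    # positions shifted by dx
    return a @ b.T / a.shape[1]

def histogram_loss(gen, style, bins=64):
    """Squared difference between normalized activation histograms of the
    generated and style feature maps, over a shared bin range.
    """
    lo = min(gen.min(), style.min())
    hi = max(gen.max(), style.max())
    hg, _ = np.histogram(gen, bins=bins, range=(lo, hi), density=True)
    hs, _ = np.histogram(style, bins=bins, range=(lo, hi), density=True)
    return float(np.mean((hg - hs) ** 2))
```

In an optimization loop, the style loss would compare these statistics between the style image's features and the generated image's features at several layers; matching histograms in addition to (shifted) Gram matrices is what the abstract credits with stabilizing the output.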
