Abstract

Although deep-learning-based image style transfer methods have made great progress, none of them account for the line distortion that appears in generated images. To address this, a method combining a histogram loss with a transformed Gramian matrix is proposed. An image's histogram is informative about image quality; using a histogram loss in style transfer not only enhances the generated image but also makes generation more stable. The transformed Gramian matrix is similar to the Gram matrix, but it extracts more complete texture information and also captures the spatial arrangement of image features. Experimental results show that combining the two methods not only produces images free of line distortion but also reduces the number of iterations required for image generation.
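The abstract contrasts the standard Gram matrix, which discards all spatial information, with a "transformed" Gramian that also encodes spatial arrangement, and adds a histogram loss over feature activations. A minimal NumPy sketch of these statistics follows, assuming feature maps of shape (C, H, W); the one-pixel shift, the bin count, and the bin-wise L2 histogram discrepancy are illustrative simplifications, not the paper's exact formulation:

```python
import numpy as np

def gram_matrix(F):
    """Standard Gram matrix of a feature map F with shape (C, H, W):
    channel-to-channel correlations, with spatial layout discarded."""
    C, H, W = F.shape
    X = F.reshape(C, H * W)
    return X @ X.T / (H * W)

def transformed_gramian(F, delta=1):
    """Shifted ("transformed") Gramian: correlate the features with a copy
    translated horizontally by `delta` pixels, so the statistic also
    reflects local spatial arrangement. `delta` is an illustrative choice."""
    C, H, W = F.shape
    A = F[:, :, :-delta].reshape(C, -1)   # original features
    B = F[:, :, delta:].reshape(C, -1)    # features shifted by delta
    return A @ B.T / A.shape[1]

def histogram_loss(F_gen, F_style, bins=64):
    """Per-channel histogram discrepancy between generated and style
    activations: sum of squared differences of normalized histograms.
    (A simplification; the cited histogram loss uses histogram matching.)"""
    loss = 0.0
    for c in range(F_gen.shape[0]):
        lo = min(F_gen[c].min(), F_style[c].min())
        hi = max(F_gen[c].max(), F_style[c].max())
        hg, _ = np.histogram(F_gen[c], bins=bins, range=(lo, hi))
        hs, _ = np.histogram(F_style[c], bins=bins, range=(lo, hi))
        loss += ((hg / hg.sum() - hs / hs.sum()) ** 2).sum()
    return loss
```

In a full style-transfer loop these terms would be computed on CNN feature maps and summed into the total objective; here plain arrays stand in for those features.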