Abstract
This paper rethinks Generative Adversarial Nets (GAN) and, without changing the underlying adversarial theory, proposes Generative Competitive Adversarial Nets (GCAN), a multi-generator architecture built on a competitive adversarial relation (CAR), to ease the difficult training of the original GAN. GCAN uses multiple generators to decompose the features of the real dataset, effectively mitigating the problems caused by low-dimensional manifolds and allowing the adversarial loss of the GAN to converge. The competitive adversarial relation is grounded in GAN theory and addresses the weaknesses of GAN identified by Wasserstein GAN (WGAN). In addition, this paper proposes an activation function, ReSigm, that alleviates the gradient problem and stabilizes training. Finally, experiments on the MNIST dataset verify that GCAN has sufficient capacity to generate stable and diverse images.
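The abstract does not specify GCAN's architecture or how the competitive relation is scored, so the following is only an illustrative sketch of the generic multi-generator idea: several generators jointly cover the data distribution while one discriminator scores real versus fake, and a competitive rule could compare how well each generator fools the discriminator. All names, shapes, and the winner-selection rule here are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

K, Z_DIM, X_DIM, BATCH = 3, 8, 16, 32   # assumed toy sizes

# Each generator is a toy linear map z -> x (a stand-in for a real network).
gen_weights = [rng.normal(size=(Z_DIM, X_DIM)) for _ in range(K)]

# Toy discriminator: logistic score of a linear projection.
disc_w = rng.normal(size=X_DIM)

def generate(k, n):
    """Draw n fake samples from generator k."""
    z = rng.normal(size=(n, Z_DIM))
    return z @ gen_weights[k]

def discriminate(x):
    """Probability (per sample) that the discriminator assigns to 'real'."""
    return 1.0 / (1.0 + np.exp(-(x @ disc_w)))

# One hypothetical "competitive" round: every generator proposes a batch,
# and the generator whose samples best fool the discriminator is singled
# out -- here we only compute and compare the scores, with no updates.
scores = [discriminate(generate(k, BATCH)).mean() for k in range(K)]
winner = int(np.argmax(scores))
print(f"generator {winner} fooled the discriminator best ({scores[winner]:.3f})")
```

This shows only the mixture-of-generators structure; the actual GCAN training rule and the ReSigm activation are defined in the body of the paper, not here.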
References
[1] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//International Conference on Neural Information Processing Systems, 2014.
[2] Arjovsky M, Bottou L. Towards Principled Methods for Training Generative Adversarial Networks[C]//ICLR, 2017.
[3] Arjovsky M, Chintala S, Bottou L. Wasserstein GAN[C]//ICLR, 2017.
[4] Lucic M, Kurach K, Michalski M, et al. Are GANs Created Equal? A Large-Scale Study[J]. arXiv:1711.10337, 2017.
[5] Kingma D P, Ba J. Adam: A Method for Stochastic Optimization[C]//ICLR, 2015.
[6] Heusel M, Ramsauer H, Unterthiner T, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium[C]//International Conference on Neural Information Processing Systems, 2017.
[7] Radford A, Metz L, Chintala S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks[C]//ICLR, 2016.