Amplification of small sample library based on GAN equivalent model
  • Authors: Gao Qiang; Jiang Zhonghao
  • Affiliation: Institute of Electrical and Electronic Engineering, North China Electric Power University
  • Keywords: small sample library; generative adversarial nets; equivalent model; cross-correlation coefficient; insulator
  • Journal: Electrical Measurement & Instrumentation (电测与仪表); CNKI journal code: DCYQ
  • Online publication date: 2019-01-14
  • Year/Issue: 2019, v.56, No.707 (Issue 06)
  • Pages: 82-87 (6 pages)
  • CN: 23-1202/TH
  • Record ID: DCYQ201906014
  • Language: Chinese
Abstract
In the training of neural networks, the size of the training sample library has an important influence on network performance. When deep neural networks are used to recognize and classify samples, the more samples the training library contains, the better the recognition results. For a small sample library, therefore, amplifying the training library is one way to improve network performance, and generative adversarial nets (GAN) offer a feasible way to do so. This paper first analyzes the training process of the original GAN. From the working process of GAN, a generator model is derived, leading to the conclusion that the generator satisfies the Wiener-Hopf equation; the discriminator is further shown to correspond to the optimal-receiver model. The correctness of this equivalent model is demonstrated using the cross-correlation coefficient between generated samples and training samples. Experiments on the MNIST and CIFAR-10 standard databases confirm the effectiveness of the equivalent model. Finally, the equivalent model is applied to the amplification of an insulator sample library, where it achieves good results.
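
For reference, the two standard formulas behind the abstract's claims, written out in LaTeX. The first is the minimax objective of the original GAN; the second is the discrete (causal) Wiener-Hopf equation from classical optimal filtering, which the paper argues the trained generator satisfies. How the GAN quantities map onto the correlation terms is developed in the full text; the forms below are only the textbook statements.

    \min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

    \sum_{k=0}^{\infty} h_{\mathrm{opt}}(k)\, r_{xx}(m-k) = r_{xd}(m), \qquad m \ge 0

Here $h_{\mathrm{opt}}$ is the optimal linear filter, $r_{xx}$ is the autocorrelation of the filter input, and $r_{xd}$ is the cross-correlation between the input and the desired response.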
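As a concrete illustration of the training process the abstract analyzes, the sketch below trains a minimal original-style GAN on MNIST. PyTorch, the layer sizes, and the optimizer settings are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the original GAN training loop on MNIST.
    # Framework, architecture and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"
    latent_dim = 100

    # Generator: maps a noise vector z to a flattened 28x28 image.
    G = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 784), nn.Tanh(),
    ).to(device)

    # Discriminator: maps a flattened image to a real/fake probability.
    D = nn.Sequential(
        nn.Linear(784, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    ).to(device)

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    data = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.Compose([
                              transforms.ToTensor(),
                              transforms.Normalize((0.5,), (0.5,)),
                          ]))
    loader = DataLoader(data, batch_size=128, shuffle=True)

    for epoch in range(5):
        for real, _ in loader:
            real = real.view(real.size(0), -1).to(device)
            n = real.size(0)
            ones = torch.ones(n, 1, device=device)
            zeros = torch.zeros(n, 1, device=device)

            # Discriminator step: maximize log D(x) + log(1 - D(G(z))).
            fake = G(torch.randn(n, latent_dim, device=device))
            loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # Generator step (non-saturating form): maximize log D(G(z)).
            loss_g = bce(D(fake), ones)
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        print(f"epoch {epoch}: loss_d={loss_d.item():.3f} loss_g={loss_g.item():.3f}")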

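The validation step, the cross-correlation coefficient between a generated sample and a training sample, can be sketched as a Pearson-style coefficient computed on flattened images; that the paper normalizes in exactly this way is an assumption.

    # Sketch of the cross-correlation-coefficient check from the abstract,
    # implemented as a Pearson correlation on flattened images. The exact
    # estimator used in the paper may differ.
    import numpy as np

    def cross_correlation_coefficient(generated: np.ndarray, real: np.ndarray) -> float:
        """Normalized cross-correlation of two equally sized images."""
        g = generated.astype(np.float64).ravel()
        r = real.astype(np.float64).ravel()
        g -= g.mean()
        r -= r.mean()
        denom = np.sqrt((g * g).sum() * (r * r).sum())
        return float((g * r).sum() / denom) if denom > 0 else 0.0

    # A coefficient near 1 means the generated sample is highly correlated
    # with the training sample; near 0, statistically unrelated.
    rng = np.random.default_rng(0)
    a = rng.random((28, 28))
    print(cross_correlation_coefficient(a, a))                     # 1.0
    print(cross_correlation_coefficient(a, rng.random((28, 28))))  # near 0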