A Chebyshev Forward Neural Network Based on Gradient Descent Method
  • English title: A Chebyshev Forward Neural Network Based on Gradient Descent Method
  • Authors: XIAO Xiuchun; PENG Yinqiao; MEI Qixiang; YAN Jingwen
  • Affiliations: College of Electronic and Information Engineering, Guangdong Ocean University; College of Mathematics and Computer, Guangdong Ocean University; College of Engineering, Shantou University
  • Keywords: Chebyshev polynomials; neural network; function approximation; gradient descent method
  • Journal: Journal of Anhui University of Technology (Natural Science); journal code HDYX
  • Publication date: 2018-06-15
  • Year/Issue: 2018, v.35; No.138, Issue 02
  • Pages: 58-64 (7 pages)
  • Record ID: HDYX201802011
  • CN: 34-1254/N
  • Funding: Open Fund of the Guangdong Provincial Key Laboratory of Digital Signal and Image Processing (2016GDDSIPL-02); Doctoral Start-up Fund of Guangdong Ocean University (E13428); University Innovation Strengthening Project of Guangdong Ocean University (Q15090)
  • Language: Chinese
Abstract
In a traditional artificial neural network, all neurons in the same hidden layer share the same activation function, which is inconsistent with the behavior of actual human neurons. To address this, a feedforward neural network model in which each hidden-layer neuron has a distinct activation function is constructed: a sequence of Chebyshev orthogonal polynomials serves as the activation functions of the hidden-layer neurons (the Chebyshev forward neural network), and a training algorithm for the network parameters based on the gradient descent method is derived for it. Simulation experiments show that the gradient-descent-based Chebyshev forward neural network algorithm effectively adjusts the network parameters, so that the network approximates sample data sets with complex patterns to high precision.
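The idea in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: a single hidden layer whose i-th neuron applies the Chebyshev polynomial T_i to a scalar input, with the output weights trained by plain gradient descent on the mean squared error. The function names (`chebyshev_basis`, `train`) and all hyperparameters are assumptions chosen for the sketch.

```python
import numpy as np

def chebyshev_basis(x, n):
    """Hidden-layer outputs: T_0..T_{n-1} at points x in [-1, 1],
    via the recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)."""
    T = np.ones((len(x), n))
    if n > 1:
        T[:, 1] = x
    for k in range(2, n):
        T[:, k] = 2 * x * T[:, k - 1] - T[:, k - 2]
    return T

def train(x, y, n=8, lr=0.1, epochs=2000):
    """Gradient descent on the MSE of y_hat = T @ w
    with respect to the output weights w."""
    T = chebyshev_basis(x, n)
    w = np.zeros(n)
    m = len(x)
    for _ in range(epochs):
        err = T @ w - y                    # prediction error on the samples
        w -= lr * (2.0 / m) * (T.T @ err)  # gradient of mean squared error
    return w

# Approximate a target function with a complex pattern on [-1, 1]
x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x) * np.exp(-x**2)
w = train(x, y)
y_hat = chebyshev_basis(x, len(w)) @ w
print(np.max(np.abs(y_hat - y)))  # maximum approximation error
```

Because each hidden neuron is a fixed polynomial basis function, the cost is quadratic in the output weights, so gradient descent with a small enough step size converges to the least-squares approximant.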
References
[1] RUMELHART D, MCCLELLAND J. Parallel Distributed Processing:Explorations in the Microstructure of Cognition[M].Cambridge:MIT Press, 1986:1-286.
    [2] JIN L, LI S, HU B. RNN models for dynamic matrix inversion:a control-theoretical perspective[J]. IEEE Transactions on Industrial Informatics, 2018, 14(1):189-199.
    [3] JIN L, LI S, WANG H, ZHANG Z. Non-convex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence[J]. Applied Soft Computing, 2018, 62(1):840-850.
    [4] XIAO L, LIAO B, LI S, ZHANG Z. Design and analysis of FTZNN applied to the real-time solution of a non-stationary Lyapunov equation and tracking control of a wheeled mobile manipulator[J]. IEEE Trans. Industrial Informatics, 2018, 14(1):98-105.
    [5] HINTON G E, SALAKHUTDINOV R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786):504-507.
    [6] GAL Y, GHAHRAMANI Z. Dropout as a Bayesian approximation:representing model uncertainty in deep learning[C]//International Conference on Machine Learning. New York:PMLR Press, 2016:1050-1059.
    [7] PARKHI O, VEDALDI A, ZISSERMAN A. Deep face recognition[C]//British Machine Vision Conference(BMVC), 2015, 9:1-6.
    [8] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553):436-444.
    [9] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Service Center, 2016:770-778.
    [10] KRIZHEVSKY A, SUTSKEVER I, HINTON G. Imagenet classification with deep convolutional neural networks[C]//Advances in Neural Information Processing Systems. Lake Tahoe, Nevada, USA:NIPS Foundation, 2012:1097-1105.
    [11] SCHMIDHUBER J. Deep learning in neural networks:an overview[J]. Neural Networks, 2015, 61:85-117.
    [12] ZHANG Y, XIAO Z, DING S, et al. WASD neural network activated by bipolar sigmoid functions with subsequent iterations[J]. Acta Scientiarum Naturalium Universitatis Sunyatseni, 2016, 55(4):1-10.
    [13] ZHANG Y, LUO F, CHEN J, et al. Weights and structure determination of a three-input Bernoulli neural network[J]. Computer Engineering and Science, 2013, 35(5):142-148.
    [14] XIAO X, JIANG X, ZHANG Y. A fast algorithm for determining the optimal number of hidden neurons in basis-function neural networks[J]. Microelectronics & Computer, 2010, 27(1):57-60.
    [15] XIAO X, LAI J, WANG C. Parameter estimation of the exponentially damped sinusoids signal using a specific neural network[J].Neurocomputing, 2014, 143(11):331-338.
    [16] XIAO X, ZHANG Y, JIANG X, et al. Direct weight determination and adaptive structure determination of a feedforward neural network based on Chebyshev polynomials of the second kind[J]. Journal of Dalian Maritime University, 2009, 35(1):80-84.
    [17] MO G, LIU K. Methods of Function Approximation Theory[M]. Beijing:Science Press, 2003:25-80.
    [18] LIN C. Numerical Analysis[M]. Beijing:Science Press, 2007:146-182.
    [19] XIAO X, ZHANG Y, JIANG X, et al. On the approximation ability and global convergence of basis-function neural networks[J]. Modern Computer, 2009(2):4-8.
    [20] LIANG X, TSO S K. An improved upper bound on step-size parameters of discrete-time recurrent neural networks for linear inequality and equation system[J]. IEEE Transactions on Circuits and Systems-I:Fundamental Theory and Applications, 2002,49(5):695-698.
