Relationship between Levels of Persistent Excitation,Architectures of Neural Networks and Deterministic Learning Performance
Abstract
Based on the concept of persistent excitation (PE), a deterministic learning algorithm has recently been proposed for neural network (NN)-based identification of nonlinear systems. This paper investigates the quantitative relationship between PE levels (including the level of excitation), the architectures of NNs, and the convergence properties of deterministic learning. The study is motivated by a practical question: how should the NNs be constructed so as to guarantee a sufficient level of excitation and identification accuracy for specific system trajectories? The results on PE levels are then used to analyze deterministic learning performance. It is proven that increasing the density of NN centers improves the approximation capability of the NN but decreases the level of excitation, so a trade-off in convergence accuracy exists when adjusting the NN architecture.
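The trade-off can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it estimates the "level of excitation" of a Gaussian RBF regressor along a fixed periodic trajectory as the smallest eigenvalue of the empirical Gram matrix (1/T)·Σ S(xₜ)S(xₜ)ᵀ, for a coarse and a fine lattice of centers. The grid domain, trajectory, and the convention of tying the Gaussian width to the center spacing are illustrative assumptions.

```python
# Illustrative sketch (assumptions: lattice of Gaussian RBF centers over
# [-1.5, 1.5]^2, unit-circle trajectory, width eta equal to center spacing).
import numpy as np

def excitation_level(spacing, width):
    # Regular lattice of RBF centers, as in deterministic learning constructions.
    axis = np.arange(-1.5, 1.5 + 1e-9, spacing)
    centers = np.array([(a, b) for a in axis for b in axis])
    # Sampled state trajectory: the unit circle, a generic recurrent orbit.
    t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    traj = np.stack([np.cos(t), np.sin(t)], axis=1)
    # Gaussian RBF regressor S(x): one row per sample, one column per center.
    d2 = ((traj[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    S = np.exp(-d2 / width ** 2)
    # Empirical Gram matrix (1/T) * sum_t S(x_t) S(x_t)^T; its smallest
    # eigenvalue is a proxy for the level of excitation along the trajectory.
    gram = S.T @ S / len(t)
    return np.linalg.eigvalsh(gram)[0], len(centers)

lam_coarse, n_coarse = excitation_level(spacing=1.5, width=1.5)  # 3x3 = 9 centers
lam_fine, n_fine = excitation_level(spacing=0.5, width=0.5)      # 7x7 = 49 centers
print(n_coarse, lam_coarse)
print(n_fine, lam_fine)
```

Running this, the denser network yields a markedly smaller minimum eigenvalue: the finer lattice includes centers far from the trajectory whose regressor components stay near zero, which degrades the excitation level even as the denser basis improves approximation capability, consistent with the trade-off stated in the abstract.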
References


