Online gradient method with smoothing ℓ0 regularization for feedforward neural networks
Abstract
ℓp regularization has been a popular pruning method for neural networks, with the parameter p usually set to 0 < p ≤ 2 in the literature; practical training algorithms with ℓ0 regularization are lacking because the ℓ0 regularization problem is NP-hard. However, ℓ0 regularization tends to produce the sparsest solution, corresponding to the most parsimonious network structure, which is desirable for generalization ability. To this end, this paper considers an online gradient training algorithm with smoothing ℓ0 regularization (OGTSL0) for feedforward neural networks, where the ℓ0 regularizer is approximated by a sequence of smoothing functions. The underlying principle behind the sparsity of OGTSL0 is provided, and the convergence of the algorithm is theoretically analyzed. Simulation examples support the theoretical analysis and illustrate the superiority of the proposed algorithm.
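To make the smoothing idea concrete, the sketch below trains a one-hidden-layer network by online (sample-by-sample) gradient descent while penalizing a smooth surrogate of the ℓ0 "norm". The Gaussian-shaped surrogate f_σ(w) = 1 − exp(−w²/(2σ²)), which tends to the indicator [w ≠ 0] as σ → 0, is a common choice in the smoothed-ℓ0 literature; the abstract does not specify the paper's smoothing family, network shape, or annealing schedule, so all of those details here are illustrative assumptions rather than the authors' OGTSL0 specification.

```python
import numpy as np

def smoothed_l0(w, sigma):
    """Smooth surrogate of the l0 'norm': approximately counts nonzero entries."""
    return np.sum(1.0 - np.exp(-w**2 / (2.0 * sigma**2)))

def smoothed_l0_grad(w, sigma):
    """Elementwise gradient of the surrogate with respect to the weights."""
    return (w / sigma**2) * np.exp(-w**2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
V = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input-to-hidden weights
u = rng.normal(scale=0.5, size=n_hidden)           # hidden-to-output weights

eta, lam, sigma = 0.05, 1e-3, 0.5                  # step size, penalty weight, smoothing width

def forward(x):
    h = np.tanh(V @ x)
    return u @ h, h

for epoch in range(200):
    for _ in range(100):                           # one online pass: update after each sample
        x = rng.uniform(-1.0, 1.0, size=n_in)
        y = np.sin(np.pi * x[0])                   # toy target that depends on x[0] only
        y_hat, h = forward(x)
        err = y_hat - y
        # Gradients of the per-sample loss 0.5*err^2 plus the smoothed l0 penalty.
        grad_u = err * h + lam * smoothed_l0_grad(u, sigma)
        grad_V = np.outer(err * u * (1.0 - h**2), x) + lam * smoothed_l0_grad(V, sigma)
        u -= eta * grad_u
        V -= eta * grad_V
    sigma = max(0.05, 0.95 * sigma)                # anneal sigma so the surrogate approaches l0

print("near-zero hidden-to-output weights:", np.sum(np.abs(u) < 1e-2), "of", n_hidden)
```

Because the surrogate's gradient vanishes for weights far from zero but pushes small weights toward zero, redundant connections decay while useful ones are left largely untouched, which is the intuition behind the sparsity (and hence pruning) effect the abstract describes.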
