A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
Abstract
This paper develops new algorithms for distributed cooperative learning based on zero-gradient-sum (ZGS) optimization in a network setting. Specifically, a feedforward neural network with random weights (FNNRW) is trained on data distributed across multiple learning agents, each of which runs the algorithm on its own subset of the data. The scheme requires no fusion center, which may be unavailable in practice owing to, e.g., physical limitations, security, or privacy concerns. The centralized FNNRW problem is reformulated into an equivalent separable form with consensus constraints among nodes and is solved by a ZGS-based distributed optimization strategy, which theoretically guarantees convergence to the optimal solution. The proposed method is more effective than existing methods based on decentralized average consensus (DAC) and the alternating direction method of multipliers (ADMM): it is simple and requires fewer computational and communication resources, making it well suited to potential applications such as wireless sensor networks, artificial intelligence, and computational biology, which involve datasets that are often extremely large, high-dimensional, and located on distributed data sources. Simulation results on both synthetic and real-world datasets are presented.
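For concreteness, the sketch below illustrates the ZGS idea on the FNNRW output-weight problem. It is a minimal discrete-time sketch under our own assumptions, not the paper's exact formulation: each agent i is assumed to hold a hidden-layer output matrix H_i and targets T_i and to minimize a ridge-regularized local least-squares cost f_i(W) = ||H_i W - T_i||^2 + lam ||W||^2, whose Hessian is constant; every agent starts at its local minimizer (so the local gradients sum to zero) and repeatedly applies W_i <- W_i + alpha * (hessian of f_i)^(-1) * sum over neighbors j of (W_j - W_i) on an undirected graph. The ring topology, step-size rule, regularization weight, and all problem sizes are illustrative placeholders (Python/NumPy).

import numpy as np

rng = np.random.default_rng(0)

# --- Problem setup: m agents, each holding a shard of the training data ---
m, n_per, d, n_hidden = 4, 50, 3, 15   # agents, samples per agent, inputs, hidden nodes
lam, iters = 1.0, 5000                 # ridge weight and iteration count (illustrative)

# FNNRW: hidden-layer weights are drawn randomly once, shared by all agents,
# and never trained; only the linear output weights W are learned.
A = rng.normal(size=(d, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(X):
    """Fixed random-feature map tanh(X A + b)."""
    return np.tanh(X @ A + b)

# Synthetic regression data, split evenly across the m agents.
X = rng.normal(size=(m * n_per, d))
t = np.sin(X.sum(axis=1, keepdims=True))
H = [hidden(X[i * n_per:(i + 1) * n_per]) for i in range(m)]
T = [t[i * n_per:(i + 1) * n_per] for i in range(m)]

# --- ZGS initialization: every agent starts at its LOCAL minimizer, so the
# local gradients sum to zero at k = 0 -- the invariant the ZGS flow preserves.
K = [Hi.T @ Hi + lam * np.eye(n_hidden) for Hi in H]  # local Hessians (up to a factor 2)
Kinv = [np.linalg.inv(Ki) for Ki in K]
W = [Kinv[i] @ (H[i].T @ T[i]) for i in range(m)]

# Undirected ring: agent i exchanges its current W with agents i-1 and i+1 only.
neighbors = [((i - 1) % m, (i + 1) % m) for i in range(m)]

# Conservative step size: alpha <= 2 * min_i eigmin(K_i) / eigmax(Laplacian) is
# sufficient for stability; a ring Laplacian has eigmax <= 4, and we halve the bound.
alpha = min(np.linalg.eigvalsh(Ki)[0] for Ki in K) / 4.0

# --- Discretized ZGS flow: W_i += alpha * K_i^{-1} * sum_{j in N_i} (W_j - W_i).
for _ in range(iters):
    W = [W[i] + alpha * Kinv[i] @ sum(W[j] - W[i] for j in neighbors[i])
         for i in range(m)]  # synchronous update: RHS uses the previous W list

# All agents should now agree with the centralized ridge solution on pooled data.
H_all, T_all = np.vstack(H), np.vstack(T)
W_star = np.linalg.solve(H_all.T @ H_all + lam * np.eye(n_hidden), H_all.T @ T_all)
print("max distance to centralized solution:",
      max(np.linalg.norm(Wi - W_star) for Wi in W))

Because the local costs here are quadratic and the graph is undirected, this synchronous discretization preserves the zero-gradient-sum invariant exactly, so the only consensus point the iterates can settle on is the global (pooled-data) ridge minimizer.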
