Use Correlation Coefficients in Gaussian Process to Train Stable ELM Models
  • Authors: Yulin He (10)
    Joshua Zhexue Huang (10)
    Xizhao Wang (10)
    Rana Aamir Raza (11)

    10. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
    11. College of Information and Communication Engineering, Shenzhen University, Shenzhen 518060, China
  • Keywords: Extreme learning machine; Correlation coefficient; Gaussian process; Neural network
  • Series: Lecture Notes in Computer Science
  • Year: 2015
  • Volume: 9077
  • Issue: 1
  • Pages: 405-417
  • Full text size: 352 KB
  • Book title: Advances in Knowledge Discovery and Data Mining
  • ISBN: 978-3-319-18037-3
  • Category: Computer Science
  • Subjects: Artificial Intelligence and Robotics
    Computer Communication Networks
    Software Engineering
    Data Encryption
    Database Management
    Computation by Abstract Devices
    Algorithm Analysis and Problem Complexity
  • Publisher: Springer Berlin / Heidelberg
  • ISSN:1611-3349
Abstract
This paper proposes a new method, called StaELM, to train stable extreme learning machines (ELMs). StaELM uses correlation coefficients in a Gaussian process to measure the similarity between different hidden-layer outputs. Unlike kernel operations such as linear or RBF kernels applied to hidden-layer outputs, correlation coefficients quantify the similarity of hidden-layer outputs with real numbers in \((0,1]\) and prevent the covariance matrix of the Gaussian process from becoming singular. Training through a Gaussian process yields ELM models that are insensitive to random initialization and avoids over-fitting. We analyse the rationality of StaELM and show that existing kernel-based ELMs are special cases of StaELM. We used real-world datasets to train both regression and classification StaELM models. The experimental results show that StaELM models achieved higher accuracy in both regression and classification than traditional kernel-based ELMs, were more stable with respect to different random initializations, suffered less over-fitting, and trained faster.
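The training scheme described in the abstract can be sketched as follows: generate random input weights, compute the hidden-layer outputs, build a kernel matrix from correlation coefficients between those outputs, and solve a regularized Gaussian-process regression for the output weights. This is a minimal illustrative reading, not the paper's implementation: the exact mapping of correlations into \((0,1]\) is not given in the abstract, so taking the absolute value of the Pearson correlation is an assumption, and the function names and parameters (`staelm_fit`, `n_hidden`, `reg`) are hypothetical.

```python
import numpy as np

def staelm_fit(X, T, n_hidden=50, reg=1e-3, rng=None):
    """Sketch of correlation-kernel ELM training (StaELM-style).

    Assumption: hidden-layer outputs are compared with |Pearson
    correlation| instead of a linear or RBF kernel, keeping kernel
    entries in (0, 1] and the covariance matrix non-singular.
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden-layer outputs

    # Kernel matrix from correlation coefficients between hidden outputs.
    K = np.abs(np.corrcoef(H))
    # GP-style regularized regression for the dual output weights.
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), T)
    return W, b, H, alpha

def staelm_predict(Xnew, W, b, H, alpha):
    """Predict with cross-correlations between new and training hidden outputs."""
    Hnew = 1.0 / (1.0 + np.exp(-(Xnew @ W + b)))
    Hc = H - H.mean(axis=1, keepdims=True)           # center training hidden outputs
    Hn = Hnew - Hnew.mean(axis=1, keepdims=True)     # center new hidden outputs
    num = Hn @ Hc.T
    den = (np.linalg.norm(Hn, axis=1)[:, None]
           * np.linalg.norm(Hc, axis=1)[None, :])
    Knew = np.abs(num / den)                         # same |corr| kernel as training
    return Knew @ alpha
```

Because the hidden weights are random, the kernel matrix changes per initialization, but the regularized GP solve keeps predictions stable, which mirrors the stability claim in the abstract.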