A Dynamic Pruning Strategy for Incremental Learning on a Budget
  • Authors: Yusuke Kondo (20)
    Koichiro Yamauchi (20)
  • Keywords: learning on a budget; regression; forgetting; virtual concept drifting environments
  • Journal: Lecture Notes in Computer Science
  • Year: 2014
  • Volume: 8834
  • Pages: 295-303
  • Full text size: 215 KB
  • References: 1. Dekel, O., Shalev-Shwartz, S., Singer, Y.: The Forgetron: A kernel-based perceptron on a fixed budget. Technical report (2005), http://www.pascal-network.org/
    2. Orabona, F., Keshet, J., Caputo, B.: The Projectron: A bounded kernel-based perceptron. In: ICML 2008, pp. 720-727 (2008)
    3. He, W., Wu, S.: A kernel-based perceptron with dynamic memory. Neural Networks 25, 105-113 (2011)
    4. Yamauchi, K.: Pruning with replacement and automatic distance metric detection in limited general regression neural networks. In: IJCNN 2011, pp. 899-906. IEEE (July 2011)
    5. Yamauchi, K.: Incremental learning on a budget and its application to quick maximum power point tracking of photovoltaic systems. In: The 6th International Conference on Soft Computing and Intelligent Systems, pp. 71-78. IEEE (November 2012)
    6. Yamauchi, K., Kondo, Y., Maeda, A., Nakano, K., Kato, A.: Incremental learning on a budget and its application to power electronics. In: Lee, M., Hirose, A., Hou, Z.-G., Kil, R.M. (eds.) ICONIP 2013, Part II. LNCS, vol. 8227, pp. 341-351. Springer, Heidelberg (2013)
    7. Specht, D.F.: A general regression neural network. IEEE Transactions on Neural Networks 2(6), 568-576 (1991)
    8. Lee, D., Noh, S.H., Min, S.L., Choi, J., Kim, J.H., Cho, Y., Kim, C.S.: LRFU: A spectrum of policies that subsumes the least recently used and least frequently used policies. IEEE Transactions on Computers 50(12), 1352-1361 (2001)
  • Author affiliations: Yusuke Kondo (20)
    Koichiro Yamauchi (20)

    20. Department of Computer Science, Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi, Japan
  • ISSN:1611-3349
Abstract
Several kernel-based perceptron learning methods on a budget have been proposed. In the early steps of learning, such methods record each new instance by allocating a new kernel to it. Once the number of kernels reaches the upper bound, however, some stored memory must be forgotten to make room for recording important new instances. In such cases, choosing which memory to forget is critical for maintaining a high generalization capability. In this paper, we propose a new method that selects one of two forgetting strategies, depending on the redundancy of the memory in the learning machine. If redundant memory exists, the learner replaces the most redundant memory with the new instance. If little redundancy remains, the learner instead replaces the least recently used / least frequently used memory. Experimental results suggest that the proposed method is superior to existing learning methods on a budget.
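The two-strategy selection described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: the Gaussian similarity measure, the redundancy score (maximum similarity to any other stored kernel center), the switching threshold, and the simple uses/recency tie-break standing in for an LRU/LFU policy are all assumptions made for the example.

```python
import math
import time

class BudgetKernelMemory:
    """Sketch of a budget-limited kernel store with two forgetting strategies:
    replace the most redundant kernel when redundancy is high, otherwise
    replace the least frequently / least recently used entry."""

    def __init__(self, budget, redundancy_threshold=0.5):
        self.budget = budget
        self.threshold = redundancy_threshold  # assumed switching criterion
        self.kernels = []  # each entry: {center, weight, last_used, uses}

    @staticmethod
    def _similarity(a, b):
        # Gaussian similarity between two scalar centers (illustrative).
        return math.exp(-((a - b) ** 2))

    def redundancy(self, k):
        # Hypothetical redundancy score: how close kernel k's center is
        # to the nearest other stored center.
        others = [o for o in self.kernels if o is not k]
        if not others:
            return 0.0
        return max(self._similarity(k["center"], o["center"]) for o in others)

    def insert(self, center, weight):
        entry = {"center": center, "weight": weight,
                 "last_used": time.monotonic(), "uses": 0}
        # Early phase: budget not yet exhausted, allocate a new kernel.
        if len(self.kernels) < self.budget:
            self.kernels.append(entry)
            return "allocated"
        # Budget exhausted: pick the forgetting strategy by redundancy.
        scores = [self.redundancy(k) for k in self.kernels]
        most_redundant = max(range(len(scores)), key=scores.__getitem__)
        if scores[most_redundant] >= self.threshold:
            self.kernels[most_redundant] = entry   # redundant memory exists
            return "replaced_redundant"
        # Little redundancy: forget the least used / least recent entry.
        victim = min(range(len(self.kernels)),
                     key=lambda i: (self.kernels[i]["uses"],
                                    self.kernels[i]["last_used"]))
        self.kernels[victim] = entry
        return "replaced_lru_lfu"
```

With two nearly coincident centers the sketch takes the redundancy branch, while well-separated centers force the LRU/LFU branch; the paper's actual redundancy measure and replacement policy differ in detail.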
