An immune-inspired instance selection mechanism for supervised classification
  • Authors: Grazziela P. Figueredo (1) gpfigueredo@gmail.com
    Nelson F. F. Ebecken (1) nelson@ntt.ufrj.br
    Douglas A. Augusto (2) douglas@lncc.br
    Helio J. C. Barbosa (2) hcbm@lncc.br
  • Keywords: Instance selection – Data reduction – Artificial immune systems – Data classification
  • Journal: Memetic Computing
  • Published: June 2012
  • Volume: 4
  • Issue: 2
  • Pages: 135–147
  • Full text size: 292.8 KB
  • References: 1. A Asuncion DN (2010) UCI Machine Learning Repository. http://www.ics.uci.edu/~mlearn/MLRepository.html
    2. Abbas AK, Lichtman AH, Pober JS (1991) Cellular and molecular immunology. Saunders, Philadelphia
    3. Aha DW, Kibler D, Albert MK (1991) Instance-based learning algorithms. Mach Learn 6: 37–66
    4. Blum AL, Langley P (1997) Selection of relevant features and examples in machine learning. Artif Intell 97: 245–271
    5. Brodley CE (1993) Addressing the selective superiority problem: automatic algorithm/model class selection. In: Proceedings of the 10th international machine learning conference, pp 17–24
    6. Cano J, Herrera F, Lozano M (2003) Using evolutionary algorithms as instance selection for data reduction in KDD: An experimental study. IEEE Trans Evol Comput 7(6): 561–575
    7. Cano JR, Herrera F, Lozano M (2006) On the combination of evolutionary algorithms and stratified strategies for training set selection in data mining. Appl Soft Comput 6(3): 323–332
    8. Cormack DH (2001) Essential histology, 2nd edn. Lippincott Williams and Wilkins
    9. Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines (and other kernel-based learning methods). Cambridge University Press, Cambridge
    10. Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30. http://dl.acm.org/citation.cfm?id=1248547.1248548
    11. Devijver P, Kittler J (1980) On the edited nearest neighbour rule. IEEE Patt Recognit 1: 72–80
    12. Eick C, Zeidat N, Vilalta R (2004) Using representative-based clustering for nearest neighbor dataset editing. In: Data mining, 2004. ICDM ’04. Fourth IEEE international conference, pp 375–378
    13. Eshelman LJ (1991) The CHC adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination. In: Rawlins GJE (ed) Proceedings of the first workshop on foundations of genetic algorithms. Morgan Kaufmann, San Mateo, pp 265–283
    14. Espíndola RP, Ebecken NFF (2005) On extending f-measure and g-mean metrics to multi-class problems. In: Sixth international conference on data mining, text mining and their business applications, Wessex Institute of Technology, UK, vol 35. WIT Press, Skiathos, pp 25–34
    15. Franco A, Maltoni D, Nanni L (2010) Data pre-processing through reward-punishment editing. Pattern Anal Appl (PAA) 13:367–381(15)
    16. García S, Cano JR, Herrera F (2010) A review on evolutionary prototype selection: an empirical study of performance and efficiency. In: Intelligent systems for automated learning and adaptation: emerging trends and applications. IGI Global, pp 92–113
    17. García S, Cano JR, Herrera F (2008) A memetic algorithm for evolutionary prototype selection: a scaling up approach. Pattern Recognit 41:2693–2709. doi:10.1016/j.patcog.2008.02.006, http://dl.acm.org/citation.cfm?id=1367147.1367320
    18. Garfield E (1979) Citation indexing—its theory and application in science, technology, and humanities/Eugene Garfield. Wiley, New York
    19. Gates GW (1972) The reduced nearest neighbor rule. IEEE Trans Inform Theory 14: 431–433
    20. Goldberg D (1989) Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading
    21. Hart PE (1968) The condensed nearest neighbor rule. IEEE Trans Inform Theory 14: 515–516
    22. Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
    23. Janeway CA, Travers P, Walport M, Shlomchik M (2001) Immunobiology: the immune system in health and disease, 5th edn. Garland Science, Oxford
    24. John GH, Kohavi R, Pfleger K (1994) Irrelevant features and the subset selection problem. In: International conference on machine learning. Morgan Kaufmann, San Mateo, pp 121–129
    25. Kibler D, Aha DW (1987) Learning representative exemplars of concepts: an initial case study. In: Proceedings of the 4th international workshop on machine learning, pp 24–30
    26. Kohavi R (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. In: IJCAI, pp 1137–1145
    27. Kohonen T (1990) The self-organizing map. In: IEEE international conference on neural networks, vol 78, pp 1464–1480
    28. Lowe DG (1995) Similarity metric learning for a variable-kernel classifier. Neural Comput 7(1): 72–85
    29. Nanni L, Lumini A (2009) Particle swarm optimization for prototype reduction. Neurocomputing 72(4–6):1092–1097 (Brain Inspired Cognitive Systems (BICS 2006) / Interplay Between Natural and Artificial Computation (IWINAC 2007))
    30. Paredes R, Vidal E (2004) Learning prototypes and distances (LPD): a prototype reduction technique based on nearest neighbor error minimization. Int Conf Pattern Recognit 3: 442–445
    31. Pedreira CE (2006) Learning vector quantization with training data selection. IEEE Trans Pattern Anal Mach Intell 28: 157–162
    32. Quinlan JR (1993) C4.5: Programs for machine learning. Morgan Kaufmann, San Mateo
    33. Rice JA (2006) Mathematical statistics and data analysis. Duxbury Press, Pacific Grove
    34. Skalak DB (1994) Prototype and feature selection by sampling and random mutation hill climbing algorithms. In: Proceedings of the 11th international conference on machine learning. Morgan Kaufmann, San Mateo
    35. Tizard I (1985) Introduction to veterinary immunology, 2nd edn. ROCA, La Roca Del Valles-barcel (in Portuguese)
    36. Whitley D (1989) The GENITOR algorithm and selection pressure: why rank-based allocation of reproductive trials is best. In: Proceedings of the 3rd international conference on genetic algorithms, pp 116–121
    37. Wilcoxon F (1945) Individual comparisons by ranking methods. Biometrics Bull 1(6):80–83. doi:10.2307/3001968
    38. Wilson DL (1972) Asymptotic properties of nearest neighbor rules using edited data. Syst Man Cybern IEEE Trans 2(3): 408–421
    39. Wilson DR, Martinez TR (1997) Instance pruning techniques. In: Proceedings of the 14th international conference on machine learning, pp 403–411
    40. Wilson DR, Martinez TR (2000) Reduction techniques for instance-based learning algorithms. Mach Learn 38: 257–268
    41. Yin XC, Liu CP, Han Z (2005) Feature combination using boosting. Pattern Recognit Lett 26: 2195–2205
    42. Zeidat N, Wang S, Eick CF (2005) Dataset editing techniques: a comparative study
  • Author affiliations: 1. Federal University of Rio de Janeiro-COPPE, Rio de Janeiro, Brazil; 2. LNCC-MCT, Rio de Janeiro, Brazil
  • Journal category: Engineering
  • Journal subjects: Applied Mathematics and Computational Methods of Engineering
    Artificial Intelligence and Robotics
    Automation and Robotics
    Complexity
    Bioinformatics
    Applications of Mathematics
  • Publisher: Springer Berlin / Heidelberg
  • ISSN: 1865-9292
Abstract
One issue in data classification problems is finding an optimal subset of instances with which to train a classifier. Training sets that represent the characteristics of each class well have a better chance of producing a successful predictor. In some cases the data are redundant or consume large amounts of computing time during learning. To address this, instance selection techniques have been proposed. These techniques remove examples from the data set so that classifiers are built faster and, in some cases, with better accuracy. Some of these techniques are based on nearest neighbors, ordered removal, random sampling and evolutionary methods. Their weaknesses generally involve lack of accuracy, overfitting, lack of robustness as the data set size increases, and high complexity. This work proposes a simple and fast immune-inspired suppressive algorithm for instance selection, called SeleSup. According to self-regulation mechanisms, cells unable to neutralize danger tend to disappear from the organism. By analogy, data that are not relevant to the learning of a classifier are eliminated from the training process. The proposed method was compared with three important instance selection algorithms on a number of data sets. The experiments showed that our mechanism substantially reduces the data set size while remaining accurate and robust, especially on larger data sets.
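The suppression analogy described in the abstract can be illustrated with a toy sketch. This is not the authors' SeleSup algorithm — the function name `suppressive_selection`, the nearest-neighbor "affinity" test, and all parameters are hypothetical choices made here for illustration. The idea shown: each training instance is treated as an immune cell, a cell is "activated" when it appears among the same-class nearest neighbors of some other instance (it helped neutralize that antigen), and cells that are never activated are suppressed, i.e. removed from the training set.

```python
from collections import Counter


def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def suppressive_selection(X, y, k=3, rounds=2):
    """Immune-inspired suppressive instance selection (illustrative sketch only).

    X: list of feature vectors; y: list of class labels.
    Each instance plays the role of an immune cell. For every instance i
    (the 'antigen'), its k nearest remaining instances are inspected;
    neighbors sharing i's class are counted as activated. After each round,
    instances that were never activated are suppressed (removed), loosely
    mimicking the self-regulation analogy: cells that neutralize nothing
    disappear. Returns the indices of the retained instances.
    """
    keep = set(range(len(X)))
    activations = Counter()
    for _ in range(rounds):
        for i in sorted(keep):
            # k nearest remaining cells to antigen i, excluding i itself
            neighbors = sorted((j for j in keep if j != i),
                               key=lambda j: euclidean(X[i], X[j]))[:k]
            for j in neighbors:
                if y[j] == y[i]:  # same-class neighbor helped classify i
                    activations[j] += 1
        # suppress cells that have never been activated so far
        keep = {i for i in keep if activations[i] > 0}
        if not keep:
            break
    return sorted(keep)
```

On a tiny two-cluster data set, a lone class-0 point placed inside the class-1 cluster is never a same-class neighbor of anything, so it gets suppressed while the cluster cores survive — the kind of noise-and-redundancy pruning the abstract describes, though real instance selectors also aim to discard redundant interior points, which this sketch does not attempt.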
