Author affiliations: Andrew James Turner (1), Julian Francis Miller (1)
1. Intelligent Systems Group, Electronics Department, The University of York, York, UK
ISSN:1864-5917
Abstract
NeuroEvolution is the application of Evolutionary Algorithms to the training of Artificial Neural Networks. Currently, the vast majority of NeuroEvolutionary methods create homogeneous networks of user-defined transfer functions. This is despite NeuroEvolution being capable of creating heterogeneous networks in which each neuron's transfer function is not chosen by the user but is instead selected or optimised during evolution. This paper demonstrates how NeuroEvolution can be used to select or optimise each neuron's transfer function, and empirically shows that doing so significantly aids training. This result is important because the majority of NeuroEvolutionary methods are capable of creating heterogeneous networks using the methods described.
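The core idea of the abstract — encoding each neuron's transfer function as an evolvable gene rather than fixing it globally — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual encoding: the function set, the dictionary-based neuron genotype, and the mutation rate are all assumptions made for the example.

```python
import math
import random

# An illustrative set of candidate transfer functions a neuron may adopt;
# the paper's actual function set is not reproduced here.
TRANSFER_FUNCTIONS = {
    "tanh": math.tanh,
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "relu": lambda x: max(0.0, x),
}

def random_neuron(n_inputs):
    """A neuron gene: connection weights plus an evolvable transfer function.

    Because the function is part of the genotype, different neurons in the
    same network can end up with different transfer functions (heterogeneity).
    """
    return {
        "weights": [random.uniform(-1.0, 1.0) for _ in range(n_inputs)],
        "function": random.choice(list(TRANSFER_FUNCTIONS)),
    }

def mutate(neuron, rate=0.1):
    """Point mutation: each weight, and the transfer function itself, may be
    replaced. Mutating the function gene is what lets evolution *select* each
    neuron's transfer function rather than leaving it user-defined."""
    return {
        "weights": [
            random.uniform(-1.0, 1.0) if random.random() < rate else w
            for w in neuron["weights"]
        ],
        "function": (
            random.choice(list(TRANSFER_FUNCTIONS))
            if random.random() < rate
            else neuron["function"]
        ),
    }

def activate(neuron, inputs):
    """Weighted sum of inputs passed through this neuron's own function."""
    s = sum(w * x for w, x in zip(neuron["weights"], inputs))
    return TRANSFER_FUNCTIONS[neuron["function"]](s)
```

In a full NeuroEvolutionary run these neuron genes would sit inside a network genotype evolved under a scheme such as a (1+λ) evolutionary strategy; the key point illustrated here is only that the transfer function travels through mutation alongside the weights.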