Abstract
Multi-task learning enables learning algorithms to harness shared knowledge from several tasks in order to provide better performance. In the past, neuro-evolution has shown promising performance for a number of real-world applications. Recently, evolutionary multi-tasking has been proposed for optimisation problems. In this paper, we present a multi-task learning method for neural networks that evolves modular network topologies. In the proposed method, each task is associated with a specific network topology that has a different number of hidden neurons. The method produces a modular network that remains effective even when some of the neurons and connections are removed from selected trained modules. We demonstrate the effectiveness of the method using feedforward networks on selected n-bit parity problems of varying levels of difficulty. The results show better training and generalisation performance when modules representing additional knowledge are added by increasing the number of hidden neurons during training.
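The modular-topology idea described above can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: it assumes hidden neurons are grouped into modules, task t activates only the first t modules, and later modules can be removed without affecting an earlier task's output. The module sizes, activation functions, and weight shapes are illustrative choices.

```python
import numpy as np

def forward(x, W1, b1, W2, b2, n_active):
    """Forward pass using only the first n_active hidden neurons.

    Hidden neurons are grouped into modules; evaluating task t uses
    modules 1..t, i.e. the first n_active hidden neurons.
    (Hypothetical sketch of the modular-topology idea.)
    """
    h = np.tanh(x @ W1[:, :n_active] + b1[:n_active])          # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2[:n_active] + b2)))     # sigmoid output

rng = np.random.default_rng(0)
n_in = 4                      # e.g. inputs for a 4-bit parity task
modules = [2, 3, 3]           # hidden neurons added per module (assumed sizes)
n_hidden = sum(modules)

W1 = rng.standard_normal((n_in, n_hidden))
b1 = rng.standard_normal(n_hidden)
W2 = rng.standard_normal(n_hidden)
b2 = rng.standard_normal()

x = rng.integers(0, 2, size=(8, n_in)).astype(float)  # batch of bit vectors

# Task 1 uses only module 1; the full network uses all modules.
y_task1 = forward(x, W1, b1, W2, b2, n_active=modules[0])
y_full = forward(x, W1, b1, W2, b2, n_active=n_hidden)
```

Because later modules only add hidden neurons (and their outgoing weights), zeroing the output weights of modules 2 and 3 reproduces the task-1 behaviour exactly, which mirrors the paper's claim that the network stays effective when selected modules are removed.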