Learning Pooling for Convolutional Neural Network
Abstract
Convolutional neural networks (CNNs) consist of alternating convolutional and pooling layers. A pooling layer is obtained by applying a pooling operator that aggregates information within each small region of the input feature channels and then downsamples the result. Typically, hand-crafted pooling operations are used to aggregate information within a region, but they are not guaranteed to minimize the training error. To overcome this drawback, we propose a learned pooling operation, called LEAP (LEArning Pooling), which is obtained by end-to-end training. Specifically, in our method one shared linear combination of the neurons in the region is learned for each feature channel (map). In fact, average pooling can be seen as a special case of our method in which all the weights are equal. In addition, inspired by the LEAP operation, we propose a simplified convolution operation to replace the traditional convolution, which consumes many extra parameters. The simplified convolution greatly reduces the number of parameters while maintaining comparable performance. By combining the proposed LEAP method and the simplified convolution, we demonstrate state-of-the-art classification performance with a moderate number of parameters on three public object recognition benchmarks: the CIFAR10, CIFAR100, and ImageNet2012 datasets.
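A minimal sketch of what a LEAP-style pooling layer could look like, assuming the "shared linear combination per channel" is realized as a depthwise convolution whose stride equals the pooling window; the class and parameter names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LEAPPool2d(nn.Module):
    """Learned pooling: one weight vector per channel, shared across regions."""

    def __init__(self, channels: int, pool_size: int = 2):
        super().__init__()
        # groups=channels makes this a depthwise convolution: each channel
        # learns its own pool_size x pool_size combination weights, shared
        # across all spatial regions of that channel.
        self.pool = nn.Conv2d(
            channels, channels,
            kernel_size=pool_size, stride=pool_size,
            groups=channels, bias=False,
        )
        # Equal weights reproduce average pooling, the special case
        # mentioned in the abstract; training can then adapt the weights.
        nn.init.constant_(self.pool.weight, 1.0 / (pool_size * pool_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(x)

# Usage: drop-in replacement for a 2x2 average-pooling layer.
x = torch.randn(8, 64, 32, 32)
y = LEAPPool2d(64, pool_size=2)(x)  # shape: (8, 64, 16, 16)
```

Because the weights are learned end-to-end with the rest of the network, the pooling operation can be tuned to reduce the training error rather than being fixed a priori, which is the motivation stated in the abstract.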
