Traffic sign classification algorithm based on compressed convolutional neural network
(基于压缩卷积神经网络的交通标志分类算法)
  • Authors: ZHANG Jianming; WANG Wei; LU Chaoquan; LI Xudong (张建明; 王伟; 陆朝铨; 李旭东)
  • Affiliations: Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology; School of Computer and Communication Engineering, Changsha University of Science and Technology
  • Keywords: convolutional neural network; traffic sign classification; channel pruning; quantization; model compression
  • Journal: Journal of Huazhong University of Science and Technology (Natural Science Edition) (华中科技大学学报(自然科学版)); journal code: HZLG
  • Publication date: 2019-01-10
  • Year, volume, issue: 2019; Vol. 47, Issue 01 (cumulative No. 433)
  • Pages: 108-113 (6 pages)
  • CN: 42-1658/N
  • Record ID: HZLG201901019
  • Funding: National Natural Science Foundation of China (61772454, 61811530332); Key Project of the Scientific Research Fund of the Hunan Provincial Education Department (16A008); Hunan Provincial Graduate Research Innovation Project (CX2018B565)
  • Language: Chinese
Abstract
Aiming at the problem that vehicle-mounted computing systems can hardly meet the computing-resource and storage demands of large convolutional neural networks, a traffic sign classification algorithm based on a compressed convolutional neural network is proposed. First, the original VGG-16 and AlexNet are trained for classification on the GTSRB dataset. Then, Taylor-expansion-based channel pruning is applied to remove redundant feature-map channels from the trained models. Next, the pruned models are quantized with a ternary quantization method. Finally, channel pruning, parameter quantization, and their combination are evaluated experimentally. The results show that the proposed algorithm compresses the network models effectively and reduces the amount of computation: for the combined-compression VGG-16 model, the storage size is halved, the parameter count falls to 9% of the original, the number of floating-point operations falls to one fifth of the original, model loading is five times faster, testing is twice as fast, and the accuracy is 97% of the original model's.
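The abstract names two compression steps, Taylor-expansion channel pruning [6] and ternary parameter quantization [11]. The two PyTorch sketches below illustrate those ideas under stated assumptions; they are not the authors' implementation, and every function and variable name (taylor_channel_importance, ternarize, the hook wiring) is invented for illustration.

For pruning, Molchanov et al. [6] rank each channel by a first-order Taylor estimate of how much the loss would change if that channel's feature map were zeroed; the estimate needs only the activations and their gradients from ordinary backpropagation:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the first-order Taylor criterion of [6];
# all names here are invented, not taken from the paper's code.

def taylor_channel_importance(activation, gradient):
    """Score each channel by |E[a_c * dL/da_c]|, the first-order Taylor
    estimate of the loss change if channel c's feature map is zeroed.
    activation, gradient: (N, C, H, W) tensors. Returns (C,) scores."""
    scores = (activation * gradient).mean(dim=(0, 2, 3)).abs()
    # per-layer L2 normalization makes scores comparable across layers
    return scores / (scores.norm() + 1e-8)

# --- usage: capture one conv layer's activation and its gradient ---
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
saved = {}

def forward_hook(module, inputs, output):
    saved["act"] = output.detach()
    # tensor hook fires during backward with dLoss/doutput
    output.register_hook(lambda g: saved.update(grad=g.detach()))

conv.register_forward_hook(forward_hook)

x = torch.randn(8, 3, 32, 32)
loss = conv(x).pow(2).mean()       # stand-in for the classification loss
loss.backward()

scores = taylor_channel_importance(saved["act"], saved["grad"])
to_prune = scores.argsort()[:4]    # e.g. drop the 4 least important channels
```

The lowest-ranked channels are removed and the network is fine-tuned; repeating this rank-prune-finetune loop is what shrinks the FLOP count reported above.

For quantization, a minimal sketch of projecting weights onto the three values {-alpha, 0, +alpha}: the threshold rule delta = 0.7 * mean|w| is the common ternary-weight-network heuristic, whereas the trained ternary quantization of [11] instead learns the positive and negative scales during fine-tuning:

```python
import torch

def ternarize(w, delta_factor=0.7):
    """Project a weight tensor onto {-alpha, 0, +alpha}.
    Assumption: the heuristic threshold delta = delta_factor * mean|w|;
    [11] learns the scales by gradient descent instead.
    Returns the quantized tensor and the mask of surviving weights."""
    delta = delta_factor * w.abs().mean()
    mask = (w.abs() > delta).to(w.dtype)
    # alpha: mean magnitude of the weights that survive the threshold
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1)
    return alpha * torch.sign(w) * mask, mask

w = torch.randn(16, 3, 3, 3)   # e.g. one conv layer's weights
w_q, mask = ternarize(w)
print(w_q.unique())            # three values: -alpha, 0, +alpha
```

Since each surviving weight then needs only 2 bits plus one shared scale per layer instead of 32 bits, ternarization plausibly accounts for most of the storage reduction the abstract reports.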
References
[1] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[C]//Proc of Advances in Neural Information Processing Systems.Lake Tahoe:NIPS,2012:1097-1105.
    [2] SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[C]//Proc of International Conference on Learning Representations.San Diego:ICLR,2015:1-14.
    [3] REN Shaoqing,HE Kaiming,GIRSHICK R,et al.Faster R-CNN:towards real-time object detection with region proposal networks[C]//Proc of Conference on Advances in Neural Information Processing Systems.Montreal:Curran Associates,2015:91-99.
    [4] LEI Jie,GAO Xin,SONG Jie,et al.Survey of deep neural network model compression[J].Journal of Software,2018,29(2):251-266.(in Chinese)
    [5] HINTON G,VINYALS O,DEAN J.Distilling the knowledge in a neural network[C]//Proc of Conference on Advances in Neural Information Processing Systems.Montreal:IEEE,2014:2644-2652.
    [6] MOLCHANOV P,TYREE S,KARRAS T,et al.Pruning convolutional neural networks for resource efficient transfer learning[C]//Proc of International Conference on Learning Representations.Toulon:IEEE,2017:324-332.
    [7] HU H,PENG R,TAI Y W,et al.Network trimming:a data-driven neuron pruning approach towards efficient deep architectures[C]//Proc of International Conference on Learning Representations. Toulon:IEEE, 2017:214-222.
    [8] TAI C,XIAO T,WANG X,et al.Convolutional neural networks with low-rank regularization[EB/OL].[2018-03-05].https://arxiv.org/abs/1511.06067.
    [9] NOVIKOV A,PODOPRIKHIN D,OSOKIN A,et al.Tensorizing neural networks[C]//Proc of Conference on Advances in Neural Information Processing Systems.Montreal:Curran Associates,2015:442-450.
    [10] VANHOUCKE V,SENIOR A,MAO M Z.Improving the speed of neural networks on CPUs[C]//Proc of Deep Learning and Unsupervised Feature Learning NIPS Workshop.Granada:NIPS,2011:1-8.
    [11] ZHU C,HAN S,MAO H,et al.Trained ternary quantization[EB/OL].[2018-03-05].https://arxiv.org/abs/1612.01064.
    [12] ZHOU S,WU Y,NI Z,et al.DoReFa-Net:training low bitwidth convolutional neural networks with low bitwidth gradients[EB/OL].[2018-03-05].https://arxiv.org/abs/1606.06160.
    [13] STALLKAMP J,SCHLIPSING M,SALMEN J,et al.The German traffic sign recognition benchmark:a multi-class classification competition[C]//Proc of International Joint Conference on Neural Networks.San Jose:IEEE,2011:1453-1460.
    [14] KRIZHEVSKY A.Learning multiple layers of features from tiny images[D].Toronto:Department of Computer Science,University of Toronto,2009.
    [15] LI H,KADAV A,DURDANOVIC I,et al.Pruning filters for efficient ConvNets[C]//Proc of International Conference on Learning Representations.Toulon:IEEE,2017:34-42.
    [16] KIM Y D,PARK E,YOO S,et al.Compression of deep convolutional neural networks for fast and low power mobile applications[J].Computer Science,2015,71(2):576-584.
