Vehicle multi-target detection method based on YOLO v2 algorithm under the Darknet framework
  • Authors: LI Xun (李珣); LIU Yao (刘瑶); LI Peng-fei (李鹏飞); ZHANG Lei (张蕾); ZHAO Zheng-fan (赵征凡)
  • Keywords: traffic information engineering; deep learning; multi-target detection; Darknet framework; YOLO v2 algorithm; network model
  • Journal: Journal of Traffic and Transportation Engineering (交通运输工程学报)
  • Affiliations: School of Electronics and Information, Xi'an Polytechnic University; Artillery Air Defense Equipment Technology Institute
  • Publication date: 2018-12-15
  • Year: 2018
  • Volume/issue: v.18; No.96
  • Funding: National Natural Science Foundation of China (51607133); Natural Science Basic Research Program of Shaanxi Province (2016JQ5106); Special Scientific Research Program of Shaanxi Provincial Department of Education (16JK1342)
  • Language: Chinese
  • Issue: 06
  • Pages: 146-162 (17 pages)
  • CN: 61-1369/U
  • Record ID: JYGC201806020
Abstract
        To address the problems that traditional road vehicle detection methods must extract different features as the scene changes and suffer from a low detection rate and poor robustness, a vehicle multi-target detection method based on the YOLO v2 algorithm under the Darknet framework was proposed. The YOLO-voc network model was improved according to the changes of the scenes and traffic flows of the target road sections. A classification training model was obtained based on the ImageNet data set and fine-tuning technology, and the parameters of the improved algorithm were further adjusted after analyzing the training results and the vehicle target characteristics; the vehicle multi-target detection method under the resulting YOLO-vocRV network model, which is better suited to road vehicle detection, was finally obtained. To verify the validity and completeness of the detection method, vehicle multi-target detection experiments were carried out under different traffic flow densities, and the results were compared with those of the classic YOLO-voc and YOLO9000 models; the multi-target detection results were analyzed with the improved YOLO-vocRV network model trained for 20 000 iterations. The test results show that under the blocking flow condition, the detection rates of the YOLO9000, YOLO-voc and improved YOLO-vocRV network models are 93.71%, 94.48% and 96.95%, respectively, so the improved YOLO-vocRV model has the highest detection rate. The precision and recall of the YOLO-vocRV model both cluster around 0.95, so the recall given up in exchange for better precision is small and a good trade-off is achieved. After mixed-sample training, the detection rate of the vehicle multi-target detection method based on the YOLO-vocRV model reaches 99.11% in the free flow state, 97.62% in the synchronous flow state and 97.14% in the blocking flow state, with a low false detection rate and good robustness.
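As background for the training workflow mentioned in the abstract (Darknet framework, ImageNet-pretrained weights, fine-tuning, 20 000 iterations), the sketch below shows how a YOLO v2 style detector is commonly fine-tuned with the original Darknet command-line tool; it is not the authors' actual setup. The data file vehicle.data and the modified configuration yolo-vocRV.cfg are hypothetical placeholders, and the 20 000-iteration budget would normally be set through max_batches in that configuration file.

    # Minimal sketch (assumed setup, not the paper's released code): fine-tuning a
    # YOLO v2 style detector in Darknet from ImageNet-pretrained Darknet-19 weights.
    import subprocess

    subprocess.run(
        [
            "./darknet", "detector", "train",
            "data/vehicle.data",      # hypothetical dataset description (class count, image lists)
            "cfg/yolo-vocRV.cfg",     # hypothetical modified YOLO-voc configuration;
                                      # max_batches=20000 would cap training at 20 000 iterations
            "darknet19_448.conv.23",  # ImageNet-pretrained convolutional weights for fine-tuning
        ],
        check=True,                   # fail loudly if the darknet binary reports an error
    )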
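The detection rate, precision, and recall quoted above can be read through the standard counting definitions. The abstract does not state the exact matching protocol (for example, the IoU threshold for a correct detection), so the sketch below simply assumes true-positive/false-positive/false-negative counts and treats the detection rate as the fraction of ground-truth vehicles that are correctly detected, which is numerically the same as recall under this assumption.

    # Minimal sketch of the reported evaluation metrics, assuming standard
    # TP/FP/FN counting; the counts below are hypothetical, not the paper's data.

    def detection_metrics(tp: int, fp: int, fn: int) -> dict:
        """Precision, recall, and detection rate from detection/ground-truth counts.

        tp: detections matched to a ground-truth vehicle
        fp: detections with no matching ground-truth vehicle
        fn: ground-truth vehicles missed by the detector
        """
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        detection_rate = tp / (tp + fn) if (tp + fn) else 0.0  # equals recall here
        return {"precision": precision,
                "recall": detection_rate,
                "detection_rate": detection_rate}

    # Hypothetical counts for illustration only:
    print(detection_metrics(tp=960, fp=30, fn=40))
    # precision ~ 0.970, recall = detection_rate = 0.960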
