Object tracking algorithm based on parallel tracking and detection framework and deep learning
  • Authors: 闫若怡 (YAN Ruoyi); 熊丹 (XIONG Dan); 于清华 (YU Qinghua); 肖军浩 (XIAO Junhao); 卢惠民 (LU Huimin)
  • Affiliations: College of Intelligence Science and Technology, National University of Defense Technology; Unmanned Systems Research Center, National Innovation Institute of Defense Technology
  • Original title (Chinese): 基于并行跟踪检测框架与深度学习的目标跟踪算法
  • Keywords: parallel tracking and detection; object tracking; deep learning; correlation filter; drone
  • Journal: 计算机应用 (Journal of Computer Applications); CNKI journal code: JSJY
  • Online publication date: 2018-10-01
  • Year, volume, issue: 2019, v.39, No.342 (Issue 02)
  • Pages: 39-43 (5 pages)
  • CN: 51-1307/TP
  • Fund: National Natural Science Foundation of China (61773393, 61503401)
  • Language: Chinese
  • CNKI record number: JSJY201902008
Abstract
In air-ground collaboration scenarios, the appearance of a moving ground target changes considerably when viewed from a drone, and traditional tracking algorithms can hardly meet the requirements of such applications. To address this problem, an object detection and tracking algorithm based on the Parallel Tracking And Detection (PTAD) framework and deep learning was proposed. Firstly, the Single Shot MultiBox Detector (SSD), an object detection algorithm based on a Convolutional Neural Network (CNN), was used as the detector in the PTAD framework to process keyframes, extract object information and provide it to the tracker. Secondly, the detector and the tracker processed image frames in parallel, and the overlap between the detection and tracking result boxes as well as the confidence of the tracking result were computed. Finally, the algorithm decided whether the tracker or the detector needed to be updated according to their tracking or detection status, and tracked the object in the image frames in real time. Experiments and comparative analysis on video sequences captured from the perspective of a drone show that the proposed algorithm outperforms the best algorithm under the PTAD framework and improves real-time performance by 13%, which verifies its effectiveness.
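
As a rough illustration of the decision step described above, the sketch below computes the overlap (IoU) between the detector's and the tracker's result boxes together with the tracker's confidence, and decides whether the tracker should be re-initialized from the detection. The paper does not publish code; the function names (iou, fuse) and the threshold values here are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the detector/tracker fusion step from the abstract:
# the detector (e.g. SSD) and the tracker run in parallel, the overlap
# (IoU) of their boxes and the tracker's confidence are computed, and the
# tracker is re-initialized from the detection when it appears to have
# drifted. Names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left x
    y: float  # top-left y
    w: float  # width
    h: float  # height

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union > 0 else 0.0

def fuse(det_box, det_score, trk_box, trk_conf,
         iou_thr=0.5, det_thr=0.6, conf_thr=0.3):
    """Decide which box to report and whether to re-initialize the tracker.

    Returns (output_box, reinit_tracker). Thresholds are placeholders.
    """
    if det_box is None or det_score < det_thr:
        # No reliable detection on this keyframe: trust the tracker.
        return trk_box, False
    if trk_conf < conf_thr or iou(det_box, trk_box) < iou_thr:
        # Tracker has low confidence or disagrees with the detector:
        # re-initialize it from the detection result.
        return det_box, True
    # Detection and tracking agree: keep tracking without an update.
    return trk_box, False

if __name__ == "__main__":
    d = Box(100, 80, 40, 60)   # hypothetical SSD detection
    t = Box(130, 95, 40, 60)   # hypothetical tracker output
    print(iou(d, t))           # low overlap -> tracker likely drifted
    print(fuse(d, 0.9, t, 0.2))  # -> (detection box, re-initialize tracker)

In a PTAD-style pipeline, such a fusion step would only run on the keyframes that the detector processes; on the remaining frames only the tracker runs, which is what keeps the combined system real-time.
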
