Multiple object tracking algorithm for the complex scenario (适用于复杂场景的多目标跟踪算法)
  • English title: Multiple object tracking algorithm for the complex scenario
  • Authors: Sun Yujia (孙宇嘉); Yu Jiyan (于纪言); Wang Xiaoming (王晓鸣)
  • Affiliation: Ministerial Laboratory of ZNDY, Nanjing University of Science and Technology (南京理工大学智能弹药技术国防重点学科实验室)
  • Keywords: information processing technology; computer vision; multiple object tracking; target detection
  • Journal: Chinese Journal of Scientific Instrument (仪器仪表学报)
  • Publication date: 2019-03-15
  • Year: 2019
  • Issue: 03
  • Pages: 129-140 (12 pages)
  • Funding: National Natural Science Foundation of China (grant 11402121)
  • Language: Chinese
  • CN: 11-2179/TH
  • ISSN: 0254-3087
  • CLC number: TP391.41
Abstract
In visual object tracking, conventional algorithms suffer from poor robustness in complex scenes with illumination changes, shadows, occlusion, and background motion. To address these problems, a background model with adaptive pixel-distance threshold coding, the shadow-tolerant local binary similarity pattern (ST-LBSP), is first proposed on top of the local binary pattern (LBP) background model to suppress the influence of shadows and illumination changes on target detection. Second, to handle occlusion and background motion, the minimum pixel distance between image blocks and the normalized sum-of-squared-differences distance between each block and the target's history are computed, and the blocks are merged according to this distance information. The merged blocks are then segmented using their optical flow vectors together with intra-region and inter-region differences. Finally, a structured support vector machine is used to design the multi-object tracker and associate targets across frames, yielding robust visual tracking. Experimental results on standard benchmark datasets show that the proposed method achieves strong robustness and high tracking precision.
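The first stage described above relies on a local binary similarity pattern whose similarity threshold adapts to pixel intensity, which is what makes the coding tolerant to shadows and illumination changes. The paper's exact ST-LBSP formulation is not reproduced here; the following is a minimal Python sketch of an LBSP-style code with an intensity-proportional threshold, where the 3x3 neighbourhood and the factor rel_thresh are illustrative assumptions.

```python
import numpy as np

def lbsp_code(patch: np.ndarray, rel_thresh: float = 0.3) -> int:
    """Binary similarity code of a 3x3 grayscale patch.

    A neighbour bit is set when its absolute difference from the centre
    pixel stays below a threshold that adapts to the centre intensity.
    The factor rel_thresh is an illustrative value, not from the paper.
    """
    assert patch.shape == (3, 3), "expects a 3x3 grayscale patch"
    center = float(patch[1, 1])
    thresh = rel_thresh * center                 # intensity-adaptive threshold
    neighbours = [(0, 0), (0, 1), (0, 2), (1, 0),
                  (1, 2), (2, 0), (2, 1), (2, 2)]
    code = 0
    for i, (r, c) in enumerate(neighbours):
        if abs(float(patch[r, c]) - center) <= thresh:
            code |= 1 << i                       # similar neighbour -> set bit
    return code


if __name__ == "__main__":
    # Because the threshold scales with the centre intensity, a uniformly
    # darkened (shadowed) copy of a patch tends to produce the same code.
    bg_patch = np.array([[40, 80, 42],
                         [38, 60, 78],
                         [41, 82, 39]], dtype=np.uint8)
    shadowed = (bg_patch * 0.5).astype(np.uint8)   # same texture, half brightness
    print(lbsp_code(bg_patch) == lbsp_code(shadowed))  # True for this patch
```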
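The merging step compares each detected image block against a target's historical appearance using a normalized sum-of-squared-differences distance. Below is a hedged sketch of such a distance together with a simple nearest-template gating rule; the patches are assumed to be grayscale and already resized to a common size, and the gate value max_dist and the function names are illustrative assumptions rather than the paper's exact criterion.

```python
import numpy as np

def normalized_ssd(block: np.ndarray, template: np.ndarray) -> float:
    """Normalised SSD between two grayscale patches of equal size.

    Both patches are zero-mean, unit-variance normalised before the squared
    differences are averaged, so the distance is insensitive to global
    brightness and contrast changes. Lower values mean a closer match.
    """
    b = block.astype(np.float64).ravel()
    t = template.astype(np.float64).ravel()
    b = (b - b.mean()) / (b.std() + 1e-8)
    t = (t - t.mean()) / (t.std() + 1e-8)
    return float(np.mean((b - t) ** 2))


def match_fragment(fragment: np.ndarray,
                   target_history: list,
                   max_dist: float = 1.0):
    """Return the index of the closest historical template, or None when
    every distance exceeds the (assumed) gate max_dist; an unmatched
    fragment would then start a new target rather than be merged."""
    if not target_history:
        return None
    dists = [normalized_ssd(fragment, tpl) for tpl in target_history]
    best = int(np.argmin(dists))
    return best if dists[best] <= max_dist else None
```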
