Multiple feature visual tracking via weighted spatial-temporal context learning
  • Authors: YIN Mingfeng; ZHU Jianliang; ZHU Kai; BO Yuming; ZHAO Gaopeng; WU Panlong
  • Affiliations: School of Automation, Nanjing University of Science and Technology; School of Automobile and Traffic Engineering, Jiangsu University of Technology
  • Keywords: visual tracking; multiple features; spatial-temporal context learning; weighted matrix
  • Journal: Journal of Chinese Inertial Technology (ZGXJ)
  • Publication date: 2019-02-15
  • Year: 2019; Volume: 27; Issue: 01
  • Pages: 49-56 (8 pages)
  • CN: 12-1222/O3
  • Record ID: ZGXJ201901007
  • Funding: National Natural Science Foundation of China (61473153); Aeronautical Science Foundation of China (2016ZC59006); Jiangsu Prospective Joint Research Project of Industry-University-Research Cooperation (BY2016004-04); Nanjing Post-subsidy Project of Industry-University-Research Cooperation (201722005)
  • Language: English
Abstract
        To address illumination variation, scale variation, and occlusion during visual tracking, a novel multiple-feature visual tracking algorithm based on weighted spatial-temporal context learning (MFWSTC) is proposed, which achieves fast and robust tracking. First, gray intensity, Local Binary Pattern (LBP) texture, and HSV color features are extracted to represent the object model, which improves the robustness of object tracking. Second, a weighted coefficient matrix is applied to measure the contribution of each feature, which enhances the discriminative ability of the object model. Then, the confidence map of each feature is obtained by the weighted spatial-temporal context (WSTC) tracker. Finally, the Kullback-Leibler (KL) divergence method is applied to fuse the confidence maps of the different features and localize the target. Experimental results show that the distance precision and overlap success rate on the object tracking benchmark (OTB) are 0.666 and 0.636, respectively, which are better than those of state-of-the-art tracking methods.
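The fusion step described in the abstract can be illustrated with a minimal sketch. The exact weighting rule is not given in this record, so the scheme below is an assumption: each per-feature confidence map is normalized, weighted by `exp(-KL(channel || mean map))` so that channels agreeing with the consensus contribute more, and the weighted maps are summed; the target is localized at the peak of the fused map. All function names here are illustrative, not the authors' implementation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two confidence maps, normalized to distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def fuse_confidence_maps(maps):
    """Fuse per-feature confidence maps via KL-divergence weighting.

    Assumed scheme: weight each channel by exp(-KL(channel || mean map)),
    normalize the weights, and return the convex combination of the maps.
    """
    maps = [m / m.sum() for m in maps]          # normalize each channel
    mean_map = np.mean(maps, axis=0)            # consensus map
    weights = np.array([np.exp(-kl_divergence(m, mean_map)) for m in maps])
    weights /= weights.sum()
    fused = sum(w * m for w, m in zip(weights, maps))
    return fused, weights

# Toy example: three feature channels (e.g. gray, LBP, HSV) over a 5x5 search window
rng = np.random.default_rng(0)
channels = [np.abs(rng.normal(size=(5, 5))) + 1e-3 for _ in range(3)]
fused, weights = fuse_confidence_maps(channels)
peak = np.unravel_index(np.argmax(fused), fused.shape)  # estimated target position
```

Because each channel is normalized before fusion and the weights sum to one, the fused map is itself a valid confidence distribution over the search window.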
References
[1]Li X,Hu W,Shen C,et al.A survey of appearance models in visual object tracking[J].ACM Transactions on Intelligent Systems and Technology,2013,4:A1-A42.
    [2]Yan J H,Chen S H,Ai S F,et al.Target tracking with improved CAMShift based on Kalman predictor[J].Journal of Chinese Inertial Technology,2014,22(4):536-542.
    [3]Yin M,Zhu J,Bo Y,et al.Vision tracking based on improved similarity measurement of spatiogram and particle filter[J].Journal of Chinese Inertial Technology,2018,26(3):359-365.
    [4]Ross D A,Lim J,Lin R,et al.Incremental learning for robust visual tracking[J].International Journal of Computer Vision,2008,77(1-3):125-141.
    [5]Kyriakides I.Target tracking using adaptive compressive sensing and processing[J].Signal Processing,2016,127:44-55.
    [6]Avidan S.Support vector tracking[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2004,26(8):1064-1072.
    [7]Babenko B,Yang M,Belongie S.Robust object tracking with online multiple instance learning[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2011,33(8):1619-1632.
    [8]Zhang K,Zhang L,Yang M.Fast compressive tracking[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2014,36(10):2002-2015.
    [9]Lan X,Ma A,Yuen P.Multi-cue visual tracking using robust feature-level fusion based on joint sparse representation[C]//2014 IEEE Conference on Computer Vision and Pattern Recognition.2014:1194-1201.
    [10]Zhang K,Zhang L,Liu Q,et al.Fast visual tracking via dense spatio-temporal context learning[C]//2014 European Conference on Computer Vision.Zurich,2014:127-141.
    [11]Li X,Liu Q,He Z,et al.A multi-view model for visual tracking via correlation filters[J].Knowledge-Based Systems,2016,113:88-99.
    [12]Wu Y,Lim J,Yang M.Online object tracking:a benchmark[C]//2013 IEEE Conference on Computer Vision and Pattern Recognition.Portland,2013:2411-2418.
    [13]Wu Y,Lim J,Yang M.Object tracking benchmark[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2015,37(9):1834-1848.
