基于激光雷达和相机信息融合的目标检测及跟踪
  • English title: An object detection and tracking algorithm based on LiDAR and camera information fusion
  • Authors: 常昕 (Chang Xin); 陈晓冬 (Chen Xiaodong); 张佳琛 (Zhang Jiachen); 汪毅 (Wang Yi); 蔡怀宇 (Cai Huaiyu)
  • Keywords: 目标检测 (object detection); 目标跟踪 (object tracking); 智能车辆 (intelligent vehicle); 激光点云 (LiDAR point cloud)
  • Affiliations: 天津大学精密仪器与光电子工程学院 (School of Precision Instrument and Opto-Electronics Engineering, Tianjin University); 天津大学光电信息技术教育部重点实验室 (Key Laboratory of Opto-Electronics Information Technology of Ministry of Education, Tianjin University)
  • Journal: 光电工程 (Opto-Electronic Engineering); CNKI journal code GDGC
  • Publication date: 2019-07-15
  • Year: 2019
  • Volume/Issue: v.46; No.356 (Issue 07)
  • Pages: 91-101 (11 pages)
  • CN: 51-1346/O4
  • Funding: Natural Science Foundation of Tianjin (15JCQNJC14200)
  • Language: Chinese
  • CNKI article ID: GDGC201907010
Abstract
The environmental perception system is an important part of an intelligent vehicle; it refers to the detection of the vehicle's surroundings by the on-board sensors. To ensure the accuracy and stability of this system, it is necessary to use the on-board sensors to detect and track objects in the passable area. This paper proposes an object detection and tracking algorithm based on LiDAR and camera information fusion, which detects and tracks targets through multi-sensor fusion. The algorithm clusters the LiDAR point cloud data to detect objects in the passable area and projects them onto the image to determine the objects to track. Once the objects are determined, the algorithm uses color information to track them through the image sequence; because image-based tracking is easily disturbed by illumination, shadows, and background clutter, the LiDAR point cloud is used to correct the tracking results during tracking. The algorithm is verified and tested on the KITTI dataset. Experiments show that the proposed detection and tracking algorithm achieves an average region overlap of 83.10% and a tracking success rate of 80.57%; compared with the particle filter algorithm, the average region overlap is improved by 29.47% and the tracking success rate by 19.96%.
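The abstract describes a pipeline in which LiDAR point-cloud clusters are projected onto the camera image to initialize the tracked objects, and the result is evaluated by average region overlap and tracking success rate on KITTI. The sketch below (Python with NumPy) is only a rough illustration of those two geometric pieces under assumed data: it projects an already-clustered group of LiDAR points through a hypothetical 3x4 projection matrix P (placeholder values, not the KITTI calibration), takes the bounding box of the projected pixels, and computes the region overlap (intersection-over-union) between that box and a tracker box. It is a minimal sketch, not the authors' implementation.

import numpy as np

def project_to_image(points_xyz, P):
    """Project N x 3 LiDAR points into the image plane with a 3 x 4 projection matrix."""
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # homogeneous coordinates
    uvw = pts_h @ P.T                                                   # N x 3 projected coordinates
    return uvw[:, :2] / uvw[:, 2:3]                                     # pixel coordinates (u, v)

def cluster_bbox(pixels):
    """Axis-aligned bounding box (x1, y1, x2, y2) enclosing the projected cluster pixels."""
    x1, y1 = pixels.min(axis=0)
    x2, y2 = pixels.max(axis=0)
    return np.array([x1, y1, x2, y2])

def region_overlap(box_a, box_b):
    """Intersection-over-union of two boxes: the 'region overlap' measure used for evaluation."""
    ix1, iy1 = np.maximum(box_a[:2], box_b[:2])
    ix2, iy2 = np.minimum(box_a[2:], box_b[2:])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

if __name__ == "__main__":
    # Hypothetical 3 x 4 camera projection matrix (placeholder values, not real calibration).
    P = np.array([[700.0, 0.0, 320.0, 0.0],
                  [0.0, 700.0, 240.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    # A synthetic LiDAR cluster roughly 10 m in front of the camera.
    cluster = np.random.default_rng(0).normal(loc=[2.0, 0.5, 10.0], scale=0.3, size=(50, 3))
    lidar_box = cluster_bbox(project_to_image(cluster, P))
    tracker_box = lidar_box + np.array([5.0, 3.0, 5.0, 3.0])  # a slightly shifted image-tracker box
    print("region overlap:", region_overlap(lidar_box, tracker_box))

Averaged over the frames of a tracked sequence, this per-frame overlap corresponds to the "average region overlap" figure quoted in the abstract.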
