二维激光雷达数据角点特征的提取 (Corner feature extraction of 2D lidar data)
  • Authors: KANG Jun-min (康俊民); ZHAO Xiang-mo (赵祥模); YANG Di (杨荻)
  • Affiliations: School of Economy and Finance, Xi'an International Studies University; School of Information Engineering, Chang'an University; School of Business, Xi'an International Studies University
  • Keywords: information engineering; unmanned vehicle; lidar; simultaneous localization and mapping; bivariate normal probability density; feature extraction
  • Journal: 交通运输工程学报 (Journal of Traffic and Transportation Engineering)
  • Publication date: 2018-06-15
  • Year: 2018
  • Volume/issue: v.18; No.93 (2018, Issue 03)
  • Pages: 232-242 (11 pages)
  • Fund: Programme of Introducing Talents of Discipline to Universities (高等学校学科创新引智计划, B14043)
  • Language: Chinese
  • CN: 61-1369/U
  • Database record No.: JYGC201803026
Abstract
In order to enhance the robustness of corner feature recognition in the driving environment of an unmanned vehicle and to improve the recognition speed of corner features, a corner feature extraction method was proposed based on the relative differences between the bivariate normal probability density mapping values of the observation points. The observation data group was mapped into the bivariate normal probability density space, and the mapping value of each observation point was obtained. The mapping results were normalized to eliminate the numerical differences caused by the covariances. The positions of peaks and troughs were then located in the mapped value curve: the observation point corresponding to a peak is closest to the mean point, and the observation point corresponding to a trough is closest to the inflection (corner) point. The relative heights of the peaks and troughs were used to determine whether the group of observation data satisfies the edge-length requirement of a corner feature, and the coordinates of the original observation data point corresponding to the trough were taken as the corner feature for constructing the environment feature map. Test results show that the extraction method can process sparse observation data with more than 63 observation points and an angular resolution greater than 1°, and that it can stably identify large corner features in both large-scale outdoor environments and indoor environments. For observation data with fewer than 180 points, the maximum processing time is less than 5 ms and the average processing time is less than 1.9 ms, so the method has good real-time performance and reduces the time required to construct the environment feature map. Because the method extracts corner features from the bivariate normal probability density of the observation data, it is insensitive to observation errors and to the scale and shape of corner features, and it effectively improves the robustness of corner feature recognition. 14 figs, 25 refs.
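The pipeline described in the abstract (density mapping, normalization, peak and trough search, relative-height check, trough point taken as the corner) can be summarized in a short sketch. The Python code below is only an illustration of that pipeline as it is described above, not the authors' implementation: the function name extract_corner, the use of scipy for the density evaluation and the peak search, and the min_relative_depth threshold standing in for the edge-length requirement are all assumptions.

    import numpy as np
    from scipy.stats import multivariate_normal
    from scipy.signal import find_peaks

    def extract_corner(points, min_relative_depth=0.2):
        """Illustrative sketch of the abstract's pipeline for one group of scan points.

        points            : (N, 2) array of consecutive 2D lidar returns, ordered by scan angle.
        min_relative_depth: assumed threshold on the peak-trough height difference that
                            stands in for the paper's edge-length requirement (hypothetical value).
        Returns the (x, y) coordinates of a candidate corner, or None.
        """
        points = np.asarray(points, dtype=float)
        mean = points.mean(axis=0)
        cov = np.cov(points.T)  # degenerate (perfectly collinear) groups are not handled here

        # 1. Map every observation to its bivariate normal probability density value.
        density = multivariate_normal(mean=mean, cov=cov).pdf(points)

        # 2. Normalize so that the covariance-dependent scale is removed; the point
        #    nearest the mean maps to 1 and becomes the peak of the curve.
        density = density / density.max()

        # 3. Locate the peak and the interior troughs of the ordered mapping curve.
        peak_idx = int(np.argmax(density))
        troughs, _ = find_peaks(-density)      # interior local minima of the curve
        if troughs.size == 0:
            return None                        # no corner-like break in this group

        # 4. Relative-height check standing in for the edge-length requirement.
        deepest = troughs[np.argmin(density[troughs])]
        if density[peak_idx] - density[deepest] < min_relative_depth:
            return None

        # 5. The raw observation at the deepest trough is returned as the corner feature.
        return points[deepest]

As a rough sanity check under these assumptions, feeding the function a group of points sampled along two perpendicular wall segments should return a point near their intersection, while a group sampled along a single straight wall should typically return None, since its density curve rises and falls without an interior trough.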
References
[1]HESS W,KOHLER D,RAPP H,et al.Real-time loop closure in 2D LIDAR SLAM[C]∥IEEE.Proceedings of the 2016 IEEE International Conference on Robotics and Automation.New York:IEEE,2016:1271-1278.
    [2]HIMSTEDT M,FROST J,HELLBACH S,et al.Large scale place recognition in 2D LIDAR scans using geometrical landmark relations[C]∥IEEE.2014 IEEE/RSJ International Conference on Intelligent Robots and Systems.New York:IEEE,2014:5030-5035.
    [3]TIPALDI G D,BRAUN M,ARRAS K O.FLIRT:interest regions for 2D range data with applications to robot navigation[C]∥KHATIB O,KUMAR V,SUKHATME G.Experimental Robotics.Berlin:Springer,2014:695-710.
    [4]WANG Yun-feng,WENG Xiu-ling,WU Wei,et al.Loop closure detection algorithm based on greedy strategy for visual SLAM[J].Journal of Tianjin University:Science and Technology,2017,50(12):1262-1270.(in Chinese)
    [5]ZHU A Z,THAKUR D,ZASLAN T,et al.The multivehicle stereo event camera dataset:an event camera dataset for 3D perception[J].Robotics and Automation Letters,2018,3(3):2032-2039.
    [6]TAKETOMI T,UCHIYAMA H,IKEDA S.Visual SLAM algorithms:a survey from 2010 to 2016[J].IPSJ Transactions on Computer Vision and Applications,2017,9(1):16-26.
    [7]RUECKAUER B,DELBRUCK T.Evaluation of event-based algorithms for optical flow with ground-truth from inertial measurement sensor[J].Frontiers in Neuroscience,2016,10(137):1-17.
    [8]GAO Xiang,ZHANG Tao.Unsupervised learning to detect loops using deep neural networks for visual SLAM system[J].Autonomous Robots,2017,41:1-18.
    [9]TORRES-GONZÁLEZ A,DIOS J R M,OLLERO A.Range-only SLAM for robot-sensor network cooperation[J].Autonomous Robots,2018,42:649-663.
    [10]LENAC K,KITANOV A,CUPEC R,et al.Fast planar surface 3D SLAM using LIDAR[J].Robotics and Autonomous Systems,2017,92:197-220.
    [11]ANDERT F,AMMANN N,KRAUSE S,et al.Optical-aided aircraft navigation using decoupled visual SLAM with range sensor augmentation[J].Journal of Intelligent and Robotic Systems,2017,88(2-4):547-565.
    [12]KANG Jun-min,ZHAO Xiang-mo,XU Zhi-gang.Classification method of running environment features for unmanned vehicle[J].Journal of Traffic and Transportation Engineering,2016,16(6):140-148.(in Chinese)
    [13]ADAMS M,ZHANG Sen,XIE Li-hua.Particle filter based outdoor robot localization using natural features extracted from laser scanners[C]∥IEEE.Proceedings of the 2004 IEEE International Conference on Robotics and Automation.New York:IEEE,2004:1493-1498.
    [14]VANDORPE J,VAN BRUSSEL H,XU H.Exact dynamic map building for a mobile robot using geometrical primitives produced by a 2D range finder[C]∥IEEE.Proceedings of the 1996 IEEE International Conference on Robotics and Automation.New York:IEEE,1996:901-908.
    [15]TAYLOR R M,PROBERT P J.Range finding and feature extraction by segmentation of images for mobile robot navigation[C]∥IEEE.Proceedings of the 1996 IEEE International Conference on Robotics and Automation.New York:IEEE,1996:95-100.
    [16]ADAMS M D,KERSTENS A.Tracking naturally occurring indoor features in 2D and 3D with LIDAR range/amplitude data[J].International Journal of Robotics Research,1998,17(9):907-923.
    [17]GUIVANT J,NEBOT E,BAIKER S.Autonomous navigation and map building using laser range sensors in outdoor applications[J].Journal of Robotic Systems,2000,17(10):565-583.
    [18]GUIVANT J,MASSON F,NEBOT E.Simultaneous localization and map building using natural features and absolute information[J].Robotics and Autonomous Systems,2002,40(2/3):79-90.
    [19]MAN Zeng-guang,YE Wen-hua,XIAO Hai-ning,et al.Method for corner feature extraction from laser scan data[J].Journal of Nanjing University of Aeronautics and Astronautics,2012,44(3):379-383.(in Chinese)
    [20]FABRESSE F R,CABALLERO F,MAZA I,et al.Undelayed 3D RO-SLAM based on Gaussian-mixture and reduced spherical parametrization[C]∥IEEE.2013 IEEE/RSJ International Conference on Intelligent Robots and Systems.New York:IEEE,2013:1555-1561.
    [21]EROL B A,VAISHNAV S,LABRADO J D,et al.Cloud-based control and vSLAM through cooperative mapping and localization[C]∥IEEE.2016 World Automation Congress.New York:IEEE,2016:1-6.
    [22]ZHANG Sen,XIE Li-hua,ADAMS M.An efficient data association approach to simultaneous localization and map building[J].International Journal of Robotics Research,2005,24(1):49-60.
    [23]KANG Jun-min,ZHAO Xiang-mo,XU Zhi-gang.Loop closure detection of unmanned vehicle trajectory based on geometric relationship between features[J].China Journal of Highway and Transport,2017,30(1):121-128,135.(in Chinese)
    [24]LI Yang-ming,SONG Quan-jun,LIU Hai,et al.General purpose LIDAR feature extractor for mobile robot navigation[J].Journal of Huazhong University of Science and Technology:Natural Science Edition,2013,41(S1):280-283.(in Chinese)
    [25]LI Yang-ming,OLSON E B.Extracting general-purpose features from LIDAR data[C]∥IEEE.Proceedings of the 2010 IEEE International Conference on Robotics and Automation.New York:IEEE,2010:
