A Depth Vision SLAM Algorithm Combining Sparse Geometric Features with Range Flow
  • Chinese title: 一种融合稀疏几何特征与深度流的深度视觉SLAM算法
  • Authors: FANG Zheng (方正); ZHAO Shibo (赵世博); LI Haolai (李昊来)
  • Affiliation: Faculty of Robot Science and Engineering, Northeastern University
  • Keywords: SLAM (simultaneous localization and mapping); range flow; sparse geometric features; pose graph optimization
  • Journal: Robot (机器人); CNKI journal code: JQRR
  • Online publication date: 2018-10-15 15:35
  • Year: 2019
  • Volume/Issue: Vol. 41, No. 2
  • Funding: National Natural Science Foundation of China (61573091, 61673341); Fundamental Research Funds for the Central Universities (N172608005); Natural Science Foundation of Liaoning Province (20180520006); Open Fund of the State Key Laboratory of Robotics (2018-O08)
  • Language: Chinese
  • Record ID: JQRR201902006
  • Pages: 51-62+107 (13 pages)
  • CN: 21-1137/TP
Abstract
To address the problem of mobile robot pose estimation in visually degraded environments, a real-time, robust, and low-drift depth vision SLAM (simultaneous localization and mapping) algorithm is proposed by combining dense range flow with sparse geometric features. The algorithm consists of three optimization layers: a range-flow-based visual odometry layer, an ICP (iterative closest point) based pose optimization layer, and a pose-graph-based optimization layer. The range-flow-based visual odometry layer establishes a constraint equation on the depth (range) variation to obtain fast 6-DOF (degree of freedom) frame-to-frame pose estimates of the camera. The ICP-based pose optimization layer reduces local drift by registering each frame against a local map. The pose-graph-based optimization layer extracts and matches sparse geometric features from the depth data to build loop closure constraints, and performs global pose optimization over the resulting pose graph. The performance of the proposed method is evaluated on the TUM datasets and in real-world scenes. Experimental results show that the front-end algorithm outperforms state-of-the-art depth vision methods, and that the back-end can robustly establish loop closure constraints and eliminate the global drift accumulated by the front-end pose estimation.
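To make the front end's "constraint equation of range variation" concrete, the Python/NumPy function below is a minimal, hypothetical sketch of the general range-flow idea summarized in the abstract, not the authors' implementation. It assumes a pinhole camera with intrinsics fx, fy, cx, cy, treats the scene as static, linearizes the camera motion as a 6-DOF twist over one frame interval, and stacks one linear equation per valid depth pixel, which is then solved by least squares; a practical system would add a coarse-to-fine pyramid and robust per-pixel weighting.

```python
import numpy as np

def frame_to_frame_twist(depth0, depth1, fx, fy, cx, cy):
    """Estimate a 6-DOF camera twist xi = (vx, vy, vz, wx, wy, wz) between two
    consecutive depth images from the range flow constraint
        Zu*u_dot + Zv*v_dot + Zt = Z_dot,
    which yields one linear equation in xi per valid pixel.
    depth0, depth1: HxW depth images in meters, with 0 marking invalid pixels."""
    H, W = depth0.shape
    Zv_grad, Zu_grad = np.gradient(depth0)   # spatial depth gradients (d/dv, d/du)
    Zt = depth1 - depth0                     # temporal depth change (dt = 1 frame)

    u, v = np.meshgrid(np.arange(W), np.arange(H))
    Z = depth0
    Zs = np.where(Z > 0, Z, 1.0)[..., None]  # avoid division by zero; masked below
    valid = (depth0 > 0) & (depth1 > 0)

    # Back-project pixels to 3D points in the camera frame.
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy

    # Static scene, moving camera: point velocity is P_dot = -(v_lin + w x P).
    # Jacobians of (X_dot, Y_dot, Z_dot) w.r.t. xi, one 6-vector per pixel.
    zeros, ones = np.zeros_like(Z), np.ones_like(Z)
    JX = np.stack([-ones, zeros, zeros, zeros, -Z,    Y    ], axis=-1)
    JY = np.stack([zeros, -ones, zeros, Z,     zeros, -X   ], axis=-1)
    JZ = np.stack([zeros, zeros, -ones, -Y,    X,     zeros], axis=-1)

    # Pinhole projection: u_dot = fx*(X_dot/Z - X*Z_dot/Z^2), similarly v_dot.
    Ju = fx * (JX / Zs - X[..., None] * JZ / Zs**2)
    Jv = fy * (JY / Zs - Y[..., None] * JZ / Zs**2)

    # Range flow constraint rearranged as A @ xi = b with
    #   Zu*u_dot + Zv*v_dot - Z_dot = -Zt.
    A = Zu_grad[..., None] * Ju + Zv_grad[..., None] * Jv - JZ
    b = -Zt

    xi, *_ = np.linalg.lstsq(A[valid], b[valid], rcond=None)
    return xi  # per-frame twist; map to an SE(3) increment via the exponential map
```

In the proposed pipeline this frame-to-frame estimate is only the first layer: it is subsequently refined against a local map by ICP to suppress local drift, and corrected globally through pose graph optimization using loop closure constraints built from sparse geometric features extracted from the depth data.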
