A Monocular Visual SLAM Algorithm Based on Point-Line Feature
  • Title (Chinese): 基于点线特征的单目视觉同时定位与地图构建算法
  • Authors: WANG Dan; HUANG Lu; LI Yao
  • Affiliation: Department of Electronic Science and Technology, University of Science and Technology of China
  • Keywords: simultaneous localization and mapping (SLAM); line feature; monocular vision; mobile robot; semi-direct method
  • Journal: Robot (机器人; abbreviation JQRR)
  • Published online: 2019-01-05
  • Year: 2019; Volume: 41; Issue: 03
  • Pages: 106-117 (12 pages)
  • CN: 21-1137/TP
  • Fund: National Natural Science Foundation of China (61472380)
  • Language: Chinese
  • Record ID: JQRR201903012
Abstract
When images are blurred by fast camera motion, or when the scene lacks texture, point-feature-based SLAM (simultaneous localization and mapping) algorithms struggle to track enough valid feature points; localization accuracy and robustness suffer, and the system may even fail entirely. To address this, a monocular visual SLAM algorithm is designed that combines point and line features and fuses wheel-odometer data. First, the complementarity of point and line features is exploited to improve data-association accuracy, and an environmental feature map with geometric information is built on that basis; wheel-odometer data is also incorporated to provide prior and scale information for the visual localization algorithm. Then, a more accurate visual pose is estimated by minimizing the reprojection errors of the points and line segments in the local map; when visual localization fails, the localization system continues to operate on the wheel-odometer data alone. Simulation results on several public datasets show that the proposed algorithm outperforms the MSCKF (multi-state constraint Kalman filter) and LSD-SLAM (large-scale direct monocular SLAM) algorithms, demonstrating its accuracy and effectiveness. Finally, the algorithm is deployed on a robot system built by the authors' group: the root mean square error (RMSE) of monocular visual localization is about 7 cm, and the average processing time is about 90 ms per 640×480 frame on an embedded platform with a quad-core 1.2 GHz processor.
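The pose-refinement step described in the abstract minimizes the reprojection errors of both point and line features in the local map. A minimal sketch of how these two residual terms are commonly computed (assuming a pinhole camera model; the function names and the homogeneous-line parametrization are illustrative, not taken from the paper):

```python
import numpy as np

def project(K, R, t, Pw):
    """Project a 3D world point into the image with pose (R, t) and intrinsics K."""
    Pc = R @ Pw + t                # world frame -> camera frame
    uv = K @ (Pc / Pc[2])          # pinhole projection, then intrinsics
    return uv[:2]

def point_reprojection_error(K, R, t, Pw, obs_uv):
    """Euclidean distance between a projected map point and its 2D observation."""
    return np.linalg.norm(project(K, R, t, Pw) - obs_uv)

def line_reprojection_error(K, R, t, Pw_a, Pw_b, obs_line):
    """Sum of point-to-line distances of the projected 3D segment endpoints.

    obs_line is the observed 2D line in homogeneous form l = (a, b, c), so the
    distance of a pixel (u, v) to it is |a*u + b*v + c| / sqrt(a^2 + b^2).
    """
    l = obs_line / np.linalg.norm(obs_line[:2])   # make (a, b) a unit normal
    err = 0.0
    for Pw in (Pw_a, Pw_b):
        u, v = project(K, R, t, Pw)
        err += abs(l[0] * u + l[1] * v + l[2])
    return err
```

In a full system, these residuals over all points and line segments in the local map would be stacked and minimized over the pose (R, t) with an iterative solver such as Gauss-Newton or Levenberg-Marquardt, typically with a robust loss to suppress mismatched features.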
References
[1] Cadena C, Carlone L, Carrillo H, et al. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
    [2] Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//IEEE and ACM International Symposium on Mixed and Augmented Reality. Piscataway, USA: IEEE, 2007: 10 pp.
    [3] Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
    [4] Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European Conference on Computer Vision. Berlin, Germany: Springer-Verlag, 2014: 834-849.
    [5] Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625.
    [6] Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2014: 15-22.
    [7] von Gioi R G, Jakubowicz J, Morel J M, et al. LSD: A fast line segment detector with a false detection control[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(4): 722-732.
    [8] Li H F, Hu Z H, Chen X W. PLP-SLAM: A visual SLAM method based on point-line-plane feature fusion[J]. Robot, 2017, 39(2): 214-220. (in Chinese)
    [9] Pumarola A, Vakhitov A. PL-SLAM: Real-time monocular visual SLAM with points and lines[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2017: 4503-4508.
    [10] Scaramuzza D, Achtelik M C, Doitsidis L, et al. Vision-controlled micro flying robots: From system design to autonomous navigation and mapping in GPS-denied environments[J]. IEEE Robotics & Automation Magazine, 2014, 21(3): 26-40.
    [11] Mourikis A I, Roumeliotis S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2007: 3565-3572.
    [12] Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. International Journal of Robotics Research, 2015, 34(3): 314-334.
    [13] Endres F, Hess J, Sturm J, et al. 3-D mapping with an RGB-D camera[J]. IEEE Transactions on Robotics, 2014, 30(1): 177-187.
    [14] Sturm J, Engelhard N, Endres F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA: IEEE, 2012: 573-580.
    [15] Krombach N, Droeschel D, Behnke S. Combining feature-based and direct methods for semi-dense real-time stereo visual odometry[C]//International Conference on Intelligent Autonomous Systems. Berlin, Germany: Springer, 2016: 855-868.
    [16] Hartley R, Zisserman A. Multiple view geometry in computer vision[M]. Cambridge, UK: Cambridge University Press, 2003.
    [17] Sola J, Vidal-Calleja T, Civera J, et al. Impact of landmark parametrization on monocular EKF-SLAM with points and lines[J]. International Journal of Computer Vision, 2012, 97(3): 339-368.
    [18] Bartoli A, Sturm P. The 3D line motion matrix and alignment of line reconstructions[J]. International Journal of Computer Vision, 2004, 57(3): 159-178.
    [19] Forster C, Zhang Z, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2017, 33(2): 249-265.
    [20] Rosten E, Drummond T. Machine learning for high-speed corner detection[C]//European Conference on Computer Vision. Berlin, Germany: Springer, 2006: 430-443.
    [21] Baker S, Matthews I. Lucas-Kanade 20 years on: A unifying framework[J]. International Journal of Computer Vision, 2004, 56(3): 221-255.
    [22] Gao X, Zhang T, Yan Q R, et al. 14 lectures on visual SLAM: From theory to practice[M]. Beijing: Publishing House of Electronics Industry, 2017. (in Chinese)
    [23] Burri M, Nikolic J, Gohl P, et al. The EuRoC micro aerial vehicle datasets[J]. International Journal of Robotics Research, 2016, 35(10): 1157-1163.
    [24] Handa A, Whelan T, McDonald J, et al. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2014: 1524-1531.
    [25] Xie X J. Stereo visual SLAM using point and line features[D]. Hangzhou: Zhejiang University, 2017. (in Chinese)
    [26] Delmerico J, Scaramuzza D. A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2018: 2502-2509.
    - https://github.com/MichaelGrupp/evo
    - http://optitrack.com/
