A stereo visual odometry pose optimization method via flow-decoupled motion field model
  • Original title (Chinese): 利用解耦光流运动场模型的双目视觉里程计位姿优化方法
  • Authors: WU Meng; HAO Jinming; FU Hao; GAO Yang; ZHANG Hui
  • Keywords: flow-decoupled motion field; stereo visual odometry; mobile mapping system (MMS); autonomous vehicle (AV)
  • Affiliations: School of Geospatial Information, Information Engineering University; State Key Laboratory of Geo-Information Engineering; Xi'an Research Institute of Surveying and Mapping; College of Intelligence Science and Technology, National University of Defense Technology; Joint Operations College, National Defense University
  • Journal: Acta Geodaetica et Cartographica Sinica (测绘学报); CNKI journal code: CHXB
  • Publication date: 2019-04-15
  • Year: 2019; Volume: v.48; Issue: 04
  • Pages: 62-74 (13 pages)
  • CN: 11-2089/P
  • Record ID: CHXB201904008
  • Funding: National Natural Science Foundation of China (65103400)
  • Language: Chinese
Abstract
For real-time vehicle pose estimation from stereo image sequences captured by mobile mapping system (MMS) and autonomous vehicle (AV) platforms, this paper exploits the relationship, given by the optical-flow motion field model, between vehicle pose and image optical flow: each flow vector is decoupled into three translational components, three rotational components, and one depth component. The influence of errors in single and in combined decoupled components on pose estimation is derived and analyzed, and the validity of the corresponding single-component and combined-component error separation models is verified on simulated and real-scene data. Building on the combined-component error separation model, a flow-decoupled motion field pose optimization algorithm for stereo visual odometry is proposed. Experiments show that, at almost the same computational cost as the initial estimation, the algorithm reduces the average lateral translation error from 4.75% to 2.2% (an average reduction of 53.6%) and the average forward translation error from 2.2% to 1.9% (an average reduction of 15.4%). The low accumulated error rate over long runs meets the needs of real-time vehicle pose estimation for integrated navigation under low-power, high-efficiency computing conditions.
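For context, the decoupling described in the abstract rests on the classical perspective motion-field equations; a minimal sketch follows, assuming a pinhole camera with focal length $f$ and the usual sign conventions (the paper's exact formulation may differ). For an image point $(x, y)$ with scene depth $Z$, camera translational velocity $\mathbf{t} = (t_x, t_y, t_z)$, and angular velocity $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z)$, the model flow $(u, v)$ separates into a depth-scaled translational part and a depth-free rotational part:

$$
\begin{aligned}
u &= \frac{t_z x - f t_x}{Z} + \frac{\omega_x x y}{f} - \omega_y\left(f + \frac{x^2}{f}\right) + \omega_z y,\\
v &= \frac{t_z y - f t_y}{Z} + \omega_x\left(f + \frac{y^2}{f}\right) - \frac{\omega_y x y}{f} - \omega_z x.
\end{aligned}
$$

All three translational terms share the factor $1/Z$, the single depth component, while the three rotational terms are depth-independent; this is what makes per-component error separation possible. A short Python sketch of the split (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def decouple_flow(x, y, f, t, w, Z):
    """Split the model flow at image point (x, y) into the six
    decoupled motion components of the classical motion-field model.

    x, y : image coordinates relative to the principal point (pixels)
    f    : focal length (pixels)
    t    : (tx, ty, tz) translational velocity
    w    : (wx, wy, wz) angular velocity (rad/s)
    Z    : scene depth at (x, y); 1/Z is the depth component that
           scales every translational term
    """
    tx, ty, tz = t
    wx, wy, wz = w
    comps = {
        # translational parts: all scaled by the depth component 1/Z
        "tx": np.array([-f * tx / Z, 0.0]),
        "ty": np.array([0.0, -f * ty / Z]),
        "tz": np.array([tz * x / Z, tz * y / Z]),
        # rotational parts: independent of depth
        "wx": np.array([wx * x * y / f, wx * (f + y**2 / f)]),
        "wy": np.array([-wy * (f + x**2 / f), -wy * x * y / f]),
        "wz": np.array([wz * y, -wz * x]),
    }
    comps["total"] = sum(comps.values())  # full model flow (u, v)
    return comps

# Illustrative numbers only: a camera moving forward at 1 m/s while
# yawing at 0.02 rad/s, observing a point 20 m away.
flow = decouple_flow(x=100.0, y=-50.0, f=720.0,
                     t=(0.0, 0.0, 1.0), w=(0.0, 0.02, 0.0), Z=20.0)
for name, vec in flow.items():
    print(name, vec)
```

In a pose-optimization loop, a split of this kind lets the rotational flow be predicted and removed without knowing depth, so the remaining residuals can be attributed to individual translation components, in the spirit of the error separation models the abstract describes.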
