Research on Tea Picking Robot's Pose Estimation by Using Vision and IMU Fusion
  • Authors: Zhou Jun; Wu Minghui; Wang Xianwei
  • Affiliation: School of Mechanical Engineering, Shanghai University of Engineering Science
  • Keywords: pose estimation; visual odometry; visual-inertial odometry; multi-sensor fusion; simultaneous localization and mapping
  • Journal: Agricultural Equipment & Vehicle Engineering (农业装备与车辆工程)
  • Publication Date: 2019-07-10
  • Year: 2019
  • Issue: Vol. 57, No. 336 (Issue 07)
  • Language: Chinese
  • Pages: 33-36+42 (5 pages)
  • CN: 37-1433/TH
  • Record ID: SDLG201907008
Abstract
To achieve real-time localization and trajectory tracking for a tea-picking robot in the complex outdoor environment of a tea garden, and to obtain high accuracy at low computational cost, a binocular (stereo) camera and an IMU are used as sensors, and the visual-inertial odometry algorithm OKVIS is adopted to estimate the robot's pose. Visual odometry, loosely-coupled visual-inertial odometry, and tightly-coupled visual-inertial odometry are compared experimentally, and the results show that the tightly-coupled algorithm achieves higher accuracy. OKVIS is therefore used to verify the robot's real-time localization and trajectory tracking in an outdoor tea garden. The experimental results show that the proposed scheme performs well in practice and meets the pose-estimation requirements of the tea-picking robot.
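The abstract contrasts loosely-coupled and tightly-coupled visual-inertial odometry and settles on OKVIS, which fuses the two sensor streams through keyframe-based nonlinear optimization over joint visual and inertial error terms. The snippet below is a minimal, hypothetical Python sketch of that tight-coupling idea only, not the paper's or OKVIS's implementation: a planar toy problem in which "visual" range residuals to known landmarks and "inertial" relative-motion residuals between keyframes are stacked into a single nonlinear least-squares objective. All numbers and variable names are illustrative; OKVIS itself uses image reprojection errors, IMU pre-integration, and full SE(3) keyframe states.

```python
# Hypothetical, simplified illustration of tightly-coupled visual-inertial
# fusion as one joint nonlinear least-squares problem (the principle behind
# OKVIS), NOT the OKVIS implementation itself. Planar toy setup: keyframe
# positions are estimated from noisy landmark ranges ("visual" terms) and
# noisy relative displacements between keyframes ("inertial" terms).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Ground-truth keyframe positions (x, y) and known landmark positions.
true_poses = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [3.0, 0.4]])
landmarks = np.array([[0.5, 2.0], [2.5, 2.5], [1.5, -1.5]])

# "Visual" measurements: noisy ranges from each keyframe to each landmark.
vis_meas = np.linalg.norm(true_poses[:, None, :] - landmarks[None, :, :],
                          axis=2) + rng.normal(0.0, 0.02, (4, 3))
# "Inertial" measurements: noisy relative displacements between consecutive
# keyframes, standing in for IMU pre-integration.
imu_meas = np.diff(true_poses, axis=0) + rng.normal(0.0, 0.05, (3, 2))

def residuals(x):
    poses = x.reshape(-1, 2)
    # Visual residuals: predicted minus measured landmark ranges.
    r_vis = (np.linalg.norm(poses[:, None, :] - landmarks[None, :, :], axis=2)
             - vis_meas).ravel() / 0.02
    # Inertial residuals: predicted minus measured relative motion.
    r_imu = (np.diff(poses, axis=0) - imu_meas).ravel() / 0.05
    # Tight coupling: both residual types are stacked and minimized jointly.
    return np.concatenate([r_vis, r_imu])

x0 = np.zeros(true_poses.size)          # crude initial guess at the origin
sol = least_squares(residuals, x0)
print(sol.x.reshape(-1, 2))             # fused keyframe position estimates
```

In a loosely-coupled design, by contrast, the visual and inertial measurements would each be processed into separate pose estimates first and only those estimates fused afterwards, which is the distinction the paper's comparison experiment targets.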