Abstract
To achieve real-time pose estimation and trajectory tracking for a tea-picking robot in the complex outdoor environment of a tea garden, while obtaining high positioning accuracy at low computational cost, a binocular (stereo) camera and an IMU are used as sensors, and the visual-inertial odometry algorithm OKVIS is used to estimate the robot's pose. Visual odometry, loosely coupled visual-inertial odometry, and tightly coupled visual-inertial odometry are compared experimentally, and the results show that the tightly coupled algorithm achieves higher accuracy. OKVIS is therefore adopted, and real-time pose estimation and trajectory tracking are validated in an outdoor tea garden. The results show that the proposed scheme performs well in practice and satisfies the pose-estimation requirements of the tea-picking robot.
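The abstract does not state which accuracy metric was used to compare visual odometry against the loosely and tightly coupled visual-inertial variants; a common choice for such trajectory comparisons is the absolute trajectory error (ATE). A minimal sketch, assuming the estimated and ground-truth trajectories are already time-synchronized and expressed in a common frame (the pose names and toy values below are illustrative, not from the paper):

```python
import math

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error between two aligned
    position sequences, each a list of (x, y, z) tuples in meters."""
    assert len(est) == len(gt) and len(gt) > 0
    # Squared Euclidean distance between corresponding poses.
    sq_errors = [sum((e - g) ** 2 for e, g in zip(p, q))
                 for p, q in zip(est, gt)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Toy example: the estimate is offset by 0.1 m along x at every pose.
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (2.1, 0.0, 0.0)]
print(round(ate_rmse(est, gt), 3))  # 0.1
```

In practice the two trajectories are first aligned with a rigid-body (or similarity) transform, e.g. via the Umeyama method, before the residuals are computed, so that the metric measures drift rather than the arbitrary choice of starting frame.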