Simulation on Vehicle Platform Based on Decoupled Optical Flow Motion Field Model
  • Authors: Wu Meng (乌萌); Hao Jinming (郝金明); Fu Hao (付浩); Gao Yang (高扬)
  • Affiliations: Institute of Geospatial Information, Information Engineering University; State Key Laboratory of Geo-Information Engineering; Xi'an Research Institute of Surveying and Mapping; Institute of Intelligent Science, National University of Defense Technology
  • Keywords: machine vision; optical flow motion field model; decoupling; autonomous vehicle; optical flow; simulation
  • Journal: Acta Optica Sinica (光学学报); journal code: GXXB
  • Online publication date: 2018-12-25
  • Year/Volume/Issue: 2019, Vol. 39, Issue 04 (cumulative No. 445)
  • Funding: National Natural Science Foundation of China (65103400)
  • Language: Chinese
  • Pages: 295-306 (12 pages)
  • CN: 31-1252/O4
  • Record number: GXXB201904035
Abstract
To address the problem of decoupling and analyzing optical flow vectors under different motion states, which arises when an autonomous vehicle (AV) platform estimates its pose from optical flow, the optical flow motion field model (OFMFM) is derived, and the decoupled optical flow motion field model (DOFMFM) is analyzed for independent pose changes in each of the six degrees of freedom. Based on the DOFMFM, a simulation algorithm is designed for the vehicle platform, and completely decoupled simulation results are presented. The DOFMFM is then used to quantitatively verify the correctness of the simulation results. Optical flow decoupling experiments are carried out on two typical real scenes, one translational and one rotational, from the KITTI dataset, and the consistency among the model analysis, the simulation process, the real data, and the comparison results is verified. The results show that the proposed decoupled model analysis, simulation algorithm, simulated and real results, and comparative analysis can not only be applied to error analysis and algorithm validation when a vehicle platform performs decoupled pose estimation from optical flow, but also serve as a reference and guide for a deeper understanding of optical flow motion imaging and for research on optical flow applications on AV platforms.
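The abstract does not reproduce the model equations. For orientation, the instantaneous optical flow motion field model the paper builds on (Longuet-Higgins and Prazdny [11]; see also [22]) can be written, in one common sign convention with focal length f, image coordinates (x, y) relative to the principal point, scene depth Z, translational velocity (T_x, T_y, T_z), and rotational velocity (ω_x, ω_y, ω_z), as

    \begin{aligned}
    u &= \frac{x\,T_z - f\,T_x}{Z} \;+\; \frac{xy}{f}\,\omega_x - \left(f + \frac{x^2}{f}\right)\omega_y + y\,\omega_z ,\\
    v &= \frac{y\,T_z - f\,T_y}{Z} \;+\; \left(f + \frac{y^2}{f}\right)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z .
    \end{aligned}

The decoupling is visible directly: the first term in each component (translational) scales with 1/Z, while the remaining terms (rotational) are depth-independent, so each of the six velocity components produces its own characteristic flow field (e.g., pure T_z gives radial expansion from the focus of expansion, pure ω_z gives rotation about the principal point).

As a minimal illustration of the per-degree-of-freedom simulation the abstract describes, the Python sketch below excites one degree of freedom at a time over a constant-depth image grid. All names and numbers here (decoupled_flow, the KITTI-like focal length, the 15 m depth) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def decoupled_flow(x, y, f, Z, T, omega):
        """Instantaneous optical flow split into translational and rotational
        parts (Longuet-Higgins/Prazdny form; illustrative sketch, not the
        authors' code). x, y are image coordinates relative to the principal
        point, f is the focal length, Z the scene depth."""
        Tx, Ty, Tz = T
        wx, wy, wz = omega
        # Translational component: scales with 1/Z (depth-dependent).
        u_t = (x * Tz - f * Tx) / Z
        v_t = (y * Tz - f * Ty) / Z
        # Rotational component: independent of scene depth.
        u_r = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
        v_r = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
        return (u_t, v_t), (u_r, v_r)

    # Image-plane grid in pixels, principal point at the origin.
    x, y = np.meshgrid(np.linspace(-600, 600, 13), np.linspace(-180, 180, 7))
    f = 718.0   # KITTI-like focal length in pixels (assumed value)
    Z = 15.0    # constant scene depth in metres; a real scene needs a depth map

    # Excite each of the six degrees of freedom in isolation.
    dofs = {"Tx": ((1, 0, 0), (0, 0, 0)), "Ty": ((0, 1, 0), (0, 0, 0)),
            "Tz": ((0, 0, 1), (0, 0, 0)), "wx": ((0, 0, 0), (0.01, 0, 0)),
            "wy": ((0, 0, 0), (0, 0.01, 0)), "wz": ((0, 0, 0), (0, 0, 0.01))}
    for name, (T, omega) in dofs.items():
        (u_t, v_t), (u_r, v_r) = decoupled_flow(x, y, f, Z, T, omega)
        u, v = u_t + u_r, v_t + v_r
        print(f"{name}: mean flow magnitude = {np.hypot(u, v).mean():.2f} px")

Plotting each (u, v) field as a quiver diagram reproduces the characteristic decoupled patterns; on real KITTI frames the constant Z must be replaced by per-pixel depth, which affects only the translational part.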
References
[1] Xiong C Z,Che M Q,Wang R L,et al.Robust real-time visual tracking via dual model adaptive switching[J].Acta Optica Sinica,2018,38(10):1015002.
    [2] Lin H C,Lü Q,Wei H,et al.Quadrotor autonomous flight and three-dimensional dense reconstruction based on VI-SLAM[J].Acta Optica Sinica,2018,38(7):0715004.
    [3] Xiao J S,Tian H,Zou W T,et al.Stereo matching based on convolutional neural network[J].Acta Optica Sinica,2018,38(8):0815017.
    [4] Gibson J J.The perception of the visual world[M].Cambridge:The Riverside Press,1950.
    [5] Loianno G,Scaramuzza D,Kumar V.Special issue on high-speed vision-based autonomous navigation of UAVs[J].Journal of Field Robotics,2018,35(1):3-4.
    [6] Meneses M C,Matos L N,Prado B O.Low-cost autonomous navigation system based on optical-flow classification[EB/OL].(2018-03-11)[2018-10-15].https://arxiv.org/abs/1803.03966.
    [7] Wan F H.Research on multi-sensor based UAV obstacle avoidance technique[D].Hangzhou:Zhejiang University of Technology,2017.
    [8] Dai B X.Research on obstacle avoidance method of MAV in indoor environment based on optical flow[D].Chengdu:University of Electronic Science and Technology of China,2015.
    [9] Forster C,Zhang Z C,Gassner M,et al.SVO:Semidirect visual odometry for monocular and multicamera systems[J].IEEE Transactions on Robotics,2017,33(2):249-265.
    [10] Engel J,Koltun V,Cremers D.Direct sparse odometry[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,40(3):611-625.
    [11] Longuet-Higgins H C,Prazdny K.The interpretation of a moving retinal image[J].Proceedings of the Royal Society of London B,1980,208(1173):385-397.
    [12] Matthies L,Szeliski R,Kanade T.Kalman filter-based algorithms for estimating depth from image sequences[M].Heidelberg:Springer,1993:87-130.
    [13] de Luca A,Oriolo G,Robuffo Giordano P.Feature depth observation for image-based visual servoing:theory and experiments[J].The International Journal of Robotics Research,2008,27(10):1093-1116.
    [14] Sabatini S,Corno M,Fiorenti S,et al.Vision-based pole-like obstacle detection and localization for urban mobile robots[C]//Proceedings of the 29th IEEE Intelligent Vehicles Symposium,June 26-30,2018,Changshu,China.New York:IEEE,2018:1209-1214.
    [15] Buczko M,Willert V.Flow-decoupled normalized reprojection error for visual odometry[C]//Proceedings of IEEE 19th International Conference on Intelligent Transportation Systems,November 1-4,2016,Rio de Janeiro,Brazil.New York:IEEE,2016:1161-1167.
    [16] Jaegle A,Phillips S,Daniilidis K.Fast,robust,continuous monocular egomotion computation[C]//Proceedings of IEEE International Conference on Robotics and Automation,May 16-21,2016,Stockholm,Sweden.New York:IEEE,2016:773-780.
    [17] Wu Z L,Li J,Guan Z Y,et al.Optical flow-based autonomous landing control for fixed-wing small UAV[J].Systems Engineering and Electronics,2016,38(12):2827-2834.
    [18] Guo L,Liu M Y,Wang Y,et al.A motion estimation method for optical flow detection device of aircraft with two cameras:CN104880187A[P/OL].(2015-09-02)[2018-10-15].https://patentimages.storage.googleapis.com/79/9a/cf/6a6371ddd46e75/CN104880187A.pdf.
    [19] Geiger A,Lenz P,Urtasun R.The KITTI vision benchmark suite[EB/OL].(2018-09-15)[2018-10-07].http://www.cvlibs.net/datasets/kitti/eval_odometry.php.
    [20] Buczko M,Willert V.Monocular outlier detection for visual odometry[C]//Proceedings of IEEE Intelligent Vehicles Symposium,June 11-14,2017,Los Angeles,CA,USA.New York:IEEE,2017:1124-1131.
    [21] Zhang Y J.A course of computer vision[M].Beijing:Posts & Telecom Press,2011.
    [22] Jitendra M.Dynamic perspective[EB/OL].(2015-05-02)[2018-10-07].http://www-inst.eecs.berkeley.edu/~cs280/sp15/lectures/4.pdf.
    [23] Geiger A,Lenz P,Urtasun R.Are we ready for autonomous driving?The KITTI vision benchmark suite[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,June 16-21,2012,Providence,RI,USA.New York:IEEE,2012:3354-3361.
