Research on Local Map Building for an Autonomous Navigation Vehicle
Abstract
The precondition for an autonomous navigation vehicle to drive accurately and reliably is that its on-board sensors can perceive the environment and that the resulting data can be analysed and processed effectively. At present, no single sensor can provide complete and reliable data about the environment, so multi-sensor data fusion is one of the key technologies for environment perception in autonomous navigation vehicles.
     This thesis studies a multi-sensor environment-perception strategy for autonomous navigation vehicles. It combines obstacle detection by a single-line laser range finder with road-region detection by a monocular camera, fuses the data of the two sensors, and builds a relatively complete local map. Map building in unknown environments, and especially environment perception in complex environments, is a continuing research focus for autonomous navigation vehicles, and progress here helps raise their intelligence level.
     The main work of the thesis covers the following aspects:
     1) The data collected by the laser range finder are analysed. After comparing several segmentation algorithms for two-dimensional laser scans, an improved adaptive breakpoint detection algorithm is proposed: a linear threshold is used at close range, and adaptive breakpoint detection at long range. After reasonable clustering, ordinary noise points are removed, lines are extracted from the clustered points, and obstacles are displayed in structured form.
     2) A sharp-change frame-sequence detection algorithm is proposed. It analyses the frame sequence of laser-range-finder data: if the number of points in the current frame is close to the maximum number the scanner can return per frame, and the number of clusters changes sharply relative to the previous frame, the frame is marked as abnormal data.
     3) Camera calibration is performed by combining the Matlab camera calibration toolbox with the OpenCV calibration functions. The resulting intrinsic and extrinsic parameters are used for inverse perspective mapping; the distortion coefficients obtained from calibration are used to undistort the original image before the pixels of interest in the road region are transformed, which makes the computed mapping from pixel coordinates to physical coordinates more accurate.
     4) Using a calibration target, the rotation and translation parameters are solved from actual physical distances, the laser-range-finder and camera coordinate frames are unified, multi-sensor data registration is achieved, and a data-fusion system combining the single-line laser range finder and the monocular camera is established.
Environment perception based on multiple sensors is a precondition for an autonomous vehicle to drive precisely and safely. To date, no single sensor can provide complete and reliable data about the environment, so data fusion is one of the key technologies for local environment mapping.
     This thesis studies a mapping strategy based on multiple sensors and uses data fusion to build a reliable local map. Map building in unstructured environments, especially the complicated environments found in cities, is an active research topic, and it helps improve the intelligence level of autonomous vehicles.
     The main contributions of this work are as follows:
     1. After comparing several segmentation algorithms, this thesis proposes a combined algorithm based on the ABD (adaptive breakpoint detector) algorithm and a linear threshold segmentation algorithm. After segmentation, noise points are easily detected from the clusters, and the structured environment is then displayed using a line extraction algorithm.
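The combined segmentation step can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the threshold constants, the near/far switch-over distance, and the noise parameters are all assumed values, with the far-range threshold following the general form of the adaptive breakpoint distance.

```python
import math

def segment_scan(ranges, angle_step, lam=math.radians(10), sigma_r=0.01,
                 near_limit=2.0, near_thresh=0.15):
    """Split a 2D laser scan into clusters of consecutive points.

    Illustrative sketch: at close range (< near_limit metres) a fixed
    linear threshold is used; beyond it the adaptive breakpoint distance
    is applied.  `ranges` are successive range readings in metres,
    `angle_step` is the angular resolution in radians.
    """
    clusters = [[0]]
    for i in range(1, len(ranges)):
        r_prev, r = ranges[i - 1], ranges[i]
        # Euclidean gap between consecutive beams (law of cosines)
        gap = math.sqrt(r_prev**2 + r**2 - 2*r_prev*r*math.cos(angle_step))
        if r_prev < near_limit:
            thresh = near_thresh          # linear (fixed) threshold near the scanner
        else:
            # adaptive breakpoint distance, padded by 3x the range noise
            thresh = r_prev * math.sin(angle_step) / math.sin(lam - angle_step) + 3*sigma_r
        if gap > thresh:
            clusters.append([])           # breakpoint: start a new cluster
        clusters[-1].append(i)
    return clusters
```

A scan containing two flat surfaces at 1 m and 5 m, for example, splits into exactly two clusters, after which small clusters can be discarded as noise and lines fitted to the rest.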
     2. This thesis proposes a new SCSD (sharp change sequence detector) algorithm that flags abnormal data by analysing the sequence of LRF frames: a frame is treated as abnormal when its point count is near the scanner's per-frame maximum and its cluster count differs sharply from that of the previous frame.
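The frame-level test just described can be sketched as a simple predicate. The thresholds (95% of the maximum point count, a 50% cluster-count change) are illustrative assumptions, as is the default of 361 points per frame, typical of a 180-degree scan at 0.5-degree resolution.

```python
def is_abnormal_frame(prev_clusters, cur_points, cur_clusters,
                      max_points=361, point_ratio=0.95, change_ratio=0.5):
    """Illustrative SCSD-style check: flag the current frame as abnormal
    when (a) its point count is close to the scanner's per-frame maximum
    AND (b) its cluster count changes sharply versus the previous frame.
    All thresholds here are assumed, not taken from the thesis."""
    near_max = cur_points >= point_ratio * max_points
    if prev_clusters == 0:
        return False                       # no reference frame to compare against
    sharp_change = abs(cur_clusters - prev_clusters) / prev_clusters > change_ratio
    return near_max and sharp_change
```

Requiring both conditions keeps ordinary frames with many points, or frames whose cluster count drifts slowly, from being discarded.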
     3. The Matlab camera calibration toolbox and the OpenCV calibration functions are used to obtain the camera parameters. The image is then undistorted using the distortion coefficients, and IPM (inverse perspective mapping) converts the ROI (region of interest) pixels to physical distances precisely.
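Once the intrinsics K and the extrinsics for the road plane are known, the IPM step reduces to inverting a ground-plane homography. The sketch below assumes an already-undistorted image and illustrative matrices (a camera looking straight down from 2 m); it is not the thesis's calibration result.

```python
import numpy as np

def pixel_to_ground(H, u, v):
    """Inverse perspective mapping for one pixel.

    H is the 3x3 homography from the ground plane (Z = 0) to the image,
    composed from the intrinsics K and the extrinsic columns [r1 r2 t]
    as H = K @ [r1 r2 t].  Returns ground-plane (X, Y) in metres.
    """
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[:2] / p[2]                    # dehomogenise

# Illustrative calibration: f = 500 px, principal point (320, 240),
# camera axis-aligned above the plane at height 2 m (assumed values).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
ext = np.array([[1.0, 0.0, 0.0],          # columns r1, r2, t
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 2.0]])
H = K @ ext
```

In practice the intrinsics, distortion coefficients, and extrinsics would come from `cv2.calibrateCamera` (or the Matlab toolbox), and the image would first be undistorted with those coefficients before applying the mapping.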
     4. The rotation and translation matrices between the two Cartesian coordinate frames are used to unify the LRF and camera coordinates, and a local environment map is established that takes advantage of both sensors.
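The registration step can be sketched in the plane: given paired points of a calibration target seen by both sensors, solve the least-squares rigid transform and apply it to every LRF point. The solver below is the standard Kabsch/Umeyama procedure (no scale), offered as a generic sketch rather than the thesis's exact method.

```python
import numpy as np

def solve_rigid_2d(src, dst):
    """Least-squares rigid transform between paired 2D point sets,
    so that dst ≈ src @ R.T + t (standard Kabsch/Umeyama, no scale)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def apply_rigid_2d(R, t, points):
    """Map LRF points (N x 2) into the camera ground frame."""
    return points @ R.T + t
```

With the transform solved once offline from the calibration target, every laser cluster can be projected into the camera's ground coordinates at run time and overlaid on the IPM road region to form the fused local map.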