Vision-Based Navigation Path Recognition for an Agricultural Wheeled Mobile Robot
Abstract
As computer image processing technology matures, vision navigation is becoming an important development direction for the autonomous navigation of agricultural wheeled mobile robots. This thesis studies a key part of the vision navigation system of an agricultural wheeled mobile robot: recognition of the navigation path and extraction of the vehicle's navigation parameters. This module is essential for realising autonomous vehicle navigation and is of great significance to the development and popularisation of vision navigation technology.
     Because the navigation path of an agricultural wheeled mobile robot during real-time field operation is generally close to a straight line, this thesis targets structured roads, takes the road centre line as the vehicle's navigation path to be recognised, and extracts the vehicle's navigation parameters. The main research contents and conclusions are as follows:
     (1) The colour images captured by a CCD camera are preprocessed, with emphasis on image threshold segmentation. Experiments show that both the iterative method and Otsu's method produce good binary images; both selection rules are sketched after this list.
     (2) Building on previous work by scholars at home and abroad, image edge detection is studied in depth. Several template-convolution edge detection operators, including Roberts, Prewitt, Sobel and Laplacian, are analysed. Experimental results show that the Laplacian operator detects road edges clearly while meeting the real-time requirement; a convolution sketch follows the list.
     (3) A fast and effective straight-line detection method is studied to recognise the navigation path of the agricultural wheeled mobile robot. The projection characteristics of a straight ground path in image space are analysed, and by improving the Hough-transform line detection algorithm, the vehicle's navigation parameters, lateral deviation and heading deviation, are extracted directly at the same time as the tracked path is obtained (an illustrative sketch follows the list). Experimental results show mean errors of 2.6 cm for the lateral deviation and -0.47 deg for the heading deviation.
     (4) CCD camera calibration is completed with the pinhole imaging model. A system of equations relating corresponding points in the robot coordinate system and the computer image coordinate system is established, and solving it yields the camera's intrinsic and extrinsic parameters; the forward projection model is sketched after the list.
     (5) A Foton Europard 4040 tractor serves as the experimental prototype of the agricultural wheeled mobile robot. A CCD camera captures colour bitmap images of the road as the vehicle drives, and the image processing programs are written in VC++ 6.0. Experiments show that at low driving speeds the developed programs meet the real-time requirement, with a processing speed of 2.1 Hz, i.e. about two 640×480-pixel colour bitmaps processed per second.
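
To make the threshold-selection step in item (1) concrete, the following is a minimal C++ sketch (not the thesis' VC++ source) of the two rules compared there: an iterative (isodata-style) threshold and Otsu's maximum between-class-variance threshold, both computed from a 256-bin grey-level histogram. The conversion of the colour image to a single grey channel is assumed to have been done beforehand.

```cpp
// Illustrative sketch, assuming an 8-bit grayscale pixel buffer.
#include <vector>
#include <cstdint>

// Build a 256-bin histogram from grayscale pixel values.
std::vector<long> histogram(const std::vector<uint8_t>& gray) {
    std::vector<long> h(256, 0);
    for (uint8_t v : gray) ++h[v];
    return h;
}

// Iterative (isodata-style) threshold: repeatedly set T to the midpoint of
// the two class means until it stops changing.
int iterativeThreshold(const std::vector<long>& h) {
    int t = 128, prev = -1;
    while (t != prev) {
        prev = t;
        double sumLo = 0, nLo = 0, sumHi = 0, nHi = 0;
        for (int i = 0; i < 256; ++i) {
            if (i <= t) { sumLo += double(i) * h[i]; nLo += h[i]; }
            else        { sumHi += double(i) * h[i]; nHi += h[i]; }
        }
        double mLo = nLo > 0 ? sumLo / nLo : 0.0;
        double mHi = nHi > 0 ? sumHi / nHi : 255.0;
        t = int((mLo + mHi) / 2.0);
    }
    return t;
}

// Otsu threshold: choose T that maximises the between-class variance.
int otsuThreshold(const std::vector<long>& h) {
    long total = 0; double sumAll = 0;
    for (int i = 0; i < 256; ++i) { total += h[i]; sumAll += double(i) * h[i]; }
    double sumB = 0, best = -1; long wB = 0; int bestT = 0;
    for (int t = 0; t < 256; ++t) {
        wB += h[t];                    // background (below-threshold) weight
        if (wB == 0) continue;
        long wF = total - wB;          // foreground weight
        if (wF == 0) break;
        sumB += double(t) * h[t];
        double mB = sumB / wB, mF = (sumAll - sumB) / wF;
        double between = double(wB) * double(wF) * (mB - mF) * (mB - mF);
        if (between > best) { best = between; bestT = t; }
    }
    return bestT;
}
```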
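
Item (2)'s Laplacian edge detection amounts to convolving the grey image with a small template. The sketch below (again illustrative only, assuming a row-major 8-bit image buffer) uses the common 4-neighbour Laplacian mask and keeps the absolute response, clamped to 0..255, as the edge strength.

```cpp
// Illustrative sketch: 3x3 Laplacian template convolution.
#include <vector>
#include <cstdint>
#include <cstdlib>

// img is row-major, width*height 8-bit grey values; returns |Laplacian|
// clamped to 0..255 so strong edges appear bright. Border pixels are left 0.
std::vector<uint8_t> laplacianEdges(const std::vector<uint8_t>& img,
                                    int width, int height) {
    // Template:  0 -1  0
    //           -1  4 -1
    //            0 -1  0
    std::vector<uint8_t> out(img.size(), 0);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int sum = 4 * img[y * width + x]
                    - img[(y - 1) * width + x] - img[(y + 1) * width + x]
                    - img[y * width + (x - 1)] - img[y * width + (x + 1)];
            int mag = std::abs(sum);
            out[y * width + x] = uint8_t(mag > 255 ? 255 : mag);
        }
    }
    return out;
}
```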
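
Item (3) rests on the Hough transform: every edge pixel votes for all (ρ, θ) line parameters passing through it, and the accumulator peak gives the dominant straight line, taken here as the navigation path. The sketch below is a plain, unoptimised Hough transform plus a naive read-out of the two navigation parameters; the thesis' improved variant and its calibrated pixel-to-ground mapping are not reproduced, so the lateral offset stays in pixels (converting it to centimetres needs the calibration of item (4)) and the heading error is simply the line's tilt from the image's vertical axis.

```cpp
// Illustrative sketch, not the thesis' improved algorithm.
#include <vector>
#include <cmath>
#include <cstdint>

struct NavParams {
    double lateralOffsetPx;   // signed offset of the path from the image
                              // centre at the bottom row, in pixels
    double headingErrorDeg;   // tilt of the path from the vertical axis
};

NavParams houghNavigationLine(const std::vector<uint8_t>& edges,
                              int width, int height) {
    const double PI = 3.14159265358979;
    const int nTheta = 180;                               // 1-degree steps
    const int maxRho = int(std::sqrt(double(width) * width +
                                     double(height) * height));
    const int nRho = 2 * maxRho + 1;                      // rho in [-maxRho, maxRho]
    std::vector<int> acc(nTheta * nRho, 0);

    // Voting: each edge pixel votes for every (theta, rho) it lies on.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (edges[y * width + x]) {
                for (int t = 0; t < nTheta; ++t) {
                    double th = t * PI / nTheta;
                    int r = int(std::lround(x * std::cos(th) + y * std::sin(th)));
                    ++acc[t * nRho + (r + maxRho)];
                }
            }

    // Accumulator peak = dominant straight line (assumed navigation path).
    int bestT = 0, bestR = 0, bestV = -1;
    for (int t = 0; t < nTheta; ++t)
        for (int r = 0; r < nRho; ++r)
            if (acc[t * nRho + r] > bestV) {
                bestV = acc[t * nRho + r]; bestT = t; bestR = r - maxRho;
            }

    // Line in normal form: x*cos(th) + y*sin(th) = rho.
    double th = bestT * PI / nTheta;
    double tilt = (th <= PI / 2.0) ? th : th - PI;        // 0 when the line is vertical
    NavParams p;
    p.headingErrorDeg = tilt * 180.0 / PI;
    // x-coordinate where the line crosses the bottom image row.
    double yBottom = height - 1.0;
    double c = std::cos(th);
    double xBottom = (std::fabs(c) > 1e-6)
                   ? (bestR - yBottom * std::sin(th)) / c
                   : width / 2.0;                         // near-horizontal line: no useful offset
    p.lateralOffsetPx = xBottom - width / 2.0;
    return p;
}
```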
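
Item (4) inverts the pinhole model from known point correspondences; the sketch below shows only the forward model (rotation and translation from the robot frame into the camera frame, followed by perspective projection through the intrinsics), which is the relation each correspondence equation instantiates. Names and parameters are generic placeholders, not the thesis' calibration results.

```cpp
// Illustrative sketch of the pinhole forward model used for calibration.
struct Intrinsics { double fx, fy, cx, cy; };       // focal lengths and principal point (pixels)
struct Extrinsics { double R[3][3]; double t[3]; }; // robot frame -> camera frame

// Project a point given in the robot coordinate frame to pixel coordinates.
// Assumes the point lies in front of the camera (Zc > 0).
void projectToPixel(const Intrinsics& K, const Extrinsics& E,
                    const double Pr[3], double& u, double& v) {
    // Camera-frame coordinates: Pc = R * Pr + t
    double Pc[3];
    for (int i = 0; i < 3; ++i)
        Pc[i] = E.R[i][0] * Pr[0] + E.R[i][1] * Pr[1] + E.R[i][2] * Pr[2] + E.t[i];
    // Perspective division and intrinsic mapping:
    //   u = fx * Xc / Zc + cx,   v = fy * Yc / Zc + cy
    u = K.fx * Pc[0] / Pc[2] + K.cx;
    v = K.fy * Pc[1] / Pc[2] + K.cy;
}
```

Collecting these equations for several robot/image point pairs with known coordinates gives the system whose solution yields the camera's intrinsic and extrinsic parameters.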
