Research on Recognition and Localization of Mature Apple Targets in Natural Scenes
Abstract
Chinese agriculture is currently developing toward large-scale, diversified, and precision production. Labor costs are rising rapidly, the agricultural workforce is gradually shifting to other industries, and population aging will soon become acute, so bringing agricultural robot technology into farming has become a practical necessity. At home and abroad, the most active area of agricultural-robot research is fruit and vegetable harvesting robots. Taking the apple cultivars of the Weibei plateau in Shaanxi province as its subject, this thesis studies the recognition and localization of mature apples in natural scenes under binocular stereo vision.
     Under natural conditions, starting from the color characteristics of mature apple targets against leaf and branch backgrounds, the individual components and their histograms were analyzed and compared for captured images in the RGB, HSV, and HSI color models, and color-difference operations were performed on the RGB components. The R-G color-difference image, whose histogram shows a unimodal distribution, was selected for subsequent segmentation. Comparing edge detection on the original RGB image with edge detection on the R-G component image shows clearly that the R-G image suppresses the background while retaining the apple targets, further confirming that the preprocessing choice is sound.
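The R-G color-difference step described above can be sketched as follows. This is my own illustration, not the author's code: for every pixel, the grey value R - G is computed and clamped to an 8-bit range, so ripe red apples give large positive values while green leaves fall to zero, making the histogram unimodal and easy to threshold. The toy pixel values are invented for the example.

```python
def r_minus_g(image):
    """image: H x W list of (R, G, B) tuples -> H x W list of R-G values,
    clamped to [0, 255] as in an 8-bit grey image."""
    return [[max(0, min(255, r - g)) for (r, g, b) in row] for row in image]

# Tiny synthetic scene: one "apple" pixel (red) among "leaf" pixels (green).
scene = [
    [(60, 120, 40), (200, 60, 50)],   # leaf, apple
    [(55, 110, 45), (58, 115, 42)],   # leaf, leaf
]
print(r_minus_g(scene))  # apple pixel stands out: [[0, 140], [0, 0]]
```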
     For apple target segmentation, three thresholding methods were applied and compared, and the Otsu method was selected; it was then improved by introducing the class-average variance. Compared with the traditional Otsu method, the improved method is more adaptive and more robust to noise, and the segmentation maps show that it preserves the apple contour information almost completely. Morphological erosion and dilation were then applied to the image to eliminate isolated points and burrs.
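A minimal sketch of the standard Otsu method that the thesis takes as its baseline (the class-average-variance refinement is not reproduced here): the grey level t that maximizes the between-class variance of the two classes split by the histogram is chosen as the threshold. The toy histogram is invented for the example.

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level maximising between-class variance (standard Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # number of pixels at or below t
    sum0 = 0    # grey-level mass at or below t
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: dark background around 20-25, bright fruit around 200-210.
data = [20] * 50 + [25] * 40 + [200] * 30 + [210] * 20
print(otsu_threshold(data))  # a threshold between the two modes
```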
     After labeling the connected regions of the segmented binary image, the region count and features of the connected regions such as area, perimeter, circularity, and centroid were extracted, and their roles in the recognition and localization study were explained and discussed. The region count is used to adjust the harvesting robot's posture so that it grasps apple targets accurately, and to compute the recognition rate; the combination of area and circularity provides the key criterion for the manipulator to grasp the single best apple target within its workspace on each attempt; the centroid serves as the feature point for stereo matching. The overall recognition rate was 89.9%.
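The feature extraction for one labelled region can be sketched as below (a hypothetical helper, not the thesis's code): circularity 4*pi*A / P^2 equals 1 for a perfect disc, and the centroid is the mean of the region's pixel coordinates, later reused as the stereo feature-match point. The square region and its perimeter estimate are invented for the example.

```python
import math

def region_features(pixels, perimeter):
    """pixels: list of (row, col) in one connected region;
    perimeter: boundary length estimate.
    Returns (area, centroid, circularity)."""
    area = len(pixels)
    cy = sum(r for r, c in pixels) / area
    cx = sum(c for r, c in pixels) / area
    circularity = 4 * math.pi * area / perimeter ** 2
    return area, (cy, cx), circularity

# A filled 3x3 square region with an 8-pixel boundary estimate.
square = [(r, c) for r in range(3) for c in range(3)]
area, centroid, circ = region_features(square, perimeter=8)
print(area, centroid, round(circ, 2))  # 9 (1.0, 1.0) 1.77
```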
     For spatial matching and localization of apple targets, both cameras were calibrated with Zhang Zhengyou's two-step calibration method. A binocular stereo ranging method based on the centroid, the fruit stem, and marker points is proposed: these serve as the feature-matching points, and apple targets are matched with an epipolar-constrained image matching algorithm. For spatial localization, the depth of an apple target is determined from the effective focal length, the disparity, and the baseline length according to the depth-calculation principle. Ranging experiments in the laboratory show that for working distances of 300-1000 mm the mean depth error is 0.63% for marker points and 3.54% for centroids, so the localization method adopted here meets the requirements of an apple-harvesting robot's vision system in most harvesting environments.
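The depth computation in the ranging step follows the textbook triangulation relation for a rectified stereo pair, Z = f * B / d, with f the effective focal length in pixels, B the baseline, and d the disparity between the left and right image points. A minimal sketch, with rig parameters invented for the example:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: f = 800 px, baseline 120 mm, disparity 160 px.
print(depth_from_disparity(800, 120, 160))  # 600.0 mm, inside the 300-1000 mm range
```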
