Research on Object Recognition and Self-Localization for Soccer Robots
Abstract
Robot soccer is an emerging interdisciplinary field involving robotics, intelligent control, image processing, and computer vision, and it is currently a focus of active and challenging research. The perception system is an important component of a soccer robot, and its level of development has a significant influence on robot technology research. The Middle-Size League is an important event in the RoboCup robot soccer world cup; its robots are fully autonomous and of high practical value. This thesis takes a middle-size soccer robot as its research platform and studies the key technologies of object recognition and self-localization, including omni-directional vision imaging, rapid object recognition and localization, and robot self-localization.
     Firstly, based on a division of the robot into subsystems and on the main perceptual functions to be performed, the overall architecture of the robot perception system is constructed. The hierarchical structure of the perception system and the realization of its main functions are discussed, and the topics requiring further in-depth research are identified.
     Secondly, to address the main factors affecting omni-directional imaging and the imaging distortion of spherical objects, a new design method for omni-directional reflective mirrors is proposed. The omni-directional vision system is catadioptric with pinhole projection imaging and uses a convex mirror as the basic surface type. A radial length projection model is proposed, and a radial length projection function is constructed from the desired image of a spherical object; using this function together with geometrical optics, the mirror shape and its parameters are solved.
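The radial length projection function itself is not given in the abstract, but the geometric-optics step it feeds, solving a mirror profile from a desired ray mapping, can be sketched generically. The following is a minimal illustration and not the thesis's method: it assumes a pinhole camera at the origin with its optical axis along z, a designer-supplied mapping `phi(theta)` from camera-ray angle to desired scene-ray angle, and integrates the resulting law-of-reflection ODE dr/dθ = r·cot((φ(θ) − θ)/2); the function name and the example mapping are illustrative assumptions.

```python
import math

def design_mirror_profile(phi, theta0, theta1, r0, n=1000):
    """Integrate the mirror cross-section r(theta) for a catadioptric
    sensor with a pinhole camera at the origin (illustrative sketch).

    theta is the camera-ray angle from the optical axis and phi(theta)
    the desired scene-ray angle after reflection; the law of reflection
    then gives the ODE  dr/dtheta = r * cot((phi(theta) - theta) / 2).
    """
    h = (theta1 - theta0) / n
    thetas = [theta0 + i * h for i in range(n + 1)]

    def f(t, r):
        return r / math.tan((phi(t) - t) / 2.0)  # r * cot(...)

    rs = [r0]
    for i in range(n):  # classic fourth-order Runge-Kutta steps
        t, r = thetas[i], rs[-1]
        k1 = f(t, r)
        k2 = f(t + h / 2, r + h * k1 / 2)
        k3 = f(t + h / 2, r + h * k2 / 2)
        k4 = f(t + h, r + h * k3)
        rs.append(r + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6)
    return thetas, rs

# Hypothetical constant-angular-gain mapping (an "equiangular" mirror):
thetas, rs = design_mirror_profile(lambda t: 5.0 * t + math.pi / 2, 0.05, 0.5, 1.0)
```

As a quick check of the ODE, the flat-mirror mapping φ(θ) = π − θ reproduces the closed form r = r0·cos θ0 / cos θ under the same integration.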
     Thirdly, to meet real-time and accuracy requirements, a new method for rapid object recognition and localization is proposed, consisting of object region segmentation, region labeling, and object region repair. Object regions are segmented in HSI color space according to the objects' color features. A fast contour-based region labeling algorithm is proposed that balances real-time performance against feature extraction and obtains object features including contours. Combining these contour features, a convexity-based object repair method is proposed to improve object localization accuracy.
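The first stage, color segmentation in HSI space, is standard enough to sketch. Below is a minimal version, assuming normalized RGB input and using the standard library's HLS conversion as a stand-in for HSI (hue is identical; lightness approximates intensity here); the threshold values are illustrative, not the thesis's calibrated ones.

```python
import colorsys

def segment_by_hue(pixels, hue_range, sat_min=0.3, int_min=0.2):
    """Binary color segmentation of a flat list of (r, g, b) pixels
    (channels in [0, 1]) by a target hue band, e.g. the ball's color.

    colorsys.rgb_to_hls stands in for the HSI conversion: the hue
    channel matches, and lightness serves as the intensity component.
    """
    lo, hi = hue_range
    mask = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        mask.append(1 if (lo <= h <= hi and s >= sat_min and l >= int_min) else 0)
    return mask

# orange-ish pixel kept, green (field) pixel rejected
mask = segment_by_hue([(1.0, 0.5, 0.0), (0.0, 1.0, 0.0)], hue_range=(0.05, 0.12))
```

The resulting mask would then feed the contour-based labeling stage; thresholding on hue rather than raw RGB is attractive because it separates chromaticity from illumination.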
     Finally, for large playing fields, a new robot self-localization method is proposed. Initial white-line information is obtained through omni-directional vision with line detection and fitting algorithms, and is converted into semi-global white-line information using the heading obtained from an electronic compass. Based on the type of true perpendicular intersections of the white lines and on the distance and length of auxiliary white lines, a localization-area priority algorithm determines the robot's final localization area. The robot's position on the field is then computed from that area's localization features, achieving self-localization.
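The compass step, turning robot-frame line observations into "semi-global" information (orientation resolved, translation still unknown), amounts to a 2-D rotation. A minimal sketch follows; the function name and point-list representation are assumptions, and the thesis's line detection, intersection typing, and area-priority logic are not reproduced here.

```python
import math

def to_semi_global(points_robot, heading_rad):
    """Rotate white-line points from the robot's frame into a field-aligned
    frame using the compass heading. This resolves orientation while the
    robot's translation remains unknown, hence "semi-global" information."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return [(c * x - s * y, s * x + c * y) for x, y in points_robot]

# a point 1 m ahead of a robot whose heading is pi/2 from the field x-axis
pts = to_semi_global([(1.0, 0.0)], math.pi / 2)  # ≈ (0.0, 1.0)
```

Once the lines are axis-aligned this way, classifying an intersection as parallel or perpendicular to the field axes becomes a simple angle comparison.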