Research on Omnidirectional Vision Systems for Autonomous Mobile Robots
Abstract
Building on the design and implementation of an omnidirectional vision system, this thesis focuses on improving the robustness of omnidirectional vision and on applying it to autonomous mobile robots to achieve robust object recognition and self-localization.
     The ability to perceive the environment is the foundation of autonomy for mobile robots, and visual sensors provide the richest environmental information of any sensor available to them. Among visual sensors, omnidirectional vision devices offer a 360° horizontal field of view and capture panoramic information about the robot's surroundings in a single image; through image processing, analysis, and understanding, they support object recognition, mapping, self-localization, and related tasks, and have therefore been adopted ever more widely on mobile robots. Although research on omnidirectional vision for autonomous mobile robots has produced many results, the inherent complexity and difficulty of the problem leave many open challenges. Some are shared with computer and robot vision in general; others stem from the particular imaging characteristics of omnidirectional vision. Examples include: how to make the robot's image acquisition, processing, and understanding constant under dynamic lighting conditions; how to reduce the computational cost of local visual feature extraction, which is usually too heavy for engineering problems with strict real-time requirements when robots operate in complex unstructured environments, and how to adapt such algorithms to the imaging characteristics of omnidirectional vision; how to improve the recognition of ordinary objects with omnidirectional vision so as to relax the constraints on the robot's working environment; how to make self-localization in indoor structured environments robust to highly dynamic factors; and, for self-localization in unstructured environments, how to draw more heavily on results from the pattern recognition community so that the robot perceives its environment and localizes itself in a way more consistent with how humans do.
     To address these challenges, the thesis carries out the following research:
     (1) Design and calibration of the omnidirectional vision system. The design and calibration of catadioptric omnidirectional vision systems are summarized fairly comprehensively, and, with RoboCup Middle Size League (MSL) soccer robots as the experimental platform, a new NuBot omnidirectional vision system using a combined mirror is designed and implemented, providing a worked example of omnidirectional vision system design. Because the system does not image from a single effective viewpoint, a model-free calibration approach is adopted to obtain a reasonably accurate distance-map calibration; a minimal sketch of the idea follows.
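The sketch below illustrates the model-free idea under simple assumptions: a few (pixel radius, ground distance) pairs are measured with markers placed at known distances, and the distance for every other pixel is interpolated rather than derived from a mirror model. The sample values and function names are illustrative, not the thesis's actual calibration data or code.

```python
import numpy as np

# Model-free distance map: instead of fitting a mirror/camera model,
# interpolate measured (pixel radius -> ground distance) samples.
# The values below are placeholders, not the thesis's data.
radii_px = np.array([40, 80, 120, 160, 200, 240])    # distance from image center, pixels
dist_m   = np.array([0.3, 0.7, 1.3, 2.2, 3.8, 7.0])  # measured ground distance, meters

def ground_distance(u, v, cx, cy):
    """Map an image pixel (u, v) to a ground distance from the robot
    by interpolating the calibrated samples (radially symmetric mirror
    assumed for simplicity)."""
    r = np.hypot(u - cx, v - cy)
    return np.interp(r, radii_px, dist_m)

# Example: distance of pixel (520, 310) with image center (320, 240).
print(ground_distance(520, 310, 320, 240))
```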
     (2) A camera parameter auto-adjustment algorithm based on image entropy. Changes in lighting conditions strongly affect the images a vision system acquires and in turn make object recognition and self-localization much harder. After defining image entropy, experiments verify that it effectively indicates whether the camera parameters are set appropriately, and an image-entropy-based algorithm for automatically adjusting camera parameters is then proposed, giving the output images a degree of constancy and improving the vision system's robustness to changing lighting. Experiments with the NuBot omnidirectional vision system and an ordinary perspective camera, both indoors and outdoors, confirm the algorithm's effectiveness; a minimal sketch of the core idea follows.
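As a rough illustration, the sketch below assumes image entropy is the standard Shannon entropy of the gray-level histogram, H = -sum_i p_i log2 p_i, and searches the camera exposure for the entropy maximum. The camera-interface callbacks are hypothetical placeholders, and the thesis's exact definition and search strategy may differ in detail.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the gray-level histogram,
    H = -sum_i p_i * log2(p_i); tends to peak when the camera
    parameters are set well (neither under- nor over-exposed)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def auto_adjust(grab, set_exposure, exposures):
    """Pick the exposure maximizing image entropy by a simple sweep;
    `grab` and `set_exposure` stand in for a real camera API."""
    best, best_h = exposures[0], -1.0
    for e in exposures:
        set_exposure(e)
        h = image_entropy(grab())
        if h > best_h:
            best, best_h = e, h
    set_exposure(best)
    return best, best_h
```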
     (3) Two real-time local visual features for omnidirectional vision. Because existing algorithms have difficulty extracting local visual features in real time, two new real-time local features for omnidirectional vision, called FAST+LBP and FAST+CSLBP, are proposed so that local visual features become usable in engineering problems with strict real-time requirements. Matching experiments on panoramic images from the COLD database determine the optimal parameter settings, and comparisons with the classical SIFT algorithm in both performance and computation time confirm the effectiveness and real-time capability of the proposed features. A sketch of the FAST+LBP combination follows.
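The sketch below pairs OpenCV's FAST corner detector with a basic 8-neighbor LBP histogram as the descriptor, as a minimal stand-in for the FAST+LBP feature. The patch size, FAST threshold, plain LBP variant, and file path are illustrative assumptions rather than the parameters tuned in the thesis.

```python
import cv2
import numpy as np

def lbp_patch_histogram(gray, x, y, half=8):
    """Basic 8-neighbor LBP codes over a (2*half)^2 patch around
    (x, y), summarized as a normalized 256-bin histogram."""
    p = gray[y - half:y + half, x - half:x + half].astype(np.int16)
    c = p[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

gray = cv2.imread("panorama.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
fast = cv2.FastFeatureDetector_create(threshold=30)
keypoints = fast.detect(gray, None)
descriptors = [lbp_patch_histogram(gray, int(k.pt[0]), int(k.pt[1]))
               for k in keypoints
               if 9 <= k.pt[0] < gray.shape[1] - 9
               and 9 <= k.pt[1] < gray.shape[0] - 9]
```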
     (4) Arbitrary FIFA ball recognition based on omnidirectional vision for soccer robots. An analysis of how the ball is imaged by the NuBot omnidirectional vision system shows that it appears as an ellipse in the panoramic image. An image processing algorithm is designed around this imaging characteristic to detect an arbitrary FIFA ball, and a ball-velocity estimation algorithm is integrated to track the ball in real time. Since the recognition does not rely on the ball's color information, the work both improves the robustness of the robot's omnidirectional vision system and helps reduce the degree of color coding in the RoboCup MSL environment, furthering RoboCup's final goal. Experimental results validate the algorithm's effectiveness; a minimal detection sketch follows.
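The sketch below shows one color-independent way to act on the "ball images as an ellipse" observation: detect edges, fit ellipses to edge contours, and keep geometrically plausible candidates. The Canny thresholds and the size/aspect tests are illustrative; the thesis additionally predicts the expected ellipse size from the calibrated distance map and adds velocity-based tracking.

```python
import cv2

def detect_ball_candidates(gray, min_pts=20):
    """Color-independent ball detection sketch: fit ellipses to edge
    contours and keep those of plausible size and aspect ratio.
    All thresholds here are illustrative assumptions."""
    edges = cv2.Canny(gray, 80, 160)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    candidates = []
    for c in contours:
        if len(c) < min_pts:          # fitEllipse needs enough points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)
        if min(w, h) > 5 and max(w, h) / max(min(w, h), 1e-6) < 2.5:
            candidates.append(((cx, cy), (w, h), angle))
    return candidates
```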
     (5) Robust self-localization based on omnidirectional vision in highly dynamic indoor structured environments. With RoboCup MSL competition as the application background and test environment, a new omnidirectional-vision-based self-localization algorithm is designed. It combines the two most widely used localization approaches, particle filtering and matching optimization, drawing on the strengths of each while compensating for their respective weaknesses, so that highly accurate self-localization is achieved in real time together with effective global localization. Combined with the image-entropy-based camera parameter adjustment algorithm, the localization is strongly robust to dynamic factors such as heavy occlusion of the vision system, high-speed adversarial play between robots, and changing lighting conditions. A series of experiments on a standard RoboCup MSL field validates the algorithm; a minimal sketch of the matching-optimization step appears below.
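As a rough sketch of the matching-optimization half of the hybrid (the particle filter supplies the global initial estimate), the code below locally refines a pose by minimizing the summed distance from observed field-line points to the nearest model line, read from a precomputed distance grid over the field. The grid layout, field dimensions, and optimizer choice are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def matching_error(pose, pts_robot, dist_field, res=0.05,
                   origin=(-11.0, -8.0)):
    """Sum of distances from observed field-line points (robot frame)
    to the nearest model line, looked up in a precomputed distance
    grid `dist_field` (cell size `res` meters; layout assumed)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    wx = x + c * pts_robot[:, 0] - s * pts_robot[:, 1]
    wy = y + s * pts_robot[:, 0] + c * pts_robot[:, 1]
    i = np.clip(((wx - origin[0]) / res).astype(int), 0,
                dist_field.shape[1] - 1)
    j = np.clip(((wy - origin[1]) / res).astype(int), 0,
                dist_field.shape[0] - 1)
    return float(np.sum(dist_field[j, i]))

def refine_pose(pose0, pts_robot, dist_field):
    """Local matching optimization around an initial pose estimate,
    e.g. the best particle from the particle filter."""
    r = minimize(matching_error, pose0, args=(pts_robot, dist_field),
                 method="Nelder-Mead")
    return r.x
```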
     (6) Topological self-localization based on omnidirectional vision in unstructured environments. The visual features used for navigation and localization in unstructured environments are often not directly interpretable by humans, and topological localization based directly on feature matching requires large amounts of storage and is likewise inconsistent with how humans come to know an environment. The bag-of-features method, which has seen many successful applications in pattern recognition, is therefore introduced into robot localization. By combining the two proposed real-time local visual features with a support vector machine (SVM) classifier rooted in statistical learning, a place recognition algorithm based on a bag of local visual features and SVM is obtained and used to realize topological self-localization. Experiments on the COLD database determine the optimal parameters and training conditions and verify the algorithm's effectiveness. A minimal sketch of the bag-of-features plus SVM pipeline follows.
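The sketch below outlines the pipeline under common assumptions: local descriptors from many training images are clustered into a visual vocabulary, each image becomes a normalized word histogram, and an SVM is trained on labeled places. The vocabulary size and SVM settings are illustrative choices, not the values selected on COLD in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(all_descs, k=200, seed=0):
    """Cluster local descriptors (e.g. FAST+LBP histograms) from the
    training set into k visual words; k=200 is an illustrative choice."""
    return KMeans(n_clusters=k, random_state=seed,
                  n_init=10).fit(np.vstack(all_descs))

def bag_of_features(descs, codebook):
    """Represent one panoramic image as a normalized histogram of
    visual-word occurrences."""
    words = codebook.predict(np.asarray(descs))
    h = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return h / max(h.sum(), 1.0)

def train_place_classifier(X, y):
    """X: rows of bag-of-features vectors; y: place labels
    (e.g. corridor, printer area). RBF-SVM settings are assumptions."""
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
```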