Scene Recognition and Modeling for Mobile Robots in Outdoor Environments
Abstract
With the increasing variety of mobile robots and their application environments, the application domain of mobile robots has expanded from structured indoor environments to quasi-structured or unstructured outdoor environments. As the basis of research on mobile robot localization, navigation, and exploration, scene recognition and modeling for outdoor environments has become one of the hot topics in mobile robotics. This dissertation starts from the problem of stair detection and parameter estimation in 3D scenes, and then conducts an in-depth study of the classification and terrain modeling of typical outdoor scenes, so as to provide technical support for the autonomous adaptation of mobile robots to outdoor environments.
     In autonomous navigation, stairs may be treated as obstacles, as candidate pathways, or as important landmarks for localization and navigation. To cope with the structural diversity of stairs and the distributional uncertainty of 3D laser point clouds, an adaptive stair detection and parameter estimation method based on a stair topology model and fuzzy set theory is proposed. According to the topological relations of the stair profile model, a stair edge point detection method based on an angle histogram algorithm is proposed, which improves the accuracy of stair edge position estimation. An in-level line extraction and cross-level line linking strategy is adopted to construct the set of candidate stair edge lines effectively. The cascade probabilities between candidate stair edge lines of adjacent levels are estimated by fuzzy transformation and adaptive fuzzy reasoning, and a simulated annealing algorithm searches for the globally optimal combination of candidate edge lines, so that the 3D stair model can be constructed effectively.
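     To make the final search step concrete, the following Python sketch applies simulated annealing to pick one candidate edge line per stair level, scoring a combination by the cascade probabilities between adjacent levels. The function cascade_prob and the list candidates_per_level are hypothetical placeholders standing in for the fuzzy-inference stage described above; the cooling schedule and other parameters are illustrative, not the dissertation's actual settings.

import math
import random

def anneal_stair_edges(candidates_per_level, cascade_prob,
                       t_start=1.0, t_end=1e-3, alpha=0.95, iters_per_temp=50):
    """Pick one candidate edge line per stair level so that the product of
    cascade probabilities between adjacent levels is (approximately) maximal."""

    def score(combo):
        # Sum of log cascade probabilities between consecutive levels.
        return sum(math.log(max(cascade_prob(a, b), 1e-9))
                   for a, b in zip(combo, combo[1:]))

    # Random initial combination: one candidate edge line per stair level.
    current = [random.choice(level) for level in candidates_per_level]
    current_score = score(current)
    best, best_score = list(current), current_score

    t = t_start
    while t > t_end:
        for _ in range(iters_per_temp):
            # Neighbour state: re-draw the candidate on one random level.
            k = random.randrange(len(candidates_per_level))
            neighbour = list(current)
            neighbour[k] = random.choice(candidates_per_level[k])
            delta = score(neighbour) - current_score
            # Accept improvements always; accept worse moves with probability exp(delta / t).
            if delta > 0 or random.random() < math.exp(delta / t):
                current, current_score = neighbour, current_score + delta
                if current_score > best_score:
                    best, best_score = list(current), current_score
        t *= alpha  # geometric cooling
    return best, best_score

     Scoring in log space keeps the product of per-level probabilities numerically stable when many stair levels are present.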
     Vehicle vibration and changes in lighting conditions significantly degrade the quality of visual images. After image enhancement with a Gabor filter, stair edges are detected with the Canny operator. A local fusion phase grouping method is proposed to improve the robustness of edge line extraction from low-quality images. By linking and filtering the edge line segments, the offset parameters required for robot motion control are obtained. Feature-level fusion of monocular vision images and 3D laser point cloud data further improves the reliability of stair detection and parameter estimation.
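     A minimal sketch of the enhancement and edge detection step is given below, using OpenCV's standard Gabor kernel and Canny operator in Python. The kernel parameters, the four filter orientations, and the Canny thresholds are assumptions chosen for illustration, and the proposed local fusion phase grouping method is not reproduced here.

import cv2
import numpy as np

def detect_stair_edges(gray):
    """Gabor-based enhancement of a grayscale image followed by Canny edges."""
    gray = cv2.equalizeHist(gray)  # reduce the influence of lighting changes
    enhanced = np.zeros(gray.shape, dtype=np.float32)
    for theta in np.arange(0.0, np.pi, np.pi / 4.0):  # 4 filter orientations
        # Arguments: (ksize, sigma, theta, lambd, gamma, psi)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0.0)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        enhanced = np.maximum(enhanced, response)  # keep strongest orientation
    enhanced = cv2.normalize(enhanced, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.Canny(enhanced, 50, 150)  # illustrative thresholds

     Taking the per-pixel maximum over the filter bank preserves whichever orientation responds most strongly before the edges are thresholded.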
     For mobile robots with a certain obstacle-negotiation capability, terrain classification and terrain modeling are important bases for judging the traversability of the environment. Considering the varying complexity of outdoor terrain, a hierarchical terrain classification and geometric modeling method is proposed to support scene recognition and motion planning for mobile robots. A terrain representation based on a layered elevation map enables rapid discrimination between traversable regions and obstacle regions on flat road surfaces. A high-level feature that captures the inherent fuzziness of natural scenes is generated by fuzzy inference, and the 3D point cloud terrain is then classified according to the maximum entropy principle, which improves the accuracy of traversability discrimination over complex terrain. On this basis, a semantic segmentation algorithm over terrain segments is used to model typical terrain structures in outdoor scenes.
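     As a simplified, single-layer illustration of the elevation map idea, the sketch below bins a 3D point cloud into a 2D grid and labels each cell from its internal height spread and the slope between neighbouring cells. The cell size and thresholds are assumed values for illustration only; the layered map, the fuzzy high-level feature, and the maximum entropy classifier of the full method are not shown.

import numpy as np

def traversability_grid(points, cell=0.25, max_step=0.15, max_slope_deg=20.0):
    """Label grid cells of a 3D point cloud (N x 3, metres) as traversable
    or not, using per-cell height spread and local slope (illustrative only)."""
    idx = np.floor(points[:, :2] / cell).astype(int)
    idx -= idx.min(axis=0)                     # shift indices to start at zero
    nx, ny = idx.max(axis=0) + 1
    z_min = np.full((nx, ny), np.inf)
    z_max = np.full((nx, ny), -np.inf)
    for (ix, iy), z in zip(idx, points[:, 2]):
        z_min[ix, iy] = min(z_min[ix, iy], z)
        z_max[ix, iy] = max(z_max[ix, iy], z)

    elevation = np.where(np.isfinite(z_min), z_min, np.nan)  # empty cells -> NaN
    step = z_max - z_min                       # height spread within each cell
    gx, gy = np.gradient(elevation, cell)      # slope from neighbouring cells
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))

    traversable = (step <= max_step) & (slope_deg <= max_slope_deg)
    return elevation, np.where(np.isfinite(elevation), traversable, False)

     A layered extension would keep several height intervals per cell, so that overhanging structures and multi-level terrain can also be represented.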
     The proposed methods are verified experimentally on a field unmanned ground vehicle (UGV) platform developed by our research group and on the six wheel-legged robot platform developed by the Shenyang Institute of Automation; the experimental results and data analysis confirm their effectiveness and practicality.
