Research on 3D Environment Modeling Technology for Mobile Robots Based on Information Fusion
Abstract
Building a three-dimensional (3D) environment model from perceptual information is an essential prerequisite for the autonomous navigation and environment exploration of a mobile robot. Effectively fusing color image information with laser range data provides the robot with 3D environment perception, exploits the complementary advantages of the two modalities, and helps ensure both the adequacy of the robot's environmental cognition and the safety and stability of the system. Based on the fusion of image and range information, this dissertation studies image feature extraction, sensor-system calibration, information fusion, and environment modeling from a theoretical perspective. For practical application, a 3D environment-information acquisition and reconstruction system is built from a laser scanner and a monocular vision sensor, and on this basis the target recognition and hierarchical map building of a mobile robot in indoor environments are investigated in depth, so that the real-time performance, accuracy, and stability of the whole system reach satisfactory levels. The main research contents are as follows:
     (1) Monocular visual information processing. Imitating the human visual attention mechanism's selective processing of visual information, a visual attention computational model is proposed that is primarily bottom-up and data-driven, appropriately guided by high-level task information. After de-noising and smoothing, edges of the original image are detected; salient-edge extraction and morphological closing then allow the mobile robot to identify the road region in indoor environments. On this basis, regions of interest are detected through saliency computation and the spatial distribution of corner points, and a fuzzy support vector machine classifier further separates the targets of interest from the background. Experiments verify the effectiveness of the method for visual information processing and classification.
     (2) System calibration of the 3D laser scanner and the monocular camera. The intrinsic calibration of the laser rangefinder and of the monocular camera is analyzed first, to avoid measurement errors introduced by the sensors themselves. A laser-camera extrinsic calibration algorithm based on single-line matching is proposed to guarantee the accuracy of subsequent information fusion and 3D modeling. To remedy its shortcomings in efficiency and ease of operation, a joint calibration method based on feature-point-set matching is proposed that does not rely on a checkerboard calibration target. Experiments compare the accuracy and efficiency of the two extrinsic calibration methods by computing re-projection error, calibration time, and actual mapping results.
     (3) Hierarchical data fusion of the 3D laser scanner and the monocular camera. Using the calibration results, the 3D laser point cloud and the color image are registered in both time and space, achieving pixel-level fusion of range and image information. At the feature level, a method that clusters the normal vectors of triangular-mesh planes is proposed to extract features from the depth image; combined with dual matching of image features such as color histograms and color moments, it improves the accuracy and efficiency of object recognition and provides a reliable basis for the robot's decisions in target grasping and obstacle avoidance.
     (4) Indoor map building for mobile robots. Because a map of a single form carries limited information and cannot satisfy the robot's need for deep interaction with the environment, a hierarchical indoor mapping scheme is proposed, comprising a 2D grid-map layer for path planning, a local 3D environment-model layer based on information fusion, and a semantic-description layer for manipulation tasks. The 2D grid map of the indoor environment is built with a map-update method based on D-S evidence reasoning and a frontier-based exploration strategy; a large-scale 3D model of the indoor scene is built by registering 3D laser range data acquired from multiple viewpoints; and a semantic map is built from the cognition of the spatial distribution and membership relations of the overall environmental framework. Simulations and physical experiments verify the effectiveness and feasibility of these mapping methods.
     (5) Experiments on information-fusion-based 3D environment modeling for a mobile robot. An experimental system for scene-information acquisition, fusion, and 3D reconstruction is constructed, and color 3D environment-modeling experiments with a mobile robot in indoor scenes are designed and carried out; the results demonstrate the correctness and effectiveness of the overall system design.
Three-dimensional (3D) environment modeling based on perceptual information is an important foundation for the autonomous navigation and exploration of a mobile robot. Effective integration of color image information and laser range data can provide 3D environment perception for the robot and realize the complementary advantages of the two sensors. The fused information helps ensure the adequacy of environmental perception and the security and stability of the whole system. In this dissertation, image feature extraction, sensor-system calibration, data fusion, and hybrid mapping technologies are studied theoretically. For practical application, we build a 3D environment-information acquisition and reconstruction system composed of a laser scanner and a monocular camera. On this basis, target recognition and hierarchical mapping for a mobile robot in indoor environments are studied in order to ensure the real-time performance, accuracy, and stability of the entire system. The main research contents include the following aspects:
     (1) The issue of monocular visual information processing is studied. Inspired by the human visual attention mechanism, a bottom-up, data-driven computational model of visual attention is proposed, combined with some high-level task-guidance information. After de-noising and smoothing, edges of the original image are detected; through salient-edge extraction and closing operations, the mobile robot can identify the road area in indoor environments. The visual attention mechanism is then combined with a fuzzy support vector machine classification algorithm: regions of interest are detected through saliency computation and the spatial distribution of corners, and the extraction of targets of interest from the image is studied further. Experiments show the effectiveness of the method in visual information processing and classification.
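As an illustration only, the bottom-up center-surround idea behind this kind of saliency computation can be sketched in a few lines of numpy: a region is salient where a fine-scale local average differs strongly from a coarse-scale one. The function names and the two radii below are hypothetical choices for the sketch, not the dissertation's actual model, which additionally uses corner distributions and a fuzzy SVM.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box (mean) filter with edge padding."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(img.astype(float), radius, mode="edge")
    # filter along rows, then along columns (separability of the box kernel)
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def saliency(img, fine_radius=1, coarse_radius=7):
    """Center-surround saliency: a pixel stands out where its fine-scale
    neighborhood differs from its coarse-scale surround."""
    return np.abs(box_blur(img, fine_radius) - box_blur(img, coarse_radius))
```

A bright patch on a uniform background then scores high inside the patch and near zero far away from it, which is the cue the attention model uses to nominate regions of interest.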
     (2) The issue of system calibration for the integrated camera and laser scanner is studied. The intrinsic calibration of the laser scanner and the monocular camera is analyzed first, in order to avoid measurement errors caused by the sensors' internal factors. On this basis, a joint calibration method based on single-line matching is proposed to ensure the accuracy of the subsequent information fusion and 3D modeling. To remedy that method's lack of efficiency and simplicity, a joint calibration method based on feature-point-set matching is proposed; since checkerboard-based calibration is time-consuming when an accurate result is required, the new method uses natural objects in the scene instead. Finally, the re-projection error, calibration time, and actual projection results are computed separately in order to compare the accuracy and efficiency of the two joint calibration methods.
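The re-projection error used to compare the two calibration methods can be sketched as follows: project each 3D laser point through the candidate extrinsics (R, t) and the camera intrinsics K, and measure the RMS pixel distance to the matched image features. This is a generic sketch of the metric, assuming a standard pinhole model; it is not the author's implementation.

```python
import numpy as np

def reprojection_rmse(K, R, t, laser_pts, image_pts):
    """RMS re-projection error of a laser-camera extrinsic calibration.

    K: (3,3) camera intrinsics; R: (3,3) rotation, t: (3,) translation
    mapping laser-frame points into the camera frame;
    laser_pts: (N,3) 3D points; image_pts: (N,2) matched pixel detections.
    """
    cam = laser_pts @ R.T + t          # laser frame -> camera frame
    uvw = cam @ K.T                    # pinhole projection (homogeneous pixels)
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    return float(np.sqrt(np.mean(np.sum((uv - image_pts) ** 2, axis=1))))
```

A better extrinsic estimate drives this number toward zero, so it gives a single scalar on which the single-line and feature-point-set methods can be compared.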
     (3) The hierarchical data fusion between the 3D laser scanner and the monocular camera is studied. Based on the system calibration results, dual registration of the 3D laser point cloud and the color image in time and space is established; this stage constitutes the pixel-level fusion of the laser and camera data. At the feature level, a method that clusters the normal vectors of triangular-mesh planes is proposed to extract features from the depth image. Combined with dual matching of color histograms, color moments, and other image characteristics, it improves the accuracy and efficiency of object recognition and provides a reliable basis for the mobile robot's decision-making in target grasping and obstacle avoidance.
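Once the sensors are registered, the pixel-level fusion step amounts to projecting each laser point into the color image and attaching the RGB value it lands on. A minimal sketch under pinhole assumptions is below; the function name and the [x, y, z, r, g, b] output layout are illustrative choices, not the dissertation's interface.

```python
import numpy as np

def colorize_cloud(points, image, K, R, t):
    """Attach an RGB color to each laser point that projects inside the image.

    points: (N,3) in the laser frame; image: (H,W,3) uint8 color image;
    K, R, t: calibrated camera intrinsics and laser->camera extrinsics.
    Returns an (M,6) array [x, y, z, r, g, b] for the points in view.
    """
    cam = points @ R.T + t
    infront = cam[:, 2] > 0                    # discard points behind the camera
    uvw = cam[infront] @ K.T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image[uv[ok, 1], uv[ok, 0]]       # image indexing is (row=v, col=u)
    return np.hstack([points[infront][ok], colors.astype(float)])
```

The resulting colored point cloud is the input the feature-level stage (mesh-normal clustering plus color-feature matching) then works on.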
     (4) The information in a map of a single form cannot satisfy the need for deep interaction between the mobile robot and its environment. Therefore, a hierarchical indoor environment map is proposed, which includes a two-dimensional grid-map layer for path planning, a local 3D environment-model layer based on information integration, and a semantic layer for robotic manipulation tasks; this hierarchical mapping lays a foundation for the autonomous navigation of a mobile robot in an unknown environment. The 2D indoor grid map is created with a map-update method based on D-S evidence reasoning together with frontier-based environment-exploration strategies; a large-scale 3D indoor environment model is built upon the registration of 3D scans taken from multiple viewpoints; and a semantic map of the indoor scene is created through the cognition of the spatial distribution and attribution of the whole environmental framework. Finally, the effectiveness and feasibility of these algorithms are validated by simulations and physical experiments.
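The D-S evidence-reasoning update of a grid cell can be sketched with Dempster's rule of combination over the frame {occupied, free}, with a third mass assigned to the whole frame to represent ignorance. The dict-based representation and key names below are illustrative assumptions, not the dissertation's data structures.

```python
def dempster_combine(m1, m2):
    """Fuse two evidence mass functions for one grid cell via Dempster's rule.

    Each mass function is a dict {'occ', 'free', 'unk'} summing to 1, where
    'unk' is the mass on the whole frame (ignorance). Conflicting mass
    (one source says occupied, the other free) is removed and renormalized.
    """
    conflict = m1["occ"] * m2["free"] + m1["free"] * m2["occ"]
    norm = 1.0 - conflict
    occ = (m1["occ"] * m2["occ"] + m1["occ"] * m2["unk"] + m1["unk"] * m2["occ"]) / norm
    free = (m1["free"] * m2["free"] + m1["free"] * m2["unk"] + m1["unk"] * m2["free"]) / norm
    unk = (m1["unk"] * m2["unk"]) / norm
    return {"occ": occ, "free": free, "unk": unk}
```

Repeatedly combining each new range reading's mass function into the cell's stored one drives ignorance down as consistent evidence accumulates, which is what makes the 2D grid layer converge during frontier-based exploration.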
     (5) An experimental system for 3D environment-information acquisition and reconstruction is set up according to the schematic design of the project. The algorithms presented in this dissertation are tested in indoor environments, and the experimental results demonstrate the correctness and validity of the whole system design.