Research on Indoor Semantic Map Building for Service Robots
Abstract
Autonomous perception and understanding of its environment is a long-standing goal of robotics and artificial intelligence. In recent years, with steady progress in indoor service robots and the successive arrival of novel depth sensors, indoor 3D map building with RGB-D cameras has attracted increasingly wide research attention. In particular, results from many related fields (robotics, computer vision, computer graphics, and others) have been applied to align and fuse RGB-D image sequences into global 3D scene representations, and a large body of in-depth research and valuable results now covers the accuracy, scale, and speed of map building as well as the scene models produced.
Motivated by indoor mobile robots' need for 3D map models, this thesis systematically studies RGB-D map building from the perspective of autonomy and constructs an automated information-processing chain that runs from the sensor hardware to high-level semantic models. Autonomy here means that the robot builds the 3D map of the scene automatically and then analyzes and interprets it to produce a semantic model. Acquiring 3D semantic models of indoor environments autonomously with RGB-D cameras is a low-cost environment-modeling technique whose study has broad application value and significant commercial value; for example, the pure RGB-D camera navigation it enables could replace today's mainstream but more expensive laser-based navigation.
This thesis considers three aspects of achieving such autonomy. 1) The robot must be able to scan the scene autonomously to collect RGB-D images; how to plan the camera's scanning trajectory automatically from limited prior information, so that the resulting image sequence meets the requirements of the mapping system, is one of the problems addressed here. 2) Autonomy demands an extremely robust mapping system that, without human intervention, still adapts to real scenes of widely varying texture and geometry and builds a globally consistent map. One motivation of this thesis, moreover, is to let the robot build 3D maps of large-scale scenes online and autonomously, so that it can continuously perceive and understand its indoor environment as an independent agent; real-time performance and scalability are therefore also goals of this work. 3) Scene-analysis techniques are needed to interpret the 3D map into a semantic map the robot can understand.
By studying these three subproblems, this thesis explores a new route by which a robot can build indoor semantic maps autonomously. Specifically, the main work falls into the following three parts:
First, for automatic image acquisition, this thesis defines two basic actions, rotating scans and moving scans, and plans a sequence of scanning actions on an existing 2D grid map that serves as prior information about the environment. To obtain the best scan plan, an evaluation function is defined to score candidate plans, and a random-search method is used to find the optimal solution.
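To make the planning loop concrete, here is a minimal sketch of random-search scan planning over an occupancy grid. The disc-shaped visibility model, the coverage-minus-travel score, all thresholds, and the restriction to rotating scans only are assumptions for illustration; the abstract does not specify the thesis's actual evaluation function or action parameterization.

```python
import random

# Hypothetical illustration of scan planning by random search on a 2D grid.
# Only rotating-scan poses are sampled; moving scans are omitted for brevity.

FREE, OCCUPIED = 0, 1

def visible_cells(grid, pose, radius=8):
    """Cells a rotating scan at `pose` could cover (crude disc visibility model)."""
    x0, y0 = pose
    cells = set()
    for x in range(max(0, x0 - radius), min(len(grid), x0 + radius + 1)):
        for y in range(max(0, y0 - radius), min(len(grid[0]), y0 + radius + 1)):
            if grid[x][y] == FREE and (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2:
                cells.add((x, y))
    return cells

def evaluate(grid, plan):
    """Score a plan: coverage of free space minus a travel-cost penalty."""
    covered = set()
    for pose in plan:
        covered |= visible_cells(grid, pose)
    travel = sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                 for a, b in zip(plan, plan[1:]))
    return len(covered) - 0.1 * travel

def random_search(grid, n_scans=5, iters=2000, seed=0):
    """Sample random candidate plans and keep the best-scoring one."""
    rng = random.Random(seed)
    free = [(x, y) for x, row in enumerate(grid)
            for y, c in enumerate(row) if c == FREE]
    best_plan, best_score = None, float("-inf")
    for _ in range(iters):
        plan = rng.sample(free, n_scans)   # candidate rotating-scan poses
        score = evaluate(grid, plan)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan, best_score
```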
Second, for RGB-D mapping, this thesis extracts key frames from the RGB-D image sequence, spatially aligns consecutive key frames, detects loop closures over the key-frame sequence, and performs global optimization on the key-frame set, thereby building a globally consistent 3D map. Frame-to-frame alignment and loop-closure detection are studied in depth to obtain robust and fast mapping performance. In addition, the thesis adopts the KinectFusion method for surface reconstruction and discusses its memory overhead in large-scale scenes.
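The skeleton below shows how these stages typically fit together in a keyframe-based pose-graph pipeline. The keyframe thresholds are illustrative guesses, and `align`, `detect_loops`, and `optimize` are hypothetical placeholders standing in for the thesis's point/plane frame alignment, feature-matching loop-closure detector, and global optimizer.

```python
import numpy as np

def is_new_keyframe(T_rel, trans_thresh=0.3, rot_thresh=np.deg2rad(15)):
    """Promote a frame to keyframe once the camera has moved or turned enough."""
    shift = np.linalg.norm(T_rel[:3, 3])
    angle = np.arccos(np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return shift > trans_thresh or angle > rot_thresh

def build_pose_graph(frames, align, detect_loops, optimize):
    """Chain keyframe odometry, add loop-closure edges, then optimize globally."""
    keyframes, poses, edges = [frames[0]], [np.eye(4)], []
    for frame in frames[1:]:
        T_rel = align(keyframes[-1], frame)        # 4x4 relative transform
        if is_new_keyframe(T_rel):
            edges.append((len(keyframes) - 1, len(keyframes), T_rel))
            poses.append(poses[-1] @ T_rel)        # dead-reckoned initial guess
            keyframes.append(frame)
    edges += detect_loops(keyframes)               # loop-closure constraints
    return optimize(poses, edges)                  # globally consistent poses
```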
Third, for scene analysis, this thesis studies a fast plane-detection algorithm for point-cloud scenes and segments the cloud by extracting its planes. Features are then computed for the separated parts, which are recognized with simple rules, converting the unordered point set into a 3D topological map annotated with semantic information.
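As a toy illustration of the rule-based recognition step, the sketch below labels plane-bounded segments from a few geometric features (z is up). The features and thresholds are invented for the example; the abstract does not list the thesis's actual rules.

```python
# Hypothetical rule-based labeling of segmented scene parts.

def label_segment(seg):
    """seg: dict with the plane normal, height above the floor, and extent."""
    nz = abs(seg["normal"][2])          # 1.0 = horizontal plane, 0.0 = vertical
    h, w, d = seg["height"], seg["width"], seg["depth"]
    if nz > 0.9 and h < 0.05:
        return "floor"
    if nz < 0.1 and max(w, d) > 1.0:
        return "wall"
    if nz > 0.9 and 0.6 < h < 1.1 and w > 0.5:
        return "table"
    return "unknown"

segments = [
    {"normal": (0, 0, 1), "height": 0.74, "width": 1.2, "depth": 0.8},
    {"normal": (1, 0, 0), "height": 2.5, "width": 4.0, "depth": 0.1},
]
print([label_segment(s) for s in segments])   # ['table', 'wall']
```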
The main contributions and innovations of this thesis are the following three.
First, it proposes a scan-path planning method for RGB-D cameras aimed at 3D mapping. Existing RGB-D mapping (or reconstruction) systems mostly acquire images with a hand-held camera moved through the environment, which offers little automation and makes it hard to capture large-scale scenes efficiently. The proposed method plans the camera's scanning trajectory automatically on a 2D grid map, and experiments show that the planned trajectories closely match those designed by RGB-D mapping researchers, demonstrating the method's effectiveness.
Second, it presents an extremely robust and fast frame-to-frame alignment algorithm based on point and plane features. In alignment experiments between adjacent frames of a key-frame sequence (nearly 3,000 frames), the algorithm made no errors, demonstrating its robustness. Because it avoids ICP-style alignment and its high time complexity, it is very efficient and achieves real-time performance on a mainstream PC without GPU acceleration. This algorithm provides an essential robustness guarantee for the autonomous mapping system; combined with fast feature-matching-based loop-closure detection, global optimization, and the automatic scan-planning method above, it forms a 3D map-building system that runs online on a robot.
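The standard ICP-free building block behind such feature-based alignment is the closed-form least-squares rigid transform between matched 3D points (the SVD/Kabsch solution). The sketch below implements that classic point-only case; the thesis's algorithm additionally uses plane correspondences, which are omitted here to keep the example short, so this is a simplified stand-in rather than the thesis's method.

```python
import numpy as np

def rigid_transform(P, Q):
    """Return R, t minimizing sum ||R @ P_i + t - Q_i||^2 over matched points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Usage: P, Q would come from feature matches (e.g. SURF/ORB) back-projected
# to 3D with the depth image; here synthetic data checks the recovery.
P = np.random.rand(50, 3)
a = np.deg2rad(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, 0.0, 0.05])
R, t = rigid_transform(P, Q)
assert np.allclose(R, R_true, atol=1e-6)
```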
Third, for understanding scene point-cloud models, it presents a fast projection-transform-based plane-extraction algorithm that detects and extracts the planes in a scene of millions of points within a few seconds. Subsequent scene segmentation and simple recognition then yield the semantic map of the scene.
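The abstract gives no details of the projection-transform algorithm itself, so the sketch below substitutes plain RANSAC plane detection, a common baseline, purely to illustrate the detect-and-extract loop; it is not the thesis's method and would typically be slower on clouds of millions of points.

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, seed=0):
    """Return an inlier mask for the dominant plane (RANSAC baseline)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n = n / norm
        dist = np.abs((points - p0) @ n)      # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Extracting planes one at a time segments the cloud, as in the pipeline above:
# repeat `mask = ransac_plane(cloud); cloud = cloud[~mask]` per detected plane.
```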
Using an inexpensive RGB-D camera, this thesis designs a complete system that automatically builds 3D semantic maps of indoor scenes for mobile robots. It should be noted that the system still requires a 2D grid map as prior knowledge of the scene, so in practice it cannot yet fully dispense with today's mainstream perception and navigation based on laser range finders (LRFs). Removing this dependence on scene priors, so that a robot can autonomously explore an unknown environment and build its semantic map using only an RGB-D camera, is an important direction for future work. Although practical, fully autonomous navigation and perception with a pure RGB-D camera are still some distance away, this work can be regarded as an important step toward that promising goal.