Research on Unknown Environment Exploration and 3D Indoor Semantic Mapping
Abstract
As society enters a stage of population aging, there is an urgent need for robots that can provide services and assistance in people's daily lives. A robot typically needs a map in order to be useful in the indoor environments where people live and work, so it must be able to explore unknown environments and build maps of them. Moreover, an ordinary metric map built from robot sensors cannot convey the meaning of an indoor environment, so a semantic map is needed that contains the environment's semantic information in a form the robot can understand. To these ends, this thesis first studies the exploration and mapping of unknown indoor environments, and then presents the concept of building semantic maps of three-dimensional (3D) environments through human-robot interaction (HRI). Exploration and mapping of unknown indoor environments are investigated on two mobile robot platforms. In addition, with the aid of a wearable motion sensor network and a motion capture system, the technical problems of semantic mapping in 3D environments are studied within an HRI framework. The main research contents are as follows:
     First, two multi-purpose mobile robot experimental platforms are built from an iRobot mobile robot, a Pioneer mobile robot, laser range finders, four types of camera, microcomputers, and the Robot Operating System (ROS). The iRobot-based platform relies mainly on a laser range finder and the four camera types, and is used for two-dimensional mapping, unknown environment exploration, and multi-robot cooperative localization. The Pioneer-based platform relies mainly on a Kinect camera, a motion capture system, and a wearable wireless motion sensor network, used for 3D environment modeling and gesture recognition, respectively. Pairing the two platforms as a high-end/low-end combination allows them to be applied flexibly to experiments with different requirements. In addition, to enable human-robot interaction indoors, this thesis designs a wearable wireless motion sensor, consisting of an orientation sensor module, a wireless communication module, and a power management module, for recognizing body activities and hand gestures, and proposes an energy management algorithm to prolong its operating time.
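     The abstract names an energy management algorithm for the wearable sensor but does not describe it, so the following is only a minimal sketch of one plausible scheme, assuming a motion-triggered duty cycle: the node samples and transmits at a high rate only while the wearer is moving. The interface, rates, and threshold are illustrative assumptions, not the thesis's actual design.

    # Hypothetical duty-cycling policy for a wearable motion sensor node.
    # Rates and the variance threshold are assumed values for illustration.
    import math

    ACTIVE_RATE_HZ = 50      # sampling/transmit rate while the wearer moves
    IDLE_RATE_HZ = 2         # low-power rate while the wearer is still
    MOTION_THRESHOLD = 0.02  # variance threshold on acceleration magnitude

    def is_moving(accel_window):
        """Detect motion from a window of 3-axis accelerometer samples."""
        mags = [math.sqrt(ax * ax + ay * ay + az * az)
                for ax, ay, az in accel_window]
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        return var > MOTION_THRESHOLD

    def next_rate(accel_window):
        """Choose the sensor's sampling/radio rate for the next window."""
        return ACTIVE_RATE_HZ if is_moving(accel_window) else IDLE_RATE_HZ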
     Second, to address the data association failures caused by accumulated error during simultaneous localization and mapping (SLAM), an accumulated-error correction algorithm is proposed to reduce the error. For the simultaneous planning, localization, and mapping (SPLAM) problem faced by a robot exploring an unknown environment, a utility function construction method based on information entropy is proposed, enabling the robot to map the unknown environment while planning its own path. In addition, a data fusion strategy is proposed for multi-robot cooperative localization, and its accuracy and effectiveness are verified with a motion capture system.
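     The abstract names an information-entropy-based utility function but not its form. The sketch below shows the standard shape such a function often takes in exploration: a candidate goal is scored by the entropy of the occupancy-grid cells its sensor would observe, minus a weighted travel cost. The grid interface, the visibility function, and the weight lam are assumptions for illustration.

    # Entropy-based utility for choosing the next exploration goal.
    # `grid` maps a cell to its occupancy probability; `visible_cells`
    # and `plan_path` stand in for a sensor model and a path planner.
    import math

    def cell_entropy(p):
        """Shannon entropy of one occupancy cell; largest at p = 0.5."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

    def utility(goal, grid, visible_cells, path_length, lam=0.2):
        """Expected information gain at `goal` minus weighted path cost."""
        gain = sum(cell_entropy(grid[c]) for c in visible_cells(goal, grid))
        return gain - lam * path_length

    def best_goal(candidates, grid, visible_cells, plan_path, lam=0.2):
        """Pick the candidate goal with the highest utility."""
        return max(candidates, key=lambda g: utility(
            g, grid, visible_cells, len(plan_path(g)), lam))

     In such a scheme the robot drives to the selected goal, updates the grid, and repeats until no candidate yields a positive utility.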
     Third, in vision-based SLAM (VSLAM), the Kinect camera's limited field of view, together with the changes in its pose caused by the robot's motion, prevents point cloud data from multiple viewpoints from being matched in a single shared frame. To solve this, the thesis fuses the Kinect camera's own pose information with the data from multiple views, and proposes a multilevel iterative closest point (MICP) algorithm for 3D environment mapping.
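     The abstract does not give the MICP algorithm itself; the sketch below shows one plausible reading of a multilevel ICP under stated assumptions: plain point-to-point ICP run coarse-to-fine over increasingly dense subsamples of the two clouds, with the Kinect's pose estimate seeding the coarsest level and each level seeding the next. The subsampling strides and the SVD-based alignment step are standard ingredients, not the thesis's confirmed design.

    # Coarse-to-fine (multilevel) point-to-point ICP sketch.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_fit(src, dst):
        """Rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(src, dst, R, t, iters=10):
        """Refine an initial pose (R, t) by iterating matching + fitting."""
        tree = cKDTree(dst)
        for _ in range(iters):
            _, idx = tree.query(src @ R.T + t)   # nearest-neighbor matches
            R, t = best_fit(src, dst[idx])
        return R, t

    def multilevel_icp(src, dst, R0, t0, strides=(16, 4, 1)):
        """Run ICP coarse-to-fine; (R0, t0) is the camera pose estimate."""
        R, t = R0, t0
        for s in strides:
            R, t = icp(src[::s], dst[::s], R, t)
        return R, t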
     Fourth, traditional vision-based gesture recognition methods are computationally expensive. To reduce this complexity, the thesis instead recognizes gestures with a wearable wireless motion sensor, and proposes a continuous gesture recognition algorithm based on multilayer hidden Markov models (MHMMs). First, a three-layer feed-forward neural network detects gesture signals; second, lower-level hidden Markov models (LHMMs) recognize the individual gestures within the continuous signal; finally, a Bayesian filter with contextual constraints corrects the recognition results in the upper-level hidden Markov models (UHMMs).
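     As a sketch of the two HMM levels, assuming gesture HMM parameters (log_pi, log_A, log_B) and a gesture-transition prior T trained elsewhere: each detected segment is scored against every gesture model with the forward algorithm (the LHMM stage), and a discrete Bayes filter with the transition prior encodes the contextual constraints that correct the result (the UHMM stage).

    # Two-level HMM gesture recognition sketch (log-space forward + filter).
    import numpy as np

    def log_forward(obs, log_pi, log_A, log_B):
        """Log-likelihood of a symbol sequence under one gesture HMM."""
        alpha = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            alpha = log_B[:, o] + np.logaddexp.reduce(
                alpha[:, None] + log_A, axis=0)
        return np.logaddexp.reduce(alpha)

    def lhmm_scores(segment, gesture_hmms):
        """LHMM stage: score one detected segment against every model."""
        return np.array([log_forward(segment, *hmm) for hmm in gesture_hmms])

    def uhmm_filter(belief, log_scores, T):
        """UHMM stage: one Bayes-filter step over gesture labels.
        T[i, j] is the prior that gesture j follows gesture i."""
        pred = T.T @ belief                          # contextual prediction
        lik = np.exp(log_scores - log_scores.max())  # stabilized evidence
        post = pred * lik
        return post / post.sum()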
     Finally, a method that fuses human motion information with location information is proposed for building semantic 3D maps. Three of the wireless motion sensors designed in this thesis are worn on the tester's right thigh, waist, and wrist, forming a body sensor network that recognizes body activities and hand gestures simultaneously, while a motion capture system provides the tester's location. A three-layer dynamic Bayesian network (DBN) proposed in the thesis models the constraints among location, body activity, and gesture. A Bayesian filter and an improved Viterbi algorithm then estimate the activities and gestures. Finally, the recognized activities determine the types of furniture in the room, and the furniture information is added to the 3D map, completing the indoor 3D semantic mapping.
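     As a minimal sketch of this final estimation step, assuming discrete activity labels with trained transition and observation models: a standard Viterbi decoder recovers the most likely activity sequence (the thesis's improved variant is not detailed in the abstract), and an assumed rule table then turns a recognized activity at the tester's location into a furniture annotation in the map.

    # Viterbi decoding of activities plus rule-based furniture labeling.
    import numpy as np

    def viterbi(log_pi, log_A, log_lik):
        """Most likely activity sequence; log_lik[t, s] is the DBN's
        combined evidence for activity s at frame t."""
        T, S = log_lik.shape
        delta = log_pi + log_lik[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A   # (previous, current)
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_lik[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(back[t][path[-1]])
        return path[::-1]

    # Assumed activity-to-furniture rules, for illustration only.
    ACTIVITY_TO_FURNITURE = {"sitting": "chair", "lying": "bed",
                             "leaning_back": "sofa"}

    def annotate(semantic_map, activity, location):
        """Insert the furniture implied by an activity into the 3D map."""
        furniture = ACTIVITY_TO_FURNITURE.get(activity)
        if furniture is not None:
            semantic_map[tuple(np.round(location, 1))] = furniture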