Recognition and Understanding of Human Behavior in Intelligent Space
Abstract
The care and companionship of elderly people living alone has become a pressing social problem as populations age in many countries, and many governments regard the development of intelligent service robots as an important strategy for addressing it. Correctly recognizing and understanding human behavior is a prerequisite for a service robot to provide appropriate services. Research on vision-based behavior recognition and understanding is still at an early stage; the diversity of human behavior patterns and the complexity of the home environment make it an extremely challenging problem. Given the current level of robot development, a robot's own limited sensing and computing capabilities are insufficient for a task as complex as "providing appropriate services based on correctly recognizing and understanding human behavior," so additional means are required. This thesis therefore presents an approach that combines vision-based methods with intelligent space technology to recognize and understand human behavior. Correct recognition and understanding of behavior in an intelligent space makes it possible for robots to monitor and accompany elderly people living alone; the study of intelligent spaces, and of behavior recognition and understanding within them, thus has significant theoretical and practical value.
     This thesis studies five main aspects of behavior recognition and understanding in intelligent space: the design and construction of the Intelligent Space for Understanding and Service (ISUS), detection and tracking of moving targets, daily action recognition, abnormal action recognition, and behavior cognition and intention recognition. First, the construction of ISUS is studied. ISUS is both a physical space in which people live and an intelligent space that can perceive human activity and provide services. Because the visual and other sensor information of ISUS is needed throughout the work on behavior recognition and understanding, ISUS is the foundation of the subsequent research. Next, target detection and tracking is studied against the background of monitoring elderly people living alone at home; accurate detection and real-time tracking are prerequisites for feature extraction in action recognition. On this basis, the recognition of common daily actions and of abnormal actions is studied: correct recognition of daily actions underpins behavior cognition and intention recognition, while effective recognition of abnormal actions indicates whether something unusual has happened to the person being monitored. Finally, behavior cognition and intention recognition in the intelligent space are studied, covering the cognition of normal behavior, intention recognition, and the recognition of common abnormal behaviors. Using these results, the intelligent space can provide the corresponding services directly or indirectly. The main research contents and results are summarized as follows:
     (1) To address the low service efficiency of iSpace (Intelligent Space), the concept of ISUS and the idea of execution distribution are proposed, which effectively improve service efficiency and reduce the burden on the robot. A layered architecture for ISUS is designed from the perspective of intelligent space standardization, making the whole system more compact and clear and simplifying both construction and task allocation. After the component units of ISUS are studied and designed, the intelligent space is built; the core task of "understanding and service" is then decomposed and planned, and the functions ISUS can realize and the structure of its information flows are analyzed.
     (2) The Gaussian mixture model (GMM) motion detection algorithm is improved for the environment of the intelligent space, enhancing the model's ability to reflect background changes promptly and thereby improving target segmentation. The mean shift algorithm is improved with template and target-scale updating strategies suited to this environment, and is combined with a Kalman filter to increase tracking robustness. A target tracking method that combines the wireless sensor network of the intelligent space with vision is adopted; it handles heavy occlusion, re-acquisition of a target that temporarily leaves the camera's field of view, and the difficulties of target matching and hand-off in multi-camera tracking, realizing relay tracking across multiple cameras.
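The kind of per-pixel mixture update underlying such a detector can be sketched as follows. This is a minimal Stauffer-Grimson-style illustration, not the improved algorithm of the thesis; the learning rate, match threshold, and background ratio are illustrative defaults.

```python
# Sketch of a per-pixel Gaussian mixture background model (grayscale).
# Parameter values are illustrative, not the thesis's tuned settings.

K = 3            # Gaussians per pixel
ALPHA = 0.05     # learning rate
MATCH_SIGMAS = 2.5
T = 0.7          # weight mass that must be explained by "background"

def update_pixel(model, x):
    """model: list of K [weight, mean, variance] components; x: gray value.
    Updates the model in place and returns True if x is foreground."""
    matched = None
    for g in model:
        _, mu, var = g
        if (x - mu) ** 2 <= (MATCH_SIGMAS ** 2) * var:
            matched = g
            break
    if matched is None:
        # Replace the least probable component with a new wide Gaussian.
        model.sort(key=lambda g: g[0] / (g[2] ** 0.5))
        model[0] = [ALPHA, float(x), 900.0]
    else:
        _, mu, var = matched
        matched[1] = mu + ALPHA * (x - mu)            # mean update
        matched[2] = var + ALPHA * ((x - mu) ** 2 - var)  # variance update
    # Weight update and renormalization.
    for g in model:
        g[0] = (1 - ALPHA) * g[0] + (ALPHA if g is matched else 0.0)
    s = sum(g[0] for g in model)
    for g in model:
        g[0] /= s
    # Background = high-weight, low-variance components covering mass T.
    ranked = sorted(model, key=lambda g: g[0] / (g[2] ** 0.5), reverse=True)
    acc, background = 0.0, []
    for g in ranked:
        background.append(g)
        acc += g[0]
        if acc > T:
            break
    return matched is None or matched not in background
```

After the model has adapted to a stable background value, that value is classified as background while a sudden outlier (e.g. a person entering the scene) is flagged as foreground; timely adaptation to background change is exactly what the thesis's improvement targets.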
     (3) Human action recognition is the foundation of behavior cognition and intention recognition, and vision-based action recognition is essentially a problem of effective feature selection: good features both aid recognition and reduce the amount of computation. Because commonly used features make it difficult to improve recognition accuracy, this thesis proposes recognizing daily actions from the contour features of human action sequences, which effectively improves accuracy. Since contour features are easily affected by segmentation quality, a further method is proposed that uses the silhouette features of action sequences directly, after dimensionality reduction, as recognition features, improving both accuracy and reliability.
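As a toy illustration of turning a binary silhouette into a fixed-length shape descriptor, the row-wise width profile below is a simplified stand-in for the contour and silhouette features used in the thesis, not the actual feature set.

```python
# Toy shape descriptor: resample the row-wise width of a binary
# silhouette into a fixed-length, scale-normalized profile.

def width_profile(mask, bins=8):
    """mask: 2-D list of 0/1 values; returns `bins` normalized widths."""
    rows = [r for r in mask if any(r)]          # crop empty rows
    if not rows:
        return [0.0] * bins
    widths = []
    for r in rows:
        cols = [i for i, v in enumerate(r) if v]
        widths.append(cols[-1] - cols[0] + 1)   # leftmost-to-rightmost extent
    # Resample to a fixed number of bins (nearest-row sampling).
    prof = [widths[min(int(i * len(widths) / bins), len(widths) - 1)]
            for i in range(bins)]
    m = max(prof)
    return [w / m for w in prof]                # scale-invariant
```

A standing posture (tall, narrow) and a lying posture (short, wide) yield clearly different profiles, so sequences of such descriptors can feed a classifier, for example a nearest-neighbor or neural-network recognizer, for daily actions.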
     (4) The recognition of common abnormal actions is studied, covering falls, abnormal gait, and abnormal walking states. For fall recognition, since existing methods have difficulty distinguishing a fall from normally lying down, a new algorithm is proposed that uses both the static and dynamic characteristics of a fall together with the person's location in the intelligent space and physical condition, improving accuracy. Unlike the common use of gait features for identity recognition, abnormal gait is recognized here to infer whether something unusual has happened: parameters that describe gait changes in real time are defined from human gait characteristics, and abnormal gait is recognized by analyzing them. Finally, abnormal walking states are recognized by judging the walking trajectory.
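The static/dynamic combination for separating falls from lying down can be illustrated with a toy rule: a fall shows a rapid change of the bounding-box aspect ratio (dynamic cue) ending in a horizontal posture (static cue), whereas deliberately lying down reaches the same final posture slowly. The thresholds below are illustrative, and the thesis's additional cues (location, physical condition) are omitted.

```python
# Toy fall-vs-lie-down rule combining a static cue (final aspect ratio)
# with a dynamic cue (peak drop rate). Thresholds are illustrative.

def classify_posture_change(ratios, fps=10, speed_thresh=2.0, flat_thresh=0.6):
    """ratios: bounding-box height/width per frame, over one event.
    Returns 'fall', 'lie_down', or 'normal'."""
    if len(ratios) < 2:
        return "normal"
    final_flat = ratios[-1] < flat_thresh                       # static cue
    drops = [(ratios[i] - ratios[i + 1]) * fps
             for i in range(len(ratios) - 1)]                   # ratio/sec
    fast = max(drops) > speed_thresh                            # dynamic cue
    if final_flat and fast:
        return "fall"
    if final_flat:
        return "lie_down"
    return "normal"
```

A sequence that collapses from ratio 2.0 to 0.4 within a few frames is labeled a fall, while the same end state reached over many frames is labeled lying down.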
     (5) Behavior cognition and intention recognition form the high-level stage of behavior recognition and understanding. Human localization in the intelligent space is studied first: because localization based on a height model fails when the person's posture changes, a method combining the height model with posture recognition is proposed, eliminating the effect of posture changes on localization accuracy. Localization precision is further improved by fusing multi-camera results through a defined position confidence coefficient. On this basis, the cognition of normal behavior and the recognition of abnormal behavior are studied. For normal behavior cognition and intention recognition, the concept of key points in the intelligent space is proposed, key points and regions are defined, and a place-driven behavior cognition algorithm is presented that infers a person's intention by associating key points with their attributes. For abnormal behavior, a method is introduced that detects abnormality by analyzing the person's routes between key points; the concept of a key-point duration histogram is proposed and used to recognize abnormal habitual behavior, and a defined habit-change evaluation index measures the degree of abnormality.
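The duration-histogram idea can be sketched as follows: accumulate how long the person dwells at each key point over an observation period, then score a new period against the habitual distribution. The half-L1 distance used here as the "habit change" index is an illustrative choice, not the evaluation index defined in the thesis, and the key-point names are hypothetical.

```python
# Sketch of key-point duration histograms and a habit-change score.
# The half-L1 distance and key-point names are illustrative assumptions.

def duration_histogram(visits, key_points):
    """visits: list of (key_point, minutes) dwell records over a period.
    Returns the normalized share of time spent at each key point."""
    total = {k: 0.0 for k in key_points}
    for kp, minutes in visits:
        total[kp] += minutes
    s = sum(total.values()) or 1.0
    return {k: v / s for k, v in total.items()}

def habit_change(habitual, observed):
    """Half the L1 distance between two dwell distributions, in [0, 1]:
    0 = identical habits, 1 = completely disjoint dwell patterns."""
    keys = set(habitual) | set(observed)
    return 0.5 * sum(abs(habitual.get(k, 0.0) - observed.get(k, 0.0))
                     for k in keys)
```

For example, if the habitual week gives bed/sofa/kitchen shares of 0.6/0.3/0.1 and an observed day shifts almost all dwell time to the bed, the index rises sharply, which is the kind of deviation the thesis flags as abnormal habitual behavior.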
引文
[1]中华人民共和国国家统计局.2005年全国1%人口抽样调查数据.[DB/OL]. http://www.stats.gov.cn/tjsj/ndsj/renkou/2005/renkou.htm.
    [2]Guohui Tian, Feng Duan, and Tamio Arai. Modular Design for Home Service Robot System Based on Hierarchical Colored Petri Nets [A]. In Proceedings of the 9th International Conference on Intelligent Autonomous System (IAS-9) [C]. Tokyo, Japan:IOS Press, 2006:542-549.
    [3]F. Nasoz, K. Alvarez, C. L. Lisetti, et al. Emotion recognition from physiological signals using wireless sensors for presence technologies [J]. Cognition, Technology & Work,2004, 6(1):4-14.
    [4]K. Hara, T. Omori, and R. Ueno. Detection of unusual human behavior in intelligent house [A]. In Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing [C]. Japan 2002:697-706.
    [5]J. A. Ward, P. Lukowicz, G. Troster, et al. Activity recognition of assembly tasks using body-worn microphones and accelerometers [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2006,28 (10):1553-1567.
    [6]J. Yin, Q. Yang, and J. J. Pan. Sensor-based abnormal human activity detection [J]. IEEE Transactions on Knowledge and Data Engineering,2008,20 (8):1082-1090.
    [7]S. I. Yang and S. B. Cho. Recognizing human activities from accelerometer and physiological sensors [A]. In Proceedings of the IEEE International Conference on Multi-sensor Fusion and Integration for Intelligent Systems [C]. Seoul, Korea,2008: 100-105.
    [8]A. Purwar, D. U. Jeong, and W. Y. Chung. Activity monitoring from real-time tri-axial accelerometer data using Sensor network [A]. In Proceedings of the International Conference on Control, Automation and Systems [C]. Seoul, Korea,2007:2402-2406.
    [9]S. Wang, W. Pentney, and T. Choudhury. Common sense based joint training of human activity recognizers [A]. In Proceedings of the 20th International Joint Conference on Artificial Intelligence [C]. Hyderabad, India,2007:2237-2242.
    [10]N. T. Nguyen, D. Q. Phung, and S. Venkatesh. Learning and detecting activities from movement trajectories using the hierarchical hidden markov model [A]. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C]. San Diego, CA, USA,2005:955-960.
    [11]D. Mahajan, N. Kwatra, S. Jain, et al. A framework for activity recognition and detection of unusual activities [A]. In Proceedings of the Indian Conference on Computer Vision, Graphics, Image Processing [C]. Kolkata, India,2004:37-42.
    [12]Ronald Poppe. Vision-based human motion analysis:An overview [J]. Computer Vision and Image Understanding,2007,108(1-2):4-18.
    [13]王素玉,沈兰荪.智能视觉监控技术研究进展[J].中国图像图形学报,2007,12(9):1506-1513.
    [14]凌志刚,赵春晖等.基于视觉的人行为理解综述[J].计算机应用研究,2007,25(9):2570-2579.
    [15]A. Jaimes and N. Sebe. Multimodal human computer interaction:a survey [J]. Computer Vision and Image Understanding,2007,108(1-2):116-134.
    [16]R. Collins, A. Lipton, T. Kanade, et al. A system for video surveillance and monitoring: VSAM final report [R]. Carnegie Mellon University, Technical Report:CMU-RI-TR-00-12, 2000.
    [17]I. Haritaoglu, D. Harwood, and L. Davis. W4:real-time surveillance of people and their activities [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2000,22 (8): 809-830.
    [18]P. Remagnino, T. Tan, and K. Baker. Multi-agent visual surveillance of dynamic scenes [J]. Image and Vision Computing,1998,16 (8):529-532.
    [19][DB/OL]. http://www.msra.cn/.
    [20][DB/OL]. http://nlpr-web.ia.ac.cn/.
    [21]Haibing Ren and Guangyou Xu. Human Action Recognition in Smart Classroom [A]. The 5th IEEE International Conference on Automatic Face and Gesture Recognition [C].2002: 399-404.
    [22][DB/OL]. http://www.cis.pku.edu.cn/vision/vision.htm.
    [23]L. W. Campbell, D. A. Becker, A. Azarbayejani, et al. Invariant features for 3D gesture recognition [A]. In Proceedings of the International Conference on Automatic Face and Gesture Recognition [C]. Killington, Vermont, USA,1996:157-162.
    [24]N. Jin and F. Mokhtarian. Image-based shape model for view-invariant human motion recognition [A]. In Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance [C], London, UK,2007:336-341.
    [25]S. Park and M. Trivedi. Driver activity analysis for intelligent vehicles:issues and development framework [A]. In Proceedings of the IEEE Intelligent Vehicles Symposium [C]. Las Vegas, Nevada, USA,2005:644-649.
    [26]N. Robertson and I. Reid. Behavior understanding in video:a combined method [A]. In Proceedings of the IEEE International Conference on ComputerVision [C], Beijing, China, 2005:808-815.
    [27]Y. Wang, K. Huang, and T. N. Tan. Abnormal activity recognition in office based on R transform [A]. In Proceedings of the IEEE Conference on Image Processing [C]. San Antonio, TX, USA,2007:341-344.
    [28]K. Kim and G. G. Medioni. Distributed visual processing for a home visual sensor network [A]. In Proceedings of the IEEE Workshop on Applications of ComputerVision [C]. Copper Mountain, Colorado, USA,2008:1-6.
    [29]A. Bobick and J. Davis. Real-time recognition of activity using temporal templates [A]. In Proceedings of the IEEE Conference on Applications of Computer Vision [C].1996:39-42.
    [30]A. Veeraraghavan, A.K. Roy-Chowdhury, and R. Chellappa. Matching shape sequences in video with applications in human movement analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2005,27 (12):1896-1909.
    [31]C. Bregler. Learning and recognizing human dynamics in video sequences [A]. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition [C]. Puerto Rico,1997:568-574.
    [32]Mun wai Lee and I. Cohen. A Model-Based Approach for Estimating Human 3D poses in Static Images [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence.2006, 28(6):905-916.
    [33]K. Murphy. Dynamic Bayesian networks:representation, inferenceand learning [D]. Berkeley:University of California,2002.
    [34]J. X. Wu, A. Osuntogun, T. Choudhury, et al. A scalable approach to activity recognition based on object use [A]. In Proceedings of the IEEE International Conference on Computer Vision [C]. Beijing, China,2007:1-8.
    [35]Y. T. Du, F. Chen, W. L. Xu, et al. Recognizing interaction activities using dynamic Bayesian network [A]. In Proceedings of the International Conference on Pattern Recognition [C], New York, USA,2006:618-621.
    [36]Guangyu Zhu, Changsheng XU, Qingming Huang, et al. Action recognition in broadcast tennis video [A]. In Proceedings of the 18th International Conference on Pattern Recognition [C]. Hong Kong, China,2006:251-254.
    [37]A. Lipton, H. Fujiyoshi, and R. Patil. Moving Target Detection and Classification from Real-Time Video [A]. In Proceedings of the IEEE Workshop Application of Computer Vision [C].1998:8-14.
    [38]L. Wang, T. Tan, H. Ning, et al. Silhouette Analysis Based Gait Recognition for Human Identification [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2003, 25(12):1505-1518.
    [39]P. F. Felzenszwalb and D. P. Hutenlocher. Efficient Matching of Pictorial Structures [A]. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition [C]. Hilton Head Island, SC,2000:66-73.
    [40]A. Mohan, C. Papageorgiou, and T. Poggio. Example-Based Object Detection in Images by Components [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2001, 23(3):349-361.
    [41]R. Cutler and L. Davis. Robust real-time periodic motion detection, analysis, and applications [J]. IEEE Transactions on Pattern Analysis and Machine In telligence,2000,22 (8):781-796.
    [42]D. Comaniciu and P. Meer. Mean shift analysis and applications [A]. In Proceedings of the seventh IEEE International Conference on Computer Vision [C]. Kerkyra,1999,2:1197-1203.
    [43]G. Welch and G. Bishop. An introduction to the Kalman filters [DB/OL]. http://www.math.itb.ac.id/, UNC-Chapel Hill, TR 95-041,2000.
    [44]M. Isard and A. Blake. Condensation-Conditional Density Propagation for Visual Tracking [J]. International Journal of Computer Vision,1998,29(1):5-28.
    [45]Weiming Hu, Tieniu Tan, Liang Wang, et al. A survey on visual surveillance of object motion and behaviors [J]. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews,2004,34(3):334-352.
    [46]Boyoon Jung and G. S. Sukhatme. A region-based approach for cooperative multi-target tracking in a structured environment [A]. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and System [C].2002,3 (30):2764-2769.
    [47]R. Polana and R. Nelson. Low level recognition of human motion [A]. In Proceedings of the IEEE Workshop on Motion of Non-Rigid and Articulated Objects [C], TX,1994:77-82.
    [48]N. Paragios and R. Deriche. Geodesic active contours and level sets for the detection and tracking of moving objects [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2000,22 (3):266-280.
    [49]A. F. Bobick and Andy Wilson. Using configuration states for the representation and recognition of gestures [R]. MIT Media Lab Perceptual Computing Section Technical Report, No.308,1995.
    [50]A. Kojima. Generating natural language description of human behaviors from video images [A]. In Proceedings of the IEEE International Conference on Patern Recognition [C]. Barcelona,2000:728-731.
    [51]Joo-Ho Lee and Hideki Hashimoto. Intelligent Space [A]. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems [C].2000,2:1358-1363.
    [52]Yuanchun Shi, Weikai Xie, Guangyou Xu, et al. The smart classroom:merging technologies for seamless tele-education [J]. Pervasive Computing,2003,2(2):47-55.
    [53][DB/OL]. http://oxygen.lcs.mit.edu/.
    [54][DB/OL]. http://www.nist.gov/smartspace/.
    [55][DB/OL]. http://awarehome.imtc.gatech.edu/.
    [56][DB/OL]. http://www.cs.washington.edu/mssi/tic/intros/Shafer/.
    [57]R. Vanijjirattikhan, M. Y. Chow, P. Szemes, et al. Mobile agent gain scheduler control in inter-continental Intelligent Space [A]. In Proceedings of the IEEE International Conference on Robotics and Automation [C],2005:15-20.
    [58]朱红松,孙利民,徐勇军,等.基于精细化梯度的无线传感网络汇聚机制及分析[J].软件学报,2007,18(5):1138-1151.
    [59]X. Huang, Z. Zhao, and L. Cui. Easisoc:Towards Cheaper and Smaller [A]. In Proceedings of the International Conference on Mobile Ad-hoc and Sensor Networks [C],2005:229-238.
    [60]J. Pan, J. T. Kwok, Q. Yang, et al. Multidimensional vector regression for accurate and Low-cost Location Estimation in Pervasive Computing[J]. IEEE Transactions on Knowledge and Data Engineering,2006,18(9):1181-1193.
    [61]J. Yin, X. Chai, and Q. Yang. High-level Goal Recognition in a Wireless LAN [A]. In Proceedings of the 19th International Conference on Artificial Intelligence [C]. San Jose, CA, USA,2004:578-584.
    [62]赵杰,庞明,樊继壮.基于智能空间的三肢体仿生机器人3D运动规划[J].哈尔滨工业大学学报,2007,28(3):311-315.
    [63]谷红亮,史元春,徐光.智能空间位置感知的伺候式服务模型[J].清华大学学报(自然科学版),2006,46(4):584-587.
    [64]王红霞,田国会,李晓磊,等.基于地标信息融合的家庭环境机器人组合导航[A].第26届中国控制会议[C].张家界,中国,北京航空航天大学出版社:145-148.
    [65]夏锋,尹红霞,王智,等.网络化控制系统中的实时调度理论与应用[A].第17届中国控制与决策年会[C].哈尔滨,2005:125-126.
    [66]F. Xia, X. Dai, Y. X. Sun, et al. Control Oriented Direct Feedback Scheduling [J]. International Journal of Information Technology,2006,12(3):21-32.
    [67]Joo-Ho Lee, Noriaki Ando, and Hideki Hashimoto. Design Policy of Intelligent Space [A]. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics[C]. 1999:77-82.
    [68]K. Kawaji, K. Yokoi, M. Niitsuma, et al. Observation system of human-object relations in Intelligent Space [A]. In Proceedings of the 6th IEEE International Conference on Industrial Informatics [C].2008,:1475-1480.
    [69][DB/OL]. http://dfs.iis.u-tokyo.ac.jp/index.php?ResearchList.
    [70]徐光佑,史元春,谢伟凯.普适计算[J].计算机学报,2003,26(9):1042-1050.
    [71]P. Castro and R. Muntz. Managing context data for smart spaces [J]. IEEE Personal Communications,2000,7(5):44-46.
    [72]S. Voida, E. D. Mynatt, B. MacIntyre, et al. Integrating virtual and physical context to support knowledge workers [J]. IEEE Pervasive Computing,2002,1(3):73-79.
    [73]D. Smith, L. Ma, and N. Ryan. Acoustic environment as an indicator of social and physical context [J]. Personal and Ubiquitous Computing,2006,10(4):241-254.
    [74]K. Geihs. Middleware challenges ahead [J]. Computer,2001,34(6):24-31.
    [75]张子迎,刘心.面向智能空间的普适计算技术[J].科技咨询导报,2007,9:38-39.
    [76]田国会,李晓磊,赵守鹏,等.家庭服务机器人智能空间技术研究与进展[J].山东大学学报(工学版),2007,37(5):53-60.
    [77]Hao Wu, Guohui Tian, and Bin Huang. Multi-robot collaborative localization methods based on Wireless Sensor Network [A]. In Proceedings of the IEEE International Conference on Automation and Logistics [C].2008:2053-2058.
    [78]T. H. Arampatzis, J. Lygeros, and S. Manesis. A survey of applications of wireless sensors and wireless sensor networks [A]. In Proceedings of the 13th Mediterranean Conference on Control and Automation [C]. Limassol, Cyprus,2006:719-724.
    [79]王金东,赵海,韩光洁等.基于互操作计算的SBDM设备管理模型的研究[J].通讯学报,2004,4(25):84-93.
    [80]A. K. Dey and D. Abow. Towards a better understanding of context and context-awareness [A] In Proccedings of the 1st International Symposium on Handheld and Ubiquitous Computing [C]. London,2000:304-307.
    [81]R. Want, A. Hopper, V. Falcao, et al. The Active Badge Location System [J]. ACM Transactions on Information Systems,1992,10(1):91-102.
    [82]Gijeong Jang, Sungho Kim, Wangheon Lee, et al. Color Landmark Based Self-Localization for Indoor Mobile Robots [A]. In Proceedings of the IEEE International Conference on Robotics and Automation[C].2002,1:1037-1042.
    [83]K. J. Yoon and I. S. Kweon. Artificial Landmark Tracking Based on the Color Histogram [A]. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems [C].2001,4:1918-1923.
    [84]D. Scharstein and A. J. Briggs. Real-time recognition of self-similar landmarks [J]. Image and Vision Computing,2001,19(11):763-772.
    [85]司秉玉.基于可视人工路标的自主机器人导航若干问题研究[D].东北大学博士学位论文,2003.
    [86]Xiaolei Li, Fei Lu, Guohui Tian, et al. Ceiling camera based environment objects recognition of intelligent space [A]. In Proceedings of the IEEE International Conference on Automation and Logistics[C].2008:1512-1516.
    [87]D. A. Forsyth and J. Ponce. Computer Vision:A Modern Approach [M]. Beijing:Tsinghua University Press.2004.
    [88]B. Hom and B. Schunch. Detemining optical flow [J]. Artificial Intelligence,1981,17(1): 185-203.
    [89]A. Verri, S. Uras, and E. DeMicheli. Motion Segmentation from optical flow [A]. In Proceedings of the 5th Alvey Vision Conference [C]. Brighton, UK,1989:209-214.
    [90]C. Anderson, P. Bert, and W. G. Vander. Change detection and tracking using pyramids transformation techniques [A]. In Proceedings of the SPIE Conference on Intelligent Robots and Computer Vision [C]. Cambridge, MA,1985:72-78.
    [91]B. Coifinan, D. Beymer, P. Mclauchlan, et al. A real-time computer vision system for vehicle tracking and traffic surveillance [J]. Transportation Research Part C,1998,6(4):271-288.
    [92]Liyuan Li, Weimin Huang, Irene Yu-Hua Gu, et al. Statistical Modeling of Complex Backgrounds for Foreground Object Detection [J]. IEEE Transactions on Image Processing. 2004,13(11):1459-1472.
    [93]方帅,薛方正,徐心和.基于背景建模的动态目标检测算法的研究与仿真[J].系统仿真学报,2005,17(1):159-165.
    [94]C. Stauffer and W. E. L. Grimson. Adaptive background mixture models for real-time tracking [A]. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C].1999,2:246-252.
    [95]M. M. Alan. Background Subtraction Techniques [A]. In Proceedings of the International Conference on Image and Vision Computing [C]. Hamilton, New Zealand,2000:1-6.
    [96]周西汉,刘勃,周荷琴.一种基于对称差分和背景消减的运动检测方法[J].计算机仿真,2005,22(4):117-119.
    [97]常发亮.彩色图像分割与复杂场景下视觉目标跟踪方法研究[D].山东大学博士学位论文,2005.
    [98]侯志强,韩崇昭.视觉跟踪技术综述[J].自动化学报,2006,32(4):603-617.
    [99]T. B. Moeslund and E. Granum. A survey of computer vision-based human motion capture [J]. Computer Vision and Image Understanding,2001,81(3):231-268.
    [100]K. Fukunaga and L. D. Hosteller. The estimation of the gradient of a density function, with application in pattern recognition [J]. IEEE Transactions on Information Theory,1975,21: 32-40.
    [101]Y. Z. Cheng. Mean shift, mode seeking, and clustering [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence.1995,17(8):790-799.
    [102]D. Comaniciu and P. Meer. Mean shift:A robust approach toward feature space analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2002,24(5):603-619.
    [103]D. Comaniciu and P. Meer. Image segmentation using clustering with saddle point detection [A]. In Proceedings of the IEEE International Conference on Image Processing [C],2002: 297-300.
    [104]D. Comaniciu, V. Ramesh, and P. Meer. Real-time tracking of non-rigid objects using mean shift [A]. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C]. Hilton Head, SC, USA,2000:142-149.
    [105]D. Comaniciu, V. Ramesh, and P. Meer. Kernel-based object tracking [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2003.25(2):564-577.
    [106]G. R. Bradski and S. Clara. Computer vision face tracking for use in a perceptual user interface [J]. Intelligence Technology Journal,1998,2:1-15.
    [107]R. T. Collins. Mean-Shift Blob Tracking through Scale Space [A]. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C].2003, 2:234-240.
    [108]K. Nummiaro, E. Koller-Meier, and L. V. Gool. Color features for tracking non-rigid objects [J].Chinese Journal of Automation,2003,29(3):345-355.
    [109]周芳芳,樊晓平,叶榛.均值漂移算法的研究与应用[J].控制与决策,2007,22(8):841-847.
    [110]文志强,蔡自兴.Mean Shift算法的收敛性分析[J].软件学报,2007,18(2):205-213.
    [111]C. Garcia and G. Tziritas. Face detection using quantized skin color regions merging and wavelet packet analysis [J]. IEEE Transactions on Multimedia,1999,1(3):264-277.
    [112]李乡儒,吴福朝,胡占义.均值漂移算法的收敛性[J].软件学报,2005,16(3):365-374.
    [113]A. Mittal and L. S. Davis. M2 Tracker:A Multi-View Approach to Segmenting and Tracking People in a Cluttered Scene Using Region-Based Stereo [A]. In Proceedings of the European Conference on Computer Vision [C]. Copenhagen, Denmark,2002:18-36.
    [114]W. Hu, M. Hu, X. Zhou, et al. Principal Axis-based Correspondence between Multiple Cameras for People Tracking [J], IEEE Transactions on Pattern Analysis and Machine Intelligence,2006,28(4):663-671.
    [115]S. Avidan, Y. Moses, and Y. Moses. Centralized and Distribute Multi-view Correspondence [J]. International Journal on Computer Vision,2007,71(1):49-69.
    [116]O. Javed, K. Shafique, and M. Shah. Appearance modeling for tracking in multiple nonover lapping cameras [A]. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C].2005,2:26-33.
    [117]T. Zhao, M. Aggarwal, R. Kumar, et al. Real-time wide area multi-camera stereo tracking [A]. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C].2005,1:976-983.
    [118]R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision [M]. Cambridge University Press,2000.
    [119]J. Kang, I. Cohen, and G. Medioni. Continuous tracking within and across camera streams [A]. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C].2003,1:267-272.
    [120]A. Utsumi and N. Tetsutani. Human tracking using multiple-camera-based head appearance modeling [A]. In Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition [C].2004:657-662.
    [121]T. He, C. Huang, B. M. Blum, et al. Range-free localization schemes for large scale sensor networks [A]. In Proceedings of the 9th annual international conference on Mobile computing and networking [C]. San Diego, CA,2003:81-95.
    [122]Nirupama Bulusu, John Heidemann, and Deborah Estrin. GPS-less Low Cost Outdoor Localization For Very Small Devices [J]. IEEE Personal Communications Magazine,2000, 7(5):28-34.
    [123]Dragos Niculescu and Badri Nath. DV-based Positioning in Ad hoe Networks [J]. Journal of Telecommunication Systems,2003,22(1-4):267-280.
    [124]R. Nagpal, and H. Shrobe, and J. Bachrach. Organzing a global coordinate system from local information on an ad hoc sensor network [A]. In Proceedings of the 2nd International Workshop on Information Processing in Sensor Networks [C]. Palo Alto,2003.
    [125]A. Harter, A. Hopper, P. Steggles, et al. The anatomy of a context-aware application [A]. In Proceedings of the 5th Annual IEEE/ACM International Conference on Mobile Computing and Networking [C].1999:59-68.
    [126]L. Girod and D. Estm. Robust rang estimation using acoustic and multimodal sensing [A]. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems [C].2001,3:1312-1320.
    [127]N. B. Priyantha, A. K. L. Miu, H. Balakrishnan, et al. The cricket compass for context-aware mobile applications [A]. In Proceedings of the 7th Annual International Conference on Mobile Compming and Networking Rome [C].2001:1-14.
    [128]K. Yedavalli, B. Krishnamachari, S. Ravula, et al. Ecolocation:A sequence based technique for RF Localization in Wireless Sensor Networks [A]. In Proceedings of the 4th international symposium on Information processing in sensor networks [C].2005:285-292.
    [129]S. R. Theodore. Wireless Communications:Principles and Practice (2nd Edition) [M]. Prentice Hall Press,2001,69-138.
    [130]J. W. Wallace and M. A. Jensen. Modeling the Indoor MIMO Wireless Channel [J]. IEEE Transactions on Antennas and Propagation,2002,50(5):594-606.
    [131]王晓红,杨玲.基于颜色、形状直方图的图像检索方法[J].信息系统,2003,4(26):368-370.
    [132]A. Agarwal and B. Triggs. Recovering 3D Human Pose from Monocular Images [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2006:44-58.
    [133]M. Ahmad and S. W. Lee. HMM-based Human Action Recognition Using Multiview Image Sequences [A]. In Proceedings of the 18th International Conference on Pattern Recognition [C].2006,1:263-266.
    [134]A. Sharma, D. K. Kumar, S. Kumar, et al. Recognition of human actions using moment based features and artificial neural networks [A]. In Proceedings of the 10th International Multimedia Modelling Conference [C].2004:368.
    [135]Youtian Du, Feng Chen, and Wenli Xu. Human Interaction Representation and Recognition through Motion Decomposition [J]. IEEE Signal Processing Letters,2007,14(12):952-955.
    [136]H. Kauppinen, T. Seppanen, and M. Pietikainen. An experimental comparison of autoregressive and Fourier-based descriptors in 2D shape classification [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,1995,17(2):201-207.
    [137]Shuxian Zhu and Renjie Zhang. Comparison with BP and RBF neural network used in face recognition [J]. Chinese Journal of Scientific Instrument,2007,28(2):375-379.
    [138]Mingxing Zhu and Delong Zhang. Study on the Algorithms of Selecting the Radial Basis Function Center [J]. Journal of Anhui University (Natural Sciences),2000,24(1):72-78.
    [139]Hui Yu, Guangmin Sun, Wenxing Song, et al. Human motion recognition based on neural network [J]. Digital Object Identifier,2005,2(5):979-982.
    [140]T. Ogata, M. M. Rahman, J. K. Tan, et al. Real time human motion recognition based on a motion history image and an eigenspace [A]. In Proceedings of the SICE 2004 Annual Conference [C].2004,2:1901-1904.
    [141]D. Y. Chen, S. W. Shih, and H. Y. M. Liao. Human Action Recognition Using 2-D Spatio-Temporal Templates [A]. In Proceedings of the IEEE International Conference on Multimedia and Expo [C].2007:667-670.
    [142]Y. Liu, R. Collins, and Y. Tsin. Gait sequence analysis using frieze patterns [A]. In Proceedings of the European Conference on Computer Vision [C].2002,2:657-671.
    [143]K. Stokbro, D. K. Umberger, and J. A. Hertz. Exploiting neurons with localized receptive fields to learn chaos [J]. Complex Systems,1990,4:603-622.
    [144]C. Lin. Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network [J]. Pattern Recognition Letters,2007,28(16): 2190-2200.
    [145]Qin Zhaohui and Yu Pulin. Current status and progress of research on falls in the elderly [J]. Chinese Journal of Geriatrics,2005,24(9):711-714.
    [146]National Bureau of Statistics of China. Data of the Fifth National Population Census (2000). http://www.stats.gov.cn/tjsj/ndsj/renkoupucha/2000pucha/pucha.htm.
    [147]Li Xinhui and Chen Lili. Research progress on risk factors and prevention of falls in the elderly [J]. Chinese General Practice Nursing,2008,11(6):2829-2831.
    [148]World Health Organization. Global report on falls prevention in older age [R].2007:1-47.
    [149]A. Sixsmith and N. Johnson. A smart sensor to detect the falls of the elderly [J]. IEEE Pervasive Computing,2004,3(2):42-47.
    [150]J. Q. Lin. The Behavior Analysis and Detection of Falling [D].2004.
    [151]D. Anderson, J. M. Keller, M. Skubic, et al. Recognizing Falls from Silhouettes [A]. In Proceedings of the 28th IEEE EMBS Annual International Conference [C]. New York City, USA,2006:6388-6391.
    [152]M. P. Murray, A. B. Drought, and R. C. Kory. Gait as a total pattern of movement [J]. American Journal of Physical Medicine,1967,46(1):290-332.
    [153]J. Cutting and L. Kozlowski. Recognizing friends by their walk:Gait perception without familiarity cues [J]. Bulletin of the Psychonomic Society,1977,9 (5):353-356.
    [154]S. Sarkar, P. Phillips, Z. Liu, et al. The HumanID gait challenge problem:Data sets, performance, and analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2005,27(2):162-177.
    [155]P. J. Phillips, S. Sarkar, I. Robledo, et al. The gait identification challenge problem:data sets and baseline algorithm [A]. In Proceedings of the International Conference on Pattern Recognition [C]. Quebec, Canada,2002:385-389.
    [156]P. Kuchi and S. Panchanathan. Intrinsic mode functions for gait recognition [A]. In Proceedings of the 2004 International Symposium on Circuits and Systems [C]. Vancouver, Canada,2004,2:117-120.
    [157]R. Collins, R. Gross, and J. Shi. Silhouette-based human identification from body shape and gait [A]. In Proceedings of the International Conference on Automatic Face and Gesture Recognition [C].2002:366-371.
    [158]J. B. Hayfron-Acquah, M. S. Nixon, and J. N. Carter. Automatic gait recognition by symmetry analysis [J]. Pattern Recognition Letters,2003,24(13):2175-2183.
    [159]L. Lee and W. Grimson. Gait analysis for recognition and classification [A]. In Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition [C]. Washington DC, USA,2002:155-162.
    [160]D. Cunado, M. S. Nixon, and J. N. Carter. Automatic Extraction and Description of Human Gait Models for Recognition Purposes [J]. Computer Vision and Image Understanding,2003, 90(1):1-41.
    [161]Gao Dali, Wu Qingjiang, and Sun Ling. HMM-based gait identification [J]. Computer Engineering and Applications,2006,16:53-56.
    [162]G. A. Jones, J. R. Renno, and P. Remagnino. Auto-Calibration in Multiple-Camera Surveillance Environments [A]. In Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance [C]. Denmark,2002:40-47.
    [163]Qi Meibin. Research on several issues in multi-camera cooperative distributed intelligent visual surveillance [D]. Ph.D. dissertation, Hefei University of Technology,2007.