Research on Pedestrian Tracking Algorithms Based on Particle Filtering (基于粒子滤波的行人跟踪算法研究)
Abstract
Object tracking has a wide range of applications in video surveillance, intelligent human-computer interaction, robot visual navigation, intelligent transportation, behavior analysis, medical diagnosis, and other fields. In most tracking scenarios pedestrians are the main targets, so pedestrian tracking is of significant research and application value within object tracking. However, because human motion is largely unconstrained and is frequently affected by illumination changes, posture variation, cluttered backgrounds, and occlusion, pedestrian tracking in complex environments still faces many problems. Existing algorithms lack observation models robust enough to describe pedestrian features across such environments, which limits their applicability in the real world and prevents accurate, robust tracking. In addition, research on multi-pedestrian tracking remains comparatively scarce: most existing algorithms track pedestrians only against a static background, and few handle a moving background. Finally, current pedestrian tracking research usually relies on a single video source, which suffers from a narrow field of view, limited information, and inter-pedestrian occlusion caused by the viewing angle, making it difficult to track all pedestrians accurately in crowded scenes.
Since the particle filter can handle arbitrary non-linear, non-Gaussian systems, which describe tracking problems in real scenes more accurately, this dissertation proposes pedestrian tracking algorithms based on particle filter theory to address the above issues. The main research content includes the following aspects:
1) A multi-feature pedestrian tracking algorithm based on a simulated annealing particle swarm particle filter is proposed, which optimizes particle sampling and improves the robustness of the observation model through multi-feature fusion. Starting from single-pedestrian tracking, the dissertation tackles the loss of particle diversity in the particle filter: drawing on the similarity between particle swarm optimization and particle filtering, and on the improvement simulated annealing brings to the swarm's global-best update, a simulated annealing particle swarm step is used to optimize the sampled particles. The state-space model is then refined for single-pedestrian tracking in video sequences to strengthen the particles' ability to follow the target. Because an observation model built on a single pedestrian feature has limited discriminative power under complex environments, cluttered backgrounds, and noise, the proposed observation model combines three mutually complementary feature cues and adjusts their weights adaptively, improving the algorithm's ability to distinguish the target during observation. Compared with single-feature tracking, the algorithm improves both accuracy and stability and still obtains good results under target translation, posture change, complex backgrounds, and partial occlusion.
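A rough sketch, in Python/NumPy, of how the sampling-optimization idea above can sit inside one particle filter cycle: sampled particles are nudged toward high-likelihood regions by a particle swarm update whose global best is accepted through a simulated-annealing test, then weighted and resampled. The toy Gaussian fitness stands in for the fused multi-feature observation model, and all swarm, annealing, and noise parameters are illustrative placeholders rather than values from the dissertation.

```python
import numpy as np

def sa_pso_refine(particles, fitness, iters=5, w=0.6, c1=1.5, c2=1.5,
                  temp=1.0, cooling=0.8, rng=None):
    """Nudge sampled particles toward high-likelihood regions with a particle
    swarm update whose global best is accepted via a simulated-annealing test,
    counteracting the diversity loss caused by resampling."""
    rng = rng or np.random.default_rng()
    x = particles.astype(float).copy()
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), fitness(x)
    g_i = int(np.argmax(pbest_f))
    gbest, gbest_f = pbest[g_i].copy(), float(pbest_f[g_i])
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = fitness(x)
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        # Simulated-annealing acceptance of a candidate global best: a worse
        # candidate may still be accepted while the temperature is high, which
        # keeps the swarm from collapsing onto a single mode too early.
        i = int(rng.integers(len(x)))
        if f[i] > gbest_f or rng.random() < np.exp((f[i] - gbest_f) / temp):
            gbest, gbest_f = x[i].copy(), float(f[i])
        temp *= cooling
    return x

def pf_step_with_sa_pso(particles, fitness, process_std=2.0, rng=None):
    """One particle filter cycle: predict, refine the sampled particles with
    sa_pso_refine, weight them by the observation likelihood, and resample."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    particles = particles + rng.normal(0.0, process_std, particles.shape)  # predict
    particles = sa_pso_refine(particles, fitness, rng=rng)                 # optimize sampling
    weights = fitness(particles) + 1e-300                                  # weight
    weights /= weights.sum()
    estimate = np.average(particles, weights=weights, axis=0)
    idx = rng.choice(n, size=n, p=weights)                                 # resample
    return particles[idx], estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = np.array([50.0, 80.0])        # toy "true" pedestrian position
    def fitness(p):
        # Toy Gaussian likelihood; the dissertation uses the fused
        # multi-feature observation model in this role.
        return np.exp(-0.5 * np.sum((p - target) ** 2, axis=1) / 10.0 ** 2)
    particles = rng.normal(0.0, 30.0, (200, 2))
    for _ in range(3):
        particles, est = pf_step_with_sa_pso(particles, fitness, rng=rng)
    print("estimated position:", est)
```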
2) A multi-pedestrian video tracking algorithm based on a discriminative model is proposed for both static and moving backgrounds. A bag-of-features approach is introduced to convert a large feature set into a smaller feature dictionary and bags, effectively reducing the complexity of extracting features for multiple pedestrians. Tracking multiple pedestrians is more complex than tracking a single one, and using many features per pedestrian makes the computation too heavy to be practical. The dissertation therefore adopts the computationally simple, low-complexity bag-of-features algorithm and combines it with superpixel segmentation and local binary pattern (LBP) block features to build a discriminative model that identifies the pedestrians in a video sequence. Unlike conventional trackers, the proposed algorithm provides two detection methods in the detection stage, one for static and one for moving backgrounds, which broadens its applicability; it also handles the mutual occlusion that frequently occurs between pedestrians, preventing the drift and target loss that occlusion causes. Experimental results show that the proposed tracker is stable and robust under target translation, occlusion, interference between pedestrians, changes in illumination and walking speed, and distraction by similar objects.
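The sketch below illustrates the kind of compact descriptor such a discriminative model can rest on: basic 8-neighbour LBP codes, pooled block by block and quantized against a feature dictionary into a bag-of-features histogram. The block size, dictionary size, and nearest-prototype quantization are assumptions for illustration, and the superpixel stage is omitted.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern codes for a grayscale image.

    Each interior pixel gets an 8-bit code, one bit per neighbour that is
    at least as bright as the centre pixel.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((neigh >= c).astype(np.int32) << bit)
    return code

def bag_of_features(patch, codebook, block=8):
    """Describe a pedestrian patch as a bag of quantized LBP block histograms.

    codebook: (K, 256) array of prototype block histograms (the feature
    "dictionary", e.g. learned offline with k-means). Returns a K-bin
    normalized histogram of codeword occurrences, the compact descriptor
    fed to the discriminative model.
    """
    codes = lbp_image(patch)
    counts = np.zeros(len(codebook))
    for y in range(0, codes.shape[0] - block + 1, block):
        for x in range(0, codes.shape[1] - block + 1, block):
            hist = np.bincount(codes[y:y + block, x:x + block].ravel(),
                               minlength=256).astype(float)
            hist /= hist.sum()
            counts[np.argmin(np.linalg.norm(codebook - hist, axis=1))] += 1
    return counts / max(counts.sum(), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    patch = rng.integers(0, 256, (64, 32))           # fake 64x32 pedestrian patch
    codebook = rng.dirichlet(np.ones(256), size=50)  # fake 50-word dictionary
    print(bag_of_features(patch, codebook).shape)    # -> (50,)
```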
3) To cope with occlusion among pedestrians in crowded scenes, a multi-pedestrian tracking algorithm based on video images and laser point clouds is proposed, combining the feature information in the video with the three-dimensional information of the point cloud. In crowded scenes a single video source loses tracking information when pedestrians occlude one another, so it is difficult to track every pedestrian accurately. The algorithm first interprets, classifies, and detects the laser point cloud data to obtain classification parameters with good detection performance and to separate pedestrian and vehicle points; it then fuses the video images with the point cloud through calibration and analyzes the fused regions of interest using the complementarity of the two data sources; finally, it combines the pedestrian detections with a particle filter tracker, associating particle states with detections and handling the appearance and disappearance of pedestrian targets, thereby achieving pedestrian tracking in relatively dense scenes. Experimental results show that fusing the two data sources improves pedestrian detection, and that the multi-pedestrian tracker built on this fusion handles mutual occlusion and newly appearing targets with good tracking performance.
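A minimal sketch of the calibration-based fusion step, assuming known camera intrinsics K and lidar-to-camera extrinsics (R, t): lidar returns are projected into the image plane and gated by an image-space pedestrian detection box to form a fused region of interest. The calibration values, point cloud, and detection box below are placeholders, not data from the dissertation.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project lidar points (N, 3) into pixel coordinates using the camera
    intrinsics K and the lidar-to-camera extrinsics (R, t); points behind
    the camera are dropped."""
    cam = points_3d @ R.T + t              # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]             # keep points in front of the camera
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]          # perspective division

def points_in_box(uv, box):
    """Indices of projected points that fall inside a pedestrian bounding box
    (x_min, y_min, x_max, y_max) produced by the image-based detector."""
    x0, y0, x1, y1 = box
    return np.where((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
                    (uv[:, 1] >= y0) & (uv[:, 1] <= y1))[0]

if __name__ == "__main__":
    # Placeholder calibration: identity rotation, zero translation, and a
    # simple pinhole intrinsic matrix for a 640x480 image.
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    rng = np.random.default_rng(3)
    cloud = rng.uniform([-2.0, -1.0, 4.0], [2.0, 1.0, 12.0], size=(1000, 3))
    uv = project_points(cloud, K, R, t)
    roi = points_in_box(uv, (250, 150, 400, 330))
    print(f"{len(roi)} lidar returns fall inside the detection box")
```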
