Studies on Real-Time Visual Object Tracking in Complex Scenes
Abstract
Vision-based object tracking has long been an important and challenging research topic in computer vision, drawing on pattern recognition, image processing, artificial intelligence, computer applications, and many related fields. With the spread of high-performance computers and inexpensive, high-quality cameras, and with the growing demand for automatic video analysis, visual tracking algorithms have attracted increasing attention and show broad application prospects in many military and civilian areas, such as intelligent surveillance, intelligent transportation, precision guidance, and intelligent medical diagnosis. Over the past few decades many excellent tracking algorithms and effective new theories have been proposed. However, because ordinary video sequences are disturbed by intrinsic factors (e.g., scale, pose, and shape changes) and extrinsic factors (e.g., partial or full occlusion, illumination change, motion blur, and background clutter), designing a general-purpose tracking system that is simultaneously real-time, robust, accurate, and stable enough to meet practical needs remains a major challenge.
     To address these problems, this dissertation analyzes traditional visual object tracking methods in detail and, guided by generative-model and discriminative-model theory and informed by current international research trends and practical application requirements, proposes several new ideas and methods. The work focuses on single-object tracking in complex scenes captured by a single, possibly moving, camera, with the aim of enriching visual tracking theory and improving the robustness and accuracy of tracking. The main contents and contributions of this dissertation are as follows:
     (1) Building on the discriminative model, we propose a variable-weight real-time visual tracking system within a co-training framework. Improving the classification performance of the classifier is the main goal of discriminative tracking, and extracting effective features to describe the target appearance is the key to building that classifier. We therefore introduce an effective feature selection strategy into the compressive tracker to remove redundant information: the AnyBoost functional gradient descent method is used to select the most discriminative features for constructing the classifier. Building the classifier requires positive and negative samples; if all samples are given the same weight without regard to their importance, classification performance degrades to some degree, so we propose an effective sample-weighting method that brings sample importance into the online learning procedure of the discriminative tracker. Self-learning methods model the tracker with only one kind of feature; when the target appearance cannot be represented effectively, or the tracker makes a small error, the errors accumulate over time and cause drift and tracking failure. We therefore use feature fusion to build a co-training framework: two classifiers are constructed independently from grayscale features and local binary pattern (LBP) texture features, learn from and update each other, and their outputs are combined by weighting to obtain the optimal tracking result (a minimal sketch of this weighted fusion is given below).
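The following sketch illustrates the kind of weighted fusion described above; it is not the dissertation's implementation. It assumes each of the two co-trained classifiers (grayscale-feature and LBP-feature based) returns one confidence score per candidate window, and it derives the fusion weights from how peaked each classifier's response is, which is one simple choice among many. All names are hypothetical and only NumPy is used.

```python
import numpy as np

def fuse_confidences(conf_gray, conf_lbp, eps=1e-8):
    """Fuse the confidence scores of two co-trained classifiers.

    conf_gray, conf_lbp: one score per candidate window, produced by the
    grayscale-feature and LBP-feature classifiers respectively.
    Returns the index of the best candidate and the fused scores.
    """
    def normalise(c):
        # Rescale each classifier's scores to [0, 1] so they are comparable.
        c = np.asarray(c, dtype=float)
        return (c - c.min()) / (c.max() - c.min() + eps)

    g, t = normalise(conf_gray), normalise(conf_lbp)

    # Simple adaptive weights: a classifier whose response is more peaked
    # (larger gap between its best and its mean score) is trusted more.
    w_g = g.max() - g.mean()
    w_t = t.max() - t.mean()
    total = w_g + w_t + eps
    w_g, w_t = w_g / total, w_t / total

    fused = w_g * g + w_t * t   # weighted fusion of the two classifiers' outputs
    return int(np.argmax(fused)), fused

# Toy example: five candidate windows scored by the two classifiers.
best, scores = fuse_confidences([0.2, 0.9, 0.4, 0.3, 0.1],
                                [0.3, 0.7, 0.8, 0.2, 0.1])
print(best, scores)
```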
     (2) Building on the generative model, we propose a real-time visual tracking system based on partial least squares and sparse learning (PLSSL) within a particle filter framework. An effective representation of the target appearance is decisive for the tracking system. Conventional generative trackers model the appearance using only foreground information and ignore the role of the background. We propose an effective appearance modeling method that uses partial least squares (PLS) to exploit both foreground and background information and construct a discriminative feature subspace, and then introduces sparse representation in this subspace to model the target appearance. Compared with conventional trackers based on sparse representation, the method has lower computational complexity, so it can process high-resolution images in real time and handle sequences with drastic appearance changes and cluttered backgrounds. Occlusion is one of the most common and most difficult problems in visual tracking. We further propose an effective occlusion detection mechanism: within the sparse representation framework, the "trivial" templates associated with the sparse solution are used to detect occlusion in real time, and detected occlusions or outliers are handled online (a minimal sketch of this detection rule is given below).
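As a rough illustration of the occlusion handling described above, the sketch below decomposes a candidate observation into a component explained by a low-dimensional appearance subspace and a sparse "trivial" error term, and flags occlusion when too many trivial coefficients are significantly non-zero. The interface, the thresholds, and the use of scikit-learn's Lasso as a stand-in L1 solver are assumptions for illustration, not the dissertation's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def detect_occlusion(y, U, lam=0.01, occ_ratio=0.3):
    """Sparse decomposition y ~ U @ a + e with trivial templates.

    y : observed candidate patch, flattened to a 1-D vector.
    U : columns spanning the (PLS-style) low-dimensional appearance subspace.
    The identity matrix plays the role of the trivial templates, so the
    coefficients attached to it form the sparse error / occlusion term e.
    """
    d = y.shape[0]
    D = np.hstack([U, np.eye(d)])              # [subspace | trivial templates]
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)
    a = lasso.coef_[:U.shape[1]]               # subspace coefficients
    e = lasso.coef_[U.shape[1]:]               # trivial-template coefficients
    # Declare occlusion when a large fraction of trivial coefficients is active.
    occluded = np.mean(np.abs(e) > 1e-3) > occ_ratio
    return occluded, a, e

# Toy example: a 100-dimensional patch and a 10-dimensional subspace.
rng = np.random.default_rng(0)
U = rng.standard_normal((100, 10))
y = U @ rng.standard_normal(10)
y[:40] += 5.0                                  # simulate a partial occlusion
print(detect_occlusion(y, U)[0])
```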