Research on Particle Filter Based Video Target Tracking Methods
Abstract
With the emergence of concepts such as intelligent transportation and the safe city and safe government initiatives, intelligent video surveillance has become one of the supporting technologies of an information-driven, networked society, and accurate tracking of moving targets in video sequences is now a central research topic in this field. Tracking methods based on Bayesian inference form one of the main lines of research in visual tracking. The basic idea is to recast target tracking as a Bayesian estimation problem: given the prior probability of the target state, new observations are acquired and the maximum a posteriori estimate of the state is computed recursively. In practical video tracking, however, the posterior distribution is nonlinear, non-Gaussian and multimodal, so researchers turn to particle filtering, which has become a standard tool for video target tracking. The performance of particle filter based trackers is mainly determined by three factors: the degeneracy of the particle weights, the reliability of the observation model, and the accuracy of the motion model. Because it is essentially impossible to build an accurate motion model from noise-corrupted two-dimensional image sequences, research has concentrated on constructing reliable observation models and on mitigating weight degeneracy. Even so, many practical conditions, such as abrupt illumination changes, pose changes, partial or complete occlusion, fast motion and target maneuvers, still limit the accuracy and robustness of video target tracking. Supported by the Doctoral Program Foundation of the Ministry of Education (20106201110003), this dissertation aims to improve the accuracy and robustness of video target tracking systems in a variety of complex environments. It presents a systematic, in-depth study of particle filtering and of the fused processing of target features, and obtains the following results:
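To make the Bayesian recursion concrete, the sketch below shows a minimal bootstrap (sequential importance resampling) particle filter for a generic one-dimensional random-walk state with Gaussian process and measurement noise. It is an illustrative toy model, not the tracker developed in this dissertation; the noise levels, the diffuse prior and the resampling threshold are assumed values.

```python
import numpy as np

def bootstrap_pf(z_seq, n_particles=500, q_std=1.0, r_std=2.0, rng=None):
    """Minimal SIR particle filter for a 1-D random-walk state model.

    z_seq : sequence of scalar measurements
    q_std : assumed process-noise std, r_std : assumed measurement-noise std
    Returns the posterior-mean state estimate at every step.
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.normal(0.0, 5.0, n_particles)          # diffuse prior
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in z_seq:
        # 1. Propagate with the transition prior (the "one-step" proposal).
        particles = particles + rng.normal(0.0, q_std, n_particles)
        # 2. Reweight by the measurement likelihood p(z | x).
        weights *= np.exp(-0.5 * ((z - particles) / r_std) ** 2)
        weights += 1e-300                                   # avoid exact zeros
        weights /= weights.sum()
        # 3. Posterior-mean estimate.
        estimates.append(np.sum(weights * particles))
        # 4. Resample when the effective sample size collapses (degeneracy).
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)
```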
     1. An optimization mechanism for the proposal distribution of the particle filter is established
     The core idea of the standard particle filter (PF) is to approximate the posterior probability density of the system state by a weighted sum of samples, which avoids integrating over the state and allows the posterior state to be estimated from the sample mean. For computational simplicity, the standard PF uses the one-step transition prior as the proposal distribution; because this proposal ignores the latest observation, the particle weights degenerate. Starting from the link between the observation information and weight degeneracy, this dissertation formulates the idea, and the concrete steps, of correcting weight degeneracy with the latest observations. The sampled particles are updated with the newest measurements by a quadrature Kalman filter and by an improved chord-iteration unscented Kalman filter, yielding two improved particle filters. Applying them to moving-target tracking in video sequences noticeably improves tracking accuracy. Because the improvement follows a generic scheme, the proposed optimization mechanism for the proposal distribution can be combined with most methods that admit such optimization and therefore adapts well to other settings.
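As a rough illustration of the proposal-optimization idea, the sketch below folds the newest measurement into the proposal by running a per-particle Kalman measurement update on a linear one-dimensional model. This scalar update is only a stand-in for the quadrature and unscented Kalman updates used in the dissertation, and the assumed model and noise variances are chosen for the example.

```python
import numpy as np

def _gauss(x, mean, var):
    """Elementwise Gaussian density N(x; mean, var)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def proposal_step(particles, weights, z, q_var=1.0, r_var=4.0, rng=None):
    """One PF step with a measurement-informed Gaussian proposal.

    Illustrative 1-D sketch: a per-particle Kalman measurement update stands
    in for the unscented / quadrature Kalman updates discussed in the text.
    Assumed model: x_k = x_{k-1} + w,  z_k = x_k + v, both noises Gaussian.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Predicted mean and variance per particle under the transition model.
    m_pred, p_pred = particles, q_var
    # A Kalman gain folds the newest measurement z into the proposal.
    k = p_pred / (p_pred + r_var)
    m_prop = m_pred + k * (z - m_pred)
    p_prop = (1.0 - k) * p_pred
    # Sample from the proposal q(x_k | x_{k-1}, z_k) = N(m_prop, p_prop).
    new_particles = rng.normal(m_prop, np.sqrt(p_prop))
    # Importance correction: w *= p(z | x) p(x | x_prev) / q(x | x_prev, z).
    lik = _gauss(z, new_particles, r_var)
    prior = _gauss(new_particles, particles, q_var)
    prop = _gauss(new_particles, m_prop, p_prop)
    new_weights = weights * lik * prior / (prop + 1e-300)
    new_weights /= new_weights.sum() + 1e-300
    return new_particles, new_weights
```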
     2. An adaptive filtering method for unknown noise statistics is proposed
     Most existing work on particle filtering assumes that the statistical properties of the system noise are precisely known, but in practice they cannot be determined in advance. In particular, unpredictable and complex scenes involving abrupt illumination changes or occlusion alter the noise statistics, and this time-varying behaviour causes model mismatch, degrading the overall tracking accuracy or even causing tracking failure. To address this problem, this dissertation proposes an adaptive particle filter for the case of unknown noise statistics: a Sage-Husa estimator tracks the statistics of the unknown system noise in real time, and an unscented Kalman filter is incorporated into the measurement update of the Sage-Husa estimator, which substantially suppresses divergence of the noise-statistics estimator.
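The sketch below shows the standard Sage-Husa recursion for the measurement-noise covariance with a fading factor, written as a small helper class. The innovation, the observation matrix H and the predicted covariance P_pred are assumed to come from the surrounding (unscented) filter, and the positive-definiteness guard is an illustrative safeguard rather than the dissertation's exact scheme.

```python
import numpy as np

class SageHusaNoiseEstimator:
    """Recursive estimate of the measurement-noise covariance R.

    Sketch of the Sage-Husa recursion with fading factor b; the quantities
    fed to update() are the usual Kalman-filter innovation, observation
    matrix and predicted state covariance (assumed supplied by the filter).
    """

    def __init__(self, r_init, b=0.96):
        self.R = np.atleast_2d(np.asarray(r_init, dtype=float))
        self.b = b          # fading factor in (0, 1): recent data weigh more
        self.k = 0          # time index

    def update(self, innovation, H, P_pred):
        e = np.atleast_1d(innovation).reshape(-1, 1)
        d = (1.0 - self.b) / (1.0 - self.b ** (self.k + 1))   # step size d_k
        # R_k = (1 - d_k) R_{k-1} + d_k (e e^T - H P_pred H^T)
        self.R = (1.0 - d) * self.R + d * (e @ e.T - H @ P_pred @ H.T)
        # Guard against loss of positive definiteness (divergence guard).
        self.R = 0.5 * (self.R + self.R.T)
        min_eig = np.linalg.eigvalsh(self.R).min()
        if min_eig <= 0:
            self.R += (1e-6 - min_eig) * np.eye(self.R.shape[0])
        self.k += 1
        return self.R
```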
     3. A mechanism for adaptive particle sampling and suppression of filter divergence is established
     In general, the accuracy of a particle filter increases with the number of particles, but a large particle set is not always appropriate, since it slows the computation and limits real-time performance. Building on an analysis of how the observation residual affects sampling efficiency, this dissertation introduces the innovation, that is, the difference between the estimate and the prediction provided by the system, and uses it to adjust the number of sampled particles online. This matches a small particle set to an accurate system model, reducing the number of particles while preserving sampling accuracy. In addition, the latest measurements are used to detect a tendency of the filter to diverge, and a fading-memory factor is introduced to suppress divergence, which helps keep the system noise covariance matrices positive (semi-)definite and strengthens the accuracy and robustness of the resulting video target tracking system.
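A minimal sketch of both ideas follows: the particle count is scaled with the normalized innovation, and a simple divergence test inflates the predicted covariance with a fading-memory factor. The thresholds, the scaling rule and the test itself are hypothetical illustrations of the mechanism described above, not the dissertation's exact formulas.

```python
import numpy as np

def adapt_particle_count(innovation, S, n_min=100, n_max=2000, scale=200):
    """Choose the particle count from the size of the normalized innovation.

    Hypothetical rule of thumb: a large normalized innovation (poor agreement
    between model and measurement) asks for more particles, a small one for
    fewer. S is the innovation covariance from the filter's prediction step.
    """
    e = np.atleast_1d(innovation)
    nis = float(e @ np.linalg.solve(np.atleast_2d(S), e))  # normalized innovation squared
    n = int(n_min + scale * nis)
    return int(np.clip(n, n_min, n_max))

def divergence_guard(innovation, S, P_pred, gamma=3.0, fading=1.05):
    """Simple divergence test with a fading-memory correction.

    If the innovation grows beyond gamma times its predicted spread, the
    predicted covariance is inflated by a fading factor so that old data are
    forgotten and the filter re-opens; the test and factor are illustrative.
    """
    e = np.atleast_1d(innovation)
    if float(e @ e) > gamma * np.trace(np.atleast_2d(S)):
        P_pred = fading * P_pred
    return P_pred
```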
     4. Adaptive fusion of multiple video features is realized
     With the development of multi-source information fusion, fusing multiple target features has shown clear advantages for improving tracking accuracy and robustness. Most existing schemes, however, simply combine the feature information multiplicatively or additively. Multiplicative fusion sharpens the target's probability distribution: it increases the discriminative power of the probability density, but it suppresses the multimodality of the state distribution and amplifies system noise. Additive fusion produces the final weight adjustment factor as a weighted sum of the feature weights; it attenuates the influence of noise on the state estimate, but does little to improve the credibility of the tracking result. To overcome these shortcomings, this dissertation uses the mutual information entropy of the different sources to define a confidence measure for each feature and thereby realizes adaptive fusion of multiple video features. The method is applied to vehicle tracking in various highway scenes and markedly improves the robustness and accuracy of target tracking in complex environments.
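The sketch below illustrates confidence-weighted fusion of per-feature particle likelihoods. The confidence of each feature is taken here as one minus its normalized entropy, a simple stand-in for the mutual-information-based confidence measure of the dissertation; the feature names 'color' and 'texture' are assumed for the example.

```python
import numpy as np

def fuse_likelihoods(feature_likelihoods, eps=1e-12):
    """Adaptively fuse per-feature particle likelihoods.

    feature_likelihoods : dict mapping a feature name (e.g. 'color',
    'texture') to an array of per-particle likelihoods. Peaked (informative)
    features receive larger fusion weights; flat ones are down-weighted.
    Returns fused per-particle weights, normalized to sum to one.
    """
    names = list(feature_likelihoods)
    n = len(next(iter(feature_likelihoods.values())))
    confidences = {}
    for name in names:
        p = np.asarray(feature_likelihoods[name], dtype=float) + eps
        p = p / p.sum()
        entropy = -np.sum(p * np.log(p))
        confidences[name] = 1.0 - entropy / max(np.log(n), eps)  # 0 = flat, 1 = peaked
    total = sum(confidences.values()) + eps
    alphas = {name: c / total for name, c in confidences.items()}
    # Weighted additive fusion: a convex combination driven by the confidences.
    fused = np.zeros(n)
    for name in names:
        p = np.asarray(feature_likelihoods[name], dtype=float) + eps
        fused += alphas[name] * (p / p.sum())
    return fused / (fused.sum() + eps)

# Example usage with assumed likelihood arrays:
# w = fuse_likelihoods({'color': color_lik, 'texture': texture_lik})
```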
     The work in this dissertation extends the conditions under which particle filter theory can be applied and broadens its range of applications, improving the robustness and accuracy of target tracking methods in intelligent surveillance systems operating in a variety of complex environments.