Research on Techniques for Tracking Occluded Targets in Intelligent Video Surveillance (智能视频监控中的遮挡目标跟踪技术研究)
Abstract
In recent years, intelligent video surveillance has attracted increasing attention from researchers, yet its development is held back by several difficulties, one of which is the tracking of occluded targets. In single-camera surveillance, mutual occlusion between targets is a common phenomenon caused by the viewing angle and other factors. Occlusion strongly degrades the accuracy of tracking algorithms and can even compromise the practical use of a surveillance system. This thesis therefore focuses on the problem of tracking occluded targets.
     This thesis models the target-tracking problem with Bayesian theory and derives from that model the expressions for four tracking cases: a target entering the scene, a target leaving the scene, single-target tracking, and occluded-target tracking.
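
     The derivation itself is not reproduced in the abstract; as a point of reference, the generic recursive Bayesian formulation that such tracking models are usually built on can be written as follows (an assumed standard form, not the thesis's specialised expressions):

% Recursive Bayesian estimate of the target state x_t given observations z_{1:t};
% the thesis derives specialised forms of such a posterior for targets that enter
% the scene, leave it, are tracked alone, or are tracked under occlusion.
p(x_t \mid z_{1:t}) \;\propto\; p(z_t \mid x_t) \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, \mathrm{d}x_{t-1}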
     In multi-target tracking, the difficulty of tracking a target varies greatly with the state of the target itself and with the relations between targets. To measure this difficulty, the thesis proposes an intrackability theory and describes three forms of the definition: the intrackability of all targets over the whole image sequence, the intrackability of all targets between adjacent frames, and the intrackability of a single target between adjacent frames. A simplified way of computing intrackability is also given, and experiments verify four factors that influence it: the number of targets in the scene, target resolution, target velocity, and the distinguishability of the tracking features. Guided by intrackability theory, two methods for tracking occluded targets are obtained: automatic grouping of occluded targets and dynamic feature selection during occluded-target tracking.
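
     The thesis's formal definition of intrackability is not given in the abstract. The sketch below is only one plausible, entropy-style ambiguity measure written for illustration, assuming each target and each candidate region in the next frame is summarised by a normalised feature histogram; the function name and representation are hypothetical, not the thesis's definitions.

import numpy as np

def intrackability(target_hist, candidate_hists):
    """Illustrative entropy-style ambiguity measure for one target between
    adjacent frames: Bhattacharyya similarities to all candidate regions in
    the next frame are normalised into a distribution, and its Shannon
    entropy is returned.  High entropy means many equally plausible matches,
    i.e. a target that is hard to track."""
    sims = np.array([np.sum(np.sqrt(target_hist * c)) for c in candidate_hists])
    p = sims / sims.sum()                    # association weights as a distribution
    return float(-np.sum(p * np.log(p + 1e-12)))

     Even in this toy form, more targets (more candidates), faster motion (a larger search region), and less distinctive features (more uniform similarities) all push the entropy up, which loosely mirrors the four influencing factors listed above.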
     Because the methods above have certain limitations, the thesis further proposes a general approach to tracking occluded targets. The approach uses the notion of occlusion layers to describe the occlusion relations between targets; it uses the appearance and velocity features of the overlapping regions to help determine those relations; it uses the occlusion relations and the target states to extract the non-occluded part of each occluded target; it describes an occluded target by the combination of the appearance and velocity features of its non-occluded part; it improves the traditional mean-shift tracking algorithm so that the target position is located from appearance histograms and velocity histograms together; and it predicts the probability of each possible scale change at the current time from the target's scale changes up to that time, then samples from this distribution to determine the scale change.
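
     A minimal sketch of a mean-shift position update driven jointly by appearance and velocity cues, in the spirit of the improvement described above; the histogram layout, the weighting scheme, and all parameter names are assumptions made for illustration rather than the thesis's implementation.

import numpy as np

def mean_shift_step(pixels, colour_bins, flow_bins,
                    colour_model, flow_model, colour_hist, flow_hist):
    """One mean-shift location update driven jointly by an appearance
    (colour) histogram and a velocity (optical-flow) histogram.
    pixels      : (N, 2) pixel coordinates inside the search window
    colour_bins : (N,) colour-histogram bin index of each pixel
    flow_bins   : (N,) flow-histogram bin index of each pixel
    *_model     : target model histograms; *_hist : current candidate histograms
    Returns the weighted mean position, i.e. the new window centre."""
    eps = 1e-12
    # Classic mean-shift weights sqrt(q_u / p_u), taken per cue and multiplied,
    # so a pixel only pulls the window if it matches both appearance and motion.
    w_colour = np.sqrt(colour_model[colour_bins] / (colour_hist[colour_bins] + eps))
    w_flow = np.sqrt(flow_model[flow_bins] / (flow_hist[flow_bins] + eps))
    w = w_colour * w_flow
    return (w[:, None] * pixels).sum(axis=0) / (w.sum() + eps)

     Coupling the two cues multiplicatively is one way to let the visible, non-occluded part of a target dominate the position estimate; the scale-change step described above could likewise be realised by sampling from a categorical distribution fitted to the recent scale changes.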
     The parameters that describe a target's state include its position, its scale, and the occlusion relations between targets. These parameters are mutually dependent, the parameter space contains both discrete and continuous variables, and its size changes with the number of targets. The thesis therefore applies the Markov chain Monte Carlo (MCMC) method to solve for the optimal states of the interacting occluded targets. During this optimisation, state-transition functions are constructed for the position, occlusion-relation, and scale parameters to accelerate convergence. As the algorithm converges, the three groups of parameters are adjusted step by step through sampling, which removes the coupling between them and drives the algorithm toward the global optimum. When the tracking algorithm converges, the state parameters of the model are taken as the optimal target states.
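
     A minimal Metropolis-style sketch of such a sampler, assuming a joint state made of per-target positions and scales plus a discrete occlusion ordering, with one randomly chosen parameter group perturbed per iteration; log_posterior, the move sizes, and the state layout are placeholders, not the thesis's transition functions.

import copy
import random
import numpy as np

def mcmc_track(state, log_posterior, n_iter=2000):
    """Metropolis sampling over a mixed discrete/continuous state:
    state = {'pos': (K, 2) array, 'scale': (K,) array, 'order': list of K ids}.
    One parameter group (position, scale or occlusion order) is proposed per
    iteration; all proposals are symmetric, so a move is accepted with
    probability min(1, posterior ratio)."""
    best, best_lp = copy.deepcopy(state), log_posterior(state)
    lp = best_lp
    for _ in range(n_iter):
        cand = copy.deepcopy(state)
        move = random.choice(['pos', 'scale', 'order'])
        k = random.randrange(len(cand['order']))
        if move == 'pos':            # jitter one target's position
            cand['pos'][k] += np.random.normal(0.0, 2.0, size=2)
        elif move == 'scale':        # jitter one target's scale, reflected at zero
            cand['scale'][k] = abs(cand['scale'][k] + np.random.normal(0.0, 0.05))
        else:                        # swap two layers in the occlusion ordering
            j = random.randrange(len(cand['order']))
            cand['order'][k], cand['order'][j] = cand['order'][j], cand['order'][k]
        cand_lp = log_posterior(cand)
        if np.log(random.random() + 1e-300) < cand_lp - lp:   # Metropolis acceptance
            state, lp = cand, cand_lp
            if lp > best_lp:
                best, best_lp = copy.deepcopy(state), lp
    return best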
     The proposed method is tested on several image sequences containing occluded targets. The experimental results show that the tracking algorithm tracks the occluded targets well in most of the selected scenarios.