Research on Moving Object Tracking Algorithms Based on Computer Vision
Abstract
With the rapid development of computer theory, technology, and applications, video image processing and computing power have improved dramatically, making computer vision one of the most active research topics in computer science and artificial intelligence. Moving object tracking based on computer vision, an important problem in this field, studies the detection, extraction, recognition, and tracking of moving objects in video image sequences in order to obtain their motion parameters, such as position, velocity, and acceleration, as well as their trajectories; these results support further processing and analysis, enable the understanding of object behaviour, and serve higher-level tasks. As a research area with broad applications, vision-based object tracking has attracted a large number of researchers, and many research institutions abroad list it as a key research direction, with substantial results already achieved. Nevertheless, tracking technology in the general sense is still far from mature, and many problems remain to be solved before truly robust and practical tracking systems can be built.
     This thesis studies object tracking problems including image-feature-based object segmentation, template matching, and the Mean shift algorithm. In the context of real image-sequence scene analysis, it takes as its background the acquisition of the trajectory and attitude of a launch vehicle during lift-off and flight, processes and tracks the rocket target in video images captured by a high-speed camera, and investigates computer-vision-based moving object tracking algorithms for rocket tracking under different scenes.
     The thesis reviews the state of the art in computer-vision-based object tracking, discusses current methods for representing targets and criteria for selecting tracking features, classifies moving object tracking algorithms, and points out the strengths and weaknesses of each class. It also examines theoretical frameworks for computer vision and, within Marr's computational theory framework, lays out the approach taken in this work. Particular attention is paid to an in-depth analysis of the target and background in the rocket-tracking scenes considered here, from which the difficult problems in computer-vision-based rocket tracking are identified.
     For the rocket tracking problem, several edge detection operators are compared on rocket image sequences, and the Roberts operator is chosen to detect the rocket's edges. The maximum between-class variance (Otsu) method is improved according to the grey-level distribution of the target, which raises both the accuracy and the real-time performance of rocket segmentation. A directional nonlinear filtering method is proposed to remove background edges, effectively suppressing the interfering edges in the rocket edge image. Simulation results show that the algorithm segments the rocket target well.
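     As a rough illustration of this segmentation pipeline, the sketch below combines a Roberts cross operator, Otsu thresholding, and a vertical morphological opening as a simple stand-in for the thesis's directional nonlinear filter. It assumes OpenCV and NumPy; the kernel sizes are illustrative and not the values used in the thesis.

```python
# Minimal sketch: Roberts edges + Otsu threshold + vertical opening.
# Assumptions: 'gray' is a single-channel uint8 frame; OpenCV + NumPy available.
import cv2
import numpy as np

def roberts_edges(gray):
    """Roberts cross operator via 2x2 difference kernels."""
    kx = np.array([[1, 0], [0, -1]], dtype=np.float32)
    ky = np.array([[0, 1], [-1, 0]], dtype=np.float32)
    gx = cv2.filter2D(gray.astype(np.float32), -1, kx)
    gy = cv2.filter2D(gray.astype(np.float32), -1, ky)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return cv2.convertScaleAbs(mag)          # back to uint8 edge magnitude

def segment_rocket(gray):
    """Otsu threshold on the edge magnitude, then keep mostly vertical edges
    (a crude stand-in for the thesis's directional nonlinear filter)."""
    edges = roberts_edges(gray)
    _, mask = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 7))   # 1 wide, 7 tall
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```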
     To track the rocket stably while its apparent size and attitude change during flight, a multi-related-template matching strategy is proposed: affine transforms generate multiple related templates from the optimal template of the previous frame according to scale ratio and rotation angle, so that the templates adapt to changes in the rocket's size and attitude and the matching accuracy improves. A Kalman filter predicts the rocket's trajectory, and the predicted target position determines the region of the image to be matched, which effectively reduces the time complexity of the algorithm and improves its real-time performance. Simulation results show that the algorithm achieves good matching accuracy and real-time performance and is robust to changes in target state and size and to occlusion.
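     The sketch below only illustrates the two ideas of this chapter under stated assumptions: related templates are generated from the previous frame's optimal template by affine scaling and rotation, and matching is restricted to a window around a Kalman-predicted position. The scale/angle steps, window size, and the helper names related_templates and match_in_window are hypothetical, not taken from the thesis.

```python
# Minimal sketch: multi-related templates + search window around a predicted position.
# Assumptions: 'frame' and 'best_template' are same-dtype grayscale images;
# 'predicted_xy' comes from a constant-velocity Kalman filter maintained elsewhere.
import cv2
import numpy as np

def related_templates(best_template, scales=(0.95, 1.0, 1.05), angles=(-3, 0, 3)):
    """Generate related templates by affine scaling and rotation of the previous
    frame's optimal template (illustrative step sizes)."""
    h, w = best_template.shape[:2]
    center = (w / 2.0, h / 2.0)
    out = []
    for s in scales:
        for a in angles:
            M = cv2.getRotationMatrix2D(center, a, s)
            out.append(cv2.warpAffine(best_template, M, (w, h)))
    return out

def match_in_window(frame, templates, predicted_xy, win=80):
    """Search only around the predicted position and keep the best related template."""
    x, y = int(predicted_xy[0]), int(predicted_xy[1])
    roi = frame[max(0, y - win):y + win, max(0, x - win):x + win]
    return max(
        ((cv2.minMaxLoc(cv2.matchTemplate(roi, t, cv2.TM_CCOEFF_NORMED))[1], t)
         for t in templates
         if roi.shape[0] >= t.shape[0] and roi.shape[1] >= t.shape[1]),
        key=lambda p: p[0], default=(None, None))   # (score, template)
```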
     Based on a study of the Mean shift algorithm and of the rocket's flight characteristics, a rocket tracking algorithm combining Mean shift with frame differencing is proposed as an improvement of Mean shift. Frame differencing extracts the rocket's motion region, and Mean shift then tracks the target precisely within that region. Simulation results show that the algorithm tracks the rocket effectively and largely eliminates the accumulation of tracking error during tracking.
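     A minimal sketch of this combination, assuming OpenCV 4: frame differencing localises the moving region, and Mean shift on a hue back-projection refines the target window inside it. The difference threshold and histogram settings are illustrative, and the target hue histogram is assumed to have been built from the first frame.

```python
# Minimal sketch: frame differencing to find the motion window, Mean shift to refine it.
# Assumptions: OpenCV 4 (two-value findContours); 'target_hist' is a normalised hue
# histogram of the rocket built beforehand with cv2.calcHist.
import cv2
import numpy as np

def motion_window(prev_gray, gray, diff_thresh=25):
    """Frame differencing: bounding box of the largest moving region, or None."""
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return None
    return cv2.boundingRect(max(cnts, key=cv2.contourArea))   # (x, y, w, h)

def mean_shift_refine(frame_bgr, window, target_hist):
    """Run Mean shift on the hue back-projection, starting from the motion window."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], target_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, track_window = cv2.meanShift(backproj, window, criteria)
    return track_window
```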
     In the Mean shift tracking framework, representing the rocket with a single fixed image feature cannot adapt the choice of the best tracking feature to the scene, and a feature template that is not updated as the tracking environment changes often drifts and causes tracking failure. To address this, an online adaptive multi-feature fusion algorithm and an adaptive template update mechanism are proposed. The colour, edge, and texture features of the rocket are fused to represent the target, and a similarity function between consecutive frames drives the adaptive update of the tracking template. Experiments show that the algorithm tracks the rocket well against complex backgrounds.
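     The sketch below only approximates this chapter's method: it fuses colour, edge, and texture histograms with fixed illustrative weights (whereas the thesis derives the weights online from target/background separability) and updates the template only when the Bhattacharyya similarity between consecutive frames is high, which is one simple way to guard against drift.

```python
# Minimal sketch: fused colour/edge/texture descriptor + guarded template update.
# Assumptions: 'patch_bgr' is the current target patch; weights and thresholds are
# illustrative, not the thesis's online values.
import cv2
import numpy as np

def fused_histogram(patch_bgr, weights=(0.5, 0.3, 0.2)):
    """Concatenate weighted hue, edge-magnitude and gradient-orientation histograms."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    hue_h = cv2.calcHist([hsv], [0], None, [32], [0, 180]).ravel()
    edge_h = cv2.calcHist([cv2.Laplacian(gray, cv2.CV_8U)], [0], None, [32], [0, 256]).ravel()
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    tex_h, _ = np.histogram(np.arctan2(gy, gx), bins=32, range=(-np.pi, np.pi))
    feats = [hue_h, edge_h, tex_h.astype(np.float32)]
    fused = np.concatenate([w * f / (f.sum() + 1e-6) for w, f in zip(weights, feats)])
    return fused / (fused.sum() + 1e-6)

def update_template(template_hist, current_hist, thresh=0.8, rate=0.1):
    """Blend the template toward the current descriptor only when consecutive
    frames are similar enough (Bhattacharyya coefficient)."""
    similarity = np.sum(np.sqrt(template_hist * current_hist))
    if similarity > thresh:
        template_hist = (1 - rate) * template_hist + rate * current_hist
        template_hist /= template_hist.sum() + 1e-6
    return template_hist
```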
     Finally, the research work of the thesis is summarised and directions for further study are pointed out.
With the rapid development of computer theory, technology, and applications, video image processing capacity and computing performance have been greatly improved, making computer vision one of the most popular research topics in computer science and artificial intelligence. Moving object tracking based on computer vision is an important problem in this field: it detects, extracts, recognises, and tracks moving objects in video image sequences and obtains motion parameters such as position, velocity, acceleration, and trajectory; further processing and analysis of these parameters supports behaviour understanding and higher-level tasks. As a widely applied technology, vision-based object tracking has attracted many researchers, and many institutions treat it as an important research field and have achieved remarkable results. However, it is still far from mature, and a great number of problems must be solved before a robust and practical tracking system can be developed.
     This thesis focuses on feature-based object segmentation, template matching, and Mean shift object tracking. For real image-sequence scene analysis, the research background is the acquisition of a rocket's trajectory and flight attitude during launch and flight; rocket images captured by a high-speed camera are processed and tracked, and rocket tracking under different scenes is studied in depth.
     The thesis discusses the current research on computer-vision-based object tracking, including object representation methods and criteria for choosing tracking features. It classifies moving object tracking algorithms and points out their advantages and disadvantages. It introduces and analyses the theoretical framework of computer vision and, based on Marr's computational theory, presents the approach of this work. It focuses on analysing the characteristics of the rocket and the background in the rocket-tracking scene, and on that basis identifies the difficulties in computer-vision-based rocket tracking.
     To solve the rocket tracking problem, the thesis compares edge detection results obtained with different operators and chooses the Roberts operator to detect the rocket's edges. An improved Otsu segmentation algorithm based on the grey-level distribution is proposed, which enhances the accuracy and real-time performance of rocket segmentation. To remove the interfering edges in the edge image, a directional nonlinear filter is designed to eliminate background edges. Simulation results show that the proposed method segments the rocket effectively.
     The thesis proposes a multi-related-template matching strategy to overcome the weakness of traditional template matching, which cannot track a target stably over long periods because the template is not adapted to changes in the rocket's size and attitude. Through affine transformation, multiple related templates are generated from the optimal template of the previous frame according to scale and rotation, so that they match the size and attitude of the rocket and the matching accuracy is enhanced. To improve real-time performance, a Kalman filter estimates the rocket's trajectory, and the prediction effectively reduces the time complexity of the algorithm. Simulations show that the algorithm achieves good matching accuracy and real-time performance and is robust to changes in target size and attitude as well as to occlusion.
     The thesis proposes an improved Mean shift tracker that combines the standard Mean shift algorithm with frame differencing, based on a study of Mean shift and of the rocket's flight characteristics. Frame differencing extracts the rocket's motion region and gives its approximate location; Mean shift then achieves accurate tracking within that region. Simulations show that the algorithm tracks the rocket effectively and largely eliminates the accumulation of tracking error during tracking.
     The thesis proposes an online multi-feature fusion algorithm with an adaptive template update mechanism to address two defects: a single image feature in the Mean shift framework cannot adapt the choice of the best tracking feature to the scene, and a template that does not adapt to the tracking environment drifts and causes tracking failure. The algorithm takes the feature with the greatest difference between rocket and background as the best tracking feature: rocket and background are treated as two classes, and a function measures the separability between them. To obtain a complete target description, the different features are combined with proportional weights. A similarity function between consecutive frames drives the adaptive update of the tracking template. Simulations show that the algorithm tracks the rocket effectively against complex backgrounds.
     Finally, the thesis summarises the research work discussed above and points out directions for further research in this field.