Research on Moving Object Segmentation and Motion Estimation Techniques for Content-Based Video Applications
Abstract
With the rapid development of information and Internet technologies, the volume of multimedia data is growing ever faster, and video data in particular is growing explosively. How to manage and exploit these massive video collections effectively has become an active research topic, and users' needs have shifted from simple playback toward content-based access, retrieval, manipulation, and analysis. Among emerging video applications, content-based applications such as content-based video coding and video retrieval are developing rapidly. A key problem in these applications is how to obtain video objects, i.e., the video object segmentation problem. Video object segmentation is the crucial step from video processing to video analysis, and given its importance in object-based video applications, this thesis studies it in depth. Segmentation makes full use of motion features, which capture the temporal correlation of video, to achieve more effective and accurate results; motion features are also among the features most frequently used in object-based video applications. Motion estimation, the means by which motion features are obtained, is therefore an important research topic in its own right, and with an eye to the motion features required by the segmentation work, this thesis also studies motion estimation methods in depth.
     Chapter 2 first surveys the current development of video object segmentation, analyzes the main classes of segmentation algorithms, and examines spatio-temporal fusion algorithms in depth. For the motion estimation needed by the segmentation work, optical flow methods have received wide attention for their good performance. This thesis summarizes current optical flow algorithms, studies their theoretical foundations and models further, and points out open problems in the model theory as well as in the evaluation of optical flow. To make the derivation of the motion field in the segmentation algorithms easier to follow, the subsequent chapters present optical flow estimation first and the segmentation methods afterward.
     Among optical flow algorithms, global methods enforce global smoothness and thus exhibit a "filling-in" effect, allowing them to produce dense flow fields. The pioneering Horn-Schunck (HS) algorithm is probably the most widely used optical flow algorithm owing to its simplicity and reasonable performance. However, HS has several limitations, notably sensitivity to illumination changes, unreliability of the local averaging that realizes the smoothing and filling-in effect, and blurred motion boundaries. Chapter 3 extends the traditional HS algorithm. Since illumination in an image sequence varies only slightly over very short time intervals, a simple entity-based illumination prefiltering method (EIPF) is proposed, which slightly adjusts the pixel brightness of the frames within such an interval so that the optical flow model satisfies the illumination-invariance condition. A confidence measure reflecting the reliability of optical flow is defined from the bidirectional symmetry of forward and backward flow, and a confidence-based optical flow algorithm is proposed: reliable flow estimates contribute more to the local averages, while unreliable ones are suppressed, yielding a reliable filling-in effect while preserving the simplicity of the iteration formula. Exploiting the complementary strengths of image-driven and flow-driven boundary preservation, a region-based method combining the two is proposed to preserve motion boundaries. Chapter 3 further uses this confidence measure to evaluate flow reliability and extends the energy-based confidence measure so that it can also evaluate flow obtained by non-energy methods.
     The above algorithms rely on the traditional optical flow equation (OFE), which is generally valid only for slow motion: its first-order linear approximation incurs large errors for fast motion, so these algorithms perform poorly on high-speed motion. To overcome this limitation, Chapter 4 analyzes the difficulties of estimating optical flow for high-speed motion and proposes a two-step optical flow method based on a compensated OFE. By predicting the flow first, the Taylor expansion can be taken near the true flow; although this prediction reduces the error, the first-order approximation remains the dominant error source, especially when the predicted flow is still far from the true flow. A second-order compensation of the OFE is therefore introduced to reduce the linearization error. Chapter 4 also analyzes non-quadratic penalty-based smoothing; since traditional non-quadratic penalties are hard to solve and complex to implement, a smoothing method is proposed that instead applies the non-quadratic penalty function to the computation of the local averages, effectively and efficiently.
     Building on these methods for obtaining the motion field (the motion features), Chapter 5 proposes an effective spatio-temporal segmentation algorithm based on feature-based motion detection with compensation and a weighted watershed. Spatio-temporal fusion algorithms, which exploit the spatial and temporal relations of video, are an effective class of segmentation methods; Chapter 5 first analyzes their technical difficulties and remaining problems. The proposed algorithm uses a block-based motion detection method with a novel feature measure, improving robustness to noise while retaining sensitivity to motion. To compensate the coarse temporal masks of moving objects for effective fusion, an edge-based morphological dilation method achieves anisotropic spatial compensation of the initial masks, and a temporal compensation method exploiting the inertia of moving objects successfully overcomes the "temporarily stopping" phenomenon. A simple and effective hole-filling method fills the holes in the masks. On the spatial side, the segmentation incorporates a form of global information to improve the accuracy of the watershed algorithm and uses a modified mean filter to suppress some minima and alleviate over-segmentation. A notable property of this spatio-temporal algorithm is that its fusion threshold can be kept fixed across different sequences.
     Chapter 5 aligns the background with a global motion compensation method based on a six-parameter affine model, which is computationally demanding. To reduce the computational cost, Chapter 6 proposes a spatio-temporal fusion segmentation algorithm based on spatio-temporal compensation, guided by the difficulties analyzed in Chapter 5. The algorithm estimates the global motion vector with an outlier-rejection-based method. To detect moving objects, each motion field is partitioned into non-overlapping blocks, and the initial object mask is obtained by comparing each block's motion vector with the global motion vector. To compensate this coarse mask, a region growing algorithm with a distance constraint performs the spatial compensation, the added distance term preventing growing points from drifting away from the corresponding object region, and a predictive temporal compensation resolves the "temporarily stopping" phenomenon. The resulting, more complete temporal mask facilitates the subsequent segmentation; the spatial segmentation and the spatio-temporal fusion reuse the corresponding algorithms of Chapter 5. Chapter 6 also designs a moving object detection algorithm for surveillance systems to obtain relatively complete temporal object masks; it effectively combines methods from the spatio-temporal fusion algorithms of Chapters 5 and 6 and ensures real-time performance to a certain extent.
With the development of information and Internet technologies, the volume of data is increasing rapidly. How to manage and utilize video data is an important research topic. Moreover, users now need content-based access, retrieval, manipulation, and analysis of video rather than simple playback. Among emerging video applications, object-based applications such as object-based coding and video retrieval are advancing significantly. A key problem in these applications is video object segmentation, the crucial step from video processing to video analysis. Because of its importance in object-based applications, this thesis investigates this problem. In object segmentation, motion features are utilized to improve segmentation accuracy; they represent the temporal correlation of video and are widely used in object-based applications. Therefore this thesis also investigates the problem of motion estimation.
     First, this thesis reviews video object segmentation algorithms and further analyzes the spatio-temporal segmentation algorithms among them. Optical flow estimation is an important class of motion estimation algorithms. This thesis reviews optical flow algorithms, conducts further research on optical flow theory and its models, and discusses the main open problems in optical flow estimation.
     Since global optical flow algorithms enforce global smoothness, which leads to the filling-in effect, they can produce dense flow fields. Among the global methods, the pioneering Horn-Schunck (HS) algorithm is probably the most popular because of its simplicity and reasonable performance. However, it suffers from illumination sensitivity, unreliable local averages, and blurred motion boundaries. The third chapter extends the HS algorithm. Since illumination varies only slightly over a short time in video sequences, a simple entity-based illumination prefiltering method is proposed that slightly adjusts the brightness of each pixel so that the model satisfies the illumination-invariance condition. A confidence measure assessing the reliability of optical flow is defined from the bidirectional symmetry of forward and backward flow, and a confidence-based optical flow algorithm is proposed in which estimates with higher reliability contribute more to the local averages than those with lower reliability. This leads to a reliable, anisotropic filling-in effect while keeping the iteration formula simple. Since image-driven and flow-driven methods have complementary advantages and limitations, a region-based method combining the two is proposed to preserve motion boundaries. Further, this confidence is adopted as a measure of the reliability of flow fields, and the energy-based confidence measure is extended to non-energy-based optical flow.
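The HS iteration with a confidence-weighted local average can be sketched as follows. This is a minimal NumPy sketch, not the thesis's implementation: the `conf` map stands in for the bidirectional-symmetry confidence measure defined above, and the derivative and averaging stencils are illustrative choices.

```python
import numpy as np

def local_avg(f, w=None):
    """4-neighbour local mean; with weights w, reliable neighbours count more."""
    def nbrs(a):
        p = np.pad(a, 1, mode='edge')
        return p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]
    if w is None:
        return sum(nbrs(f)) / 4.0
    # confidence-weighted average: unreliable estimates are suppressed
    num = sum(nbrs(f * w))
    den = sum(nbrs(w))
    return num / np.maximum(den, 1e-8)

def horn_schunck(I1, I2, alpha=0.5, n_iter=500, conf=None):
    """Classic HS iteration; conf (optional) weights the local averages."""
    Ix = np.gradient(I1, axis=1)   # spatial derivatives from frame 1
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        ub, vb = local_avg(u, conf), local_avg(v, conf)
        # standard HS update around the (possibly weighted) local means
        d = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = ub - Ix * d, vb - Iy * d
    return u, v
```

With a uniform confidence map the weighted average reduces exactly to the plain HS average, so the extension preserves the simplicity of the original iteration formula.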
     The traditional OFE usually applies only to the estimation of slow motion; for high-speed motion its first-order approximation incurs a large error. The fourth chapter analyzes the limitations of the traditional OFE and proposes a two-step algorithm based on a compensated OFE to estimate high-speed motion. By predicting the optical flow, the model is expanded in a Taylor series around predicted locations near the true locations, which reduces the approximation error. Furthermore, a second-order term compensates the OFE to reduce the error further. The chapter also analyzes the non-quadratic smoothness method: since an optical flow algorithm that uses the non-quadratic penalty directly has great complexity and heavy computational cost, the non-quadratic penalty is applied in the computation of the local averages instead. Thus the proposed discontinuity-preserving method benefits from the non-quadratic penalty while keeping its implementation simple.
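The prediction and compensation steps above both follow from the Taylor expansion of the brightness-constancy assumption; a sketch, with $(u_0, v_0)$ denoting the predicted flow from the first step and primed derivatives taken at the warped location:

```latex
% Brightness constancy between consecutive frames:
I(x+u,\, y+v,\, t+1) = I(x,\, y,\, t)

% First-order Taylor expansion gives the traditional OFE,
% accurate only for small (slow) displacements:
I_x u + I_y v + I_t = 0

% Two-step scheme: expand around the predicted flow (u_0, v_0),
% so only the residual (\Delta u, \Delta v) = (u - u_0,\, v - v_0)
% needs to be small:
I_x' \,\Delta u + I_y' \,\Delta v
  + \bigl[\, I(x+u_0,\, y+v_0,\, t+1) - I(x,\, y,\, t) \,\bigr] = 0

% Second-order compensation retains the quadratic terms of the
% expansion to reduce the remaining linearization error:
I_x u + I_y v + I_t
  + \tfrac{1}{2}\bigl( I_{xx} u^2 + 2\, I_{xy}\, u v + I_{yy} v^2 \bigr) = 0
```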
     Spatio-temporal segmentation is an effective segmentation approach. The fifth chapter first analyzes this class of methods and then proposes an efficient spatio-temporal segmentation scheme for extracting moving objects from video sequences. To localize moving objects, a block-based motion detection method using a novel feature measure detects changed regions. These regions are coarse and need accurate spatial compensation, so an edge-based morphological dilation method achieves their anisotropic expansion. Furthermore, to solve the temporarily-stopping problem of moving objects, the inertia of moving objects is considered in the temporal segmentation. The spatial segmentation, based on the watershed algorithm, considers global information to improve the accuracy of the boundaries, and a novel mean filter suppresses some minima to reduce over-segmentation. A fusion of the spatial and temporal segmentation results produces complete moving objects faithfully. Compared with the reference algorithms, the fusion threshold in this scheme can be fixed for different sequences.
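The block-based motion detection idea can be sketched as below. The statistic and threshold are illustrative stand-ins for the thesis's novel feature measure; what the sketch shows is only the principle that pooling the frame difference over blocks trades per-pixel sensitivity for robustness to noise.

```python
import numpy as np

def block_motion_mask(f1, f2, block=8, thresh=10.0):
    """Mark a block as 'changed' when the mean absolute frame
    difference inside it exceeds a threshold."""
    h, w = f1.shape
    diff = np.abs(f2.astype(float) - f1.astype(float))
    mask = np.zeros((h, w), dtype=bool)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = diff[by:by + block, bx:bx + block]
            if blk.mean() > thresh:  # block-level decision, not per-pixel
                mask[by:by + block, bx:bx + block] = True
    return mask
```

The resulting coarse block mask is exactly the kind of initial temporal mask that the spatial compensation (morphological dilation) and the watershed-based spatial segmentation then refine.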
     To reduce the complexity of segmentation, the sixth chapter further investigates spatio-temporal segmentation and proposes a spatio-temporal compensation based segmentation algorithm. The temporal segmentation localizes moving objects by comparing the motion vector of each block in each frame with the corresponding global motion vector; to estimate the global motion vector accurately, an outlier rejection (OR) based method is presented. Furthermore, a temporal compensation exploiting the temporal coherence of moving objects solves the temporarily-stopping problem. The detected moving object regions usually have discontinuous boundaries and some holes, so a region growing algorithm with a distance constraint compensates the coarse object regions in the spatial domain, and a fusion module then extracts the moving objects. Since moving object detection is an important step in video surveillance, an automatic object detection algorithm based on spatio-temporal compensation is also proposed for surveillance. It combines some of the methods proposed above, extracts moving objects as completely as possible, and has a computational cost acceptable for surveillance systems.
