Research on Color-Based Differential Optical Flow Estimation and Moving Object Detection
Abstract
Motion analysis of time-varying image sequences, aimed at determining the structure of moving objects in three-dimensional space or the relative motion parameters between an object and the observer, is a hot research topic in computer vision. When three-dimensional motion is projected onto the two-dimensional image plane it also produces motion, which appears as the flow of brightness patterns on the image plane and is called optical flow; the optical flow field is a two-dimensional velocity field. Computing the optical flow field is an important task in low-level vision.
     Research on optical flow computation has continued for nearly thirty years and has produced a large number of effective methods: differential, matching-based, energy-based, phase-based, and wavelet-based optical flow algorithms have been proposed in succession. Among them, differential optical flow algorithms are the most widely used because they have a complete mathematical foundation, are simple to implement, and offer relatively high accuracy. However, all of these methods compute the flow from gray-level information and ignore the rich color information contained in the images. Optical flow estimation is an ill-posed problem, and the additional constraints provided by color information can effectively mitigate the aperture problem. Color optical flow estimation methods fall roughly into two classes, gray-value-consistency methods and color-consistency methods, and experiments show that their estimation accuracy is better than that of the classical gray-level methods. On the whole, however, optical flow computation for color image sequences is still in its infancy, and using color information to improve existing optical flow methods remains a topic worth further study.
     Building on a review of the classical gray-level and color optical flow methods, this thesis focuses on using color information to improve the performance of traditional gray-level optical flow estimation, and also investigates moving object detection based on the optical flow field. The main work can be summarized as follows:
     First, building on optical flow methods based on higher-order gradients, a differential optical flow method based on color gradient constancy is presented. The method can also be viewed as solving for the flow from second-order gradients, except that the first-order gradient is the color vector gradient computed from the color information. Gradient constraint equations are written under the assumption of color gradient constancy, and a global smoothness constraint is imposed to solve for the flow. Numerical experiments demonstrate the effectiveness of the algorithm.
     Second, after studying the respective characteristics of the classical color optical flow algorithm and of optical flow algorithms with a global smoothness constraint, a method is presented that fuses color optical flow computation with globally smoothed gray-level optical flow computation. The reliability of the color flow solution is judged by the condition number of the constraint matrix; at positions where the color flow is unreliable it is replaced by the globally smoothed flow, yielding a hybrid flow solution.
     On the basis of the above algorithm, the thesis further studies a color local optical flow model and a color global optical flow model. The local model exploits neighborhood constraints, which improves estimation accuracy and provides some robustness to noise, while the global model yields a dense flow field but blurs motion boundaries. Following the hybrid scheme above, three hybrid optical flow models for color image sequences are proposed, and numerical experiments demonstrate their effectiveness.
     Third, the basic principles of wavelets and the problems they pose for optical flow estimation are reviewed, and the use of complex-valued wavelets to overcome phase oscillation is briefly introduced. A method is then presented that uses color information to improve the accuracy of complex wavelet optical flow estimation: the basic complex wavelet optical flow equations are extended with multi-channel color information, and a stability criterion for the flow solution is used to select the most reliable color channel for the computation. Comparative experiments demonstrate the effectiveness of the algorithm.
     Fourth, a method for detecting moving objects based on the optical flow field and level set curve evolution is presented. The method first uses the motion epipolar constraint or the normalized optical flow field to determine the number and approximate locations of the moving objects, and then segments the moving regions with K-means dynamic clustering. Because of optical flow estimation errors and clustering errors, detection based on motion information alone usually cannot produce accurate object contours, so a level set segmentation step based on spatial information is added after the motion segmentation; the curve evolution stopping function is defined with the color vector gradient, and the fast marching algorithm is used to speed up the computation. Three groups of experiments, covering single and multiple targets with static and dynamic backgrounds, demonstrate the effectiveness of the algorithm.
Determining the structure of a moving object in three-dimensional space, or the relative motion parameters between the viewer and the object, from a time-varying image sequence is a hot topic in computer vision. When 3D motion is projected onto the 2D image plane it also produces 2D motion, which appears as the flow of the brightness pattern on the image plane and is called optical flow. The optical flow field is a kind of 2D velocity field, and its computation is an important task in low-level vision.
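     For reference (standard background, not a result of the thesis), the classical differential formulation starts from the brightness constancy assumption; a first-order Taylor expansion of it gives the well-known optical flow constraint equation

         I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t) = I(x, y, t) \;\Longrightarrow\; I_x u + I_y v + I_t = 0,

     where (u, v) is the flow vector and I_x, I_y, I_t are the partial derivatives of the image brightness. A single equation in two unknowns cannot determine (u, v) at a pixel by itself, which is the aperture problem referred to below.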
     Research on optical flow computation has lasted for nearly thirty years and has produced many effective methods. Differential, matching-based, energy-based, phase-based and wavelet-based optical flow algorithms have been proposed and developed one after another. Among them, the differential methods are the most widely used because they rest on a complete mathematical theory, are easy to implement and achieve relatively high accuracy. However, all of the algorithms mentioned above are based on gray-level information only, and the color information is ignored during the computation. Optical flow estimation is an ill-posed problem, and the additional constraints supplied by color information can be used to alleviate the aperture problem. Color optical flow estimation methods can be classified into gray-value-consistency methods and color-consistency methods, and experiments show that their estimation accuracy is better than that of the gray-level methods. On the whole, optical flow estimation based on color information is still immature, and using color information to improve the performance of existing optical flow estimation methods is a valuable research topic.
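     As a sketch of why color helps (a standard formulation in the color optical flow literature, not specific to this thesis): writing one constraint equation per color channel, e.g. for an RGB image, gives an over-determined system at every pixel,

         \begin{pmatrix} R_x & R_y \\ G_x & G_y \\ B_x & B_y \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} = - \begin{pmatrix} R_t \\ G_t \\ B_t \end{pmatrix},

     which can be solved in the least-squares sense at a single pixel whenever the channel gradients are not all parallel; this is the additional constraint that mitigates the aperture problem.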
     In this thesis, we review the classical gray-level and color optical flow methods and focus on improving the performance of traditional gray-level optical flow methods by using color information. We also discuss moving object detection based on the optical flow field.
     The main contributions of this thesis are as follows:
     1. Building on optical flow methods based on higher-order gradients, a new optical flow estimation method based on color gradient constancy is proposed. The method can be regarded as a second-order optical flow method whose first-order gradient is the color vector gradient. The gradient constraint equations are derived from the assumption of color gradient constancy, and a global smoothness constraint is used to solve for the optical flow. Numerical experiments show that the method is effective.
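     A hedged sketch of the kind of formulation this describes (the thesis's exact equations may differ): writing g = (g_1, g_2) for the components of the color vector gradient and w = (u, v) for the flow, linearizing the gradient constancy assumption \nabla C(\mathbf{x} + \mathbf{w}) = \nabla C(\mathbf{x}) yields two constraint equations per pixel,

         (g_1)_x u + (g_1)_y v + (g_1)_t = 0, \qquad (g_2)_x u + (g_2)_y v + (g_2)_t = 0,

     and a Horn–Schunck-type global smoothness term is added, so the flow minimizes an energy of the form

         E(u, v) = \int \sum_{k=1,2} \big( (g_k)_x u + (g_k)_y v + (g_k)_t \big)^2 + \alpha \left( \|\nabla u\|^2 + \|\nabla v\|^2 \right) \, dx\,dy,

     where \alpha weights the smoothness constraint. This is a second-order analogue of the classical Horn–Schunck functional with the scalar gradient replaced by the color vector gradient.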
     2. Based on the characteristics of the classical color optical flow method and of the global-smoothness optical flow method, a new method that fuses color and gray-level optical flow computation is proposed. The reliability of the color optical flow is judged by the condition number of the constraint matrix, and the globally smoothed flow value is substituted wherever the color flow is unreliable, so that a hybrid optical flow field is obtained.
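     To make the fusion step concrete, here is a minimal NumPy sketch under my own assumptions about the data layout and threshold value (an illustration, not the thesis code): the per-pixel constraint matrix built from the channel gradients is tested with its condition number, and ill-conditioned pixels fall back to the globally smoothed flow.

        import numpy as np

        def fuse_flows(color_flow, smooth_flow, channel_grads, cond_thresh=50.0):
            """Replace unreliable color-flow vectors by the globally smoothed flow.

            color_flow, smooth_flow : (H, W, 2) flow fields (u, v) per pixel
            channel_grads           : (H, W, C, 2) spatial gradients (Ix, Iy) per color channel
            cond_thresh             : hypothetical condition-number threshold
            """
            fused = color_flow.copy()
            H, W = color_flow.shape[:2]
            for i in range(H):
                for j in range(W):
                    A = channel_grads[i, j]                # C x 2 constraint matrix of the color system
                    if np.linalg.cond(A) > cond_thresh:    # ill-conditioned, so color flow is unreliable
                        fused[i, j] = smooth_flow[i, j]
            return fused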
     The color local model and the color global model are also discussed in this thesis. The local model uses a neighborhood constraint to improve estimation accuracy and is robust against noise, while the global model produces a dense optical flow field but blurs the boundaries of moving objects. Following the fusion scheme discussed above, three hybrid models for color optical flow computation are proposed and compared experimentally. The experiments show that the methods are effective.
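     For orientation, the two model families referred to here can be sketched as follows (standard forms with a multi-channel data term, where c indexes the color channels; the thesis's exact models may differ). The local, Lucas–Kanade-type model assumes a constant (u, v) inside a neighborhood N(\mathbf{x}) and solves the weighted least-squares problem

         \min_{u,v} \sum_{\mathbf{x}' \in N(\mathbf{x})} w(\mathbf{x}') \sum_{c} \left( I^c_x u + I^c_y v + I^c_t \right)^2,

     while the global, Horn–Schunck-type model minimizes

         E(u, v) = \int \sum_{c} \left( I^c_x u + I^c_y v + I^c_t \right)^2 + \alpha \left( \|\nabla u\|^2 + \|\nabla v\|^2 \right) dx\,dy.

     The windowed data term is what gives the local model its accuracy and noise robustness, and the regularizer is what makes the global model dense but prone to smoothing across motion boundaries; hybrid models trade these properties off.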
     3. The thesis reviews the basic theory of wavelets and the problems it raises for optical flow computation, and then discusses complex wavelets, which can be used to overcome the effect of phase oscillation. A method that uses color information to improve the performance of complex wavelet optical flow estimation is proposed: multi-channel color information is used to extend the basic complex wavelet optical flow equations, the condition number is used to judge the reliability of every channel, and the most stable channel is selected for the flow computation. Numerical experiments show that the method is effective.
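     The property being exploited can be sketched roughly as follows (standard behaviour of complex, analytic wavelets, not the thesis's derivation): the magnitude of a complex wavelet coefficient varies slowly under image translation while its phase shifts approximately linearly with the displacement, so for two frames related by a displacement \mathbf{d} the subband coefficients satisfy, approximately,

         W_2(\mathbf{x}) \approx e^{-i\, \boldsymbol{\omega} \cdot \mathbf{d}}\, W_1(\mathbf{x}),

     where \boldsymbol{\omega} is the centre frequency of the subband. Real-valued wavelet coefficients oscillate under the same translation, which is the phase oscillation mentioned above; collecting such phase relations over several subbands, and over the color channels in the extended method, gives a linear system for the flow from which the best-conditioned channel can be chosen.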
     4. A method for moving object detection based on the optical flow field and level sets is proposed. The epipolar constraint or the normalized optical flow field is used to determine the number of moving objects and their initial regions, and the K-means algorithm is then used to extract the moving object regions. However, this alone cannot recover accurate object boundaries because of segmentation and optical flow errors, so a level set step based on spatial information is added to obtain the final segmentation. The color vector gradient is used to define the evolution stopping function, and the fast marching method is used to speed up the level set computation. Experiments with single and multiple objects and with static and moving backgrounds show that the method is effective.
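     As a minimal illustration of the clustering stage only (my own sketch, using scikit-learn rather than whatever implementation was used in the thesis), the flow vectors themselves can serve as features for K-means; the level set step described above is what then refines the region boundaries.

        import numpy as np
        from sklearn.cluster import KMeans

        def segment_by_flow(flow, n_regions):
            """Cluster per-pixel flow vectors into rough motion regions.

            flow      : (H, W, 2) optical flow field
            n_regions : number of moving objects plus background, assumed to be
                        estimated beforehand (e.g. from the epipolar constraint)
            Returns an (H, W) label map of candidate moving regions.
            """
            H, W, _ = flow.shape
            features = flow.reshape(-1, 2)                 # each pixel described by its (u, v) vector
            labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(features)
            return labels.reshape(H, W)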
