Research on Optical Flow Measurement under Non-Uniform Illumination
Abstract
In the field of computer vision, many researchers have studied the recovery of three-dimensional (3-D) information from image sequences. One of the most important problems is determining the motion field. Two concepts must first be distinguished: the optical flow field and the motion field. When an object moves in front of the camera, or the camera moves through the environment, the image changes. The optical flow field is the distribution of apparent velocities of the moving brightness patterns observed in the image, whereas the motion field is the 2-D projection onto the imaging plane of the instantaneous 3-D velocity of the corresponding scene point (the true motion velocity), assigning a velocity vector to every pixel. In general the two fields are not equal, yet what we actually want to obtain is the motion field, not the optical flow.
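A standard way to state the difference precisely (following, for example, Horn [1] and Verri and Poggio [2]; the symbols below are illustrative, not taken from the dissertation) is to write the motion field as the time derivative of the perspective projection of a scene point $\mathbf{X}=(X,Y,Z)^{T}$ imaged with focal length $f$:
\[
\mathbf{x}=\frac{f}{Z}\begin{pmatrix}X\\Y\end{pmatrix},\qquad
\mathbf{v}(\mathbf{x})=\frac{d\mathbf{x}}{dt}=\frac{f}{Z}\begin{pmatrix}\dot{X}\\\dot{Y}\end{pmatrix}-\frac{f\,\dot{Z}}{Z^{2}}\begin{pmatrix}X\\Y\end{pmatrix}.
\]
The optical flow, in contrast, is defined only by the apparent motion of brightness patterns; the classic illustration of the mismatch is a uniform sphere rotating under fixed illumination, whose motion field is non-zero while its optical flow is zero.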
     The motion field is useful for realizing visual functions such as target detection and tracking, video segmentation, and qualitative shape analysis. Motion compensation has recently received increasing attention in video communication and medical imaging, and it is also useful for the enhancement, restoration, and compression of image sequences. In scientific measurement, motion analysis is a powerful tool for extracting physical information from visualized data: additional quantities such as divergence, vorticity, and pressure distribution can be calculated from the velocity data. In meteorology, wind-vector information is essential for numerical weather forecasting, and it can be estimated from satellite image data by motion analysis.
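For a two-dimensional velocity field $\mathbf{v}=(u,v)$, the derived quantities mentioned above are the usual differential ones,
\[
\operatorname{div}\,\mathbf{v}=\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y},\qquad
\omega=\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y},
\]
and they are straightforward to evaluate once a dense velocity field is available. A minimal sketch in Python (the function name and the finite-difference scheme are ours, not the dissertation's):

    import numpy as np

    def divergence_vorticity(u, v, dx=1.0, dy=1.0):
        # u, v: 2-D arrays indexed [y, x] holding the velocity components.
        # np.gradient returns derivatives along axis 0 (y) first, then axis 1 (x).
        du_dy, du_dx = np.gradient(u, dy, dx)
        dv_dy, dv_dx = np.gradient(v, dy, dx)
        div = du_dx + dv_dy          # divergence of the field
        vort = dv_dx - du_dy         # scalar (out-of-plane) vorticity
        return div, vort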
     In 1981, Horn and Schunck first related the two-dimensional velocity field to image brightness, introducing the basic optical flow constraint equation and the basic algorithm for optical flow estimation. Many optical flow methods built on different theoretical foundations, and with differing performance, have since been proposed. Barron et al. surveyed these techniques and, according to their theoretical basis and mathematical methods, divided them into four categories: gradient-based, matching (correlation)-based, energy-based, and phase-based methods. In real scene analysis, however, conventional methods give unsatisfactory results, because real scenes contain many non-ideal conditions such as non-uniform illumination, occlusion, multiple optical flows, non-rigid object motion, and diffusion. Reliable optical flow estimation must take these disturbing factors into account. The aim of this dissertation is to develop a new framework that computes optical flow, that is, the true motion field, with high accuracy and high reliability under non-uniform illumination.
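The Horn-Schunck formulation [26] assumes that the brightness $I(x,y,t)$ of a moving pattern is conserved, $dI/dt=0$, which yields the basic optical flow constraint equation and, together with a global smoothness term weighted by $\alpha$, the variational problem
\[
I_{x}u+I_{y}v+I_{t}=0,\qquad
\min_{u,v}\iint\Bigl[(I_{x}u+I_{y}v+I_{t})^{2}+\alpha^{2}\bigl(\lVert\nabla u\rVert^{2}+\lVert\nabla v\rVert^{2}\bigr)\Bigr]\,dx\,dy,
\]
where $(u,v)$ is the flow and subscripts denote partial derivatives. Because the single constraint cannot determine two unknowns at each pixel (the aperture problem), each of the four categories above adds its own further assumption. A minimal sketch of the classical iterative solution, with derivative estimates simplified relative to [26] (function and variable names are ours):

    import numpy as np
    from scipy.ndimage import convolve

    def horn_schunck(I1, I2, alpha=10.0, n_iter=100):
        # Two consecutive grayscale frames I1, I2 as float arrays indexed [y, x].
        Iy, Ix = np.gradient(I1)        # simplified spatial derivatives
        It = I2 - I1                    # simplified temporal derivative
        u = np.zeros_like(I1)
        v = np.zeros_like(I1)
        avg = np.array([[0., 0.25, 0.], [0.25, 0., 0.25], [0., 0.25, 0.]])
        for _ in range(n_iter):
            u_bar = convolve(u, avg)    # local average of the current estimate
            v_bar = convolve(v, avg)
            num = Ix * u_bar + Iy * v_bar + It
            den = alpha**2 + Ix**2 + Iy**2
            u = u_bar - Ix * num / den  # Horn-Schunck update equations
            v = v_bar - Iy * num / den
        return u, v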
     The new framework developed in this dissertation recovers high-accuracy, high-reliability motion fields from image sequences taken under non-uniform illumination. We propose a method that introduces an extended constraint equation solved by spatio-temporal optimization, combining local and global optimization. The performance of the proposed method is confirmed by comparison with conventional optical flow computation techniques on a series of synthetic and real image sequences.
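The abstract does not give the extended constraint equation explicitly. As an illustration only, the generalized brightness-change models cited in [7,16,17] relax the constancy assumption by letting the brightness of a moving point vary, for example
\[
I_{x}u+I_{y}v+I_{t}=c(x,y,t)\qquad\text{or}\qquad I_{x}u+I_{y}v+I_{t}=m\,I+c,
\]
where the offset $c$ (and, in the multiplicative variant, the factor $m$) absorbs the illumination change and is estimated together with $(u,v)$. A spatio-temporal optimization combining local and global terms, in the spirit of [54], would then minimize over a space-time neighbourhood a weighted sum of a local least-squares data term in these unknowns and a global smoothness term; the specific functional adopted in the dissertation may differ.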
References
[1]B.K.P.Horn.Robot vision,the MIT Press,1986.
    [2]Verri,A.and T.Poggio,Motion field and optical flow:qualitative properties,IEEE Trans.Pattern Anal.Machine Intell.,11,490-498,1989.
    [3]F.Bergholm,S.Carlsson,A "Theory" of optical flow,CVGIP:Graphic Models and Image Processing,53,2,171-188,1991.
    [4]B.Jahne,Digital Image Processing,Springer-Verlag,Berlin,1995.
    [5]A.Singh,Optical flow computation:A unified perspective,IEEE Computer Society Press,Los Alamitos,California,1991.
    [6]Barron,J.L.,Fleet,D.J.,and Beauchemin,S.S.,Systems and Experiment-Performance of Optical Flow Techniques,Intern.J.Comput.Vis.12:1,43-77,1994.
    [7]A.Nomura,H.Miike,K.Koga,Determining motion fields under non-uniform illumination,Pattern Recog.Letters,16,285-296,1995.
    [8]N.Cornelius and T.Kanade,Adapting optical flow to measure object motion in reflectance and X-ray image sequence,Proc.ACM SIGGRAPH/SIGART Interdisciplinary Workshop Motion:Representation and Perception,Toronto,Canada,50-58,1983.
    [9]N.Mukawa,Optical model-based analysis of consecutive images,Computer Vision and Image Understanding,66(1997)pp.25-32.
    [10]A.Nomura,Determining motion fields under non-uniform illumination.Ph.D. dissertation,Dept.of Electrical and Electronics Engineering,Yamaguchi University, 1995.
    [11]D.J.Fleet and A.D.Jepson.Computation of component image velocity from local phase information,International Journal of Computer Vision,1990,5:1,77-104.
    [12]N.Mukawa,Estimation of shape,reflection coefficients and illuminant direction from image sequences,Proc.of 3rd Intern.Conf.on Computer Vision,Osaka,Japan,Dec.4-7 1990,pp.507-512.
    [13]N.Mukawa,Estimation of light source information from image sequence.The Trans.of the Institute of Electronics,Information and Communication Engineers,J74-D-Ⅱ(9): 1236-1242,Sep.,1991.(in Japanese)
    [14]A.Nomura,H.Miike and K.Koga,Detection of a velocity field from sequential images under temporally varying illumination,The Trans.of the Institute of Electronics, Information and Communication Engineers,J76-D-Ⅱ(9):1977-1986,Sep.,1993.(in Japanese)
    [15]J.Lu and J.Little,Reflectance function estimation and shape recovery from image sequence of a rotating object,Proc.of 5th Intern.Conf.on Computer Vision,Cambridge, Massachusetts,June 20-23,1995,pp.80-86.
    [16]S.Negahdaripour,A.Shokrollahi and M.Gennert,Relaxing the brightness constancy assumption in computing optical flow,Proc.ICIP-89,Singapore,Sep.1989,pp806-810.
    [17]S.Negahdaripour and C.Yu,A generalized brightness change model for computing optical flow,Proc.of 4th Intern.Conf.on Computer Vision,1993,pp.2-11.
    [18]E.Dubois,J.Konrad,Estimation of 2D motion fields from image sequences with application to motion compensated processing,Motion Analysis and Image Sequence Processing,Kluwer Academic Publishers,pp.53-87,1993.
    [19]J.A.Leese,C.S.Novak and B.B.Clark,An automated technique for obtaining cloud motion from geosynchronous satellite data using cross correlation,Journal of Applied Meteorology,10,118-132,1971.
    [20]M.Shizawa and K.Mase,Simultaneous multiple optical flow estimation,Proc.10th Intern.Conf.on Pattern Recog.,Atlantic city,New Jersey,1990,Vol.I,pp.274-278.
    [21]M.Shizawa and K.Mase,Multiple optical flow-fundamental constraint equations and unified computational theory for detecting motion transparency and motion boundaries, The Trans.of the Institute of Electronics,Information and Communication Engineers, D-II,J76-D-II(5):987-1005,May 1993.(in Japanese)
    [22]A.Verri,F.Girosi and V.Torre,Differential techniques for optical flow,J.Opt.Soc.Amer.A,7,pp.912-922,1990.
    [23]J.A.Leese,C.S.Novak and B.B.Clark,An automated technique for obtaining cloud motion from geosynchronous satellite data using cross correlation,Journal of Applied Meteorology,10:Feb.1971,pp.118-132.
    [24]Nomura,H.Miike and E.Yokoyama,Detecting motion and diffusion from a dynamic image sequence,Trans.of the Institute of Electronics Engineers Japan,115,3,403-,1995(in Japanese).
    [25]Nomura,H.Miike,K.Koga,Field Theory Approach for Determining Optical Flow, Pattern Recog.Letters,12,183-190,1991.
    [26]B.K.P.Horn,B.G.Schunck,Determining optical flow,Artificial Intell,17,185-203,1981.
    [27]M.J.Black,The robust estimation of multiple motions:Parametric and piecewise-smooth flow fields,Comput.Vision and Image Understanding Vol.63,No.1,Jan.,pp.75-104,1996.
    [28]J.K.Kearney,W.B.Thompson,D.L.Boley,Optical flow estimation:an error analysis of gradient-based methods with local optimization,IEEE Trans.Pattern Anal.Machine Intell,9,229-244,1987.
    [29]T.Hara,T.Kudou,H.Miike,E.Yokoyama and A.Nomura,Recovering 3D-shape from motion stereo under non-uniform illumination,IAPR MVA,241-244,1996.
    [30]L.Zhang,T.Sakurai,H.Miike,Detection of motion fields under spatio-temporal ...
    [31]... IEEE Trans.Pattern Anal.Machine Intell.,8(1986),565-593.
    [32]H.H.Nagel,Displacement vectors derived from second-order intensity variations in image sequences,Comput.Graph.Image Process.21:(1983)85-117.
    [33]S.Uras,F.Girosi,A.Verri and V.Torre,A computational approach to motion perception,Biol.Cybern.60:(1988),79-97.
    [34]B.Lucas and T.Kanade,An iterative image registration technique with an application to stereo vision,Proc.DARPA Image Understanding Workshop,1981,pp.121-130.
    [35]E.P.Simoncelli,E.H.Adelson and D.J.Heeger,Probability distributions of optical flow,Proc.Conf.Comput.Vis.Patt.Recog.,Maui,1991,pp.310-315.
    [36]Burt,P.J.,and Adelson,E.H.,The Laplacian pyramid as a compact image code,IEEE trans.Communications 31,1983 pp.532-540.
    [37]P.Anandan,A computational framework and an algorithm for the measurement of visual motion,Intern.J.Comput.Vis.2:(1989)283-310.
    [38]A.Singh,Image-flow computation:an estimation-theoretic framework and a unified perspective,Image Understanding,Vol.56,No.2,1992,pp.152-177.
    [39]D.J.Heeger,Optical flow using spatiotemporal filters,Intern.J.Comput.Vis.,1,1988,279-302.
    [40]D.J.Heeger,Model for the extraction of image flow,J.Opt.Soc.Amer.A 4:1987 1455-1471.
    [41]E.H.Adelson and J.R.Bergen,The extraction of spatiotemporal energy in human and machine vision,Proc.IEEE Workshop on Visual Motion,Charleston,pp.151-156.
    [42]D.J.Fleet and A.D.Jepson,Stability of phase information,IEEE Trans.on Pattern Analysis and Machine Intelligence,Vol.15,No.12,Dec.1993,pp.1253-1268.
    [43]K.Koga and H.Miike,Exact determination of optical flow by pixel-based temporal mutual-correlation analysis,Transactions of the Institute of Electronics,Information and Communication Engineers,E,70:719-722,1987.
    [44]K.Koga and H.Miike,Determining optical flow from sequential image,Transactions of the Institute of Electronics,Information and Communication Engineers,D,J70-D(8) 1508-1515,August 1987.(in Japanese)
    [45]K.Koga and H.Miike,Optical flow analysis based on spatio-temporal correlation of dynamic image,Transactions of the Institute of Electronics,Information and Communication Engineers,D-Ⅱ,J72-D-Ⅱ(4):507-516,April 1989.(in Japanese)
    [46]H.Miike,Y.Kurihara,H.Hashimoto and K.Koga,Velocity-field measurement by pixel-based temporal mutual-correlation analysis of dynamic image,Transactions of the Institute of Electronics,Information and Communication Engineers,E69:877-882,August 1986.
    [47]H.Miike,Y.Kurihara and K.Koga,Improvement of a velocity-field measurement with pixel-based temporal mutual-correlation analysis of dynamic image,Transactions of the Institute of Electronics,Information and Communication Engineers,J70-D(4):836-839,April 1987.(in Japanese)
    [48]H.Miike,Y.Kurihara,K.Koga and H.Hashimoto,Velocity-field measurement of vortex by dynamic image processing,Japanese Journal of Applied Physics,25(5)L409-412,May 1986.
    [49]A.Nomura,K.Koga and H.Miike,Extracting depth-information from dynamic image by spatio-temporal correlation analysis,Transactions of the Institute of Electronics Information and Communication Engineers,D-Ⅱ,J73-D-Ⅱ(5):728-737,May 1990.(in Japanese)
    [50]A.Nomura,H.Miike and H.Hashimoto,Measuring an accelerating propagation of a bit wave by sequential image processing,Proc.Intern.Conf.:Spatio-Temporal Organization in Nonequilibrium Systems,June 1992,pp.187-189.
    [51]O.Steinbock,H.Hashimoto and S.C.Muller,Quantitative analysis of periodic chemotaxis in aggregation patterns of dictyostelium discoideum,Physica,D 49:233-239,1991.
    [52]J.R.Bergen,P.J.Burt,R.Hingorani and S.Peleg,A three-frame algorithm for estimating two-component image motion,IEEE Trans.on Pattern Analysis and Mach Intell.,14(9):Sep.1992,pp.886-896.
    [53]J.Aisbett,Optical flow with an intensity-weighted smoothing,IEEE Trans.Pattern Anal.Machine Intell.,11,512-522,1989.
    [54]L.Zhang,H.Miike and K.Kuriyama,The Spatio-Temporal Optimization to Determine Optical Flow with Combination of Local and Global Approach,FORMA,Vol.14,1999(in press).
    [55]H.Miike,L.Zhang,T.Sakurai and H.Yamada,Motion enhancement for preprocessing of optical flow detection and scientific visualization,Pattern Recog.Letters(in press).
    [56]L.Zhang,H.Miike,Detection of motion fields under non-uniform illumination by pixel-based time-domain filtering of image sequence,Proceedings of MIRU'98,pp.I-473-478,07/29-31/1998,Gifu,Japan.
    [57]L.Zhang,H.Miike,Determining optical flow under non-uniform illumination,IPSJ SIG Notes,Information Processing Society of Japan,Vol.97,No.70,pp.99-106,07/24-25,1997.
    [58]L.T.Bruton and N.R.Bartley,Three-dimensional image processing using the concept of network resonance,IEEE Trans.Circuit Syst.,CAS-32(1985),pp.664-672.
    [59]T.Sakurai,H.Miike,E.Yokoyama and S.C.Muller,Initiation front and annihilation center of convection waves developing in spiral structures of Belousov-Zhabotinsky reaction,J.Phys.Soc.Japan,66(1997),pp.518-521.
    [60]K.Imaichi and K.Ohmi,Numerical processing of flow-visualization pictures:measurement of two-dimensional vortex flow,J.Fluid Mech.,129.
    [61]T.Hara,Determining image flow for the recovery of 3D-shape by extended gradient-based method,Master thesis,Dept.of Electrical and Electronics Engineering,Yamaguchi University,1997.
    [62]J.P.Liu and J.Little,Reflectance function estimation and shape recovery from image sequence of a rotating object,ICCV'95,June 20-23,1995,Massachusetts,Proceedings of ICCV'95,pp.80-86.
    [63]Y.J.Zhang,A survey on evaluation methods for image segmentation,Pattern Recognition,Vol.29,No.8,pp.1335-1346,1996.
    [64]N.R.Pal and S.K.Pal,A review on image segmentation techniques,Pattern Recognition,Vol.26,No.9,pp.1277-1294,1993.
    [65]W.A.Yasnoff,J.K.Mui and J.W.Bacus,Error measures for scene segmentation,Pattern Recognition,Vol.9,pp.217-231.
    [66]Z.Y.Zhang and O.D.Faugeras,Tracking and Grouping 3D Line Segments,ICCV'90,December 4-7,1990,Osaka,Japan,Proceedings of ICCV'90,pp.577-580.
    [67]R.C.Gonzalez and P.Wintz,Digital Image Processing,Addison-Wesley Publishing Co.,Inc.,Massachusetts,1977.
    [68]L.Zhang,H.Miike,E.Yokoyama and A.Nomura,Periodicity visualization of the globe from IR satellite image sequence,International Conference on Virtual Systems and Multimedia(VSMM'96),09/18-20,1996,Gifu,Japan,Proceedings of VSMM'96,pp.301-306.
    [69]R.D.Ray,T.J.Eanes and B.F.Chao,Detection of tidal dissipation in the solid Earth by satellite tracking and altimetry,Nature,381,1996,pp.595-597.
    [70]E.A.Smith and D.R.Phillips,Automated cloud tracking using precisely aligned digital ATS pictures,IEEE Trans.on Computers,C-21,1972,pp.715-729.
    [71]T.Ohshima,H.Uchida,T.Hamada and S.Osano,A comparison of GMS cloud motion winds with ship-observed winds in typhoon vicinity,Geophysical Magazine,44,1991,pp.27-36.
    [72]K.Chaudhury and R.Mehrotra,A trajectory-based computational model for optical flow estimation,IEEE Trans.on Robotics and Automation,Vol.11,No.5,pp.733-741.
    [73]D.Heeger and A.Jepson,Simple method for computing 3D motion and depth,In Proc.Third Int'l Conf.Computer Vision,Osaka,Japan,pp.96-100,1990.
    [74]K.Nakajima,A.Osa and H.Miike,Evaluation of Body Motion by Optical Flow Analysis,Japanese Journal of Applied Physics,Vol.36,pp.2929-2937,1997.
    [75]K.Nakajima,T.Maekawa and H.Miike,Detection of Apparent Skin Motion Using Optical Flow Analysis:Blood Pulsation Signal Obtained from Optical Flow Sequence,Review of Scientific Instruments,Vol.68,pp.1331-1336,1997.
    [76]R.Malladi and J.A.Sethian,A real-time algorithm for medical shape recovery,In Proc.Sixth Int'l Conf.Computer Vision,Bombay,India,pp.304-310,1998.
    [77]Zhang Yujin,Image Engineering,Vol.2:Image Understanding and Computer Vision,Beijing:Tsinghua University Press,2000.
    [78]Ma Songde,Zhang Zhengyou,Computer Vision:Computational Theory and Algorithmic Foundations,Beijing:Science Press.
    [79]Wu Lide,Computer Vision,Shanghai:Fudan University Press,1993.
