Research on Several Visual Geometry Problems in Multi-Camera Systems
Abstract
The goal of computer vision research is to endow computers with the ability to perceive the three-dimensional real environment from one or more two-dimensional images. Computer vision draws on a large body of mathematical methods, among which visual geometry is the mathematical foundation of three-dimensional computer vision. Research on visual geometry has made great progress over the past twenty years, but with the widespread use of multi-camera systems, traditional methods can no longer meet the requirements. From the perspective of visual geometry, this paper mainly studies the calibration of multi-camera systems, and also discusses epipolar geometry in multi-camera systems and the 3D registration problem in augmented reality applications. The main contributions of this paper are as follows:
     1) A method for calibrating camera intrinsic and extrinsic parameters from spheres under a rank-1 constraint. Camera calibration recovers the intrinsic parameters, which characterize the camera itself, and the extrinsic parameters, which describe the pose of the camera relative to the scene. With the emergence of multi-camera systems, a traditional planar calibration board cannot be seen simultaneously by cameras at all viewpoints, so many researchers have proposed calibration methods based on spheres. This paper analyzes the visual geometry of the sphere in detail, and gives a new geometric interpretation of the concentric-circle relationship between the sphere image and the image of the absolute conic (IAC), as well as a new geometric interpretation of the relationship between the sphere image and the vanishing line. Based on the rank-1 constraint arising from the double-contact relationship, a method for solving the intrinsic parameters under the rank-1 constraint is proposed, together with a simple method for computing the extrinsic parameters in which the rank-1 constraint is used to refine the estimated spatial position of the sphere center and thus improve the extrinsic calibration accuracy. The proposed geometric interpretations are formulated in the non-dual form and are therefore more intuitive and clear, and the rank-1 constraint improves the accuracy of both the intrinsic and extrinsic calibration (the underlying rank-1 relation is sketched below).
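For reference, the rank-1 constraint can be written in the standard dual formulation used in the sphere-calibration literature (an illustrative identity only; this paper itself develops the non-dual form): if $C_i^{*}$ denotes the dual conic of the $i$-th sphere's image outline, $\omega^{*}=KK^{T}$ the dual image of the absolute conic, and $\mathbf{o}_i$ the image of the sphere center, then there exist scalars $\lambda_i,\mu_i$ such that
\[
\omega^{*}-\lambda_i C_i^{*}=\mu_i\,\mathbf{o}_i\mathbf{o}_i^{T},\qquad \operatorname{rank}\!\left(\omega^{*}-\lambda_i C_i^{*}\right)=1 .
\]
Enforcing the rank-1 condition therefore constrains the intrinsic matrix $K$ and, at the same time, the projected sphere centers used for the extrinsic calibration.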
     2) A calibration method that replaces the traditional stick-like 1D calibration object with spheres. 1D calibration is another approach suited to multi-camera systems; a stick-like 1D calibration object usually carries three marker points at known positions on a line. By analyzing the visual geometry between two spheres, this paper proposes to use the two sphere centers and the midpoint of the segment joining them as the 1D calibration object. The length of this 1D object varies, but the relative ratios of the lengths can be obtained from the sphere projection properties. The method only requires multiple images of a single sphere at different positions; the image coordinates of the marker points can then be extracted accurately and the camera calibrated in the 1D fashion (the basic 1D constraints are sketched below).
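As an indicative sketch of the 1D-calibration constraint exploited here (following Zhang's classical 1D formulation; the notation is illustrative): let $\mathbf{a},\mathbf{b},\mathbf{m}$ be the homogeneous image points (third coordinate normalized to 1) of the two sphere centers $A,B$ and of the midpoint $M$, with unknown depths defined by $z_A\mathbf{a}=K\mathbf{A}$, etc. Since $M=\tfrac{1}{2}(A+B)$, projection gives $z_M\mathbf{m}=\tfrac{1}{2}(z_A\mathbf{a}+z_B\mathbf{b})$, and taking the cross product with $\mathbf{m}$ eliminates $z_M$ and fixes the depth ratio:
\[
z_A(\mathbf{a}\times\mathbf{m})+z_B(\mathbf{b}\times\mathbf{b}\!\!\!\!\!\!\phantom{m}\mathbf{m})=\mathbf{0}
\quad\text{i.e.}\quad
z_A(\mathbf{a}\times\mathbf{m})+z_B(\mathbf{b}\times\mathbf{m})=\mathbf{0}.
\]
The center distance $L=\|B-A\|$ (known up to the recovered ratios) then yields, per observation, a constraint on the image of the absolute conic $\omega=K^{-T}K^{-1}$:
\[
(z_B\mathbf{b}-z_A\mathbf{a})^{T}\,\omega\,(z_B\mathbf{b}-z_A\mathbf{a})=L^{2}.
\]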
     3) A method that applies a sphere calibration object to the calibration of a structured light system. This paper analyzes the double-contact relationship between the sphere image and the image of the intersection circle (the intersection of the light plane with the sphere). The sphere image and the intersection-circle image are used to compute the camera intrinsic parameters, and the equation of each light plane is established from the sphere's surface equation (the plane-sphere geometry is sketched below). The method achieves relatively high reconstruction accuracy.
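For context, the plane-sphere geometry used in this step can be sketched as follows (an illustrative formulation, not necessarily the exact parameterization of the paper): a light plane $\pi:\mathbf{n}^{T}\mathbf{X}=d$ intersects a sphere of center $\mathbf{O}$ and radius $r$ in a circle lying in $\pi$, with
\[
\text{center }\;\mathbf{O}+\frac{d-\mathbf{n}^{T}\mathbf{O}}{\|\mathbf{n}\|^{2}}\,\mathbf{n},
\qquad
\text{radius }\;\sqrt{\,r^{2}-\frac{(d-\mathbf{n}^{T}\mathbf{O})^{2}}{\|\mathbf{n}\|^{2}}\,}.
\]
Hence, once the sphere (and thus $\mathbf{O}$ and $r$) has been recovered in the camera frame, the observed intersection circles constrain the plane parameters $(\mathbf{n},d)$ of each light plane.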
     4) A method, together with its geometric interpretation, for solving the fundamental matrix from six points under the constraint that four of them are coplanar. The fundamental matrix represents the epipolar geometry of binocular vision and is widely used in camera calibration and 3D reconstruction. In a multi-camera system, the principal axes of two cameras can easily become nearly parallel, in which case the fundamental matrix solution is unstable. This paper analyzes why traditional solutions based on epipolar lines become unstable when the principal axes are nearly parallel, and proposes a method in which the projection matrices are solved by a double projective transformation and the fundamental matrix is then obtained from the tensor form of the projection matrices (the standard relation between the fundamental matrix and the projection matrices is quoted below). The method improves the accuracy and stability of the fundamental matrix estimation.
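The last step, obtaining the fundamental matrix from the two projective camera matrices, can be illustrated with the standard textbook identity (quoted for reference; the paper derives it via the bifocal tensor): for projection matrices $P$ and $P'$, with $\mathbf{C}$ the camera center of the first view ($P\mathbf{C}=\mathbf{0}$),
\[
F=[\mathbf{e}']_{\times}\,P'P^{+},\qquad \mathbf{e}'=P'\mathbf{C},
\]
where $P^{+}$ is the pseudo-inverse of $P$ and $[\cdot]_{\times}$ is the skew-symmetric cross-product matrix; in the canonical case $P=[I\,|\,\mathbf{0}]$, $P'=[A\,|\,\mathbf{a}]$ this reduces to $F=[\mathbf{a}]_{\times}A$.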
     5) A method that uses special natural scene features to calibrate the camera and achieve 3D registration in augmented reality applications for soccer video. 3D registration refers to the data transformations needed to place 3D data into a common reference coordinate system. In augmented reality for soccer video, 3D registration studies how to detect, in real time, the position and orientation of the camera relative to the real scene, so that the system can place a virtual 3D object into the coordinate system of the real scene and project it onto the image with the correct projection relationship. The proposed method uses natural information of the soccer field: the center circle is treated as a conic, and the pole-polar relationship of the center spot and the line at infinity with respect to the center circle is used to establish the scene coordinate system, compute the camera intrinsic and extrinsic parameters, and obtain the camera projection matrix, so that virtual objects can be projected onto the image plane and 3D registration achieved (the pole-polar relation is stated below). Compared with traditional methods based on isolated feature points, this method exploits the global information of the conic, which improves registration stability, and it can be used even when the camera intrinsic parameters vary.
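The pole-polar relation used above is a standard projective fact and can be stated directly: in the pitch plane, the center of the center circle and the line at infinity are pole and polar with respect to the circle, and pole-polar relations are preserved by the imaging homography, so in the image
\[
\mathbf{l}_{v}\sim C\,\mathbf{o},\qquad \mathbf{o}\sim C^{-1}\mathbf{l}_{v},
\]
where $C$ is the image of the center circle, $\mathbf{o}$ is the image of the center spot, and $\mathbf{l}_{v}$ is the vanishing line of the pitch plane. These entities are what the method uses to set up the scene coordinate system and, in turn, the camera projection matrix.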
The research objective of computer vision is to provide the computer with the ability to perceive the 3D real-world environment through one or more 2D images. The field of computer vision involves a large number of mathematical methods, among which visual geometry is the mathematical foundation of 3D computer vision. Studies on visual geometry have made substantial progress over the past two decades. However, with the wide application of multi-camera systems, traditional methods can no longer meet the demand. From the viewpoint of visual geometry, this paper mainly studies the calibration problems of multi-camera systems, and also explores epipolar geometry in multi-camera systems and the 3D registration problem in augmented reality applications. The main contributions of this paper are as follows.
     1) A method for calibrating camera intrinsic and extrinsic parameters from spheres under a rank-1 constraint. Camera calibration aims to obtain the intrinsic parameters, which describe the camera itself, and the extrinsic parameters, which describe the pose of the camera with respect to the scene. With the emergence of multi-camera systems, a traditional planar calibration object cannot be made simultaneously visible to cameras at different orientations, so researchers have proposed sphere-based calibration methods for multi-camera systems. Based on the visual geometry of the sphere, this paper presents a new geometric interpretation of the concentric relationship between the sphere image and the IAC (image of the absolute conic), and another new geometric interpretation of the relationship between the sphere image and the vanishing line. Building on the rank-1 constraint in the double-contact relationship, it puts forward a method for solving the camera intrinsic parameters under the rank-1 constraint, as well as a simple method for computing the camera extrinsic parameters in which the rank-1 constraint is used to refine the estimated position of the sphere center and thereby improve the extrinsic calibration accuracy. Unlike interpretations based on the dual form of the sphere image, the proposed geometric interpretations are established on the non-dual form and are more direct and clear; in addition, the rank-1 constraint improves the calibration accuracy of both the intrinsic and extrinsic parameters.
     2) A calibration method that replaces the traditional stick-like 1D calibration object with spheres. A stick-like 1D calibration object usually realizes camera calibration through three collinear points at known positions. By analyzing the visual geometry between two spheres, this paper proposes to use the two sphere centers and the midpoint between them as the 1D calibration object. Although the length of this 1D object is not fixed, the relative length ratios can be recovered from the sphere projection properties. With only images of a single sphere at different positions, the three collinear points can be precisely detected and the camera calibration can then be accomplished.
     3) A calibration method that applies a sphere calibration object to a structured light system. This paper analyzes the double-contact relationship between the sphere image and the image of the intersection circle (the intersection of the light plane and the sphere). The sphere images and the intersection-circle images are then used to solve the intrinsic parameters, and the light plane equations are established with the aid of the sphere equation. Experiments show that this method achieves relatively high reconstruction accuracy.
     4) A method, together with its geometric interpretation, for solving the fundamental matrix from six corresponding points under the constraint that four of them are coplanar. The fundamental matrix encodes the epipolar geometry between two images of the same 3D scene and is widely used in camera calibration and 3D reconstruction. In a multi-camera system, the principal axes of two cameras can easily become nearly parallel, and in that case the fundamental matrix solution may be unstable. We analyze why the traditional methods based on epipolar lines become unstable in this situation and propose a new method: the projective camera matrix is first solved using projective transformations in both 3D space and the image plane, and the fundamental matrix is then established through the bifocal tensor that connects it with the projection matrices. This method improves the stability and accuracy of the fundamental matrix estimation.
     5) A camera calibration and 3D registration method for augmented reality in soccer video, based on natural features of the field markings. In general, the task of 3D registration is to place 3D data into a common reference frame by estimating the transformations between the datasets. In such augmented reality applications, 3D registration detects in real time the position and orientation of the camera with respect to the real world, so that a virtual 3D object can be placed into the world coordinate system and correctly projected onto the image. The method in this paper takes advantage of the natural markings of the soccer field: the center circle is a conic, and the center spot and the vanishing line of the pitch are pole and polar with respect to it. Using this relationship, the world coordinate system is defined, the camera intrinsic and extrinsic parameters are calibrated, and the camera projection matrix is obtained, so that the virtual object is projected correctly onto the image and 3D registration is achieved. Compared with traditional methods based on isolated feature points, this method exploits the whole conic and therefore improves registration stability; it can also be used when the camera intrinsic parameters vary.
