Research on Camera Calibration, Pose and Motion Estimation Methods in Computer Vision (计算机视觉中摄像机定标及位姿和运动估计方法的研究)
Abstract
Computer vision is a relatively new and rapidly developing research field and has become one of the important branches of intelligent automation. Its goal is to give computers the ability to understand the surrounding environment from one or more images; this ability enables a machine not only to perceive the geometric information of objects in the environment, including their shape, position, pose and motion, but also to describe, store, recognize and understand them.
This dissertation first introduces the basic concepts and relevant properties of the geometries used in computer vision (projective, affine, metric and Euclidean geometry) and outlines the camera model and its imaging principles. It then reviews the development of camera calibration methods in recent years, analyzing, comparing and summarizing the existing techniques. Finally, it focuses on linear self-calibration of cameras with varying intrinsic parameters, perspective correction of planar scene images, four nonlinear state estimation filters (EKF1, EKF2, DD1 and DD2), and pose and motion estimation based on monocular vision.
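For reference, all of the methods above build on the standard pinhole imaging model; the notation below is generic and not quoted from the dissertation. A world point X projects to an image point x through the intrinsic matrix K and the camera pose (R, t):

\[
\lambda\,\mathbf{x} = K\,[R \mid \mathbf{t}]\,\mathbf{X},
\qquad
K = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix},
\]

where f_x and f_y are the focal lengths in pixels, (u_0, v_0) is the principal point and s is the skew factor. Camera calibration estimates K (and, if needed, lens distortion), while pose estimation recovers R and t.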
◆ Linear self-calibration of cameras with varying intrinsic parameters. In some vision systems (for example, robot vision systems and active vision systems) the camera position or its optics (such as the aperture and focal length) must be adjusted frequently, and the camera has to be recalibrated after every adjustment. For this situation, the dissertation proposes a self-calibration method for the case in which the distortion (skew) factor and the principal point are known while the other intrinsic parameters vary: the fundamental matrices between images are computed first; on the basis of the projective reconstruction obtained from them, the homography matrix is recovered by a linear method; and the camera intrinsic parameters are then computed from this homography.
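As a concrete illustration of the first step in this pipeline, the sketch below estimates the fundamental matrix from point correspondences with the normalized eight-point algorithm. It is a minimal, generic NumPy implementation, not code from the dissertation:

```python
import numpy as np

def normalize(pts):
    """Translate/scale points so the centroid is at the origin and the
    mean distance from it is sqrt(2); returns (homogeneous points, T)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def fundamental_matrix(x1, x2):
    """Normalized 8-point algorithm: x2^T F x1 = 0 for N >= 8 correspondences."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the coordinate normalization.
    F = T2.T @ F @ T1
    return F / F[2, 2]
```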
◆ 3D metric reconstruction based on scene geometry. Metric reconstruction of a 3D scene is usually performed on an image sequence, and the stratified reconstruction methods begin by computing a projective reconstruction of that sequence. For the common case in which the visible object surfaces are planar, the dissertation reviews the perspective correction of planar scene images and the conventional metric reconstruction of 3D scene images, and then, building on scene geometry knowledge, proposes a 3D metric reconstruction method that requires only a single image and no projective reconstruction.
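A minimal sketch of the planar case: when four or more image points of a plane with known metric layout are available, a plane-to-image homography can be estimated by the direct linear transform (DLT) and used to remove the perspective distortion. The point coordinates and names below are illustrative assumptions, not data from the dissertation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3, up to scale) such that dst ~ H @ src; src, dst: (N, 2), N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Illustrative use: map the four imaged corners of a rectangle of known size
# (say 200 mm x 100 mm) back to a fronto-parallel, metrically correct view.
img_corners = np.array([[312., 140.], [705., 180.], [680., 420.], [290., 380.]])
world_corners = np.array([[0., 0.], [200., 0.], [200., 100.], [0., 100.]])
H_rectify = homography_dlt(img_corners, world_corners)  # image -> world plane
```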
◆ Four nonlinear state estimation filters (EKF1, EKF2, DD1 and DD2). Some systems cannot be described by simple linear models, so nonlinear filtering algorithms are needed. Starting from the linear Kalman filter, approximating the nonlinear dynamic and measurement equations with first- and second-order Taylor expansions yields the EKF1 filter (first-order Taylor approximation) and the EKF2 filter (second-order Taylor approximation), respectively. These filters require the corresponding first- and second-order derivatives of the dynamic and measurement equations to exist. When they do not, replacing the Taylor expansion with a Stirling interpolation approximation, which needs no derivatives, yields two further filters: DD1 (first-order Stirling interpolation approximation) and DD2 (second-order Stirling interpolation approximation).
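A minimal sketch of the difference between the two families, under generic assumptions (additive Gaussian noise, a user-supplied measurement function h): an EKF1-style update linearizes with an analytic Jacobian, while a DD1-style update replaces that Jacobian with central (Stirling-type) divided differences, so no derivatives are required. The full DD1/DD2 formulation additionally propagates square-root (Cholesky) factors of the covariances, which this simplified sketch omits:

```python
import numpy as np

def numerical_jacobian(h, x, step=1e-5):
    """Central-difference approximation of dh/dx, used in place of an
    analytic Jacobian by derivative-free (DD1-style) filters."""
    x = np.asarray(x, dtype=float)
    m = len(h(x))
    J = np.zeros((m, len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = step
        J[:, i] = (h(x + dx) - h(x - dx)) / (2 * step)
    return J

def measurement_update(x_pred, P_pred, z, h, R, jacobian=None):
    """One Kalman measurement update. Pass an analytic `jacobian(x)` for an
    EKF1-style update, or leave it None to fall back on divided differences."""
    H = jacobian(x_pred) if jacobian is not None else numerical_jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))     # corrected state
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```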
◆ Pose and motion estimation based on monocular vision. With the emergence of computer vision, estimating the relative 3D pose and motion between two reference frames, without contact with the object or manual intervention, has become an important research problem for robot navigation, assembly and measurement, tracking, object recognition and camera calibration. Estimating the relative pose and motion between two coordinate frames from the 2D image sequence acquired by a single camera is attractive in practical applications; the difficulty is that the projection from 3D object features to 2D image features is a nonlinear transformation. Assuming the size and shape of the moving object are known, the dissertation represents the 3D coordinate transformation with dual quaternions, acquires images of the moving object with a single camera, takes line features as the measurement input, and builds a nonlinear model of the pose and motion estimation system. The IEKF1, IEKF2, DD1 and DD2 filters are then compared in simulation, and pose and motion estimation results are obtained.
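The nonlinearity referred to above is the perspective projection itself. Written for a single point feature (line features follow the same pattern), and with generic notation not quoted from the dissertation, the measurement equation that the EKF/DD filters must handle is

\[
\mathbf{p}^{c} = R\,\mathbf{p}^{o} + \mathbf{t},
\qquad
h(\mathbf{x}) = \begin{bmatrix} u \\ v \end{bmatrix}
              = \frac{f}{p^{c}_{z}} \begin{bmatrix} p^{c}_{x} \\ p^{c}_{y} \end{bmatrix} + \mathbf{n},
\]

where the state \(\mathbf{x}\) collects the pose \((R,\mathbf{t})\) and its motion parameters, \(\mathbf{p}^{o}\) is a known feature on the object, \(f\) is the focal length and \(\mathbf{n}\) is measurement noise. The division by \(p^{c}_{z}\) is what makes \(h\) nonlinear and motivates the Taylor and Stirling approximations compared in the dissertation.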
Keywords: self-calibration of cameras with varying intrinsic parameters; metric reconstruction; Taylor approximation; Stirling interpolation approximation; filters; pose and motion estimation
