Research on Key Technologies of Large-Space Coordinate Measurement Based on Active Vision
Abstract
To address the low accuracy of the fixed single-camera coordinate measurement system when measuring over large spaces, this thesis proposes a new concept of active-vision measurement. The aim of the project "Research on Key Technologies of Large-Space Coordinate Measurement Based on Active Vision" is to introduce active vision into the existing fixed single-camera coordinate measurement system based on an optical probe and to develop a new vision-based coordinate measurement method that improves large-space measurement accuracy. The key technologies of this new measurement system, including modeling of active-vision coordinate measurement, acquisition of the moving optical probe, tracking of the moving optical probe, and in-situ calibration of the camera pose relative to the rotary table, are studied in depth and solutions are proposed.
     The system structure of active-vision coordinate measurement is studied and a measurement model is established. During measurement, the optical probe moves as the measured point on the workpiece changes. The system first captures and tracks the moving probe by zooming, panning, and tilting the camera, so that the probe is imaged at a suitable size in the central region of the CCD at any working distance; it then computes, from the fixed single-camera coordinate measurement model, the three-dimensional coordinates of the probe tip, i.e., the measured point, under the current system attitude; finally, coordinate transformations stitch the point coordinates obtained under the different system attitudes into the world coordinate frame, realizing point-coordinate measurement of large workpieces.
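To make the stitching step concrete, the following is a minimal sketch, not the thesis's exact model, of how a probe-tip point measured in the camera frame at one pan/tilt attitude could be transformed into a common world frame. The function names, the pan/tilt rotation model, and the calibrated camera-to-rotator pose (R_cr, t_cr) are illustrative assumptions.

```python
# A minimal sketch of stitching per-attitude measurements into a world frame
# with homogeneous transforms (illustrative model, not the thesis's equations).
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def camera_to_world(pan, tilt, R_cr, t_cr):
    """Pose of the camera frame in the world frame for given pan/tilt angles.

    R_cr, t_cr: calibrated camera-to-rotator pose (assumed known here);
    pan about z and tilt about y are an illustrative axis convention.
    """
    T_rotator = homogeneous(rot_z(pan) @ rot_y(tilt), np.zeros(3))
    T_cam_in_rotator = homogeneous(R_cr, t_cr)
    return T_rotator @ T_cam_in_rotator

def stitch(p_cam, pan, tilt, R_cr, t_cr):
    """Map a probe-tip point measured in the camera frame to world coordinates."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)
    return (camera_to_world(pan, tilt, R_cr, t_cr) @ p)[:3]
```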
     Acquisition of the moving optical probe means that, as the probe moves, the camera lens zooms and focuses so that the probe still forms an image of suitable size and sharpness on the image plane. The zoom lens is operated in discrete levels: the zoom levels are first divided according to the working-distance range and the fraction of the image plane the target should occupy, the focal-length output curve of the zoom lens is then estimated, and finally the focal length of each level is located quickly on this curve, yielding probe images of suitable size. To obtain sharp images of the feature light spots on the probe, an autofocus method combining depth from focus (DFF) and depth from defocus (DFD) is proposed: a DFD method based on the autocorrelation of the second derivative of the defocus-degraded image first gives a coarse estimate of the in-focus position, and a DFF method using the Variance function, the sharpness criterion most sensitive to point light sources, then searches for the exact in-focus position in the neighbourhood of that estimate. Experiments show that, over the full measurement range, the system can capture the moving probe and keep its image at a suitable size and in sharp focus.
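As an illustration of the fine-search stage only, the sketch below assumes a grayscale region of interest around the probe's feature light spots and a hypothetical grab_roi(pos) hook that drives the lens to a focus position and returns that ROI; it evaluates the Variance sharpness criterion around a coarse position such as one supplied by the DFD estimator, which is not reproduced here.

```python
# A minimal sketch: Variance focus criterion plus a depth-from-focus search
# confined to the neighbourhood of a coarse (e.g. DFD-estimated) focus position.
import numpy as np

def variance_sharpness(roi):
    """Variance criterion: largest when the bright feature spots are in sharp focus."""
    roi = roi.astype(np.float64)
    return np.mean((roi - roi.mean()) ** 2)

def refine_focus(grab_roi, coarse_pos, search_radius, step):
    """Evaluate sharpness at focus positions around coarse_pos and return the best.

    grab_roi(pos) is a hypothetical hook that moves the lens to focus position
    `pos` and returns the grayscale ROI containing the probe's light spots.
    """
    positions = np.arange(coarse_pos - search_radius,
                          coarse_pos + search_radius + step, step)
    scores = [variance_sharpness(grab_roi(p)) for p in positions]
    return positions[int(np.argmax(scores))]
```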
     Tracking of the moving optical probe means that, as the probe moves, the rotary table pans and tilts the camera to follow it so that the probe is always imaged in the central region of the CCD. To achieve fast and accurate tracking, a tracking plan based on a logical partition of the image plane is proposed: the image plane is divided into an ideal imaging region, a tracking region, and a search region. When the centroid of the feature image points lies in the search region, a fast tracking method based on the centroid error keeps the probe within the camera's field of view; when the centroid lies in the tracking region, a precise tracking method based on position-based visual servoing (PBVS) brings the probe image to the central region of the CCD plane. Experiments demonstrate the effectiveness of this tracking method.
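A minimal sketch of the logical-partition decision is given below; the region radii, the proportional gain of the fast correction, and the hand-off to the PBVS tracker are illustrative assumptions rather than the thesis's parameters.

```python
# A minimal sketch of the image-plane logical partition: decide, from the
# centroid of the probe's feature image points, whether to hold, hand over to
# the precise PBVS tracker, or apply a fast centroid-error correction.
import numpy as np

def plan_tracking(centroid, image_size, r_ideal=0.15, r_track=0.35, gain=0.002):
    """Return (mode, pan_correction, tilt_correction); corrections in radians (illustrative)."""
    w, h = image_size
    err = np.array([centroid[0] - w / 2.0, centroid[1] - h / 2.0])
    r = np.linalg.norm(err / np.array([w / 2.0, h / 2.0]))  # normalised offset from centre
    if r <= r_ideal:
        return "ideal", 0.0, 0.0    # already in the ideal imaging region: hold
    if r <= r_track:
        return "pbvs", 0.0, 0.0     # tracking region: PBVS controller computes the command
    # search region: fast proportional correction from the pixel error alone
    return "fast", -gain * err[0], -gain * err[1]
```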
     In-situ calibration of the camera pose relative to the rotary table is one of the key steps in guaranteeing the stitching accuracy of the measurement data of the active-vision coordinate measurement system. A two-step calibration method is proposed that combines a linear algorithm based on the matrix direct (Kronecker) product with nonlinear programming based on repeated measurement of a fixed point. To suppress the influence of measurement noise, the quaternion method is used to orthogonalize the solved rotation matrix, and a new translation vector is then computed from it. Experiments show that the calibration method achieves high accuracy.
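The orthogonalization step can be sketched as follows: the noisy, linearly estimated 3x3 matrix is projected onto the nearest proper rotation by building the symmetric 4x4 matrix whose top eigenvector is the optimal unit quaternion. This is a generic quaternion-based projection, not necessarily the thesis's exact formulation, and the subsequent re-computation of the translation vector is omitted.

```python
# A minimal sketch of quaternion-based orthogonalization of a noisy rotation estimate.
import numpy as np

def quat_to_rot(q):
    """Standard unit-quaternion (w, x, y, z) to rotation-matrix conversion."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def nearest_rotation(M):
    """Project a noisy 3x3 matrix onto the nearest proper rotation (Frobenius norm).

    Builds the symmetric 4x4 matrix N with q^T N q = trace(R(q)^T M); the unit
    quaternion maximising this form is the eigenvector of N with the largest eigenvalue.
    """
    m = np.asarray(M, dtype=float)
    N = np.array([
        [m[0,0]+m[1,1]+m[2,2], m[2,1]-m[1,2],         m[0,2]-m[2,0],         m[1,0]-m[0,1]],
        [m[2,1]-m[1,2],        m[0,0]-m[1,1]-m[2,2],  m[0,1]+m[1,0],         m[0,2]+m[2,0]],
        [m[0,2]-m[2,0],        m[0,1]+m[1,0],        -m[0,0]+m[1,1]-m[2,2],  m[1,2]+m[2,1]],
        [m[1,0]-m[0,1],        m[0,2]+m[2,0],         m[1,2]+m[2,1],        -m[0,0]-m[1,1]+m[2,2]],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    q = eigvecs[:, np.argmax(eigvals)]
    return quat_to_rot(q / np.linalg.norm(q))
```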
     Finally, the active-vision large-space coordinate measurement system was tested comprehensively. Compared with the fixed single-camera coordinate measurement system, the single-point measurement repeatability is at the same level, while the accuracy of spatial dimension measurement is improved by a factor of 2 to 3, verifying the effectiveness of the active-vision coordinate measurement system for extending the measurement range and improving large-space measurement accuracy.
