Design and Key Technologies of a Novel Compound-Eye Positioning Device
Abstract
Visual positioning, as one of the important techniques in the field of target localization, is widely applied in industrial measurement, security surveillance, navigation, and military target positioning and tracking. Thanks to its curved-surface structure, its multi-channel imaging system, and its highly centralized neural processing of visual information, the natural compound eye can locate targets over a large field of view and capture and process moving targets with high sensitivity, which makes it well suited to the growing demand for positioning devices that are lightweight, compact, low-power, and highly reliable. Based on these characteristics, this thesis proposes a novel compound-eye system for target positioning over a large field of view, and discusses both the technique adopted to capture the entire large field of view and the solution for projecting the curved lens array onto the limited imaging area of a single CMOS sensor. After the installation requirements of the lens array are introduced, an installation platform is built and the lenses are mounted. Based on a survey of binocular-vision positioning techniques, a mathematical model suited to target positioning with this compound eye is established. The calibration of the multi-lens imaging distortion and the image-processing methods for recognizing target image points are then studied both theoretically and experimentally, and the algorithm for extracting the center of each image point is analyzed. Based on a partition of the lens array into cluster-eye units, a new algorithm is proposed to match each image point to its corresponding lens channel, completing the target-positioning task. Finally, the calibration performance of the compound-eye imaging system is evaluated and applications of the positioning system are studied in a preliminary way.
     The main research contents of this thesis are listed as follows:
     1. A novel compound-eye device for target positioning over a large field of view is designed. The device uses a lens array distributed over a curved surface to capture targets within a large field of view, and the arrangement of the sub-lenses is studied so as to achieve a high fill ratio while eliminating imaging blind regions as far as possible. An installation scheme for the lens array is designed and the sub-lenses are mounted. After studying the problem of projecting the curved lens array onto the limited imaging plane of a single CMOS sensor, together with the poor imaging quality (serious defocus and inclination) of the receiving lenses at the edge of the field, a refractive lens is inserted between the lens array and the CMOS plane. A large-area CMOS image sensor is selected and its driver developed; pixel data are acquired by an FPGA and uploaded to a PC over a USB 2.0 high-speed bus, where the images received through the compound-eye channels are displayed and further processed.
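The ring-by-ring curved-surface layout described above can be sketched numerically. The function below is an illustrative assumption (the thesis's exact packing rule, pitch, and lens count are not reproduced): it places one central sub-lens plus concentric rings of optical-axis directions spaced by a chosen angular pitch, with the number of lenses per ring chosen so the arc spacing stays close to that pitch.

```python
import math

def lens_directions(fov_deg, pitch_deg):
    """Illustrative curved-surface lens layout: returns a list of
    (polar angle, azimuth) pairs in degrees for the optical axes of
    the sub-lenses, one central lens plus concentric rings.
    The pitch-based packing rule is an assumption for illustration."""
    dirs = [(0.0, 0.0)]  # central sub-lens along the device axis
    n_rings = int(fov_deg / 2 / pitch_deg)
    for k in range(1, n_rings + 1):
        theta = k * pitch_deg  # polar angle of this ring
        # lenses per ring ~ ring circumference / angular pitch
        n = max(1, round(2 * math.pi * math.sin(math.radians(theta))
                         / math.radians(pitch_deg)))
        dirs += [(theta, i * 360.0 / n) for i in range(n)]
    return dirs
```

For example, a 90° full field of view with a 10° pitch yields a central lens plus four rings of 6, 12, 18, and 23 lenses.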
     2. After surveying current binocular-vision positioning techniques, the sub-lens coordinate systems of the compound eye and the world coordinate system are set up, and the mathematical model and calibration content for target positioning with the compound eye are established. To simplify the expression of the model, the ray path from the target point through a receiving lens and the refractive lens to the CMOS plane is split into two parts: a linear part from the target point to the center of the receiving sub-lens, and a nonlinear part from the sub-lens center through the refractive lens to the image point on the CMOS plane. Based on existing machine-vision calibration models, a calibration method is proposed that uses target points with known 3D coordinates to establish, for every lens, the correspondence between the incident-ray angles and the resulting image points.
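The linear part of the model reduces to computing the direction of the incident ray from known 3D coordinates. A minimal sketch under an assumed azimuth/elevation angle convention (the function name and convention are hypothetical, not the thesis's own notation):

```python
import math

def incident_angles(target, lens_center):
    """Azimuth and elevation (degrees) of the ray from a sub-lens
    center toward a target point, both given as (x, y, z) tuples.
    Azimuth is measured in the x-y plane from the x-axis; elevation
    from that plane toward z. Convention is an assumption."""
    dx = target[0] - lens_center[0]
    dy = target[1] - lens_center[1]
    dz = target[2] - lens_center[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.asin(dz / r))
    return azimuth, elevation
```

Calibration then amounts to tabulating, per lens, these angle pairs against the measured image-point coordinates; positioning inverts that table.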
     3. According to the calibration principle, a calibration scheme is designed and the calibration of the compound-eye imaging system is completed. A plasma television, a beam splitter, a semiconductor laser, a level, and a horizontal translation rail are selected as the experimental apparatus. The plasma television serves as the target plane. The parallelism between the compound-eye plane and the target plane is adjusted with the beam splitter; the perpendicularity between the translation axis and the target plane is set by keeping the laser spot coincident with a fixed point on the target plane as the plane is moved; and the rows and columns of the target plane are aligned with the axes of the compound-eye coordinate system using the level. On this basis, the intersection of the optical axis of the central lens with the target plane is located via the vanishing point, and the distance between the central lens and this foot point is obtained from different target points that share a common image point, so that the 3D world coordinates of all points on the target plane become known. After the incident-ray angles from each point to each receiving lens and the center coordinates of the corresponding image points are computed, a mapping between incident angles and image-point coordinates is built for every lens, completing the system calibration. During calibration, the image-processing method for extracting target image points is introduced, and the algorithms for locating the center of a target light spot and for image denoising are analyzed.
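The spot-center extraction step can be illustrated by a gray-weighted centroid taken after a simple global threshold. The mean-plus-two-sigma cut below is a stand-in assumption, not the thesis's chosen thresholding method (which it surveys separately):

```python
import numpy as np

def spot_center(img, thresh=None):
    """Gray-weighted centroid (x, y) in pixels of a bright target
    spot in a 2D intensity image. With no threshold given, pixels
    above mean + 2*std are kept (illustrative global cut).
    Returns None when no pixel exceeds the threshold."""
    img = np.asarray(img, dtype=float)
    if thresh is None:
        thresh = img.mean() + 2.0 * img.std()
    mask = img > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    w = img[ys, xs]  # intensities act as centroid weights
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```

In practice the thresholding and denoising steps analyzed in the thesis would precede this centroid, and sub-pixel fitting could replace it where higher accuracy is needed.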
     4. The key problems of three-dimensional target positioning with the compound-eye system, and a simple application, are studied. Common scattered-data interpolation methods, including Delaunay-triangulation-based interpolation, neural-network interpolation, and biharmonic-spline interpolation, are analyzed together with their accuracy characteristics. From the extracted center coordinates of the target image points, the incident angles of the corresponding channels are obtained by interpolating the calibration results. The lens array is divided into 169 cluster-eye units; for the group of image points produced by one target, the channels belonging to one cluster-eye unit are identified first, and a fluctuation error is then defined from the normalized differences among the 3D coordinates recovered in each trial match. This matches image points to their channels effectively, solving the key channel-identification problem in compound-eye 3D positioning. By moving the target plane to different positions and localizing target points, the calibration of the system is evaluated: the mean square error of the calibrated lens incident angles is about 0.02°, and the relative errors of the X, Y, and Z coordinates are about 2%. The influence of the spatial relationship between an interpolation point and the surrounding reference points on interpolation accuracy is analyzed, and only points lying inside the quadrilateral formed by four reference points are interpolated. Finally, an experimental scheme for object-contour measurement is designed and a platform built; parts in different fields of view are measured with the compound-eye positioning system and their three-dimensional contours are reconstructed.
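The fluctuation-error criterion can be sketched as follows: for a trial assignment of image points to channels, the back-projected rays are intersected pairwise and the normalized spread of the recovered 3D coordinates is taken as the error; the assignment with the smallest spread is accepted. The function names and the exact normalization below are assumptions for illustration:

```python
import itertools
import numpy as np

def two_ray_point(c1, d1, c2, d2):
    """Midpoint of the common perpendicular of two rays x = c + t*d
    (directions need not be unit; rays must not be parallel)."""
    c1, d1 = np.asarray(c1, float), np.asarray(d1, float)
    c2, d2 = np.asarray(c2, float), np.asarray(d2, float)
    n = np.cross(d1, d2)  # direction of the common perpendicular
    # Solve t1*d1 - t2*d2 + lam*n = c2 - c1 for (t1, t2, lam)
    A = np.stack([d1, -d2, n], axis=1)
    t1, t2, _ = np.linalg.solve(A, c2 - c1)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

def fluctuation_error(rays):
    """Fluctuation of the 3D coordinates recovered from every ray
    pair: standard deviation of the pairwise intersection points,
    normalized by the mean coordinate magnitude. Rays are
    (center, direction) pairs; a small value indicates the image
    points were matched to the correct channels."""
    pts = np.array([two_ray_point(*r1, *r2)
                    for r1, r2 in itertools.combinations(rays, 2)])
    mean = pts.mean(axis=0)
    return np.linalg.norm(pts.std(axis=0)) / max(np.linalg.norm(mean), 1e-12)
```

A correct channel assignment makes all rays nearly concurrent, so the fluctuation error collapses toward zero; a wrong assignment scatters the pairwise intersections and the error grows.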
