Research on the Design of a Compound Eye System for Three-Dimensional Target Detection
Abstract
Non-contact positioning has unique advantages over contact-based positioning, and vision-based three-dimensional target localization, as one of the principal means of non-contact measurement, is finding ever wider application in industry and daily life as well as in national defense, aerospace, and other fields. The rapid development of computer and information technology has also profoundly influenced optical system design. From highway video surveillance, 360-degree residential security monitoring, virtual reality, virtual tourism, corporate video conferencing, and robot navigation, to the production of the increasingly popular ultra-large, ultra-wide-screen films (such as IMAX 3D and the China Film Giant Screen), to the wide-field imaging devices carried by the Chang'e lunar lander and the Yutu rover, and to the wide-field imaging and detection payloads widely used on micro unmanned reconnaissance aircraft, wide-field imaging together with three-dimensional target detection and reconstruction forms an important research topic whose range of applications keeps expanding.
Many kinds of wide-field imaging devices already exist. A fisheye lens captures a large field of view, but its images suffer from pronounced radial distortion, and even after the distortion is corrected by digital image processing the resolution remains low; a typical wide-angle lens, for its part, needs seven or more groups of spherical or aspherical elements to correct its aberrations. The insect compound eye, by contrast, offers unique advantages: small volume, compact structure, large field of view, and fast response to moving objects. It has therefore attracted growing attention from researchers, and the artificial bio-inspired compound eye is an excellent route to distortion-free wide-field imaging. Moreover, because a compound eye is a multi-aperture vision system, it is also capable of three-dimensional measurement and reconstruction. Although there is extensive work at home and abroad on artificial compound eyes for wide-field imaging, their application to three-dimensional measurement of objects over a large field of view is rare. This thesis therefore focuses on multi-aperture compound-eye imaging for spatial three-dimensional measurement and for three-dimensional reconstruction of objects in space.
Building on previous work, this thesis proposes a structurally simple and practical compound eye system for spatial three-dimensional measurement and, taking this system as the research object, investigates the optical design of the sub-eye microlenses, a large-area-array image acquisition system, the layout of the sub-eye channels on the curved compound-eye surface, single sub-eye calibration, calibration of the system as a whole, the principle and method of locating luminous points in space, surface-shape measurement of extended objects, and a dual compound eye system. The specific contributions are as follows:
(1) Starting from a compound eye composed of a traditional ball-lens microlens array and a large meniscus lens, a logarithmic axicon is proposed to replace the ball lens. Its extended depth of focus effectively compensates for the defocus and image blur caused by the differing distances from the curved array of sub-eye channels to the image sensor. An image-enhancement algorithm is applied to brighten the spot center, which improves the accuracy of spot-center extraction and hence the measurement accuracy;
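The sub-pixel spot-center extraction that this item depends on can be illustrated with a standard intensity-weighted centroid. This is a minimal sketch: the function name, the noise threshold, and the synthetic Gaussian spot are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def spot_center(img, threshold=0.2):
    """Sub-pixel spot center via intensity-weighted centroid.

    img: 2-D array of non-negative intensities containing one spot.
    threshold: fraction of the peak below which pixels are zeroed,
    suppressing background noise before the centroid is taken.
    """
    img = np.asarray(img, dtype=float)
    w = np.where(img >= threshold * img.max(), img, 0.0)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Synthetic Gaussian spot centered off-grid at (x, y) = (12.3, 7.6)
ys, xs = np.mgrid[0:24, 0:24]
img = np.exp(-((xs - 12.3) ** 2 + (ys - 7.6) ** 2) / (2 * 2.0 ** 2))
cx, cy = spot_center(img)
```

Brightening the spot center, as the thesis does, raises the signal-to-noise ratio of the weights near the peak, which is exactly where this estimator draws most of its precision.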
(2) The distribution of the sub-eye channels on the curved surface is further optimized with a scheme based on subdividing a regular polyhedron (the icosahedron), and a deformation coefficient introduced under reasonable assumptions makes the inter-lenslet spacing more uniform and the packing density higher. This increases the equivalent clear aperture and thus indirectly improves image quality; in addition, the more uniform layout allows more sub-eye channels to be placed for the same overall compound-eye size and the same lenslet size;
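The polyhedron-subdivision idea can be sketched as follows: each triangular face of an icosahedron is split at its edge midpoints, the new vertices are pushed back onto the sphere, and the resulting vertices serve as quasi-uniform candidate lenslet sites. This is only the base construction; the thesis's deformation coefficient, which further evens out the spacing, is omitted here.

```python
import numpy as np

def icosphere_points(n_sub=2):
    """Quasi-uniform points on the unit sphere by icosahedron subdivision.

    Each face is split into four at its edge midpoints, midpoints are
    re-projected onto the sphere, and this is repeated n_sub times.
    The vertex count is 10 * 4**n_sub + 2.
    """
    t = (1 + 5 ** 0.5) / 2
    v = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
         (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
         (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [np.array(p) / np.linalg.norm(p) for p in v]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    tris = [tuple(verts[i] for i in f) for f in faces]
    for _ in range(n_sub):
        new = []
        for a, b, c in tris:
            ab = (a + b); ab /= np.linalg.norm(ab)
            bc = (b + c); bc /= np.linalg.norm(bc)
            ca = (c + a); ca /= np.linalg.norm(ca)
            new += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        tris = new
    # Deduplicate vertices shared between adjacent triangles
    pts = {tuple(np.round(p, 9)) for tri in tris for p in tri}
    return np.array(sorted(pts))

pts = icosphere_points(2)  # 10 * 4**2 + 2 = 162 candidate lenslet sites
```

Because the subdivision inherits the icosahedron's symmetry, the sites are far more evenly spaced than a latitude-longitude grid, which is what raises the achievable packing density.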
(3) After a survey and analysis of camera calibration methods, and taking into account the actual configuration of the compound eye system, in particular the arrangement of the sub-eye microlenses and the influence of the large meniscus lens, a single sub-eye calibration method using a crossed sinusoidal-grating target and two-dimensional phase-jump detection is proposed, providing a reliable basis for calibrating the compound eye as a whole;
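A sinusoidal-grating target of this kind is normally decoded with the standard N-step phase-shifting formula before phase discontinuities are detected. The sketch below shows only that wrapped-phase recovery step, on synthetic one-directional fringes; the crossed-grating and phase-jump detection stages of the thesis are not reproduced.

```python
import numpy as np

def wrapped_phase(frames):
    """Wrapped phase from N phase-shifted sinusoidal fringe images.

    frames[n] is assumed to be A + B*cos(phi + 2*pi*n/N); the usual
    N-step formula recovers phi wrapped to (-pi, pi].
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(deltas), frames, axes=1)  # sum_n I_n sin(d_n)
    c = np.tensordot(np.cos(deltas), frames, axes=1)  # sum_n I_n cos(d_n)
    return np.arctan2(-s, c)

# Synthetic 4-step vertical fringes, three periods across a 32x32 target
x = np.linspace(0.0, 3 * 2 * np.pi, 32)
phi = np.tile(x, (32, 1))
frames = [100 + 50 * np.cos(phi + 2 * np.pi * k / 4) for k in range(4)]
ph = wrapped_phase(frames)
```

The recovered phase varies continuously across the target, so its 2-D discontinuities give sub-pixel feature points, which is what makes such targets attractive for calibrating a system without a single shared optical center.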
(4) By analyzing the principle of compound eye three-dimensional localization, and comparing methods that determine the incident-ray direction of a luminous point from target planes against those based on the centers of the individual sub-eye microlenses, a multi-position calibration scheme for the system as a whole is proposed. The scheme rests on light-field theory and is not bound by the traditional calibration assumption of a single shared optical axis and optical center. An experimental platform was built in which a large LCD monitor, held perpendicular to the compound eye, serves as a movable target; translating it along a precision rail establishes, in several planes, the mapping between spatial points and image spot centers. The line through the calibrated points in these planes then gives the equation of the incident ray for a luminous point, yielding a more accurate mapping between incident rays and spot centers for the subsequent three-dimensional localization task;
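Fitting the incident ray through the calibrated points on the successive target planes is a plain 3-D line fit. A minimal sketch, assuming the points for one pixel are already known at each plane position (the plane spacings, noise level, and ray used below are illustrative):

```python
import numpy as np

def fit_ray(points):
    """Best-fit 3-D line through calibration points recorded on several
    target planes: returns (origin, unit direction) from an SVD/PCA fit."""
    pts = np.asarray(points, dtype=float)
    o = pts.mean(axis=0)                      # a point on the fitted line
    _, _, vt = np.linalg.svd(pts - o)
    d = vt[0]                                 # principal direction
    return o, d / np.linalg.norm(d)

# Points one pixel maps to on target planes at z = 0, 50, 100 mm,
# perturbed by a little measurement noise
z = np.array([0.0, 50.0, 100.0])
true_dir = np.array([0.1, -0.05, 1.0])
true_dir = true_dir / np.linalg.norm(true_dir)
pts = np.array([5.0, 2.0, 0.0]) + np.outer(z, true_dir)
noise = 1e-3 * np.random.default_rng(0).normal(size=pts.shape)
o, d = fit_ray(pts + noise)
```

Using more than two plane positions averages down the per-plane localization noise, which is the practical benefit of the multi-position scheme over a two-plane construction.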
(5) By analyzing compound eye imaging and compound eye three-dimensional localization as mutually inverse processes, the basic principle of compound eye three-dimensional localization and a multi-channel identification algorithm are presented: the spot centers formed simultaneously by several sub-eyes determine multiple incident rays, and a least-squares fit of their intersection yields the three-dimensional position of the point. Building on this point detection, combining the system with a two-dimensional laser scanner enables three-dimensional surface reconstruction of extended objects;
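The least-squares intersection of the incident rays has a closed form: the point minimizing the summed squared distances to lines o_i + t*d_i satisfies the normal equations built from the projectors onto each ray's orthogonal complement. A self-contained sketch (the three example rays are illustrative):

```python
import numpy as np

def intersect_rays(origins, dirs):
    """Least-squares intersection of rays o_i + t*d_i: the point
    minimizing the sum of squared perpendicular distances to all lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(dirs, float)):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Three sub-eye rays that all pass through the point (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
origins = np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0]], dtype=float)
dirs = target - origins
p = intersect_rays(origins, dirs)
```

With noisy real rays the lines are skew rather than concurrent, and the same solve returns the point of minimum total perpendicular distance; more identified channels simply add more rows of constraint.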
(6) Because the baselines between sub-eyes are short, the compound eye measurement system achieves high lateral (angular) resolution, up to 0.003 rad, but its longitudinal resolution is poor, with errors as large as 3 mm. To address this, a dual compound eye system is used for three-dimensional detection of objects in space, giving the effect of an equivalently long baseline and markedly improving the longitudinal resolution, to about 0.7 mm.
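Why a longer equivalent baseline helps can be seen from the standard triangulation estimate dz ≈ z² · dθ / B: for fixed range z and angular resolution dθ, depth uncertainty falls in direct proportion to the baseline B. The numbers below are assumed purely for illustration, not measured values from this work.

```python
def depth_resolution(z, dtheta, baseline):
    """Approximate depth uncertainty of two-view triangulation at range z,
    for angular resolution dtheta (rad) and baseline in the same units as z."""
    return z ** 2 * dtheta / baseline

# Illustrative values: same range and angular resolution, two baselines
dz_single = depth_resolution(0.5, 1e-4, 0.005)  # short intra-eye baseline
dz_dual = depth_resolution(0.5, 1e-4, 0.05)     # 10x longer dual-eye baseline
ratio = dz_single / dz_dual                     # baseline gain maps 1:1 to depth gain
```

This proportionality is consistent with the reported improvement from millimeter-level to sub-millimeter longitudinal resolution once the two compound eyes provide a much longer effective baseline.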
