State Detection of Underwater Circular Targets Based on Monocular Laser Imaging
Abstract
Underwater laser imaging is widely used in ocean, river and lake exploration, which makes vision-based detection of underwater targets possible. Using the mapping between two-dimensional image information and a three-dimensional target in space, the orientation and position of the target relative to the camera can be recovered, and from them the target's heading, speed and trajectory can be derived. The circle is one of the most common features carried by targets, and its localization has long been a research focus. In underwater environments, however, refraction and scattering of light cause image distortion and degradation, and few researchers have studied the localization of underwater circular targets. Based on continuous underwater laser imaging, this thesis establishes a monocular vision imaging model for underwater circular targets and achieves three-dimensional localization of a target from a single image of its circular feature. The main research contents and results are as follows:
     (1) The monocular vision imaging model for circular features in air was verified by imaging a cylindrical target 500 mm in diameter under indoor natural lighting; the target distance was recovered from single images with a relative error stable at about 2%. Building on this in-air model, and considering that underwater cameras are usually sealed in watertight housings, a monocular vision imaging model for underwater circular features is proposed. The model compensates for the image distortion caused by light refraction, and decomposes the localization problem into solving the equation of the viewing cone and a coordinate-system transformation;
     (2) Based on the monocular imaging model for underwater circular features, an orientation-detection algorithm for circular targets is derived using spatial geometry. Every step of the algorithm is based on geometric constraints and yields a closed-form solution without iterative approximation; therefore, as long as the image-plane ellipse equation is exact, the target pose can be recovered without error, which is verified by numerical simulation. The algorithm is limited to cases where the angle between the circular-feature plane and the camera plane is within 15°. 3DsMax simulation experiments show that, for a cylindrical target at different underwater depths from 0.5 m to 10 m, the error of the computed circle normal is below 1° and the relative position error is below 3%;
     (3) The accuracy of target localization is directly determined by the accuracy of image-plane ellipse extraction. Building on traditional methods, this thesis proposes an automatic ellipse-detection algorithm jointly constrained by curvature and distance. Numerical simulations show that the algorithm is insensitive to the iteration's initial value and converges quickly. Synthetic-image experiments show recognition rates above 80% for closed contours and above 90% for overlapping contours, and the algorithm is insensitive to the degree of ellipse overlap. Whether judged on given point sets, synthetic images or real images, the algorithm outperforms traditional methods in detection accuracy, recall and fitting precision, and is not limited by the number of edge pixels, whereas the precision of traditional methods drops noticeably when more than 50 pixels are involved;
     (4) Preprocessing methods for underwater laser images are studied on the basis of an image-degradation model that accounts for forward and backward scattering. Homomorphic filtering removes multiplicative noise, and median filtering is very effective against speckle noise; combining the two filters improves the quality of underwater images. Subjectively, Canny edge-detection results show that the influence of noise on image edges is reduced and the elliptical feature on the cylindrical target is well preserved. Objectively, the PSNR and NMSE metrics show that this preprocessing method outperforms traditional mean filtering and Wiener filtering in both noise reduction and preservation of image detail;
     (5) For a practical water-area monitoring problem, an underwater circular-target localization system is designed, including the overall framework and the detailed design of each module. Solutions are given for engineering problems encountered during implementation, such as camera calibration, blind zones in the field of view, and deformation of the mounting plate. The system is then simulated with the virtual-reality capability of 3DsMax, demonstrating its feasibility, effectiveness and localization accuracy under ideal conditions. Finally, an experimental system was built in a towing tank, where a cylindrical target 320 mm in diameter and 1.6 m high was localized at different positions and orientations, and the image-preprocessing, ellipse-detection and circular-target localization methods were comprehensively tested and analyzed.
Underwater laser imaging is widely used in ocean, river and lake exploration, which makes vision-based object localization possible. Using the mapping relationship between the 3D object and its 2D image, the orientation and position of the object with respect to the camera can be calculated. Localization of circular features, which are among the most common features, has long been a research hotspot. However, few researchers have addressed the localization of underwater circular features (UCFs), since image distortion and degradation occur in the underwater environment. Using underwater continuous laser imaging, this thesis establishes a monocular imaging model for UCFs and accomplishes object localization from only a single image of the UCF. The main research achievements are as follows:
     Firstly, using images of a cylindrical object with a diameter of 500 mm in an indoor natural-lighting environment, the monocular imaging model for circular features in air is verified: the distance from the object to the camera is obtained with an average relative error of about 2%. Based on this in-air vision model, a monocular imaging model for UCFs is proposed. The model compensates for the image distortion caused by light refraction, and decomposes the object localization problem into solving the equation of the constructed viewing cone and a coordinate-system transformation.
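     As a point of reference for the refraction compensation mentioned above, a minimal flat-port model (a common simplification in which the pinhole is assumed to sit at a thin, flat window; this is illustrative and not necessarily the exact compensation used in the thesis) corrects each image point by Snell's law. A pixel at radial distance $r$ from the principal point corresponds to an in-air angle $\theta_a = \arctan(r/f)$ for focal length $f$; the true in-water ray angle follows from
          $n_a \sin\theta_a = n_w \sin\theta_w \;\Rightarrow\; \theta_w = \arcsin\!\left(\tfrac{n_a}{n_w}\sin\theta_a\right),$
and the refraction-corrected radius is $r' = f\tan\theta_w$. With $n_a \approx 1.00$ and $n_w \approx 1.33$, image points are pulled radially inward by roughly the factor $n_a/n_w$ near the optical axis.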
     Secondly, based on the monocular imaging model for UCFs, an algorithm for computing the orientation and center coordinates of the UCF is derived. It rests on geometric constraints, provides a closed-form solution, and involves no iterative approximation; therefore the localization result is exact as long as the image ellipse equation is exact, which is confirmed by numerical simulation. The algorithm requires that the angle between the circular-feature plane and the CCD plane be within 15°. 3DsMax simulation experiments show that the orientation-detection error is less than 1° and the relative position error is lower than 3%.
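     For the constructed cone in the model above, one standard way to write it down (a sketch in illustrative notation; the thesis's own derivation may differ in detail): if the image ellipse is $au^2 + buv + cv^2 + du + ev + g = 0$ in image-plane coordinates and the camera is a pinhole with focal length $f$, substituting the projection $u = fX/Z$, $v = fY/Z$ and clearing denominators gives the cone of sight through the camera center,
          $\mathbf{X}^{\top} Q\, \mathbf{X} = 0, \qquad Q = \begin{pmatrix} a & b/2 & d/(2f) \\ b/2 & c & e/(2f) \\ d/(2f) & e/(2f) & g/f^{2} \end{pmatrix}, \qquad \mathbf{X} = (X, Y, Z)^{\top}.$
An eigendecomposition of $Q$ then identifies the orientations of planes that intersect this cone in a circle; combined with the known circle radius, this fixes the circle's normal vector and center coordinates in closed form, up to the usual two-fold ambiguity.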
     Thirdly, the accuracy of localization is directly determined by image ellipse detection. Building on traditional methods, a novel automatic ellipse-detection algorithm constrained jointly by edge curvature and orthogonal distance is proposed. Numerical results show that the algorithm is not affected by the iterative initial value and converges quickly. Synthetic-image experiments show good performance on both occluded and overlapping ellipses. The results on point sets, synthetic images and real images all show that the detection accuracy, recall and fitting precision of the algorithm are superior to those of traditional methods. Moreover, the fitting result is not sensitive to the number of edge pixels, whereas the precision of traditional methods declines dramatically with more than 50 pixels.
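     For comparison, one widely used traditional fit is the direct least-squares ellipse fit (Fitzgibbon's method in the numerically stable Halir-Flusser form); the sketch below is not the thesis's curvature/distance-constrained detector and not necessarily its exact baseline, and it assumes the edge-pixel coordinates x, y have already been extracted:

import numpy as np

def fit_ellipse_direct(x, y):
    # Direct least-squares conic fit constrained to be an ellipse.
    # Returns coefficients (a, b, c, d, e, g) of a*x^2 + b*x*y + c*y^2 + d*x + e*y + g = 0.
    x = np.asarray(x, float); y = np.asarray(y, float)
    D1 = np.column_stack([x * x, x * y, y * y])        # quadratic part
    D2 = np.column_stack([x, y, np.ones_like(x)])      # linear part
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)
    M = np.linalg.solve(np.array([[0., 0., 2.], [0., -1., 0.], [2., 0., 0.]]),
                        S1 + S2 @ T)
    w, v = np.linalg.eig(M)
    v = np.real(v)
    cond = 4.0 * v[0] * v[2] - v[1] ** 2               # ellipse constraint 4ac - b^2 > 0
    a1 = v[:, cond > 0][:, 0]
    return np.concatenate([a1, T @ a1])

Because the fit is closed-form, it needs no initial guess, which is why fits of this kind are common building blocks inside automatic ellipse detectors.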
     Fourthly, based on the image-degradation model of forward and backward scattering, a preprocessing method for underwater laser images is studied. Homomorphic filtering can eliminate multiplicative noise, while median filtering is very effective against speckle noise; combining the two filters improves the quality of underwater images. Subjectively, Canny edge-detection results show that the method reduces the influence of noise on image edges and effectively preserves the elliptical feature of the cylindrical object. Objectively, both PSNR and NMSE values show that the method is superior to traditional mean filtering and Wiener filtering in both noise reduction and detail fidelity.
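     A minimal sketch of the preprocessing chain described above, i.e. a homomorphic (Gaussian high-emphasis) filter followed by a median filter, using numpy and scipy; the cutoff d0, the gain limits gamma_l/gamma_h and the median window size are illustrative defaults, not the parameter values used in the thesis:

import numpy as np
from scipy.ndimage import median_filter

def preprocess(img, d0=30.0, gamma_l=0.5, gamma_h=2.0, win=3):
    img = np.asarray(img, np.float64)
    rows, cols = img.shape
    # log transform: multiplicative illumination/noise becomes additive
    F = np.fft.fftshift(np.fft.fft2(np.log1p(img)))
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # Gaussian high-emphasis transfer function
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * d0 ** 2))) + gamma_l
    homo = np.expm1(np.real(np.fft.ifft2(np.fft.ifftshift(H * F))))
    # median filter suppresses the remaining speckle noise
    return median_filter(homo, size=win)

def psnr(ref, out, peak=255.0):
    # peak signal-to-noise ratio used as an objective quality metric
    mse = np.mean((np.asarray(ref, float) - np.asarray(out, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)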
     Fifthly, for a practical water-area monitoring problem, a localization system for objects carrying UCFs is designed. Solutions are given for engineering problems such as camera calibration, camera blind zones, and deformation of the mounting plate. The feasibility, effectiveness and localization accuracy of the system are checked by virtual-reality simulation in 3DsMax. Finally, an experimental system is constructed in a towing tank to localize a cylindrical object with a diameter of 320 mm and a height of 1.6 m. The accuracy and stability of the image-preprocessing, ellipse-detection and object-localization methods are thoroughly analyzed by moving and relocating the object.
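     For the camera-calibration step listed among the engineering problems, a common route is Zhang's planar-target method as implemented in OpenCV; the sketch below is illustrative only (the chessboard size, square size and image path are assumptions, the thesis's procedure may differ, and the in-water housing would still require the refraction compensation discussed earlier):

import glob
import cv2
import numpy as np

pattern = (9, 6)                 # inner-corner grid of the chessboard (assumed)
square = 25.0                    # square edge length in mm (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for name in glob.glob("calib/*.png"):            # calibration images (assumed path)
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix K:\n", K)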
