Road Modeling for Omnidirectional Multi-camera Vision-Based Navigation
Abstract
Vision-based navigation is an important navigation mode for autonomous land vehicles (ALVs). Ordinary vision sensors, however, have a narrow field of view and cannot perceive the global structure of the road, particularly at intersections, which limits vision-based navigation.
     An omnidirectional multi-camera system (OMS) offers a large field of view, high resolution, and little distortion. Taking an OMS as the navigation sensor, and using the Radon transform, Gaussian mixture models, Markov random fields, and graph cuts as the mathematical tools, this dissertation focuses on intersection detection, intersection structure estimation and modeling, turning reference-path planning, and turning speed control in ALV visual navigation.
     The main contents and contributions of this dissertation are as follows:
     1. An algorithm for calibrating the extrinsic parameters of an omnidirectional multi-camera system. Extrinsic calibration establishes the pose relationship between the navigation system and its environment. The algorithm first computes an initial estimate of the extrinsic parameters and then refines it iteratively with the Levenberg-Marquardt algorithm to obtain an accurate result. The validity and stability of the algorithm are also analyzed.
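The two-stage scheme above (a closed-form initial guess followed by iterative non-linear refinement) can be sketched with SciPy's Levenberg-Marquardt solver. The pinhole projection, the synthetic board points, and all numeric constants below are illustrative assumptions, not the dissertation's actual OMS model:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(params, pts3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project 3D points with extrinsics packed as [rvec | t] (toy pinhole)."""
    R, t = rodrigues(params[:3]), params[3:]
    cam = pts3d @ R.T + t
    return np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                            fy * cam[:, 1] / cam[:, 2] + cy))

def residuals(params, pts3d, obs2d):
    """Reprojection error, flattened for least_squares."""
    return (project(params, pts3d) - obs2d).ravel()

# Synthetic ground truth: known calibration points and their images.
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 5.0])
true = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.5])
obs2d = project(true, pts3d)

# Step 1: coarse initial guess; step 2: Levenberg-Marquardt refinement.
init = np.zeros(6)
sol = least_squares(residuals, init, args=(pts3d, obs2d), method='lm')
print(sol.x)  # refined extrinsic parameters
```

In the dissertation the initial guess comes from a dedicated closed-form step rather than zeros; the sketch only shows the refinement stage's shape.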
     2. A road detection algorithm based on a Gaussian mixture model (GMM) and a Markov random field (MRF). Previous probabilistic and learning-based road detection methods model inter-frame dependencies but ignore the spatial relationships between pixels, which often leads to over-segmentation. The proposed algorithm casts road segmentation as an optimal binary labeling of the image pixels: GMM parameters are first learned to obtain the probability density field of each label over the image, spatial relationships are then modeled with an MRF, and the optimal road/non-road labeling is computed with graph cuts.
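The labeling pipeline above can be sketched as follows. A single Gaussian per class stands in for the full GMM, and simple iterated local updates stand in for graph cuts (which the dissertation uses to obtain the exact optimum); the image, seed regions, and parameters are synthetic:

```python
import numpy as np

def gauss_nll(img, mean, var):
    """Per-pixel negative log-likelihood under an isotropic Gaussian,
    a one-component stand-in for the full GMM data term."""
    return ((img - mean) ** 2).sum(axis=-1) / (2 * var) + 1.5 * np.log(var)

def segment_road(img, road_seed, bg_seed, beta=2.0, iters=5):
    """Binary road/non-road labeling: Gaussian data term plus a Potts
    MRF smoothness prior, minimized here by iterated local updates."""
    m_r, m_b = img[road_seed].mean(axis=0), img[bg_seed].mean(axis=0)
    v_r, v_b = img[road_seed].var() + 1e-6, img[bg_seed].var() + 1e-6
    # d[c] = data cost of assigning label c (0 = non-road, 1 = road).
    d = np.stack([gauss_nll(img, m_b, v_b), gauss_nll(img, m_r, v_r)])
    lab = d.argmin(axis=0)
    for _ in range(iters):
        pad = np.pad(lab, 1, mode='edge')
        e = np.empty_like(d)
        for c in (0, 1):
            # Potts penalty: beta per 4-neighbour disagreeing with c.
            disagree = ((pad[:-2, 1:-1] != c).astype(float) +
                        (pad[2:, 1:-1] != c) +
                        (pad[1:-1, :-2] != c) + (pad[1:-1, 2:] != c))
            e[c] = d[c] + beta * disagree
        lab = e.argmin(axis=0)
    return lab

# Synthetic scene: green verge on top, grey road below, plus noise.
rng = np.random.default_rng(1)
img = np.empty((40, 40, 3))
img[:20] = (0.1, 0.5, 0.1)
img[20:] = (0.5, 0.5, 0.5)
img += rng.normal(0.0, 0.05, img.shape)
road_seed = np.zeros((40, 40), bool); road_seed[35:] = True
bg_seed = np.zeros((40, 40), bool); bg_seed[:5] = True
lab = segment_road(img, road_seed, bg_seed)
```

The MRF smoothness term is exactly what suppresses the over-segmentation the paragraph describes: isolated noisy pixels pay a Potts penalty for disagreeing with their neighbours.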
     3. A parametric model and data structure for road intersections, together with a regression algorithm for fitting the model. Autonomous navigation through an intersection typically requires describing and storing the intersection's shape and structure. Non-parametric descriptions are data-heavy and awkward to process and store, so this dissertation proposes a parametric intersection model. The model describes an intersection with a road boundary ("red line") model and an inner envelope model, and is both general and flexible: the boundary-line model captures the intersection's edge information and preserves its structure and shape as faithfully as possible, while the inner envelope model delimits the region in which the vehicle can travel safely and is used mainly for navigation and path planning.
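A minimal sketch of such a parametric intersection data structure, assuming hypothetical field names and an illustrative corner-radius formula; the dissertation's actual red-line and inner-envelope regression is not reproduced here:

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import math

@dataclass
class Branch:
    """One road arm of an intersection (all names are illustrative)."""
    heading: float  # direction of the arm's centreline, radians
    width: float    # road width, metres

@dataclass
class Intersection:
    center: Tuple[float, float]
    branches: List[Branch] = field(default_factory=list)

    @property
    def kind(self) -> str:
        """Classify by arm count: T/Y-junction, crossroads, or multi-way."""
        n = len(self.branches)
        return {3: "T/Y-junction", 4: "crossroads"}.get(n, f"{n}-way")

    def inner_envelope_radius(self, i: int, j: int) -> float:
        """Corner radius of the inner envelope between arms i and j: a
        placeholder formula (half-width over tan of the half-angle)."""
        dth = abs(self.branches[i].heading - self.branches[j].heading)
        dth = min(dth % (2 * math.pi), 2 * math.pi - dth % (2 * math.pi))
        half = (self.branches[i].width + self.branches[j].width) / 4
        return half / max(math.tan(dth / 2), 1e-6)

# A symmetric four-arm crossroads with 7 m wide roads.
x = Intersection((0.0, 0.0),
                 [Branch(0.0, 7.0), Branch(math.pi / 2, 7.0),
                  Branch(math.pi, 7.0), Branch(3 * math.pi / 2, 7.0)])
print(x.kind)
```

The point of the parametric form is visible even in this toy: a handful of floats per arm replaces a dense, non-parametric boundary description, which is what makes storage and downstream path planning cheap.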
     4. A Radon-transform-based intersection detection and recognition algorithm, and, on top of the parametric intersection model, planning of the turning reference path and control of the turning speed. Detecting and recognizing intersections is a key problem in autonomous intersection navigation, since it determines the navigation system's basic judgment of the road structure. Among existing algorithms, some detect only one particular type of intersection and others rest on strong preconditions or assumptions, both of which limit their applicability. The proposed Radon-transform-based algorithm detects multiple types of intersections and recognizes their structure and type. After the parametric model of the intersection is built, the reference trajectory and reference radius of the vehicle's turn are computed, and a speed-control algorithm yields the speed limit to be respected during the turn.
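The direction-finding idea behind a Radon-based intersection detector can be illustrated with a crude discrete Radon transform: project a binary road mask along a sweep of angles and keep the angles whose projection concentrates the mask's mass. The synthetic crossroads and the 0.8 threshold below are assumptions for illustration, not the dissertation's actual detector:

```python
import numpy as np
from scipy.ndimage import rotate

def arm_directions(mask, angle_step=5, frac=0.8):
    """Poor-man's Radon transform over a binary road mask: for each
    projection angle, rotate the mask and sum along columns. A road arm
    aligned with the projection direction yields a projection peak close
    to the full image height; off-axis angles smear the mass out."""
    angles = np.arange(0, 180, angle_step)
    peaks = np.empty(len(angles))
    for k, a in enumerate(angles):
        proj = rotate(mask, a, reshape=False, order=1).sum(axis=0)
        peaks[k] = proj.max()
    return angles[peaks > frac * mask.shape[0]]

# Synthetic crossroads: one horizontal and one vertical road, 5 px wide.
mask = np.zeros((101, 101))
mask[48:53, :] = 1.0
mask[:, 48:53] = 1.0
arms = arm_directions(mask)
print(arms)  # the two arm directions of the crossroads, in degrees
```

Counting the detected directions (two here, i.e. four arms) is one simple route from the Radon peaks to the intersection's type, in the spirit of the recognition step described above.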
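The turning speed limit in point 4 is commonly derived from the lateral-acceleration constraint v²/R ≤ μg. A minimal sketch, assuming illustrative friction and cap values rather than the dissertation's actual controller:

```python
import math

def turn_speed_limit(radius_m, mu=0.4, g=9.8, v_cap=8.0):
    """Speed limit (m/s) for a turn of the given radius: lateral
    acceleration v^2 / R must stay below the friction limit mu * g,
    additionally capped by a fixed comfort limit (mu and v_cap are
    illustrative constants)."""
    return min(math.sqrt(mu * g * radius_m), v_cap)

# A tight 10 m corner versus a gentle 100 m curve:
print(turn_speed_limit(10.0), turn_speed_limit(100.0))
```

The reference radius fed into this bound would come from the planned turning trajectory over the parametric intersection model.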
