Research on Lane Marking Recognition Methods in the Visual Perception of Intelligent Vehicles
Abstract
This thesis presents a focused study of lane marking recognition on structured roads, a key technology in the visual perception system of intelligent vehicles. The goal is to develop a relatively complete set of algorithms for detecting the travelable area ahead, as delimited by lane markings, under most operating conditions on structured roads.
The main research contents are as follows. Given the complex structure and large information content of road images in lane marking recognition, an image binarization method is sought that effectively resists disturbances such as illumination changes, shadows, rain and snow on the road surface and highlights the lane markings. To address the problem that, when fitting lane markings on the basis of the binary image, simple mathematical models lack accuracy while complex models cannot meet real-time requirements, a lane marking fitting method is sought that balances accuracy and real-time performance. Finally, since mature methods for recognizing lane marking types are still rare in research at home and abroad, a practical lane marking type recognition method is sought.
The contributions of this thesis are: a window-wise adaptive image binarization method is proposed to resist the influence of local image information, such as illumination and shadows, on the binarization of road images; to meet the requirements of lane marking processing, two methods for determining lane marking regions of interest are proposed, WOI gray-image contrast screening and WOI sliding optimization, suited respectively to the lane searching and lane tracking stages of image processing; on the basis of the binarized road image, a method for recognizing the endpoints of dashed segments is proposed, which in turn enables recognition of the lane marking type; for locating lane markings in the image, a recursive half-dividing broken-line fitting algorithm is proposed to describe lane marking position, achieving good results in both accuracy and real-time performance; for lane changing conditions, a method is proposed that uses the driver's optimal preview acceleration model to infer the lane changing intention of the host vehicle, together with a three-line tracking method that tracks the lane markings smoothly during the change. Finally, all of the above algorithms are verified through road experiments with a real vehicle.
Taking a panoramic view of the history of vehicle development, vehicle safety has always been a top priority. With recent economic development and social progress, problems such as road capacity, traffic safety, energy consumption and environmental pollution have become increasingly severe. Traffic accidents are among the most hazardous issues facing modern society, so improving traffic safety is a social problem that must be solved urgently. The research and development of intelligent vehicles is a fundamental approach to reducing traffic accidents, improving transportation efficiency and relieving the driver's burden; it has attracted growing interest from research institutes and automobile manufacturers and has become one of the key technologies of the Intelligent Transportation System. The major characteristic of an intelligent vehicle is its ability to perceive, recognize, understand and adapt to the environment; it is an automatic driving and control system that integrates environment perception, planning, decision making, operation and control. Although various sensors can be used for environment perception, many years of research have shown computer vision to be the most effective sensing modality, because it provides rich information, works effectively and is comparatively cheap. Most intelligent vehicles in the world therefore adopt a technical route based mainly on computer vision. This work is part of the High-speed Vehicle Intelligent Assistant Driving System research project of the State Key Laboratory of Automobile Dynamic Simulation, Jilin University. In the context of applying computer vision to intelligent vehicles, the main task is lane detection on structured roads, a key technology of the visual perception system of the intelligent vehicle. The aim is to develop a complete travelable-area detection algorithm suitable for most structured road conditions.
Lane recognition normally consists of two parts: binary road image generation followed by lane fitting. The aim of binary road image generation is to extract the lane marking pixels from the image while excluding interference from other image content. The purpose of lane fitting is to design a curve (or a group of curves) that passes through the lane marking pixels, so that the lane position can be described in the road image. If necessary, the lane fitting results can also be projected back from the 2D image plane into 3D space using the camera model and the perspective projection relationship, and the projected results are then supplied to the decision model of the intelligent vehicle.
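As an illustration of this 2D-to-3D step, the following minimal sketch back-projects an image point onto a flat road plane under a simple pinhole camera model with known mounting height and pitch; the camera parameters and the flat-ground assumption are illustrative choices of this sketch, not values taken from the thesis.

```python
import numpy as np

def image_to_ground(u, v, f, cx, cy, cam_height, pitch):
    """Back-project an image point (u, v) onto a flat road plane.

    Assumes a pinhole camera mounted `cam_height` above the road and pitched
    down by `pitch` radians, with focal length `f` (pixels) and principal
    point (cx, cy).  Returns (X, Y): lateral offset and longitudinal distance
    on the ground, in the same units as cam_height.  All parameter values
    here are hypothetical.
    """
    xn = (u - cx) / f                       # normalized image coordinates
    yn = (v - cy) / f
    denom = np.cos(pitch) * yn + np.sin(pitch)
    if denom <= 1e-6:                       # ray points at or above the horizon
        return None
    t = cam_height / denom                  # distance along the viewing ray
    X = t * xn
    Y = t * (np.cos(pitch) - np.sin(pitch) * yn)
    return X, Y

# Example: a point near the bottom of a 640x480 image maps to the near road area.
print(image_to_ground(400, 400, f=700.0, cx=320.0, cy=240.0,
                      cam_height=1.2, pitch=np.radians(5.0)))
```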
The purpose of binary road image generation is to extract the target image from the forward road image captured by the on-board vision system, so that it can serve as the input of the subsequent decision and control models. Depending on the application, the target image may be the lane markings of a structured road, the edges of an unstructured road, other vehicles or pedestrians on the road, and so on. The difficulty lies in the complexity of the road image: besides the road and the lane markings it contains the sky, the surroundings and the buildings along the road; illumination and weather change; and there may be shadows, stains, standing water and packed snow on the road surface. Under these conditions it is hard to guarantee that the lane markings are preserved accurately in the binary image, so the stability of a global thresholding algorithm is poor. This thesis therefore proposes a window-wise adaptive binarization method. First, the road image is divided into a number of rectangular windows; then each window is binarized adaptively on its own, so that the influence of changing illumination and weather is resisted. To account for the strong perspective effect of the camera model, an improvement is then introduced: the division into rectangular windows is performed in 3D space and projected back onto the 2D image plane through the perspective projection relationship. The resulting windows are large near the vehicle and small in the distance, which matches the deformation rule of the main scenery in the image and reduces the probability of producing an unsatisfactory binary image. Finally, a binary image is generated in each rectangular window with the Otsu method, and the results are stitched together into an integrated binary image that serves as the input of the subsequent processing.
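A minimal sketch of the window-wise Otsu idea is given below. For simplicity it uses a uniform grid of windows, whereas the thesis lays the windows out in 3D space and projects them onto the image so that near windows are larger than far ones; the grid size is an illustrative assumption.

```python
import cv2
import numpy as np

def windowed_otsu(gray, rows=8, cols=6):
    """Window-wise adaptive binarization: apply Otsu's threshold inside each
    window and stitch the per-window results into one binary image."""
    h, w = gray.shape
    binary = np.zeros_like(gray)
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    for i in range(rows):
        for j in range(cols):
            win = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            if win.size == 0:
                continue
            # Otsu picks the threshold that maximizes between-class variance
            # using only the local gray-level histogram of this window.
            _, win_bin = cv2.threshold(win, 0, 255,
                                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            binary[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = win_bin
    return binary

# gray = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)
# lane_binary = windowed_otsu(gray)
```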
In a structured road image, lane markings exist only in certain regions. With the whole-image binarization method described above, it is still difficult to guarantee that the lane markings are preserved accurately in the binary image. If the physical and temporal continuity constraints of the image stream are used to locate the regions that contain, or neighbour, the lane markings, commonly called ROIs (Regions of Interest), then binarization can be restricted to these ROIs; the image processing workload is reduced, non-lane information is filtered out further, and the real-time performance, accuracy and anti-jamming capability of the binarization algorithm are improved. Building on the window-wise adaptive binarization method of the previous chapter, two methods are proposed for selecting a group of windows of interest (WOIs) that contain the lane markings, the ROI being formed by linking the WOIs in series. The two methods correspond to the two phases of lane recognition. In the first method, the complexity of the image inside a WOI is measured by the contrast of its gray-level image, which is used to predict whether the WOI contains a lane marking and hence whether it should be kept. This makes the binarization result more adaptive, robust and accurate, and the method fits the lane searching phase, in which no lane information is available from the previous frame, the whole road image must be processed, and the accuracy of the image processing algorithm must be guaranteed. In the second method, the initial positions of the WOIs are fixed from the lane recognition result of the previous frame, and the best positions are then obtained by sliding each WOI according to a simple rule. This reduces both the workload and the difficulty of image processing while improving the accuracy of the result, so the method fits the lane tracking phase, which is the main working state of the lane recognition system. It also effectively reduces the influence of the gaps in dashed lane markings, which further improves the robustness of the lane recognition algorithm.
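The sketch below illustrates the two WOI methods under stated assumptions. The thesis does not spell out its contrast measure, so the gray-level standard deviation is used as a stand-in, and the threshold, search range and step values are illustrative only.

```python
import numpy as np

def screen_wois(gray, candidate_wois, contrast_thresh=25.0):
    """Lane searching phase: keep only those windows of interest (WOIs) whose
    gray-image contrast suggests they contain a lane marking.
    `candidate_wois` is a list of (x, y, w, h) rectangles."""
    kept = []
    for (x, y, w, h) in candidate_wois:
        win = gray[y:y + h, x:x + w]
        if win.size and win.std() >= contrast_thresh:
            kept.append((x, y, w, h))
    return kept

def slide_woi(gray, woi, max_shift=20, step=2, contrast_thresh=25.0):
    """Lane tracking phase: starting from the WOI position predicted from the
    previous frame, slide the window horizontally and keep the position with
    the highest contrast; return None if no position looks like a marking."""
    x0, y, w, h = woi
    best_x, best_score = x0, -1.0
    for dx in range(-max_shift, max_shift + 1, step):
        x = max(0, x0 + dx)
        win = gray[y:y + h, x:x + w]
        score = win.std() if win.size else -1.0
        if score > best_score:
            best_x, best_score = x, score
    return (best_x, y, w, h) if best_score >= contrast_thresh else None
```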
Lane markings are an important component of traffic management and play a positive role in keeping road traffic safe, smooth and orderly. Each kind of lane marking has its own characteristics and expresses a different meaning. Determining the lane marking type correctly can bring large benefits to a high-speed vehicle intelligent assistant system, for example assisting or guiding the driver in controlling the vehicle, or warning or even restraining the driver when his action would make the vehicle cross a marking that must not be crossed. In this thesis a lane marking type recognition method is proposed that works along two dimensions, colour and line type. For colour, two useful classes are distinguished, white and yellow. Line type is more complex; for it, a method is proposed for recognizing the endpoints of the dashes of a broken line, with which a marking can be classified as solid or broken. With this method the lane marking type can be recognized and, combined with the lane fitting method of the next chapter, the accuracy of the fitting result can also be improved.
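A minimal sketch of the solid/broken classification step follows. It scans the binarized marking ROI row by row and treats long runs of empty rows as the gaps between dashes, in the spirit of the endpoint detection described above; the gap-length and gap-count thresholds are assumptions of this sketch, not values from the thesis.

```python
import numpy as np

def classify_line_type(lane_mask, min_gap_rows=15, min_gap_count=2):
    """Classify one lane marking as 'solid' or 'broken' from its binary mask
    (nonzero = marking pixels)."""
    row_has_marking = lane_mask.any(axis=1)   # True for each row that holds marking pixels
    gaps, run = 0, 0
    for filled in row_has_marking:
        if filled:
            if run >= min_gap_rows:
                gaps += 1                     # a dash endpoint was just passed
            run = 0
        else:
            run += 1
    return "broken" if gaps >= min_gap_count else "solid"
```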
Lane markings are the basis for establishing the travelable area of the intelligent vehicle, and locating them is one of the key problems of the vision system of the high-speed vehicle intelligent assistant driving system. When the lane is described with simple mathematical models the fitting accuracy is poor, while with complex models the real-time requirement is hard to satisfy. To resolve this, a new method is proposed that describes the lane position with a group of line segments, named the recursive half-dividing broken-line fitting method. Its main idea is to divide the road image region into several sub-regions by straight lines parallel to the image horizon, fit a straight line to the lane marking points in each sub-region, and thereby obtain a group of line segments that describe the lane over the whole road image. The experiments show that the method is flexible, able to describe lanes of various shapes accurately, and consumes little time, so it basically satisfies the real-time requirement of lane recognition in the intelligent vehicle vision system.
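A minimal sketch of the recursive halving idea is given below: a straight line is fitted over a row interval, and the interval is halved and refitted whenever the fit is too poor. The least-squares fit, the residual tolerance and the minimum interval height are assumptions of this sketch; the thesis fixes only the halving strategy itself.

```python
import numpy as np

def recursive_polyline_fit(points, y_top, y_bottom, tol=2.0, min_rows=10):
    """Recursive half-dividing broken-line fit of lane marking points.

    `points` is an (N, 2) array of (x, y) lane pixels in image coordinates.
    A straight line x = a*y + b is fitted over the row interval
    [y_top, y_bottom); if the worst residual exceeds `tol` pixels and the
    interval is still divisible, the interval is halved and each half is
    fitted recursively.  Returns a list of ((y_top, y_bottom), (a, b)) segments.
    """
    sel = points[(points[:, 1] >= y_top) & (points[:, 1] < y_bottom)]
    if len(sel) < 2:
        return []
    a, b = np.polyfit(sel[:, 1], sel[:, 0], 1)    # x as a function of y
    residual = np.abs(sel[:, 0] - (a * sel[:, 1] + b)).max()
    if residual <= tol or (y_bottom - y_top) <= 2 * min_rows:
        return [((y_top, y_bottom), (a, b))]
    y_mid = (y_top + y_bottom) // 2
    return (recursive_polyline_fit(points, y_top, y_mid, tol, min_rows) +
            recursive_polyline_fit(points, y_mid, y_bottom, tol, min_rows))
```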
Lane changing is a special situation, different from driving within a lane, so a preliminary exploration of lane marking tracking during lane changes is made and a method that describes the process smoothly is proposed. First, the driver's steady preview model is used to estimate whether the distance between the vehicle and the lane marking is still safe, from which the lane changing intention of the vehicle is inferred. Once a lane change is detected, temporary WOIs are introduced to recognize the new lane marking that appears during the change, and the normal two-line tracking state is switched to a three-line tracking state. In this way the process in which the new marking goes from absent to present, and one of the original two markings from present to vanished, is described smoothly, so lane marking tracking during the lane change also proceeds smoothly. At the same time, the system switches flexibly between the three-line and two-line tracking states according to the estimated lane changing state.
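The toy state switch below only illustrates how the two-line and three-line tracking states could alternate. The thesis infers lane changing intention from the driver's optimal preview acceleration model, which is not reproduced here; as a stand-in, the previewed lateral position under constant lateral speed is compared against half the lane width, and all numbers are illustrative.

```python
from enum import Enum

class TrackState(Enum):
    TWO_LINE = 2
    THREE_LINE = 3

def update_tracking_state(state, lateral_offset, lateral_speed,
                          preview_time=1.0, half_lane=1.75):
    """Switch between two-line and three-line tracking from a crude preview
    of the lateral offset (metres) relative to the current lane centre."""
    previewed = lateral_offset + lateral_speed * preview_time
    if state is TrackState.TWO_LINE and abs(previewed) > half_lane:
        return TrackState.THREE_LINE   # new marking will appear: track three lines
    if state is TrackState.THREE_LINE and abs(lateral_offset) < 0.3 * half_lane:
        return TrackState.TWO_LINE     # settled into the new lane: back to two lines
    return state
```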
At the end of the thesis, many kinds of typical road experiments with a real vehicle are carried out for all of the algorithms above to verify their effectiveness. In the lane searching phase, the contrast/region-homogeneity and Otsu image segmentation algorithm is used to process the road image, and the LMedSquare (least median of squares) algorithm is used to fit the lane curve. In the lane tracking phase, the WOI-sliding regional Otsu segmentation algorithm is used to process the road image, and the recursive half-dividing broken-line fitting algorithm is used to fit the lane. In the lane changing phase, the WOI-locating regional Otsu segmentation algorithm is used to process the road image, and straight-line fitting is used to fit the lane. The experimental results show that all of the algorithms proposed in the thesis pertinently resolve practical issues and achieve good results in practical application.
The creative points of the thesis are that a variety of lane recognition conditions for intelligent vehicles have been studied systematically, the technology roadmap of lane recognition has been analysed, and a new, adaptive lane recognition method has been explored that satisfies the requirements of real-time performance and accuracy. Within the lane recognition process, new methods are proposed for binary road image generation, establishment of the lane ROI in the road image, lane marking type recognition, lane fitting, and lane tracking during lane changes. Finally, road experiments with a real vehicle are carried out, and the effectiveness of all the methods above is validated by the experimental results.