Research on a Method for Monitoring Driver Fatigue and Distraction Based on Mouth State
Abstract
Safety is the eternal theme of road transportation. With the rapid growth in vehicle ownership, the rate of road traffic accidents, and of severe accidents in particular, remains stubbornly high, and traffic safety has become an increasingly prominent problem. Against this background, safety driving assistance, one of the key technologies of intelligent vehicles, is attracting growing attention: it provides strong technical support for preventing accidents caused by the driver's own behavior. Developed countries such as the United States, European nations, and Japan have taken the lead, investing substantial resources in research on driver assistance technology, achieving many valuable results, and studying and deploying assistance systems on passenger cars, heavy trucks, buses, and special-purpose vehicles.
    Driver human factors have become one of the leading causes of traffic accidents, and driver monitoring is now a research focus within safety driving assistance. Machine vision outperforms other monitoring approaches in real-time capability, accuracy, applicability, and cost, so many researchers study driver monitoring using vehicle-mounted camera systems. Most existing work tracks the driver's face, eyes, or pupils to obtain head rotation and orientation, eyelid movement, blink rate, and gaze direction, and from these detects fatigue or distraction. However, fatigue and distraction behaviors such as yawning, prolonged conversation with passengers, or talking on a mobile phone have received little attention. This thesis monitors the driver's facial state in real time with a single vehicle-mounted CCD camera and, for the first time in China, proposes a machine-vision method that recognizes the driver's mouth state to identify yawning (fatigue) as well as conversation and mobile-phone use (distraction). This extends the coverage of driver monitoring technology and provides a reference for integrated driver monitoring systems. Clearly, a fatigue and distraction monitoring system can play an important role in reducing the accident rate.
    The research comprises four parts: driver face detection, driver mouth localization and tracking, driver mouth state recognition, and identification of driver fatigue and distraction states.
    For driver face detection, this thesis uses a method based on a skin-color model. In the YCrCb color space, the chrominance components Cr and Cb of human skin cluster within a characteristic region: although skin colors differ from person to person, they vary far less in Cr/Cb chrominance than in luminance. In other words, different people's skin tones are very close in chrominance and differ mainly in brightness. Experiments show that the Cr/Cb chrominance of different skin tones fits a common two-dimensional Gaussian model. Exploiting this distribution, the thesis obtains a fast and effective face detection method for complex backgrounds. Experimental results show that the method is highly reliable, localizes the face well in dynamic scenes, and adapts well to changes in the driver's posture.
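The thesis does not reproduce its fitted parameters in this excerpt; the sketch below shows the chroma-Gaussian idea with illustrative, assumed values for the (Cr, Cb) mean and covariance (the thesis would fit these from labeled skin samples):

```python
import numpy as np

# Assumed illustrative Gaussian parameters for skin chroma (Cr, Cb);
# in the thesis these are fitted from training samples.
MEAN = np.array([150.0, 115.0])
COV = np.array([[60.0, 20.0], [20.0, 40.0]])
COV_INV = np.linalg.inv(COV)

def rgb_to_crcb(rgb):
    """Convert an (..., 3) RGB array (0-255) to (Cr, Cb) per ITU-R BT.601."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    return np.stack([cr, cb], axis=-1)

def skin_likelihood(rgb):
    """Per-pixel unnormalized Gaussian likelihood of being skin."""
    d = rgb_to_crcb(rgb.astype(np.float64)) - MEAN
    # Mahalanobis distance under the chroma Gaussian
    m2 = np.einsum('...i,ij,...j->...', d, COV_INV, d)
    return np.exp(-0.5 * m2)

def skin_mask(rgb, thresh=0.05):
    """Threshold the likelihood map to get a candidate face mask."""
    return skin_likelihood(rgb) >= thresh
```

Because the Gaussian lives in chroma only, the mask is largely insensitive to overall brightness, which is exactly the property the thesis exploits.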
    For mouth localization and tracking, the key color cue is that lip pixels are redder than skin pixels, and normalized RGB color vectors are invariant to illumination changes and to face motion and rotation. A Fisher linear transformation yields the optimal projection direction for the normalized rgb color vectors of skin and lips; projected onto this direction, skin and lip pixels separate cleanly. The method not only distinguishes lip color from skin color but also preserves a distinct lip contour boundary, improving the accuracy of lip detection and localization. On top of this lip segmentation, the thesis localizes the mouth with a connected-component labeling algorithm and geometric constraints on the mouth region, and tracks it with a Kalman filter under assumed constraints.
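A minimal sketch of the Fisher-projection step, with synthetic sample arrays standing in for the thesis's training pixels (the specific class means used below are illustrative assumptions):

```python
import numpy as np

def normalize_rgb(rgb):
    """Map (..., 3) RGB to normalized rgb (r+g+b = 1), removing intensity."""
    s = rgb.sum(axis=-1, keepdims=True).clip(min=1e-9)
    return rgb / s

def fisher_direction(skin_rgb, lip_rgb):
    """Fisher discriminant direction separating skin and lip samples.

    Inputs are (N, 3) arrays of normalized rgb vectors.  Returns the
    unit projection vector w maximizing between-class separation over
    within-class scatter: w proportional to Sw^-1 (m1 - m2).
    """
    m1, m2 = skin_rgb.mean(axis=0), lip_rgb.mean(axis=0)
    d1, d2 = skin_rgb - m1, lip_rgb - m2
    sw = d1.T @ d1 + d2.T @ d2          # pooled within-class scatter
    w = np.linalg.solve(sw, m1 - m2)    # optimal projection direction
    return w / np.linalg.norm(w)
```

Projecting every pixel's normalized rgb vector onto `w` reduces the segmentation to a one-dimensional threshold between the two projected class means.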
    The connected-component labeling algorithm marks each independent lip-colored region within the mouth region of interest; combined with the geometric layout of the features in the lower half of the face, the mouth can then be localized. First, lip segmentation is applied to the lower half of the face image (the mouth region of interest), yielding several isolated lip-colored regions. The labeling algorithm then marks each connected region and extracts its parameters, such as position and area. Finally, the geometric constraints described earlier are applied to these candidate regions, and the one that best satisfies the constraints is taken as the driver's mouth region.
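The labeling-plus-constraints step can be sketched as follows; the particular constraint used here (mouth is the largest, sufficiently wide component) is an illustrative stand-in for the thesis's full set of geometric conditions:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary mask (list of lists).

    Returns one dict per component with its area and bounding box,
    i.e. the parameters fed into the geometric constraints.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                q, pix = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    pix.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                ys, xs = [p[0] for p in pix], [p[1] for p in pix]
                comps.append({"area": len(pix),
                              "bbox": (min(xs), min(ys), max(xs), max(ys))})
    return comps

def pick_mouth(comps, min_area=4, min_aspect=1.2):
    """Illustrative geometric constraint: the mouth is the largest
    component whose width exceeds min_aspect times its height."""
    cands = []
    for c in comps:
        x0, y0, x1, y1 = c["bbox"]
        wd, ht = x1 - x0 + 1, y1 - y0 + 1
        if c["area"] >= min_area and wd >= min_aspect * ht:
            cands.append(c)
    return max(cands, key=lambda c: c["area"]) if cands else None
```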
    To monitor the mouth continuously, it must be tracked in every frame in real time. This is solved efficiently with a predict-and-locate scheme: prediction determines the approximate position of the mouth in the next frame from its current position, and localization then pinpoints the mouth by a local search. Based on Kalman filter theory, the thesis builds a state equation and an observation equation for the mouth; in each frame the center of the search window is set by the current estimate of the mouth center, and the window size is set by the value of the covariance matrix. Lip segmentation within this window then yields the mouth region.
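A minimal predict/update sketch under a constant-velocity assumption for the mouth center; the process- and measurement-noise values are illustrative placeholders, not the thesis's tuning:

```python
import numpy as np

# Constant-velocity model for the mouth centre: state = [x, y, vx, vy].
DT = 1.0
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)   # process noise (assumed)
R = 1.0 * np.eye(2)    # measurement noise (assumed)

def predict(x, P):
    """Predict the mouth centre in the next frame; the predicted
    covariance P sets the size of the local search window."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the mouth centre z measured
    by lip segmentation inside the search window."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```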
    For mouth state recognition, the thesis proposes an image-projection algorithm to locate lip feature points, and then uses a BP neural network on geometric feature values of the mouth region to recognize three mouth states: closed (not speaking), moderately open (speaking), and wide open (yawning).
    On the basis of lip segmentation and localization, the vertical projection yields the left and right mouth corners, and the horizontal projection yields the uppermost and lowermost points of the upper-lip center and the uppermost and lowermost points of the lower-lip center. Because the maximum width of the mouth region, its maximum height, and the height between the upper and lower lips differ across the three mouth states, these three geometric feature values are used as the input vector of the BP neural network.
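The projection-based feature extraction can be sketched as below; approximating the between-lip height from the center column is a simplification of the thesis's center-point scheme:

```python
import numpy as np

def mouth_features(lip_mask):
    """Extract the three geometric features from a binary lip mask.

    Vertical projection (column sums) gives the left/right mouth
    corners, hence the mouth width; horizontal projection (row sums)
    gives the top and bottom of the lip region, hence the mouth
    height.  The between-lip height is approximated here as the gap
    between lip pixels in the centre column.
    """
    vproj = lip_mask.sum(axis=0)               # one value per column
    hproj = lip_mask.sum(axis=1)               # one value per row
    cols = np.flatnonzero(vproj)
    rows = np.flatnonzero(hproj)
    width = int(cols[-1] - cols[0] + 1)
    height = int(rows[-1] - rows[0] + 1)
    centre = lip_mask[:, (cols[0] + cols[-1]) // 2]
    lip_rows = np.flatnonzero(centre)
    # gap between the upper lip's lowest point and the lower lip's highest
    inner = int(np.diff(lip_rows).max() - 1) if len(lip_rows) > 1 else 0
    return width, height, inner
```

The returned triple (width, height, between-lip height) is exactly the input vector described for the BP network.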
    The BP network is the most widely applied artificial neural network, accounting for roughly 80% of reported neural-network applications, and has become the classic representative of the field. This thesis adopts a three-layer BP network to recognize the driver's mouth state: 3 neurons in the input layer, 14 in the hidden layer, and 3 in the output layer, representing the driver's three mouth states.
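A minimal sketch of the described 3-14-3 back-propagation network; the sigmoid activation, squared-error loss, learning rate, and synthetic training triples below are illustrative assumptions, not the thesis's actual training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-14-3 network: 3 geometric features in, 3 mouth-state classes out
# (closed / talking / yawning).
W1 = 0.5 * rng.standard_normal((3, 14)); b1 = np.zeros(14)
W2 = 0.5 * rng.standard_normal((14, 3)); b2 = np.zeros(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def train_step(x, y, lr=0.5):
    """One back-propagation step on a batch under squared-error loss."""
    global W1, b1, W2, b2
    h, o = forward(x)
    d2 = (o - y) * o * (1 - o)            # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)        # hidden-layer delta
    W2 -= lr * h.T @ d2 / len(x); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * x.T @ d1 / len(x); b1 -= lr * d1.mean(axis=0)
```

At run time, the (width, height, between-lip height) feature vector is fed through `forward` and the output neuron with the largest activation gives the recognized mouth state.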
