Research on the Theory and Methods of Occupant Detection for Smart Airbags
Abstract
As one of the occupant protection systems in a vehicle, the airbag is mainly intended to prevent occupants from a secondary collision with interior components during a crash. However, the rapid inflation of the airbag itself often causes secondary injuries to occupants, and in some cases suffocation or death. During a collision, a smart airbag adjusts its deployment and deployment force in real time according to parameters such as occupant type and seating posture, which can greatly reduce such secondary injuries. Occupant type and state are therefore the core parameters for realizing a smart airbag.
To detect occupant type and state, this thesis systematically studies how to build an occupant detection system for smart airbags, develops an in-vehicle multi-sensor-fusion hardware platform, and establishes a methodology for fused occupant classification decision and tracking. The main work is as follows:
1. An occupant detection system and development platform for smart airbags was constructed.
Based on the characteristics of an occupant detection system, an in-vehicle occupant detection platform was built around the TMS320DM642 digital signal processor (DSP) using a video camera and seat pressure sensors. Combining the fused decisions of the video and seat pressure sensors, a multi-sensor-fusion software framework for occupant detection was established. Because open occupant test video sequences are scarce, a large number of occupant video sequences under different illumination conditions and occupant states were captured with the hardware platform. Together these provide the basic hardware, software and data support for occupant detection.
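The frame-level data flow assumed by such a fusion framework can be sketched as follows. This is only an illustrative skeleton, not the DSP platform's actual API: the field names, the `OccupantSample` container and the callable arguments are hypothetical placeholders.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class OccupantSample:
    """One synchronized reading from the two sensor channels (hypothetical layout)."""
    frame: np.ndarray        # grayscale cabin-camera image
    pressure: np.ndarray     # seat pressure-sensor matrix
    timestamp: float         # capture time in seconds


def classify_sample(sample, video_classify, pressure_classify, fuse):
    """Run both sensor channels independently, then combine them at the decision level."""
    video_label = video_classify(sample.frame)
    pressure_label = pressure_classify(sample.pressure)
    return fuse(video_label, pressure_label)
```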
2. A structural-similarity image quality assessment method based on gradient direction was proposed.
Blurred images, uneven brightness and spurious motion regions degrade occupant images and make later recognition difficult. Exploiting the fact that gradient direction is relatively insensitive to illumination intensity, together with the structural-similarity characteristics of human vision, this thesis proposes a gradient-direction-based structural-similarity image quality assessment method. Using local regions of a reference image, it builds a structural representation of gradient direction and combines luminance, contrast and structural similarity into an overall quality score that judges whether an occupant image is of adequate quality.
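A minimal sketch of this kind of score is given below, assuming 8-bit grayscale patches and the usual SSIM-style stabilizing constants; the function names and constant values are illustrative, not the thesis implementation.

```python
import numpy as np
from scipy import ndimage


def grad_orientation(img):
    """Per-pixel gradient orientation in radians, computed with Sobel derivatives."""
    gx = ndimage.sobel(img.astype(np.float64), axis=1)
    gy = ndimage.sobel(img.astype(np.float64), axis=0)
    return np.arctan2(gy, gx)


def gd_ssim(ref, test, C1=6.5, C2=58.5, C3=29.3):
    """Combine luminance, contrast and a gradient-direction structure term."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    mu_r, mu_t = ref.mean(), test.mean()
    sig_r, sig_t = ref.std(), test.std()
    luminance = (2 * mu_r * mu_t + C1) / (mu_r**2 + mu_t**2 + C1)
    contrast = (2 * sig_r * sig_t + C2) / (sig_r**2 + sig_t**2 + C2)
    # The structure term is computed on gradient orientations, which are less
    # sensitive to illumination changes than raw intensities.
    or_r, or_t = grad_orientation(ref), grad_orientation(test)
    cov = ((or_r - or_r.mean()) * (or_t - or_t.mean())).mean()
    structure = (cov + C3) / (or_r.std() * or_t.std() + C3)
    return luminance * contrast * structure
```

In practice the score would be averaged over local windows of the reference and test images rather than computed once over the whole patch.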
3. Contourlet-transform denoising and illumination-estimation equalization methods were established for occupant image enhancement.
Noise and illumination are among the main factors limiting occupant image quality. The Contourlet transform supports multi-scale decomposition in arbitrary directions at arbitrary scales, preserves image contours and directional texture well, and compensates for the shortcomings of the wavelet transform; combined with a non-uniform illumination reflection model, this thesis establishes a Contourlet-based denoising method and an illumination-estimation equalization method for occupant images. The method applies the Contourlet transform to obtain the decomposition coefficients of the occupant image, suppresses the high-frequency coefficients with coefficient thresholding and a threshold function, and reconstructs the image with the inverse Contourlet transform, achieving denoising. It then suppresses the DC component of the image's Fourier transform to reduce the influence of illumination intensity, linearly stretches the brightness into the [0, 255] range, and finally applies partially overlapped sub-block local histogram equalization to enhance local regions, improving the brightness distribution, equalizing the occupant image and increasing the contrast of image details.
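A rough sketch of these two preprocessing stages is shown below, under two stated substitutions: PyWavelets stands in for the Contourlet transform (which is not available in common Python libraries), and scikit-image's CLAHE stands in for the partially overlapped sub-block local histogram equalization. The wavelet name, threshold and DC attenuation factor are illustrative assumptions.

```python
import numpy as np
import pywt
from skimage import exposure


def denoise_multiscale(img, wavelet="db4", level=2, thresh=15.0):
    """Soft-threshold the high-frequency detail coefficients, then reconstruct."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(coeffs, wavelet)


def illumination_equalize(img, dc_gain=0.2):
    """Damp the DC term of the spectrum, stretch the result, then equalize locally."""
    spec = np.fft.fft2(img.astype(np.float64))
    spec[0, 0] *= dc_gain                                  # attenuate the global illumination term
    flat = np.real(np.fft.ifft2(spec))
    flat = (flat - flat.min()) / (flat.max() - flat.min() + 1e-9)   # linear stretch to [0, 1]
    return (exposure.equalize_adapthist(flat, clip_limit=0.02) * 255).astype(np.uint8)
```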
4. An occupant region estimation algorithm based on the correlation of gradient-orientation histograms was proposed.
Occupant region estimation reduces the image search range and thus speeds up detection; it also suppresses interference from edges and other features outside the occupant region, improving detection accuracy. Common approaches such as background subtraction have difficulty extracting the occupant region here. Exploiting the strong spatial-structure correlation between the occupant image sequence and an occupant-free background reference image, this thesis builds a local-region feature representation from local orientation histograms and a cosine-similarity classification measure between local regions. By creating a local-region mapping table, splitting and merging regions in that table, and removing isolated nodes and small local blocks, the occupant region is estimated. The method is insensitive to illumination, noise and contrast and meets the needs of occupant region estimation.
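The block-wise comparison idea can be illustrated as follows: divide the current frame and an empty-seat reference into blocks, describe each block by a weighted gradient-orientation histogram, and mark blocks whose cosine similarity to the reference is low as candidate occupant regions. The block size, bin count and threshold are illustrative choices, and the mapping-table split/merge step is omitted.

```python
import numpy as np
from scipy import ndimage


def orientation_hist(block, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations in one block."""
    gx = ndimage.sobel(block, axis=1)
    gy = ndimage.sobel(block, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist


def occupant_mask(frame, background, block=16, thresh=0.85, bins=9):
    """Boolean grid of blocks that differ from the empty-seat reference."""
    frame = frame.astype(np.float64)
    background = background.astype(np.float64)
    h, w = frame.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            hf = orientation_hist(frame[sl], bins)
            hb = orientation_hist(background[sl], bins)
            cos_sim = float(hf @ hb) / (np.linalg.norm(hf) * np.linalg.norm(hb) + 1e-9)
            mask[i, j] = cos_sim < thresh        # dissimilar block -> occupant candidate
    return mask
```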
5. A multi-sensor fusion method for occupant classification was proposed.
Occupant type and state, such as adult, child or empty seat, are key inputs for deciding whether the airbag should deploy. The diversity of occupant types and postures, objects placed on the seat, and sunlight make it hard for a single sensor to determine occupant type and state. To address this, the thesis builds a multi-sensor fusion decision and recognition method combining the video sensor and the pressure sensor: a fusion strategy is designed, with fusion at the device level and the decision level; a window-classifier framework combining HOG features with an SVM is built, together with a multi-scale window classification and fusion localization method, to realize video-based occupant detection; an empty-seat decision rule is derived from the pressure-sensor readings; and the decision-level fusion of video and pressure yields the final classification into empty seat, adult and child.
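A hedged sketch of the decision-level fusion between a HOG+SVM image classifier and a seat-pressure rule is given below. The class labels, pressure threshold, HOG parameters and the "pressure vetoes video when the seat is empty" rule are illustrative assumptions rather than the thesis calibration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

EMPTY, CHILD, ADULT = 0, 1, 2


def train_window_classifier(windows, labels):
    """windows: equally sized grayscale patches; labels: CHILD or ADULT per patch."""
    feats = [hog(w, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for w in windows]
    return LinearSVC().fit(np.asarray(feats), np.asarray(labels))


def pressure_decision(pressure_map, empty_thresh=2.0):
    """Declare the seat empty when the total pressure stays below a threshold."""
    return EMPTY if float(np.sum(pressure_map)) < empty_thresh else None


def fuse(video_label, pressure_map):
    """Decision-level fusion: the pressure channel vetoes the video label for an empty seat."""
    p = pressure_decision(pressure_map)
    return p if p is not None else video_label
```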
6. Semi-supervised occupant tracking and interacting-multiple-model motion estimation methods were proposed.
Occupant tracking and motion prediction provide a prediction of the occupant's state, giving the airbag a basis for advance decisions and fast reaction. However, the occupant's appearance changes during tracking under the influence of illumination, shadows and gestures, and occupant motion is multi-modal, which makes tracking and motion estimation difficult. To address this, the thesis establishes an occupant tracking and motion estimation method combining semi-supervised online feature updating with an interacting multiple model (IMM). Following a detect-then-track strategy, the method first learns strong and weak classifiers for occupant feature selection offline within the AdaBoost framework; positive and negative samples, taken from the occupant region given by the current static classification window and from the surrounding background, drive the online updating of the selected features and yield a prior classifier. The prior classifier detects the occupant, a mean-shift kernel estimates the spatial distribution of the classification score, and its maximum is taken as the tracked occupant region. The offline strong and weak classifiers and their weight distribution, together with the pseudo-labels of the currently tracked region, update the feature selection online, giving a semi-supervised online tracking algorithm. Finally, an IMM framework with several Kalman filter modes realizes multi-modal occupant motion estimation through input interaction, model-conditioned filtering, probability updating and combined estimation. The method handles appearance changes through online feature updating, copes with the multi-modal nature of occupant motion, and limits the drift typical of online boosting.
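The IMM estimation step can be sketched compactly with two Kalman filters that share a constant-velocity model and differ only in process noise (steady vs. manoeuvring motion). The state [x, vx], the noise covariances, the mode transition matrix and the sampling interval are illustrative placeholders, not the thesis calibration.

```python
import numpy as np


class IMM2:
    """Two-mode IMM over a 1-D position/velocity state; observes position only."""

    def __init__(self, dt=0.04):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity dynamics
        self.H = np.array([[1.0, 0.0]])
        self.R = np.array([[1.0]])
        self.Q = [np.eye(2) * 0.01, np.eye(2) * 1.0]            # low / high process noise
        self.P_trans = np.array([[0.95, 0.05], [0.05, 0.95]])   # mode transition matrix
        self.mu = np.array([0.5, 0.5])                          # mode probabilities
        self.x = [np.zeros(2), np.zeros(2)]
        self.P = [np.eye(2), np.eye(2)]

    def step(self, z):
        # 1) input interaction: mix the per-mode estimates
        c = self.P_trans.T @ self.mu
        mix = (self.P_trans * self.mu[:, None]) / c[None, :]
        x0 = [sum(mix[i, j] * self.x[i] for i in range(2)) for j in range(2)]
        P0 = [sum(mix[i, j] * (self.P[i] + np.outer(self.x[i] - x0[j], self.x[i] - x0[j]))
                  for i in range(2)) for j in range(2)]
        like = np.empty(2)
        # 2) mode-conditioned Kalman filtering
        for j in range(2):
            xp = self.F @ x0[j]
            Pp = self.F @ P0[j] @ self.F.T + self.Q[j]
            S = self.H @ Pp @ self.H.T + self.R
            K = Pp @ self.H.T @ np.linalg.inv(S)
            resid = z - self.H @ xp
            self.x[j] = xp + K @ resid
            self.P[j] = (np.eye(2) - K @ self.H) @ Pp
            like[j] = np.exp(-0.5 * resid @ np.linalg.inv(S) @ resid) / np.sqrt(
                np.linalg.det(2 * np.pi * S))
        # 3) mode-probability update and 4) combined estimate
        self.mu = like * c
        self.mu /= self.mu.sum()
        return sum(self.mu[j] * self.x[j] for j in range(2))


# Usage sketch: imm = IMM2(); est = imm.step(np.array([measured_position]))
```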
Experiments on video sequences covering different occupant types, illumination conditions and states show that the proposed algorithms perform well, meet the basic requirements of occupant detection, and can serve as a methodological and theoretical reference for related research.
Airbags in automobiles have significantly improved occupant safety. Unfortunately, airbags can also cause fatal injuries during a vehicle collision. A smart airbag, which detects occupant classification, position, collision type and so on to control the deployment force and speed of the airbag, can prevent occupants from being injured again by airbag deployment. Accordingly, occupant type and position, as an important part of the intelligent restraint system, are key factors in deciding the force and speed with which the airbag deploys.
To detect occupant type and position, this thesis proposes occupant detection methods for smart airbags, develops a hardware platform based on in-vehicle multi-sensor fusion, and puts forward methods for fused occupant classification decision and tracking. The main research work is presented as follows:
1. An occupant detection system and hardware platform for smart airbags was developed.
According to the characteristics of the occupant detection system, the hardware platform was built around the TMS320DM642 digital signal processor (DSP), a video camera and seat pressure sensors, and a multi-sensor-fusion occupant detection software framework covering occupant classification, tracking and motion estimation was designed. Because publicly available occupant test video sequences are lacking, a large number of occupant videos under different conditions were captured as a test data source for occupant detection.
2. An occupant image quality evaluation method based on the structural similarity of gradient direction was put forward to obtain high-quality occupant images for detection.
It is difficult to detect the occupant when the image is blurred or unevenly lit. To overcome this, a quality evaluation method based on the structural similarity of gradient direction, which is robust to sunlight and illumination intensity and consistent with the human visual system, is proposed. The structural similarity of luminance, contrast and gradient direction is computed over local regions of the reference image and the current image to decide whether an occupant image is of adequate quality.
3. The Contourlet transform is used to reduce noise, and a local equalization method is put forward to enhance the occupant image.
Image noise and illumination are important factors limiting occupant image quality. To overcome these effects, a method that reduces noise in the occupant image with the Contourlet transform and enhances it with illumination-estimation equalization is put forward. The Contourlet transform supports multi-scale decomposition in arbitrary directions at arbitrary scales, preserves image contours and directional texture well, and compensates for the shortcomings of the wavelet transform. The method obtains the Contourlet coefficients of the decomposed occupant image, suppresses the high-frequency coefficients with coefficient thresholding and a threshold function, and reconstructs the image with the inverse Contourlet transform to achieve denoising. It then reduces the influence of illumination intensity by suppressing the DC component of the image's Fourier transform and linearly stretches the brightness into the [0, 255] range. Finally, occupant image enhancement is completed by applying partially overlapped sub-block local histogram equalization to local regions, improving the brightness distribution, equalizing the occupant image, and increasing the contrast of image details.
4. An occupant region estimation algorithm using gradient-orientation histograms was put forward to estimate the occupant region.
Occupant region estimation reduces the search range in occupant detection and suppresses interference from the edges of non-occupant objects. Because the occupant image is affected by dynamic sunlight projection, shadows, the scenery outside the window, and other objects inside the car while the vehicle is moving, it is difficult to obtain the occupant region with background subtraction or frame differencing. Based on this, a local-region feature representation built from local orientation histograms, insensitive to illumination, noise and contrast, is set up using the mutual spatial structure between the occupant image sequence and the occupant-free background. A local-region mapping table is created using the cosine rule to measure the angle between feature vectors and express their correlation; the occupant region is then obtained by splitting and merging regions of the mapping table and removing isolated nodes and small local blocks.
5. A multi-sensor fusion classification method was put forward to classify the occupant type. It is difficult to detect occupant type and position accurately with a single sensor because of occupant diversity, sunlight and similar factors. To overcome this, a multi-sensor fusion decision and recognition method using the video sensor and the pressure sensor is presented: the fusion strategy is designed, with fusion at the device level and the decision level; a window-classifier framework based on HOG features and an SVM, together with a multi-scale window classification and fusion localization method, realizes video-based occupant detection; an empty-seat decision rule is built from the pressure-sensor readings; and the decision-level fusion of video and pressure realizes the classification into empty seat, adult and child.
6. An occupant tracking and motion estimation method based on semi-supervised learning and the IMM was proposed.
To meet the requirements of fast airbag reaction and of locating the occupant position, and considering the appearance changes and multi-modal motion that occur while the occupant is tracked, an occupant tracking method with semi-supervised online feature updating and an IMM-based motion estimation method were set up.
The method follows a detect-then-track strategy. First, strong and weak classifiers for occupant feature selection are learned offline in the AdaBoost framework; positive and negative samples, taken from the occupant region given by the current static classification window and from the surrounding background, drive the online updating of the selected features and yield a prior classifier. Second, the prior classifier detects the occupant, a mean-shift kernel estimates the spatial distribution of the classification score, and its maximum is taken as the tracked occupant region. Third, the offline strong and weak classifiers and their weight distribution, combined with the pseudo-labels of the currently tracked region, update the feature selection online, realizing the semi-supervised online tracking algorithm. Finally, multi-modal occupant motion estimation is realized with the IMM framework and several Kalman filter modes through input interaction, model-conditioned filtering, probability updating and combined estimation. The method handles appearance changes during tracking through online feature updating, copes with the multi-modal nature of occupant motion, and restricts the drift phenomenon of online boosting.
Video sequences covering different occupant types, illumination conditions and occupant states were used to test the proposed methods. The results show that the methods meet the needs of occupant classification and occupant position prediction for smart airbag applications and can serve as a reference for further research on smart airbags.
