Research on Object Detection Methods Based on Visually Salient Features
Abstract
This thesis studies visually salient features. Drawing on the physiology and psychology of visual cognition, it selects visual features and exploits visual cognitive mechanisms to construct detection models. Throughout, the goal is to detect both static and moving targets, using biologically inspired vision principles to detect salient objects.
For static object detection, to separate the target from a complex background and detect the salient object quickly and accurately from a scene image, an adaptive salient-object segmentation method is proposed. The method builds candidate detection masks from the saliency map and uses a maximum-entropy criterion to select the optimal mask. Building on this low-level-feature detector, and to further improve segmentation accuracy, superpixel segmentation (a mid-level visual cue) is introduced: the initial superpixel parameter k is determined from the saliency strength; once the superpixel representation of the image is obtained, low-level features classify the superpixels into background and object, yielding a segmentation mask that fuses low-level and mid-level visual features.
For moving-object detection, to perceive the key motion information in video, an optical-flow method based on locally salient feature points is proposed. The method treats local key points as the salient feature information of the image and uses them to represent it; optical flow then tracks the motion trajectories of these salient key points, detecting and tracking moving objects. Frames are described in an opponent-color space, salient feature points are extracted in that space, and pyramidal optical flow estimates their motion, so no global, dense flow computation over the frame is needed and the computation is simpler.
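The opponent-color frame description above can be sketched as follows. The thesis does not spell out its exact transform, so this uses the common Gaussian opponent-color model as an assumption:

```python
import numpy as np

def to_opponent(rgb):
    """Map an RGB image (float, HxWx3) to opponent-color channels.
    This is the standard Gaussian opponent model, assumed here since
    the thesis does not give its exact transform."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)            # red-green opponency
    o2 = (r + g - 2.0 * b) / np.sqrt(6)  # yellow-blue opponency
    o3 = (r + g + b) / np.sqrt(3)        # intensity
    return np.stack([o1, o2, o3], axis=-1)

# A red pixel excites the red-green channel; a gray pixel does not.
px = np.array([[[1.0, 0.0, 0.0], [0.5, 0.5, 0.5]]])
opp = to_opponent(px)
```

Chromatic contrast that vanishes in a grayscale image survives in the O1/O2 channels, which is what makes feature points found in this space informative.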
Visual saliency represents the salient information of a scene image; it is obtained by extracting visually salient features and applying cognitive mechanisms. Research on visual saliency is grounded in human visual cognition, so it represents image information in a human-like way. The saliency map is modeled on the human visual system (HVS) and guides us to quickly select a few salient regions and attend to the most useful information; a visual saliency model can thus assign a system's limited processing resources preferentially to those regions. Computational saliency models are built on analyses of human visual behavior, that is, of how people perceive the world through their own visual system. Physiology, psychology, and eye-movement tracking provide extensive knowledge of human visual perception: which feature information stimulates the eye most strongly; how low-level visual input is processed and high-level information is constructed; by which cognitive mechanisms high-level information is acquired; and which general rules visual cognitive behavior follows. Applying these rules and mechanisms to low-level features yields high-level visual information. As the study of the HVS has developed, knowledge of visual cognition has grown into a complete system. Visual stimulation principles drawn from image features, the cognitive mechanisms of the human brain, and the general rules of visual behavior can together be used to construct a visual saliency model and apply it to object detection.
The research begins with visually salient features, selecting image features suggested by cognitive physiology and cognitive psychology. Based on studies of visual cognitive mechanisms and behavioral rules, visual saliency models for object detection are proposed. We analyze the characteristics of dynamic and static objects and construct a model for detecting each; the models can be applied to natural scenes and to a vision-based robot platform.
For static object detection, to detect objects against complicated backgrounds and rapidly segment salient objects from scene images, an adaptive segmentation method for salient object detection is proposed. It is inspired by the assumption that image information consists of redundant information plus novel fluctuations, so object detection amounts to removing the non-salient part and focusing on the salient object. Considering the relation between image composition and the aim of detection, we construct a more reliable saliency map for evaluating image composition: a local energy feature is combined with the basic biologically inspired channels (color, intensity, orientation) to strengthen the integrity of the object in the saliency map. The entropy of the object is estimated with a maximum-entropy method: guided by the saliency map, pixels of minimal intensity are removed from the original image, the entropy of the redundancy-removed images is computed, and it is correlated with the object entropy to obtain the final result. We report recall, precision, ROC curves, and F-measure; the experimental results show that the proposed algorithm outperforms state-of-the-art methods.
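The maximum-entropy mask selection can be illustrated with Kapur's entropy thresholding applied to a saliency map. This is a standard formulation and only a sketch of the criterion described above, not the thesis's exact algorithm:

```python
import numpy as np

def kapur_threshold(saliency, bins=256):
    """Pick the threshold that maximizes the summed entropies of the
    foreground and background histograms (Kapur's criterion; the
    thesis's exact entropy estimate may differ)."""
    hist, _ = np.histogram(saliency, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = cdf[t - 1], 1.0 - cdf[t - 1]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[:t][p[:t] > 0] / w0   # background bin probabilities
        p1 = p[t:][p[t:] > 0] / w1   # object bin probabilities
        h = -np.sum(p0 * np.log(p0)) - np.sum(p1 * np.log(p1))
        if h > best_h:
            best_h, best_t = h, t
    return best_t / bins

# Toy saliency map: a bright square (object) on a dark noisy background.
rng = np.random.default_rng(0)
sal = np.zeros((64, 64)); sal[20:40, 20:40] = 0.9
sal = np.clip(sal + 0.05 * np.abs(rng.standard_normal(sal.shape)), 0, 1)
t = kapur_threshold(sal)
mask = sal > t   # binary object-detection mask
```

The threshold lands in the gap between the background and object intensity clusters, so the resulting mask isolates the salient square without a hand-tuned cutoff.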
Objects in natural scene images have irregular contours. To segment objects from the background more accurately, we propose a salient-object segmentation method that combines low-level visual features with mid-level visual cues. Low-level features are extracted through color, intensity, orientation, and local-energy channels, and the four channels are combined into a saliency map; a salient-feature mask is then obtained via the maximum-entropy principle. As the mid-level cue we use an over-segmentation of the image into superpixels. The initial parameters are set automatically from the salient intensity, and spatial information is included in the clustering feature vectors so that the superpixels adhere more accurately to object contours. Finally, to segment irregular objects from the background, the superpixels are classified using the salient-feature mask, fusing low-level and mid-level visual features. The experimental results demonstrate that the proposed method is less sensitive to complex illumination and background, segments contours accurately, and can extract irregular objects from complex backgrounds.
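The final fusion step, classifying superpixels against the salient-feature mask, can be sketched as follows. A toy grid of blocks stands in for a real SLIC over-segmentation, and the mean-saliency rule is an assumed simplification of the classifier:

```python
import numpy as np

def classify_superpixels(saliency, labels, thresh=0.5):
    """Mark each superpixel as object or background by the mean
    saliency of its pixels (a simplified stand-in for the thesis's
    low-/mid-level fusion rule)."""
    out = np.zeros(labels.shape, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        out[region] = saliency[region].mean() > thresh
    return out

# Toy stand-in for a SLIC over-segmentation: an 8x8 grid of blocks.
idx = np.arange(64) // 8
labels = idx[:, None] * 8 + idx[None, :]
saliency = np.zeros((64, 64)); saliency[20:40, 20:40] = 1.0
obj = classify_superpixels(saliency, labels)
```

Because whole superpixels are classified rather than individual pixels, the recovered object boundary follows the superpixel contours, which is what lets the method track irregular object outlines.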
For dynamic object detection, we propose an opponent-color optical-flow algorithm for robust object tracking. Unlike previous color optical-flow methods that treat the color channels separately, the proposed algorithm exploits an opponent-color representation and processes color as a holistic signal. This enables more accurate flow estimation at pixel locations with spatial color variation and reduces tracking errors by keeping more feature points at their correct locations on the object. For successful and efficient tracking, we also propose a novel type of opponent-color corner that remains reliable during tracking; together with grayscale corners, these form a good feature-point set, especially when used with the proposed opponent-color optical flow. A quantitative evaluation on a publicly available dataset verifies the efficacy of the algorithm, and object-tracking experiments demonstrate that robust tracking can be achieved for dynamic object detection.
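A single-level Lucas–Kanade step, the core that the pyramidal flow builds on, can be sketched in plain NumPy. The pyramid, the corner selection, and the opponent-color channels are omitted, so this is only an illustration of the underlying least-squares solve:

```python
import numpy as np

def lk_flow_at(prev, curr, pt, win=15):
    """One Lucas-Kanade step: solve the least-squares system
    [Ix Iy] @ (dx, dy) = -It over a window around a feature point."""
    y, x = pt
    h = win // 2
    Iy, Ix = np.gradient(prev)   # spatial image gradients
    It = curr - prev             # temporal derivative
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# A smooth blob translated by one pixel along x between two frames.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
prev = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
curr = np.exp(-((xx - 33) ** 2 + (yy - 32) ** 2) / 50.0)
dx, dy = lk_flow_at(prev, curr, (32, 32))
```

Running this solve only at salient feature points, and on coarse-to-fine pyramid levels for larger displacements, is what makes the sparse tracking far cheaper than computing dense flow over the whole frame.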
The adaptive salient-object segmentation method is also applied on a vision-based robot platform, because even state-of-the-art obstacle-locating and detection methods falter across different environments. Jointly using multiple imaging modalities is one way to improve the quality of the input data and, ultimately, overall system performance; integrating radar with a vision sensor for obstacle detection should therefore improve detection. Combining them is advantageous especially when one algorithm performs poorly at detecting objects: for example, saliency analysis has an obvious limitation on images whose objects are not obstacles, and it fails when there is no salient area to report as an object. To deal with this challenge, the proposed method uses the saliency detection model for locating objects and combines it with a pre-location mask derived from infrared sensor information about the objects. To extract the integral features of an object, we use the local energy feature in place of the orientation feature, which helps with obstacle location. To address the limitation of saliency detection, the infrared sensors decide whether detected objects are actually barriers, and the resulting pre-location mask improves the efficiency of the algorithm. Object location is based on object segmentation: the radar-vision fusion serves as preprocessing, and a maximum-entropy estimation method then locates the obstacle. Finally, we test our radar-vision system in an indoor simulated environment.
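The pre-location fusion can be sketched as intersecting the vision saliency mask with the radar/infrared mask and locating the obstacle inside the fused region. The interface below is illustrative only, not the thesis's actual pipeline:

```python
import numpy as np

def locate_obstacle(saliency_mask, radar_mask):
    """Keep only salient pixels that the radar/infrared pre-location
    mask also flags as a potential obstacle, then return the bounding
    box (y0, x0, y1, x1) of the fused region, or None if empty."""
    fused = saliency_mask & radar_mask
    ys, xs = np.nonzero(fused)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

# Two salient blobs, but the radar confirms only one as an obstacle.
sal = np.zeros((48, 48), dtype=bool)
sal[5:10, 5:10] = True     # salient, but not an obstacle
sal[30:40, 30:40] = True   # salient and an obstacle
radar = np.zeros((48, 48), dtype=bool)
radar[25:48, 25:48] = True
box = locate_obstacle(sal, radar)
```

The intersection both suppresses salient regions that are not obstacles and shrinks the search area for the subsequent maximum-entropy location step.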
