Weather Scene Modelling Methods for Moving Object Detection
Abstract
Complex and changeable weather affects every aspect of human production and daily life. Taking the visual manifestations of complex weather as input confronts many computer-vision applications with unprecedented opportunities and challenges. On the one hand, images captured by outdoor imaging devices are highly susceptible to complex weather, which degrades their visual quality, data quality and even application value. On the other hand, effectively extracting the visual manifestations of weather from outdoor scenes would provide considerable technical support for virtual reality and for vision-aided weather forecasting.
     As far as outdoor video analysis is concerned, moving object detection serves as the pre-processing module of most practical applications, and its results are inevitably disturbed by complex weather. To overcome this disturbance, this thesis builds on a general research framework for weather scene modelling and proposes robust weather scene modelling methods for moving object detection, as follows:
     (1) A multiple-instance-based method for segmenting the dynamic region of a scene is proposed. The dynamic region is the set of points in a scene that exhibit evident pixel changes. As the pre-processing module for classifying scenes with complex weather, a simple yet effective dynamic region segmentation method helps select the key positions in a scene that represent the weather for classification. This thesis casts dynamic region segmentation as a multiple-instance classification problem and completes it through a sequence of steps: extracting and sorting instance features, defining a bag distance measure, and applying an improved multiple-instance K-means clustering algorithm.
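The multiple-instance K-means step can be sketched as follows; this is a minimal illustration assuming NumPy, a minimum-instance-pair bag distance and medoid-style centres (the thesis's actual instance features, sorting procedure and bag metric are not reproduced here):

```python
import numpy as np

def bag_distance(bag_a, bag_b):
    """Minimum instance-pair Euclidean distance between two bags,
    one common choice of bag metric in MIL."""
    d = np.linalg.norm(bag_a[:, None, :] - bag_b[None, :, :], axis=2)
    return float(d.min())

def mi_kmeans(bags, k, iters=10):
    """K-means-style clustering over bags: medoid centres, bag distance."""
    # deterministic farthest-point initialisation
    centers = [0]
    while len(centers) < k:
        far = max(range(len(bags)),
                  key=lambda i: min(bag_distance(bags[i], bags[c]) for c in centers))
        centers.append(far)
    labels = np.zeros(len(bags), dtype=int)
    for _ in range(iters):
        # assign every bag to the nearest centre bag
        labels = np.array([int(np.argmin([bag_distance(b, bags[c]) for c in centers]))
                           for b in bags])
        # medoid update: the bag minimising total intra-cluster distance
        for j in range(k):
            members = [i for i, l in enumerate(labels) if l == j]
            if members:
                costs = [sum(bag_distance(bags[i], bags[m]) for m in members)
                         for i in members]
                centers[j] = members[int(np.argmin(costs))]
    return labels
```

Here each bag would gather the instance features of one candidate scene location; well-separated bags end up in different clusters.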
     (2) An objective method for classifying scenes with complex weather is established. First, three hypotheses derived from comparing the visual manifestations of different weather conditions are used to construct a hierarchical weather classification framework. On this basis, a set of temporal, spatial and colour features is extracted from the dynamic region of a scene to classify videos containing the visual manifestations of complex weather. A non-metric classifier tightly combining a classification and regression tree (CART) with a C-support-vector machine (C-SVM) is then designed to achieve an objective and effective classification of complex weather.
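A minimal sketch of such a two-stage scheme, in which a CART-style decision stump performs the coarse split and a nearest-class-mean rule stands in for the C-SVM (features, class labels and thresholds are all illustrative):

```python
import numpy as np

class TwoStageClassifier:
    """Illustrative hierarchical scheme: a CART-style stump first splits
    samples into two coarse branches, then a nearest-class-mean rule
    (standing in for the thesis's C-SVM) assigns the fine label."""

    def fit(self, X, y):
        self.classes = sorted(set(y.tolist()))
        means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        # stage 1: stump on the feature whose class means are most spread out
        self.feat = int(np.argmax(np.ptp(means, axis=0)))
        col = means[:, self.feat]
        self.thr = float((col.min() + col.max()) / 2)
        # stage 2: per-branch class means
        self.branch = {}
        for side in (False, True):
            mask = (X[:, self.feat] > self.thr) == side
            self.branch[side] = {c: X[mask & (y == c)].mean(axis=0)
                                 for c in self.classes if (mask & (y == c)).any()}
        return self

    def predict(self, X):
        out = []
        for x in X:
            m = self.branch[bool(x[self.feat] > self.thr)]
            out.append(min(m, key=lambda c: float(np.linalg.norm(x - m[c]))))
        return np.array(out)
```

The point of the hierarchy is that the fine-grained classifier only ever has to discriminate within one coarse weather group at a time.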
     (3) A dynamic weather removal method based on a variable time window is proposed as the pre-processing module for object detection under dynamic weather. Existing work focuses mainly on detecting rain and snow in video; this thesis instead concentrates on effectively removing the detected dynamic weather. To this end, the existing dynamic weather detection algorithm is improved by constructing a learning strategy that combines off-line K-means clustering with on-line learning based on Gaussian distributions. On this basis, a pixel-wise variable time window is designed to remove dynamic weather from video.
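The expanding-window removal idea can be sketched per pixel as follows; a single running Gaussian stands in here for the off-line K-means plus on-line Gaussian learning, and every parameter is illustrative:

```python
import numpy as np

def remove_dynamic_weather(frames, k_sigma=2.5, max_win=5):
    """Per-pixel sketch of the variable-time-window idea: intensity spikes
    above mean + k_sigma*std are treated as rain/snow and replaced by the
    nearest in-time sample that the background model accepts, searching an
    expanding temporal window."""
    frames = frames.astype(float)
    mean = frames[:3].mean(axis=0)                # crude off-line initialisation
    var = frames[:3].var(axis=0) + 1.0
    out = frames.copy()
    for t in range(len(frames)):
        std = np.sqrt(var)
        spike = frames[t] > mean + k_sigma * std  # candidate rain/snow pixels
        for w in range(1, max_win + 1):           # grow the window until clean
            for s in (t - w, t + w):
                if 0 <= s < len(frames):
                    ok = spike & (frames[s] <= mean + k_sigma * std)
                    out[t][ok] = frames[s][ok]
                    spike &= ~ok
            if not spike.any():
                break
        # on-line update of the background Gaussian with the cleaned frame
        mean = 0.95 * mean + 0.05 * out[t]
        var = 0.95 * var + 0.05 * (out[t] - mean) ** 2
    return out
```

Because rain streaks are brief at any fixed pixel, even a small window usually contains an unoccluded sample to copy from.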
     (4) An autoregressive-texture model adapted to outdoor illumination change is established for moving object detection in videos with varying outdoor illumination. Given the distinct visual manifestations of different weather conditions, different scene models are built for different weather. To this end, this thesis proposes an autoregressive-texture model suited to both gradual and rapid illumination change, consisting of an auto-regression (AR) model for stationary image sequences and a texture model insensitive to illumination change. First, pixel intensity perturbation thresholds based on histograms of inter-frame intensity differences are established for different outdoor illumination changes and introduced into the AR model's fast background estimation. Second, a recursive least-squares method is designed to estimate the AR model parameters in real time. Third, an accurate texture measure is created to refine the estimation of moving objects. Fourth, intensity and texture confidence intervals built on the perturbation thresholds and the autoregressive-texture (TAR) model are used to detect moving objects in scenes with outdoor illumination change.
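The recursive least-squares update for the AR parameters can be sketched as follows (the forgetting factor, model order and initialisation are illustrative, not the thesis's):

```python
import numpy as np

def rls_ar(series, p=2, lam=1.0):
    """Recursive least-squares estimation of AR(p) coefficients for a single
    pixel's intensity series, in the spirit of a real-time parameter update;
    lam is a forgetting factor (1.0 = no forgetting)."""
    theta = np.zeros(p)              # AR coefficients, updated per sample
    P = np.eye(p) * 1000.0           # inverse-correlation estimate (large = weak prior)
    preds = []
    for t in range(p, len(series)):
        x = np.asarray(series[t - p:t][::-1], dtype=float)  # past p samples
        preds.append(float(theta @ x))                       # one-step prediction
        err = float(series[t] - theta @ x)
        k = P @ x / (lam + x @ P @ x)                        # RLS gain
        theta = theta + k * err
        P = (P - np.outer(k, x @ P)) / lam
    return theta, np.array(preds)
```

In a background model, a pixel whose new sample deviates strongly from the AR prediction (beyond the learned perturbation threshold) becomes a foreground candidate.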
     (5) A piecewise memorizing model based on the metaphor of selective attention is proposed for moving object detection against complex backgrounds. How to model definite background states that appear only infrequently, under the restriction of memory capacity, is a problem faced by all background-subtraction-based detection algorithms. The key lies in a background recognition method that detects and adapts to complex environmental changes (such as different background illumination changes) in real time, and in building a background model with memory. To this end, this thesis proposes a piecewise memorizing framework based on the Gaussian mixture model (GMM) to handle a series of problems in moving object detection, such as background illumination change, periodic background motion and background stability, with the exception of semantic feedback.
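A single-pixel Gaussian-mixture model of the kind such a memorizing framework builds on can be sketched as follows; the mode count, learning rate and thresholds are illustrative:

```python
import numpy as np

class PixelGMM:
    """Single-pixel Gaussian-mixture background model, the basic building
    block on which a piecewise memorizing framework can be layered; the K
    modes act as the 'memory' that may retain a rarely seen but valid
    background state."""

    def __init__(self, k=3, alpha=0.05, match_sigma=2.5, bg_thresh=0.8):
        self.mu = np.zeros(k)
        self.var = np.full(k, 100.0)
        self.w = np.full(k, 1.0 / k)
        self.alpha, self.ms, self.bt = alpha, match_sigma, bg_thresh

    def update(self, x):
        """Absorb one intensity sample; return True if it looks like foreground."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        hit = int(np.argmin(d))
        if d[hit] < self.ms:                      # matched an existing mode
            self.w *= 1.0 - self.alpha
            self.w[hit] += self.alpha
            self.mu[hit] += self.alpha * (x - self.mu[hit])
            self.var[hit] = max(self.var[hit]
                                + self.alpha * ((x - self.mu[hit]) ** 2 - self.var[hit]),
                                4.0)              # variance floor keeps matching stable
            matched = hit
        else:                                     # no match: replace the weakest mode
            weak = int(np.argmin(self.w))
            self.mu[weak], self.var[weak] = x, 100.0
            self.w[weak] = self.alpha
            matched = None
        self.w /= self.w.sum()
        # background = the heaviest modes covering bg_thresh of total weight
        order = np.argsort(-self.w)
        nbg = int(np.searchsorted(np.cumsum(self.w[order]), self.bt)) + 1
        background = set(order[:nbg].tolist())
        return matched is None or matched not in background
```

The memorizing framework's contribution is what sits above this block: deciding which low-weight modes to retain long-term instead of letting them be evicted.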
     The above work on weather scene modelling provides robust methods for moving object detection under complex outdoor weather. Relying on the proposed general framework for modelling scenes with complex weather, the disturbance that weather causes to moving objects in a scene can be well overcome or even removed. Moreover, the proposed piecewise memorizing model offers a new approach to building background models with long-term memory.
Complex and changeable weather conditions affect every aspect of daily life. The visual analysis of the complex manifestations of weather conditions challenges many relevant methodologies and applications in computer vision. On the one hand, the manifestations of weather conditions lead to poor visual effects, low data quality and reduced application value. On the other hand, effective extraction of the visual manifestations of weather conditions from outdoor scenes will provide first-step support for virtual reality and vision-aided weather forecasting.
     Narrowing down to outdoor video analysis, moving object detection, which is regarded as a pre-processing module of many practical applications, inevitably yields poor detection results owing to the disturbance of weather conditions. To this end, robust weather scene modelling methods for severe weather conditions are proposed in this thesis, based on an established general weather modelling framework. More details are provided below.
     (1) A dynamic region segmentation approach based on multiple instance learning (MIL) is proposed. The dynamic region refers to the set of locations affected by temporal pixel changes in a video. As a pre-processing module of complex weather classification, a simple but effective region segmentation approach contributes to selecting the key locations that represent the weather condition for classifying different weather conditions. In this research, dynamic region segmentation is converted into an MIL problem. Through the phases of bag description, instance definition, instance sorting, distance measurement and MI-based K-means clustering, we accomplish dynamic region segmentation on videos.
     (2) A quantitative classification method for the visual effects of different weather conditions is proposed. Given the complex manifestations of weather conditions, three hypotheses are made and a two-stage classification scheme is provided. Features derived from the spatio-temporal and chromatic spaces are then extracted. Using these representative features, we develop a quantitative classifier based on an experiential binary decision tree associated with a C-SVM.
     (3) A varying temporal window approach is proposed to remove dynamic weather effects from a video. In particular, this research focuses more on the removal of rain and snow than on their detection. Dynamic weather conditions are detected by integrating off-line K-means clustering with on-line parameter maintenance of Gaussian distributions. Moreover, a variable time window containing adaptive background edges is presented for the removal of rain and snow.
     (4) An autoregressive texture-based background model combining short-term and long-term analysis is established for accurate foreground detection in videos with varying outdoor illumination, since different weather scenes can be modelled in terms of their different visual properties. Firstly, we discuss autocorrelation-based features for identifying foreground and outdoor illumination variations in short-term sequences, and propose an adaptive threshold learning approach, insensitive to fast inner-pixel illumination variation, based on histograms of intensity differences between successive frames. Then, an iterative orthogonal least squares (OLS) algorithm is designed to estimate the parameters of the auto-regression (AR) model against gradual illumination change for background estimation in long-term sequences. Finally, we devise a texture measure to eliminate the regional effect of fast illumination variation.
     (5) A piecewise memorizing framework is proposed, which is capable of subtracting long-period video background under the restriction of memory capacity. Three major contributions can be claimed. Firstly, hypotheses of background subtraction indicating what to recognize and memorize are proposed, taking the metaphors of psychological selective attention theory into consideration. Secondly, a prior perception-concerned recognition of rapid illumination change is presented based on a segmented stationarity test. Thirdly, a memorizing framework based on the GMM is put forward for the storage of long-period background. This framework is capable of identifying long-period background appearances, as well as circumventing numerous typical problems except for semantic feedback.
     The above research on weather scene modelling methods yields robust foreground detection under complex weather conditions. Based on the proposed general framework of scene modelling for the complex manifestations of weather conditions, the disturbance of weather conditions can be overcome and even removed. Besides, the presented piecewise memorizing framework provides a new idea for background modelling with long-period memory.
