Research on an Adaptive Background Model Updating Method Based on Intensity-Level Migration Statistics
Abstract
Background modeling (BGM) is a key technology in intelligent video surveillance, and its performance directly determines the realizability and robustness of various high-level intelligent video analyses. Over the past decade, the study of BGM has been a popular but challenging topic in the fields of video analysis and security monitoring. Therefore, studies related to BGM have both theoretical and engineering significance.
     So far, most BGM methods remain insufficiently practical because they cannot cope with the complexity and diversity of real-world surveillance scenes. The core problem is that a built background model cannot rapidly and effectively learn the various random changes that a scene undergoes in the temporal and spatial dimensions. Hence, the study of adaptive background model updating is a critical step toward BGM's practical application. Existing mainstream methods of adaptive background model updating have the following drawbacks: the initial learning rates of the background models must be set manually, which limits adaptability; the learning-rate control schemes depend on specific background models, which limits generality; and the learning rates are computed pixel by pixel, which lowers efficiency. To overcome these drawbacks of traditional methods, a novel method of adaptive background model updating is proposed in this thesis. The main work of the thesis is as follows:
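For context, the learning rate discussed above governs how quickly a background model absorbs new frames. A minimal sketch of the conventional running-average update that such schemes control (the function name and the fixed `alpha` are illustrative, not the thesis's notation):

```python
import numpy as np

def update_background(background, frame, alpha):
    """Conventional running-average background update:
    B_t = (1 - alpha) * B_{t-1} + alpha * F_t.
    `alpha` is the learning rate that the thesis seeks to adapt automatically."""
    return (1.0 - alpha) * background + alpha * frame

background = np.full((4, 4), 100.0)   # current background estimate
frame = np.full((4, 4), 120.0)        # new, brighter frame
updated = update_background(background, frame, alpha=0.05)
```

With `alpha = 0.05`, the estimate moves 5% of the way toward the new frame each step; choosing this value by hand is precisely the manual tuning the thesis aims to eliminate.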
     ① Inspired by the model of atomic energy-level transitions in physics, the thesis proposes interpreting pixel intensity changes in videos as migrations of pixel samples between different intensity levels (i.e., light-intensity energy levels). On this basis, a new paradigm of low-level video data mining for surveillance videos, called intensity-level migration statistics (IMS), is proposed. Compared with the three traditional paradigms of low-level video data mining (i.e., the pixel-based, region-based, and subspace-based paradigms), IMS can mine unique statistical information from surveillance videos that the traditional paradigms cannot obtain. This statistical information is shown to be effective for controlling the adaptive background model updating process.
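The core counting step can be sketched as follows: for each pair of consecutive frames, tally how many pixels migrated from gray level i to gray level j. This is a simplified illustration of the IMS idea under assumed 8-bit intensities; the function name is hypothetical.

```python
import numpy as np

def intensity_migration_counts(prev_frame, curr_frame, levels=256):
    """Count pixel 'migrations' between intensity levels across two frames.
    Entry (i, j) counts pixels whose gray level moved from i to j."""
    counts = np.zeros((levels, levels), dtype=np.int64)
    # Unbuffered scatter-add: each pixel contributes one migration event.
    np.add.at(counts, (prev_frame.ravel(), curr_frame.ravel()), 1)
    return counts

prev_frame = np.array([[10, 10], [20, 30]], dtype=np.uint8)
curr_frame = np.array([[10, 12], [20, 35]], dtype=np.uint8)
counts = intensity_migration_counts(prev_frame, curr_frame)
```

Accumulating such counts over many frames yields scene-wide statistics that no single pixel's history contains, which is what distinguishes IMS from the pixel-based paradigm.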
     ② To address the drawbacks of traditional adaptive background model updating methods, an IMS-based global method of adaptive background model updating is proposed. By collecting intensity-level migration statistics of pixels over the global surveillance scene, the method generates a two-dimensional discrete probability distribution called the global intensity-level migration probability map (IMPM). The global IMPM is then used as an online learning-rate lookup table, from which the adaptive learning rates needed for background model updating are rapidly retrieved. The method has the following advantages: 1) it is highly adaptive, since no initial learning rate needs to be set manually; 2) it has good generality, since learning-rate generation is independent of any specific background model; 3) it is computationally efficient, since all pixels' learning rates are retrieved by fast table lookup. Experimental results show that the proposed method effectively enhances the adaptability and robustness of background models.
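The lookup-table mechanism might be sketched like this: migration counts are normalized into a probability map, and each pixel's learning rate is then fetched by indexing the map with its (previous, current) intensity pair. The row-wise normalization and the linear `base_rate` scaling are assumptions for illustration; the thesis's exact mapping from probabilities to learning rates may differ.

```python
import numpy as np

def build_impm(counts):
    """Normalize migration counts row-wise into a probability map (IMPM):
    row i is the empirical distribution of destination levels for source level i."""
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts.astype(float), row_sums,
                     out=np.zeros(counts.shape, dtype=float),
                     where=row_sums > 0)

def lookup_learning_rates(impm, prev_frame, curr_frame, base_rate=0.05):
    """Fetch per-pixel learning rates by table lookup: frequent (likely
    background-dynamics) migrations receive higher rates."""
    return base_rate * impm[prev_frame, curr_frame]

# Toy 4-level example: level 0 migrates to 0 or 1 equally often; 3 stays at 3.
counts = np.array([[2, 2, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 1]])
impm = build_impm(counts)
prev_frame = np.array([[0, 3]])
curr_frame = np.array([[1, 3]])
rates = lookup_learning_rates(impm, prev_frame, curr_frame)
```

Because the whole frame's rates come from one fancy-indexing lookup rather than a per-pixel computation, this design is what gives the method its efficiency advantage.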
     ③ For certain surveillance scenes with complex regional dynamics, errors might occur in the global IMPM computed by the method in ②. To improve on the global method, an IMS-based regional method of adaptive background model updating is proposed. The method consists of the following steps: 1) adaptive estimation of scene dynamics; 2) adaptive scene segmentation based on the estimated dynamics; 3) generation of regional IMPMs by collecting intensity-level migration statistics separately within each scene region; 4) use of each regional IMPM as the learning-rate lookup table for its corresponding region. Experimental results show that the regional method effectively overcomes the shortcomings of the global method.
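The segmentation step could be sketched under the simplifying assumption that "dynamics" is measured as per-pixel temporal variance over a short frame buffer, split by a fixed threshold (the thesis's method estimates the dynamics and adapts the segmentation automatically):

```python
import numpy as np

def segment_by_dynamics(frames, threshold):
    """Label each pixel static (0) or dynamic (1) by its temporal variance
    over a buffer of frames; each resulting region would then maintain its
    own regional IMPM and learning-rate lookup table."""
    variance = np.var(np.stack(frames, axis=0).astype(float), axis=0)
    return (variance > threshold).astype(np.uint8)

# A flickering bottom-right pixel ends up in the dynamic region.
frames = [np.array([[0, 0], [0, 100]]),
          np.array([[0, 0], [0,   0]]),
          np.array([[0, 0], [0, 100]])]
labels = segment_by_dynamics(frames, threshold=10.0)
```

Keeping separate migration statistics per region prevents a highly dynamic area (e.g., waving trees) from contaminating the learning rates of a quiet one.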
     ④ When certain special incidents (e.g., abandoned objects) occur in a surveillance scene, the regional adaptive background model updating method proposed in ③ may fail within the incident regions. Hence, an IMS-based adaptive background model updating method for particular incident regions (PIRs) is proposed. It comprises two parts: 1) IMPM-based nonparametric PIR detection and segmentation; 2) adaptive background model updating within the PIR, based on the visual perception mechanism humans use when solving jigsaw puzzles. Finally, this PIR updating method is integrated into the regional method proposed in ③, effectively improving the latter's robustness.
     A series of experiments on the authoritative Changedetection benchmark dataset shows that IMS, as a paradigm of low-level video data mining, has diverse applications and can mine a variety of unique and valuable statistical information hidden in surveillance videos, and that the IMS-based adaptive background model updating method significantly outperforms traditional adaptive background model updating methods.
