Region spatiogram in color names for background modeling
  • English title: Region spatiogram in color names for background modeling
  • Authors: 金静; 党建武; 王阳萍; 翟凤文
  • English authors: Jin Jing; Dang Jianwu; Wang Yangping; Zhai Fengwen (School of Electronic and Information Engineering, Lanzhou Jiao Tong University; Gansu Provincial Engineering Research Center for Artificial Intelligence and Graphics & Image Processing)
  • Keywords: computer vision; intelligent video analysis; moving object detection; background modeling; color names; spatiogram
  • Chinese journal code: ZGTB
  • English journal title: Journal of Image and Graphics
  • Affiliations: School of Electronic and Information Engineering, Lanzhou Jiao Tong University; Gansu Provincial Engineering Research Center for Artificial Intelligence and Graphics & Image Processing
  • Publication date: 2019-05-16
  • Publisher: Journal of Image and Graphics (中国图象图形学报)
  • Year: 2019
  • Volume/Issue: v.24; No.277
  • Funding: National Natural Science Foundation of China (61562057); Scientific Research Project of Gansu Provincial Higher Education Institutions (2017D-08); Youth Foundation of Lanzhou Jiao Tong University (2015003)
  • Language: Chinese
  • Record ID: ZGTB201905004
  • Page count: 10
  • Issue: 05
  • CN: 11-3758/TB
  • Pages: 54-63
Abstract
Objective: To segment the moving foreground accurately and in real time in complex scenes with illumination changes and dynamic background interference, and to overcome the shortcomings of traditional color-feature-based and pixel-based methods, a moving object detection method that builds region histogram models in the color name space is proposed. Method: The RGB color space is first mapped to the more robust, lower-dimensional color name space; histograms are then built over the local neighborhood of each pixel using the color names as features, while the spatial information of the pixels in every histogram bin is recorded. The background model of each pixel consists of K spatiograms, each assigned a weight according to its matching rate. The dimension-reduced color names improve the robustness of the model and the timeliness of detection, and the positional information introduced by the spatiograms improves the accuracy of the background model. The update of each model spatiogram and its weight is then controlled by the learning rates αb and αω, which improves the adaptivity of the model. Experiments were run on all video sequences of a standard test dataset, and reasonable value ranges for the parameters of the algorithm were determined by analyzing the comprehensive performance measure (F1) and average false negative (FN) curves. Result: Qualitative and quantitative analysis of the experimental results shows that the proposed method yields good foreground detection results and markedly improves detection performance, especially in multi-modal scenes and complex scenes with changing illumination. Averaged over all scene categories, the comprehensive performance measure (average F1) is 0.65%, 3.86%, and 3.9% higher than that of the strong-performing methods ViBe, LOBSTER (local binary similarity segmenter), and DECOLOR (detecting contiguous outliers in the low-rank representation), respectively, and real-time detection of moving objects is achieved through GPU parallel acceleration. Conclusion: For moving object detection in complex video environments, the proposed method segments the moving foreground more accurately than existing methods; it is a real-time, effective detection method with practical value.
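For readers who want a concrete picture of the two building blocks the abstract describes, the sketch below shows how a local RGB patch might be mapped to the 11 color names and summarized as a second-order spatiogram (a per-bin pixel count plus the spatial mean and covariance of that bin). This is an illustrative sketch, not the authors' code: the lookup table w2c, its 8-level RGB quantization (modeled on the color-naming data accompanying [12]), and all function names are assumptions.

```python
import numpy as np

N_COLOR_NAMES = 11  # black, blue, brown, grey, green, orange, pink, purple, red, white, yellow

def rgb_to_color_names(patch_rgb, w2c):
    """Map an (H, W, 3) uint8 RGB patch to per-pixel color-name indices.

    `w2c` is assumed to be a (32*32*32, 11) probability table indexed by
    8-level-quantized RGB, as distributed with the color-naming work of [12].
    """
    r = patch_rgb[..., 0].astype(np.int64) // 8
    g = patch_rgb[..., 1].astype(np.int64) // 8
    b = patch_rgb[..., 2].astype(np.int64) // 8
    idx = r + 32 * g + 32 * 32 * b
    return np.argmax(w2c[idx], axis=-1)          # (H, W) indices in [0, 10]

def build_spatiogram(patch_rgb, w2c):
    """Second-order spatiogram: per-bin pixel count, spatial mean, covariance."""
    names = rgb_to_color_names(patch_rgb, w2c)
    h, w = names.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    labels = names.ravel()

    counts = np.zeros(N_COLOR_NAMES)
    means = np.zeros((N_COLOR_NAMES, 2))
    covs = np.tile(np.eye(2), (N_COLOR_NAMES, 1, 1))
    for b in range(N_COLOR_NAMES):
        pts = coords[labels == b]
        counts[b] = len(pts)
        if len(pts) > 0:
            means[b] = pts.mean(axis=0)
        if len(pts) > 1:
            covs[b] = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)
    counts /= max(counts.sum(), 1.0)             # normalize to a probability histogram
    return counts, means, covs
```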
Objective: In recent years, intelligent video analysis has become an important research area in computer vision. Moving object detection aims to capture the moving foreground in all types of surveillance environments and is thus an essential foundation for subsequent video processing such as target tracking and object segmentation. Traditional methods often model the background with color features at the level of single pixels. Traditional color features are easily disturbed by light and shadow, and a single pixel cannot capture the spatial relations among the pixels of a region. To detect the moving foreground precisely and in real time in complex video sequences with illumination changes and dynamic backgrounds, we propose a moving object detection method based on background modeling with region spatiograms in the color name space. Color names are the linguistic labels that humans attach to colors; they are learned with a PLSA model, which in effect defines a mapping from the RGB space to a robust 11-dimensional color name (CN) space. Modeling the background in the color name space addresses illumination variation. A histogram is a zeroth-order feature descriptor that is robust to scale and rotation variation, whereas a second-order spatiogram additionally contains the spatial mean and covariance of each histogram bin. The spatiogram thus retains extensive information about the geometry of patches and captures the global positions of pixels rather than their pairwise relationships. Using spatiograms in the color name space is therefore well suited to background modeling.

Method: First, we map the RGB color space to the more robust, lower-dimensional color name space. We then build spatiograms over the local region of each pixel, using the color names as features and recording the spatial information of the pixels in every bin. The background model of every pixel comprises K spatiograms, each given a weight according to its matching rate. The dimension-reduced color name feature improves the robustness of the models and the timeliness of detection, and the spatial information introduced by the spatiograms improves the accuracy of the background model. To improve the adaptivity of the models, the update of the model spatiograms and their weights is controlled by the learning rates αb and αω. We conducted experiments on all video sequences of the standard benchmark CDnet (changedetection.net), which covers challenges such as illumination variation, moving shadows, and multi-modal backgrounds. The parameters of the algorithm, namely the model size K, the thresholds TB and Tp, and the learning rates αb and αω, were determined by analyzing the comprehensive performance measure F1 and the average false negative curves.

Result: Quantitative and qualitative analyses indicate that the proposed method achieves the expected results. It performs particularly well in scenes with illumination changes and multi-modal backgrounds. Compared with ViBe, LOBSTER (local binary similarity segmenter), and DECOLOR (detecting contiguous outliers in the low-rank representation), the method improves the average comprehensive performance measure F1 over all scene categories by 0.65%, 3.86%, and 3.9%, respectively. Because the local-region model of each pixel can be processed independently, real-time detection is achieved with GPU parallel acceleration.
Conclusion: The robust color name space effectively handles illumination variation, and the multiple spatiogram models effectively match multi-modal backgrounds such as waving trees, water, and fountains. The algorithm therefore segments the moving foreground in complex video environments more accurately than existing methods; it is a real-time, effective detection algorithm with practical value in intelligent video analysis.
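The Method part of the abstract describes per-pixel maintenance of K weighted spatiograms with learning rates αb and αω and thresholds Tp and TB. The following minimal sketch illustrates one way such a scheme can work; the similarity measure (a Bhattacharyya coefficient damped by a Gaussian spatial term, in the spirit of the improved spatiogram similarity of [13]), the update rules, and every parameter value are assumptions rather than the paper's reported design.

```python
import numpy as np

def spatiogram_similarity(s_a, s_b):
    """Bhattacharyya coefficient weighted by per-bin spatial agreement.

    Each spatiogram is (counts, means, covs). The spatial weight is a Gaussian
    of the distance between bin means under the summed covariances; the exact
    form used in the paper may differ.
    """
    (n_a, mu_a, sig_a), (n_b, mu_b, sig_b) = s_a, s_b
    rho = 0.0
    for k in range(len(n_a)):
        if n_a[k] == 0 or n_b[k] == 0:
            continue
        sig = sig_a[k] + sig_b[k]
        d = mu_a[k] - mu_b[k]
        spatial = np.exp(-0.5 * d @ np.linalg.solve(sig, d))
        rho += np.sqrt(n_a[k] * n_b[k]) * spatial
    return rho

def update_pixel_model(models, weights, s_obs, alpha_b=0.05, alpha_w=0.01,
                       T_p=0.7, T_B=0.8):
    """One background-subtraction step for a single pixel.

    models  : list of K spatiograms; weights : (K,) array summing to 1.
    All parameter values here are illustrative, not the paper's settings.
    """
    sims = np.array([spatiogram_similarity(m, s_obs) for m in models])
    matched = sims >= T_p
    # Weight update: matched models are reinforced, the rest decay.
    weights = (1 - alpha_w) * weights + alpha_w * matched
    weights /= weights.sum()
    if matched.any():
        k = int(np.argmax(sims))
        # Blend the best-matching spatiogram toward the observation with rate alpha_b.
        models[k] = tuple((1 - alpha_b) * old + alpha_b * new
                          for old, new in zip(models[k], s_obs))
    else:
        # No match: replace the lowest-weight model with the new observation.
        k = int(np.argmin(weights))
        models[k] = s_obs
    # The pixel is background if it matched one of the top-weighted models
    # whose cumulative weight stays within T_B.
    order = np.argsort(weights)[::-1]
    bg_set = order[np.cumsum(weights[order]) <= T_B]
    is_background = bool(matched[bg_set].any()) if len(bg_set) else bool(matched[order[0]])
    return is_background, models, weights
```

Because every pixel's local-region model is independent, a loop of this kind over all pixels is what the GPU parallel acceleration mentioned in the Result part can exploit.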
References
[1]Dong J N,Yang C H.Moving object detection using improved Gaussian mixture models based on spatial constraint[J].Journal of Image and Graphics,2016,21(5):588-594.[董俊宁,杨词慧.空间约束混合高斯运动目标检测[J].中国图象图形学报,2016,21(5):588-594.][DOI:10.11834/jig.20160506]
    [2]Aqel S,Aarab A,Sabri M A.Shadow detection and removal for traffic sequences[C]//Proceedings of 2016 International Conference on Electrical and Information Technologies.Tangiers,Morocco:IEEE,2016:168-173.[DOI:10.1109/EITech.2016.7519583]
    [3]Bouwmans T.Traditional and recent approaches in background modeling for foreground detection:an overview[J].Computer Science Review,2014,11-12:31-66.[DOI:10.1016/j.cosrev.2014.04.001]
    [4]Stauffer C,Grimson W E L.Adaptive background mixture models for real-time tracking[C]//Proceedings of the Computer Society Conference on Computer Vision and Pattern Recognition.Fort Collins:IEEE,1999:246-252.[DOI:10.1109/CVPR.1999.784637]
    [5]Kim K,Chalidabhongse T H,Harwood D,et al.Real-time foreground-background segmentation using codebook model[J].Real-Time Imaging,2005,11(3):172-185.[DOI:10.1016/j.rti.2004.12.004]
    [6]Jin J,Dang J W,Wang Y P,et al.Application of adaptive low-rank and sparse decomposition in moving object detection[J].Journal of Frontiers of Computer Science and Technology,2016,10(12):1744-1751.[金静,党建武,王阳萍,等.自适应低秩稀疏分解在运动目标检测中的应用[J].计算机科学与探索,2016,10(12):1744-1751.][DOI:10.3778/j.issn.1673-9418.1603092]
    [7]Liu X,Zhong B N,Zhang M S,et al.Motion saliency extraction via tensor based low-rank recovery and block-sparse representation[J].Journal of Computer-Aided Design & Computer Graphics,2014,26(10):1753-1763.[柳欣,钟必能,张茂胜,等.基于张量低秩恢复和块稀疏表示的运动显著性目标提取[J].计算机辅助设计与图形学学报,2014,26(10):1753-1763.]
    [8]Maddalena L,Petrosino A.The SOBS algorithm:What are the limits?[C]//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.Providence,RI,USA:IEEE,2012:21-26.[DOI:10.1109/CVPRW.2012.6238922]
    [9]Hofmann M,Tiefenbacher P,Rigoll G.Background segmentation with feedback:the pixel-based adaptive segmenter[C]//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.Providence,RI,USA:IEEE,2012:38-43.[DOI:10.1109/CVPRW.2012.6238925]
    [10]Barnich O,Van Droogenbroeck M.ViBe:a universal background subtraction algorithm for video sequences[J].IEEE Transactions on Image Processing,2011,20(6):1709-1724.[DOI:10.1109/TIP.2010.2101613]
    [11]Braham M,Van Droogenbroeck M.Deep background subtraction with scene-specific convolutional neural networks[C]//Proceedings of 2016 International Conference on Systems,Signals and Image Processing.Bratislava,Slovakia:IEEE,2016:1-4.[DOI:10.1109/IWSSIP.2016.7502717]
    [12]Van De Weijer J,Schmid C,Verbeek J,et al.Learning color names for real-world applications[J].IEEE Transactions on Image Processing,2009,18(7):1512-1523.[DOI:10.1109/TIP.2009.2019809]
    [13]Conaire C O,O'Connor N E,Smeaton A F.An improved spatiogram similarity measure for robust object localisation[C]//Proceedings of 2007 IEEE International Conference on Acoustics,Speech and Signal Processing.Honolulu,HI,USA:IEEE,2007:I-1069-I-1072.[DOI:10.1109/ICASSP.2007.366096]
    [14]Goyette N,Jodoin P M,Porikli F,et al.Changedetection.net:A new change detection benchmark dataset[C]//Proceedings of 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.Providence,RI,USA:IEEE,2012:1-8.[DOI:10.1109/CVPRW.2012.6238919]
    [15]St-Charles P L,Bilodeau G A.Improving background subtraction using local binary similarity patterns[C]//Proceedings of 2014 IEEE Winter Conference on Applications of Computer Vision.Steamboat Springs,CO,USA:IEEE,2014:509-515.[DOI:10.1109/WACV.2014.6836059]
    [16]Gao Z,Cheong L,Wang Y X.Block-sparse RPCA for salient motion detection[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2014,36(10):1975-1987.[DOI:10.1109/TPAMI.2014.2314663]
