Research on Moving Object Detection Algorithms in Intelligent Video Surveillance
Abstract
Moving object detection and tracking combine computer vision and pattern recognition techniques and are the two most fundamental, most central, and most widely applied topics in intelligent video surveillance (IVS) systems. Since the birth of vision technology, object detection and tracking have received great attention and a large body of research results has accumulated; nevertheless, detecting and tracking moving objects remains a highly challenging problem in vision research, with many theoretical and practical difficulties still to be solved. Practice shows that no single technique achieves accurate segmentation when object detection relies only on low-level image analysis; moreover, dynamic scenes (weather changes, illumination changes, waving trees, water surfaces, flag motion, shadows, occlusion, camera motion) make effective and accurate segmentation of moving objects extremely difficult. In the absence of a theory of feedback mechanisms for image analysis, building an adaptive background model is a better choice. Under the title "Research on Moving Object Detection Algorithms in Intelligent Video Surveillance", this dissertation studies object detection and tracking methods for image sequences with complex backgrounds by introducing prior knowledge and adding feedback mechanisms. The algorithms studied include dynamic background modeling, moving-shadow removal, detection-based target tracking, and spatial localization of targets in monocular image sequences. The main work of this dissertation is as follows:
     1. Collecting background information is the foundation of moving object detection: whether level-set segmentation or optical-flow detection is used, building a robust background model is the first step of detection and tracking. For complex backgrounds a multimodal distribution model can be built, and the traditional approach is the Gaussian mixture model (GMM). During the data training for model parameter estimation, however, we found that camera shake and negative samples cause the distribution of pixel gray values to take on a narrow-band, heavy-tailed shape. This dissertation therefore introduces the Student's t mixture distribution and studies an adaptive Student's t mixture model (ASMM) for background modeling. Compared with the Gaussian distribution, the t distribution has an additional degrees-of-freedom parameter related to the number of samples, which improves its tolerance of outliers. Experimental results show that the object detection error rate drops below 1%, which raises the object recognition rate.
     2. Mixture probability density models are usually fitted with the EM algorithm. To suit the characteristics of the t distribution, this dissertation proposes a periodic ECM2 (Expectation Conditional Maximization) procedure for parameter estimation, which partitions the parameter space into two parts and thereby lengthens the iteration step. Because the EM algorithm easily becomes trapped in local optima and is sensitive to the initial point, a prior initialization step is introduced: gray values that occur frequently in the training set are chosen as initial cluster centers, and the number of mixture components is adjusted adaptively. During parameter updating, component annihilation and a triangle-inequality test are introduced to avoid redundant computation and speed up the algorithm. Experimental results show that, owing to the step-acceleration scheme, the prior initialization step, and the redundancy-removal methods, ASMM is computationally more efficient than the traditional GMM.
     3. Moving object detection is a key step in extracting visual information, but because of illumination effects the detected moving objects often contain shadows; if the moving shadow is not segmented out, moving object detection fails. This dissertation reviews the theoretical background of shadow detection, shadow detection methods based on the physical properties of shadows, and the theory of illumination-invariant images, and then proposes a successive-threshold shadow detection method. The method combines the physical properties of shadows with illumination-invariant image theory: first, a rough moving object is separated in the illumination-invariant image (T1); then background subtraction on the original image yields the moving region that contains the shadow (T2); since, by the physical properties of shadows, a boundary exists between the shadow and the object, a third segmentation (T3) splits the shadowed moving region into two parts; finally, the object obtained in step T1 is matched by histogram against the two classes obtained in step T3, so the shadow is detected and a relatively complete object region is obtained. Experimental results show that, compared with existing methods, the proposed method improves both the shadow detection rate and the object recognition rate, with the object recognition rate exceeding 85%.
     4. Moving object detection and tracking are always closely related; tracking methods are usually either detection-based or feature-based. Building on the accurate moving objects segmented by background subtraction, this dissertation studies detection-based tracking and carries out object recognition experiments using color-histogram matching. The experiments show that this tracking method is fast and efficient in a video surveillance system.
     5. Localizing a target in space from monocular video and recovering its three-dimensional trajectory is a difficult problem. This dissertation introduces a ground-plane constraint into monocular visual surveillance and proposes a tracking method for moving objects in monocular image sequences. Using the camera installation information and the geometry of image formation, combined with nonlinear compensation, the method derives formulas for the actual ground position of a moving target under monocular, asymmetric, nonlinear imaging, i.e., it builds a visual model of the scene. By extracting invariant corner points of the target, spatial localization and trajectory description of the moving target are realized. Experimental results show that the visual model can localize the target with an error of the computed values relative to the measured values within 5%, which is sufficient for tracking the moving target.
     In summary, the innovations of this dissertation lie mainly in three aspects: (1) the ASMM model is introduced for dynamic background modeling to improve the object recognition rate, and several techniques are used to improve the algorithm's efficiency; (2) on the basis of background modeling, a successive-threshold shadow removal method based on the physical properties of shadows and the illumination-invariant image is proposed; (3) on the basis of accurately detected moving objects, a ground-plane constraint is introduced and a method for tracking and localizing moving objects in monocular image sequences is proposed.
Moving object detection and tracking combine computer vision and pattern recognition technology. They are the most fundamental, central, and widely used topics in intelligent video surveillance (IVS) systems. Since the beginning of vision technology, much attention has been paid to object detection and tracking and many research achievements have been obtained. However, they remain challenging problems, because many theoretical and practical issues still need to be resolved. Practice shows that no technique can accurately segment a moving target from low-level image analysis alone. Additionally, dynamic scenes (including weather changes, illumination, waving trees, flags, flowing water, shadows, occlusion, camera motion, etc.) bring great difficulties to moving object segmentation. In the absence of theoretical guidance on feedback mechanisms from computer vision, building an adaptive background model is a better choice. Under the title "Study on Moving Object Detection in Intelligent Video Surveillance", this dissertation studies algorithms for detecting and tracking moving objects against the complex backgrounds of video sequences by taking advantage of prior knowledge and adding feedback mechanisms. The algorithms researched in this dissertation include dynamic scene modeling, elimination of moving-object shadows, target tracking, and spatial positioning in monocular image sequences. The main work and contributions of this dissertation are as follows.
     1. Collecting background information is the basis of moving target detection. Constructing a robust background is the first step, whether level-set segmentation or optical-flow detection is used. For complex backgrounds, a multimodal distribution model can be established; the traditional way is to build a Gaussian mixture model (GMM). However, during the data training for model parameter estimation, we found that the pixel-value distributions take on a narrow-band, heavy-tailed shape because of camera shake and negative samples. Therefore, this dissertation introduces the Student's t distribution and studies an adaptive Student's t mixture model for the background (ASMM). Compared with the Gaussian model, the Student's t distribution has an additional degree of freedom related to the number of samples, which increases its capacity to accommodate outliers. Experimental results show that the target detection error is less than 1%, so the target recognition rate is greatly improved.
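For reference, a per-pixel Student's t mixture of the kind used for such background models generally takes the following form (this is the standard mixture formulation in our own notation; the dissertation's exact parameterization may differ):

```latex
p(x_t) = \sum_{k=1}^{K} \pi_k \,\mathcal{S}\!\left(x_t \mid \mu_k, \sigma_k^{2}, \nu_k\right),
\qquad
\mathcal{S}\!\left(x \mid \mu, \sigma^{2}, \nu\right)
= \frac{\Gamma\!\left(\tfrac{\nu+1}{2}\right)}
       {\Gamma\!\left(\tfrac{\nu}{2}\right)\sqrt{\pi\nu\sigma^{2}}}
  \left[ 1 + \frac{(x-\mu)^{2}}{\nu\sigma^{2}} \right]^{-\tfrac{\nu+1}{2}}
```

Here $x_t$ is the pixel value at time $t$, the $\pi_k$ are mixing weights summing to one, and $\nu_k$ is the degrees-of-freedom parameter that controls tail heaviness; as $\nu_k \to \infty$ a component tends to a Gaussian, so the model behaves like a GMM on well-behaved pixels while tolerating outliers when $\nu_k$ is small.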
     2. The EM algorithm is commonly used to fit the background probability model. We further present a periodic ECM2 (Expectation Conditional Maximization) method to estimate the parameters of the Student's t distribution; it splits the parameter space into two parts so that the length of the iteration step is increased. Because the EM algorithm is likely to fall into local optima, we introduce a prior initialization step to reduce the influence of the initial points: the pixel values of high frequency are chosen as initial cluster centers, and at the same time the number of cluster components is selected automatically. Additionally, component elimination and the triangle inequality are used to avoid redundant computation during parameter updating. Experimental results show that the computational efficiency of ASMM is better than that of GMM.
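As a rough illustration of the initialization idea only (not the dissertation's code), the sketch below picks frequently occurring gray values from a pixel's training samples as initial cluster centers; the function name, the spacing rule, and the limits are our own assumptions.

```python
import numpy as np

def init_centers(samples, k_max=5, min_sep=10):
    """Choose up to k_max frequently occurring gray values as initial cluster
    centers, keeping any two centers at least min_sep gray levels apart.
    samples: 1-D array of training gray values (0..255) for one pixel."""
    hist = np.bincount(samples.astype(np.int64), minlength=256)
    centers = []
    for g in np.argsort(hist)[::-1]:              # gray values, most frequent first
        if hist[g] == 0:
            break                                 # no more observed values
        if all(abs(int(g) - c) >= min_sep for c in centers):
            centers.append(int(g))
        if len(centers) == k_max:
            break
    return centers                                # component count adapts to the data

# Toy example: a mostly dark pixel occasionally crossed by a bright object.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(40, 3, 900),
                          rng.normal(200, 5, 100)]).clip(0, 255)
print(init_centers(samples))                      # two centers, near 40 and 200
```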
     3. Moving object detection is a key step of information extraction in computer vision. However, because of the influence of light, shadows are often detected together with the moving targets; if the shadows are not split out, moving target detection fails. This dissertation discusses the theoretical background of shadow detection, presents detection methods based on the physical nature of shadows, and studies the theory of the illumination-invariant image. Finally, a method that uses successive thresholds to classify pixels as moving target or shadow is proposed, combining the physical properties of shadows with illumination-invariant image theory. First, rough moving targets are isolated in the illumination-invariant image (T1). Second, moving areas including shadows are obtained from the original image by background subtraction (T2). Third, the moving areas are segmented into two groups, because according to the physical properties of shadows there are boundaries between shadows and moving targets (T3). Finally, the two partitions from step T3 are matched against the object from step T1 by histogram comparison in order to obtain an accurate target area. Experimental results show that the proposed method improves the detection rates of both target and shadow in comparison with existing methods; the average target recognition rate exceeds 85%.
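Two pieces of such a pipeline can be sketched as follows, under explicit assumptions: a 1-D log-chromaticity illumination-invariant image in the style of Finlayson et al. (with a pre-calibrated invariant angle theta), and a histogram-matching step that decides which T3 partition corresponds to the object found in T1. All names and the correlation criterion are illustrative, not the dissertation's implementation.

```python
import numpy as np

def invariant_image(rgb, theta):
    """1-D log-chromaticity illumination-invariant image (Finlayson-style).
    rgb: float array (H, W, 3) with values > 0; theta: calibrated angle in radians."""
    chi1 = np.log(rgb[..., 0] / rgb[..., 1])      # log(R/G)
    chi2 = np.log(rgb[..., 2] / rgb[..., 1])      # log(B/G)
    return chi1 * np.cos(theta) + chi2 * np.sin(theta)

def gray_hist(values, bins=32):
    """Normalized gray-level histogram of the pixels in one region."""
    h, _ = np.histogram(values, bins=bins, range=(0, 256), density=True)
    return h

def object_partition(t1_pixels, part_a_pixels, part_b_pixels):
    """Of the two T3 partitions, return 'A' or 'B' for the one whose histogram
    correlates better with the rough target from T1; the other partition is
    taken to be the cast shadow."""
    ht = gray_hist(t1_pixels)
    corr = lambda h: np.corrcoef(ht, h)[0, 1]
    return "A" if corr(gray_hist(part_a_pixels)) >= corr(gray_hist(part_b_pixels)) else "B"
```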
     4. Moving object detection is always closely related to tracking. There are usually two kinds of tracking methods: one is based on detection and the other on features. This dissertation studies the first kind on the basis of background subtraction, using color-histogram matching for target recognition. The experiments show that the proposed tracking method is fast and highly efficient in a video surveillance system.
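A minimal sketch of this kind of detection-based matching with OpenCV is given below; the hue-saturation binning, the 0.7 acceptance threshold, and the function names are our own assumptions rather than the dissertation's settings.

```python
import cv2

def region_hist(frame_bgr, mask):
    """Normalized hue-saturation histogram of one detected foreground blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist

def match_detection(track_hist, detection_hists, min_corr=0.7):
    """Return the index of the detection whose color histogram best matches the
    tracked target, or None if no detection is similar enough."""
    scores = [cv2.compareHist(track_hist, h, cv2.HISTCMP_CORREL) for h in detection_hists]
    if not scores:
        return None
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= min_corr else None
```

In a frame-to-frame loop the track's histogram would be computed once (or updated slowly) and compared against the histograms of the blobs produced by background subtraction in the current frame.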
     5. It is a hard problem to locate and trace the three-dimensional movement of a target from monocular image sequences. A new tracking method for moving targets is proposed by introducing a ground-plane constraint. In this method, formulas for estimating the position of a moving target under asymmetric nonlinear imaging are derived from the camera's installation information and the geometry of image formation, together with nonlinear compensation. Image-plane coordinates are converted to three-dimensional ones, which means a spatial measurement model is built for the monocular vision sensor. By extracting invariant corner features of the moving targets, spatial localization and movement description are realized. Experimental results show that the relative errors between the calculated values and the actual measurements are less than 5%.
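The core of the ground constraint can be sketched with idealized pinhole geometry (the lens-distortion/nonlinear compensation that the dissertation adds is omitted here, and the notation is ours):

```python
import numpy as np

def ground_point(u, v, K, R, C):
    """Intersect the viewing ray through pixel (u, v) with the ground plane Z = 0.
    K: 3x3 camera intrinsics; R: world-to-camera rotation; C: camera center in
    world coordinates, with C[2] the mounting height above the ground."""
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in the world frame
    s = -C[2] / d[2]                                     # scale that puts the point on Z = 0
    return (C + s * d)[:2]                               # ground-plane coordinates (X, Y)
```

In practice (u, v) would be taken at the target's foot point (for example the bottom of its bounding box or a stable corner feature), so that the back-projected point actually lies on the ground.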
     In conclusion, the innovations of this dissertation are mainly embodied in three aspects: (1) the dynamic background is modeled by ASMM to enhance the target recognition rate, and several methods are used to improve the efficiency of the algorithm; (2) on the foundation of background subtraction, a method that uses successive thresholds to eliminate the shadows of moving objects is proposed; (3) on the basis of accurate detection of moving objects, a new method of target tracking and localization is proposed by introducing the ground-plane constraint in monocular image sequences.
