Research on Key Issues of Moving Object Detection in Complex Environments
Abstract
With the development and spread of computer science, image engineering, pattern recognition, artificial intelligence, and related disciplines, intelligent video surveillance systems have been widely applied throughout the national economy and national defense. They play a major role in weapon guidance, scientific exploration, intelligent industry, human-computer interaction, intelligent transportation, and scientific research, and their application prospects are broad. The quality of moving object detection in complex environments directly affects high-level video understanding, so research on moving object detection in complex scenes has important theoretical significance and practical value.
     Focusing on the complex conditions present in image sequences captured by real surveillance equipment, namely optical distortion, environmental illumination changes, shadows, and camera motion, this thesis studies several key techniques of moving object detection and obtains a number of meaningful results. The main contributions are as follows:
     1. For real-time correction of the optical distortion of video-capture cameras, the GPU is brought into the distortion-correction computation and a GPU-based real-time nonlinear geometric correction algorithm for large-field-of-view imaging is proposed. It not only meets the online processing requirements of surveillance systems but also largely eliminates the negative effect of wide-angle barrel distortion on target extraction.
     2. For moving object detection in static scenes, the influence of pixel features in different color spaces on detector performance is systematically analyzed, and moving object detection algorithms based on different color features are proposed, providing a theoretical basis and practical guidance for color-feature selection and algorithm optimization in several typical environments.
     3. The uniform patterns of the local binary pattern (LBP) are combined with color features as a texture feature, the Choquet fuzzy integral is introduced, a new multi-feature similarity fusion operator is proposed, and a membership-based background/foreground classification model is established, achieving moving object detection under illumination changes, shadows, and other complex conditions.
     4. For moving object detection in dynamic scenes, an image-sequence registration algorithm based on the square-root arithmetic mean distance of multi-scale Harris corners is proposed, a measurement model of the camera motion is established, and motion compensation is performed; moving regions in dynamic scenes are finally extracted by three-frame differencing.
     5. For moving object detection in infrared images, based on their imaging characteristics, a denoising and enhancement algorithm based on the correlation of wavelet coefficients is used to resolve the conflict between noise suppression and image enhancement; the detection algorithms for visible-light sequences are extended to this domain, and an infrared moving object detection algorithm based on brightness-texture fuzzy-integral features is proposed.
Moving object detection is a challenging and important topic in computer vision. It draws on computer science, optics, mathematics, control science, cognitive science, and other disciplines, and has been widely applied in areas such as the military, industry, medicine, meteorology, traffic management, and public security, so research on motion detection has important theoretical significance and practical value. However, many urgent practical problems remain unsolved, especially in complex or dynamic scenes. A typical surveillance scene, indoor or outdoor, consists of a background, moving objects (such as pedestrians and vehicles), and disturbing objects (such as shaking branches or a swinging curtain). The practical difficulties include background changes caused by varying illumination, misjudgments caused by disturbances, changing motion states, high noise in background regions, the difficulty of extracting effective features of moving targets, and the limitations of imaging devices and real-time monitoring, so challenges remain in practical applications. How to raise the intelligence of video processing, expand its range of application, and improve system performance has therefore become a hot spot in both research and application.
     In this work, we study several key problems of moving object detection under the complex conditions that often occur in image sequences captured by surveillance equipment: optical distortion, environmental illumination changes, shadows, and camera motion.
     For real-time correction of camera optical distortion, a GPU-based real-time nonlinear geometric correction algorithm for large-field-of-view images is proposed. It meets the online processing demands of surveillance systems and largely eliminates the negative impact of wide-angle barrel distortion on target extraction. Through theoretical analysis of the camera calibration model and the nonlinear distortion model, the transformation matrix and the solution method for distortion correction are derived. The usual compromise of reducing the amount of data at the expense of accuracy is avoided by combining GPU hardware with the CUDA software architecture, making full use of its parallel-computing advantage. Taking the various sources of lens distortion into account, the distortion coefficients, covering radial, decentering, and thin-prism distortion, are solved by least squares, and cubic interpolation is used to sample pixels at non-integer coordinates, which greatly improves the correction accuracy.
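As a minimal sketch of the inverse-mapping step described above, the per-pixel correction can be written as follows. This is a simplification under stated assumptions: a two-term radial model and nearest-neighbour sampling stand in for the thesis's full coefficient set (radial, decentering, thin-prism) and its cubic interpolation, and the per-pixel loop is what the GPU version would parallelize.

```python
def undistort(img, k1, k2, cx, cy):
    """Correct barrel distortion with a simplified two-term radial model.

    img is a 2D list of gray values and (cx, cy) the distortion centre.
    For every pixel of the corrected image we compute where it maps in
    the distorted source, x_d = x_u * (1 + k1*r^2 + k2*r^4), and sample
    the nearest source pixel (the thesis uses cubic interpolation here).
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r2 = dx * dx + dy * dy
            scale = 1.0 + k1 * r2 + k2 * r2 * r2
            xs = int(round(cx + dx * scale))
            ys = int(round(cy + dy * scale))
            if 0 <= xs < w and 0 <= ys < h:
                out[y][x] = img[ys][xs]
    return out
```

With zero coefficients the mapping is the identity, which gives a quick sanity check; positive k1 pulls samples outward, compensating barrel distortion.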
     For the selection of classification features, the effect of pixel characteristics in different color spaces on detection performance is systematically analyzed, providing a theoretical basis and practical guidance for color-feature selection and algorithm optimization in typical environments. A color image offers not only the brightness information available in a gray-scale image but also chroma, saturation, and related quantities, and fully exploiting this color information can improve detection capability. Most current moving object detection algorithms use color characteristics as the classification criterion, but different color spaces reflect different image information. We therefore analyze the color-space models and how the information of moving objects changes across the RGB, YCbCr, HSV, and Lab color spaces, and compare background-subtraction results based on a Gaussian mixture model using each space's features under complex backgrounds. The experiments show that among RGB, YCbCr, HSV, and Lab, the YCbCr and Lab spaces are better in many respects, including resistance to background-light interference, environmental adaptability, and robustness.
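The YCbCr features favoured by these experiments decouple luma from chroma with a fixed linear transform; a sketch of the full-range BT.601 conversion (one common variant, assumed here, since the thesis does not specify which matrix it uses) is:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr.

    Y carries the brightness, while Cb/Cr carry chroma, so a change in
    illumination mostly moves Y and leaves Cb/Cr nearly unchanged,
    which is what makes the space attractive for background modelling.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

A neutral gray pixel maps to Cb = Cr = 128, confirming that the chroma channels carry no brightness information.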
     For illumination changes and shadow interference, color features are combined with the uniform patterns of the local binary pattern as texture features, the Choquet fuzzy integral is introduced, a new moving object detection algorithm based on a multi-feature similarity fusion operator is proposed, and a membership-based background/foreground classification model is established, opening a new approach to high-precision object extraction in complex environments. The choice of classification features directly affects detection results. Color is one of the most commonly used features, but the instability of natural backgrounds means that color alone sometimes cannot accurately describe the image, and single-feature classifiers are sensitive to dynamic changes of the scene such as illumination changes, shadows, and reflected light, which makes the extracted objects inaccurate. We therefore combine color features with block-based texture features to improve the foreground/background classification accuracy. Through a step-by-step analysis of feature selection, similarity-measure definition, multi-feature fusion, and segmentation-threshold determination, a moving object detection algorithm based on a color-texture fuzzy-integral feature is proposed. For feature selection, a combination of color and texture features is chosen. To reduce the impact of illumination changes on detection, the YCbCr color space, which separates brightness from color, is adopted for the color features, and the rotation-invariant uniform extension of the LBP operator, ULBP_{K,R}^{ri}, is defined to describe texture. This operator offers a non-parametric representation that is invariant to monotonic gray-scale changes, keeps the histogram entries compact, and is not easily disturbed by noise, thereby avoiding interference from illumination changes and shadows. For multi-feature fusion, the Choquet integral from fuzzy mathematics is introduced to combine the similarity measures of the color and texture features and improve the classification accuracy. For threshold determination, an adaptive strategy is defined: a threshold-solving step is added to the foreground/background segmentation stage so that the threshold T_{c,t}(x, y) is no longer fixed but changes with the environment, with the update frequency and step length set as adjustable parameters to improve convergence speed and solution accuracy.
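The fusion step can be sketched with the discrete Choquet integral over two criteria. The fuzzy densities g_color and g_texture below are illustrative parameters, not values from the thesis, and the measure of the full criterion set is taken as 1:

```python
def choquet2(s_color, s_texture, g_color, g_texture):
    """Discrete Choquet integral of two similarity scores in [0, 1].

    Sort the two scores: the smaller one is weighted by the measure of
    the full set (taken as 1), and the excess of the larger one by the
    fuzzy density of its own criterion.
    """
    if s_color <= s_texture:
        return s_color + (s_texture - s_color) * g_texture
    return s_texture + (s_color - s_texture) * g_color


def is_background(s_color, s_texture, threshold, g_color=0.5, g_texture=0.5):
    """Membership-style classification: a pixel counts as background
    when the fused similarity of its colour and texture features to
    the background model reaches the (in the thesis, adaptive)
    threshold."""
    return choquet2(s_color, s_texture, g_color, g_texture) >= threshold
```

Unlike a weighted average, the Choquet integral can model interaction between the two features through the choice of fuzzy measure; with equal densities of 0.5 it reduces to the arithmetic mean.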
     For dynamic scenes, an image-registration method for dynamic backgrounds based on the square-root arithmetic mean (SAM) distance of multi-scale Harris corners is proposed, a measurement model of the camera motion is established, motion compensation is performed, and moving objects are finally detected by three-frame differencing. In practical applications, a moving camera mixes background changes with object motion in the image. To eliminate this disturbance, a new detection algorithm for dynamic scenes based on the SAM of Harris corners is presented. First, the current frame is registered to the previous frame to obtain the global motion parameters; the image motion is compensated using these parameters; frame differencing is then applied to the aligned frames to obtain the outline of the moving object; finally, the moving region is used as a mask to detect and locate the target. For registration, an algorithm based on the SAM of multi-scale Harris corners is proposed, introducing local invariant points, usually used for image retrieval and recognition, into the image-registration field. To improve the repetition rate of matching points, a multi-scale Harris corner detector is used, whose corners are easy to extract and robust to image translation, rotation, and brightness change. To obtain stable points, the angle, scale, and location of candidate matches are iteratively screened with clustering and SAM information. When solving for the motion parameters, only stable corners are matched, which reduces the computation and removes the need for an optimal search, avoiding local extrema. For segmentation, three-frame differencing is used: the movement area is obtained from the difference values of three consecutive frames, and the motion range of targets between adjacent frames is quickly detected by thresholding the difference images. To improve detection accuracy, adaptive thresholding is used to binarize the difference image. The experimental results show that the algorithm can locate moving targets in dynamic scenes, such as those with camera motion, laying a foundation for moving target tracking and identification.
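The final segmentation step above can be sketched as plain three-frame differencing. The frames are assumed to be already motion-compensated, and a fixed threshold stands in for the adaptive binarization used in the thesis:

```python
def three_frame_diff(f1, f2, f3, thresh):
    """Three-frame differencing on aligned grayscale frames.

    A pixel belongs to a moving region only if it changes both between
    frames 1 -> 2 AND between frames 2 -> 3; requiring both differences
    suppresses ghosting that plain two-frame differencing leaves behind.
    """
    h, w = len(f2), len(f2[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d12 = abs(f2[y][x] - f1[y][x]) > thresh
            d23 = abs(f3[y][x] - f2[y][x]) > thresh
            mask[y][x] = 1 if (d12 and d23) else 0
    return mask
```

The resulting binary mask marks the moving region, which the pipeline then uses to detect and locate the target.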
     For infrared images, in accordance with their imaging characteristics, the moving object detection algorithms developed for visible-light sequences are extended to this domain, an infrared detection algorithm combining luminance and texture information is proposed, and the fuzzy-integral similarity fusion is thereby extended to infrared motion detection. Infrared imaging has become a hot research topic in intelligent visual surveillance thanks to its advantages of all-weather operation, absence of shadows, and resistance to ambient-light disturbance, but it also poses unique challenges such as low signal-to-noise ratio, polarity reversal, and the halo effect. Accordingly, an image denoising and enhancement algorithm based on the correlation of wavelet coefficients is used to resolve the conflict between noise suppression and image enhancement. The color-texture fuzzy-integral detection algorithm is extended, and a new background-modeling and detection algorithm for infrared images is proposed: an infrared moving object detection algorithm based on the fuzzy integral of brightness-texture features. The Sugeno fuzzy integral is introduced into infrared target detection; the combination of brightness and texture features serves as the classification characteristic, and fuzzy measures and fuzzy integrals are used for classification, effectively overcoming the limitations of the traditional background-differencing method on infrared images.
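As a simplified stand-in for the correlation-based wavelet denoising mentioned above, a one-level Haar transform with soft thresholding of the detail coefficients illustrates the basic shrinkage idea; the thesis's coefficient-correlation criterion is replaced here by a plain magnitude threshold, and the input is assumed to have even length:

```python
import math


def haar_denoise(signal, thresh):
    """One-level Haar wavelet shrinkage on a 1-D signal.

    Forward transform splits the signal into smooth (approximation) and
    detail coefficients; soft thresholding shrinks small, noise-like
    details towards zero; the inverse transform rebuilds the signal.
    """
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s2 for a, b in zip(signal[0::2], signal[1::2])]
    # soft threshold: |d| below thresh is zeroed, the rest shrunk by thresh
    detail = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s2)
        out.append((a - d) / s2)
    return out
```

With a zero threshold the transform pair reconstructs the input exactly; raising the threshold removes progressively more high-frequency noise while the low-frequency structure, the enhanced part of the image, survives in the approximation coefficients.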
     Moving object detection under complex backgrounds is an important and challenging research field, and this work helps to advance it. The proposed and improved methods offer several new approaches to moving object detection, and the experiments verify that this work enriches and pushes forward related studies in both theoretical and technological respects.
引文
[1]. Rafael C. Gonzalez, Richard E. Woods等著,阮秋琦等译.数字图像处理[M].北京:电子工业出版社,2005.
    [2]. David A. Forsyth, Jean Ponce著,林学訚等译.计算机视觉:一种现代方法[M].北京:电子工业出版社,2004.
    [3].马颂德,张正友.计算机视觉:计算理论与算法[M].北京:科学出版社,1998.
    [4]. Cipolla R. Pentland A. Computer Vision for Human-Machine Interation [M].Cambridge universityPress,1998.
    [5].沈庭芝,方子文.数字图像处理及模式识别[M].北京:北京理工大学出版社,1998.
    [6].卢秋波.视频监控技术简介与发展趋势[J].安防科技,2007(5):1-4.
    [7].单勇.复杂条件下视频运动目标检测和跟踪[M].国防科学技术大学博士学位论文,2006.
    [8]. Hu W, Tan T, Wang L, et al. A survey on visual surveillance of object motion and behaviors [J]. IEEE Transaction on systems, Man and Cybernetics Part C:Applications and Reviews,2004,34(3):334-352.
    [9]. Titsias M K, Williams C K. Unsupervised learning of multiple aspects of moving objects from video[C]. in:Lecture Notes in computer Science. Volos, Greece,746-756,2005.
    [10]. Alan J Lipton, H Fujiyoshi, Raju S Patil. Moving target classification and tracking from real-time video [J]. IEEE Transactions on Workshop Application of Computer Vision.1998:8-14.
    [11].郑世友.动态场景图像序列中运动目标检测与跟踪[M].东南大学博士学位论文,2005.
    [12].陈远.复杂场景中视觉运动目标检测与跟踪[M].华中科技大学博士学位论文,2008.
    [13]. R.Collins, A. Lipton and T. Kanade. A system for video surveillance and monitoring:VSAM final report[R]. CMU-RI-TR-00-12, Camegie Melon University, Pittsburgh, America, May,2000.
    [14]. Chris Stauffer, W.E.L Grimson. Adaptive background mixture models for real-time tracking [J]. Proc. of CVPR,1999:246-252.
    [15]. Remagnino P, Tan T and Baker K. Multi-agent visual surveillance of dynamic scenes[J].Image and Vision Computing,1998,16(8):529-532.
    [16]. I. Haritaoglu, D. Harwood and L.S.Davis. W4:Real-time surveillance of people and their activities [J].IEEE Transaction on Pattern Analysis and Machine Intelligence,2000,22(8):809-830.
    [17]. I. Haritaoglu, D. Harwood and L.S.Davis.W4:who?when?where?what?A Real Time System for Detection and Tracking People[C].In third International Conference on Face and Gesture Recognition,1998-222-121.
    [18]. Javed O, Rasheed Z, Alatas O, et al. KNIGHTM:A Real Time Surveillance System for Multiple Overlapping and Non-overlapping Cameras. In Proceeding of ICME.2003.
    [19]. Freeman W, Weissman C. Television control by hand gestures[C]. In Proceedings of International Conference on Automatic Face and Gesture Recognition, Zurich, Switzerland,1995:179-183.
    [20].田原,谭铁牛,孙洪赞.一种具有良好鲁棒性的实时跟踪方法[J].计算机学报,2002,28(5):851-853.
    [21].王亮,胡卫明,谭铁牛.基于步态的身份识别[J].计算机学报,2003,26(3):353-360.
    [22]. M J Swain, D H Ballard. Indexing via Color Histograms. Proc ICCV90:390-393.
    [23]. Aki. Kobayashi, Toshiyuki Yoshida, Sakai. Image Retrieval by Estimating Parameters of Distance Measure. SPIE. San Jose, California,2000,3972.432-441.
    [24]. J. Huang, S. Kumar, M.Mitra, W.J.Zhu, and R.Zabih. Image Indexing Using Color Correlogram. Proc. of IEEE Conf. On Computer Vision and Pattern Recognition,1997.
    [25]. R M Haralick, Shangmugam, Dinstein. Textural Feature for Image Classification. IEEE Trans on Systems. Man, Cybernetics,1973, SMC-3(6):610-621.
    [26]. R M Haralick. Statistical and Structural Approaches to Texture [J]. Proc.IEEE,1979,67:786-80.
    [27]. Barron J, Fleet D, Beauchemin S. Performance of optical flow techniques [J], International Journal of Computer Vision,1994,12(1),42-77.
    [28]. Ketani A, Kuno Y, Shimada N, et al. Real time Surveillance System Detecting Persons in Complex Scenes [J], Proceedings of Image Analysis and Processing,1999,1112-1115.
    [29]. Rajagopalan R., Orchard M. T. and Brandt R. D. Motion field modeling for video sequences. IEEE Transactions on Image Processing.1997,6(11):1503-1516.
    [30]. Altunbasak Y., Mersereau R. M. and Patti A. J. A fast parametric motion estimation algorithm with illumination and lens distortion correction. IEEE Transactions on Image Processing.2003,12(4): 395-408.
    [31]. Trucco E, Tommasini T, Roberto V. Near-recursive optical flow from weighted image differences. IEEE Transactions on Systems, Man, and Cybernetics, Part B:Cybernetics,2005,35(1):124-129.
    [32]. Criminisi A, Cross G, Blake A, et al. Bilayer segmentation of live video, in:Proceedings-2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006. New York, United States.53-60,2006.
    [33]. Anderson C H, Burt P J, Van W G Change detection and tracking using pyramid transform techniques, in: Proc. SPIE-Int. Soc. Opt. Eng.(USA).Cambridge, MA, USA.1985.Cambridge, MA, USA:1985.72-78.22 Cox I J, Rao S B, Zhong Y. Ratio regions:a technique for image segmentation, in:Proceedings of thel3th International Conference on Pattern Recognition. Vienna, Austria.1996.Vienna, Austria:IEEE ComPut. Soc. Press,1996.557-56.
    [34]. Neri A, Colonnese S, Russo G, et al. Automatic moving object and background separation. Signal Proeessing,1998,66(2):219-232.
    [35]. Mortensen E N, Barrett W A. Interactive segmentation with intelligent scissors. Graphical Models and Image Processing,1998,60(5):349-384.
    [36]. Prati A, Mikic I, Trivedi M M, et al. Detecting moving shadows:Algorithms and evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence,2003,25(7):918-923.
    [37]. Song H., Shi F. A real-time algorithm for moving objects detection in video images [J]. Proceedings of the 5th World Congress on Intelligent Control and Automation.2004,5:4108-4111.
    [38]. Yang Y. Q, Gu W. and Lu Y. D. An improved slow-motion detection approach for soccer video [J]. Proccedings of 2005 International Conference on Machine Learning and Cybernetics.2005:4593-4598.
    [39]. Thakoor N., Gao J. Automatic video object shape extraction and its classification with camera in motion [J]. Proceedings of 2005 IEEE International Conference on Image Processing.2005:437-440.
    [40]. Paragios N., Tziritas C. Detection and location of moving objects detection in video images [J]. Proceedings of the 13th International Conference on Pattern Recogniton.1996,1:201-205.
    [41].张文涛,李晓峰,李在铭.高速密集视频目标场景下的运动分析[J].电子学报,2000,28(10):114-117.
    [42]. Shao J, Zhou S K, Zheng Q. Robust appearance-based tracking of moving object from moving platform. in:Proceedings-International Conference on Pattern Recognition. Cambridge, United Kingdom.2004. Cambridge, United Kingdom:Institute of Electrical and Electronics Engineers Inc., Piscataway, NJ 08855-1331, United States,2004.215-218.
    [43].岑峰,戚飞虎,陈茂林。长期视频监控系统的多分布模型背景差方法[J].红外与毫米波学报,2002,21(1):59-63.
    [44].侯志强,韩崇昭.基于像素灰度归类的背景重构算法[J].软件学报2005,16(9):1568-1576.
    [45]. Christogiannopoulos G, Birch P B, Young R C, et al. Segmentation of moving objects from cluttered background scenes using a running average model, in:Proceedings of SPIE-The International Society for Optical Engineering. Chisinau, Moldova.2005. Chisinau, Moldova:International Society for Optical Engineering, Bellingham WA, WA98227-0010, United States,2005.13-20.
    [46]. Iwahori Y, Takai T, Kawanaka H, et al. Particle filter based tracking of moving object from image sequence, in:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Bournemouth, United Kingdom.2006. Bournemouth, United Kingdom:Springer Verlag, Heidelberg, D-69121, Gennany,2006.401-408.
    [47]. Czajewski W, Staniak M. Real-time image segmentation for visual servoing. in:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Warsaw, Poland.2007.Warsaw, Poland:Springer Verlag, Heidelberg, D-69121, Germany, 2007.633-640.
    [48]. Stauffer C, Grimson E. Learning Patterns of Activity Using Real Time Tracking [J]. IEEE Transactions on Pattern Recognition and Machine Intelligence (TPAMI),2000,22(8):747-757.
    [49]. Elgammal A, Harwood D, Davis L. Nonparametric Model for Background Subtraction. In:Proceedings of International Conference on Computer Vision[C], Kerkyra, Greece,1999:751-767.
    [50]. Han B, Comaniciu D, Davis L. Sequential Kernel Density Approximation through Mode Propagation: Applications to Background Modeling[C]. Proc. ACCV Asian Conf. on Computer Vision,2004.
    [51]. Mckenna S J, Jabri S, Duric Z, et al. Tracking groups of people. Computer Vision and Image Understanding,2000,80(1):42-56.
    [52]. Kilger M. A shadow handler in a video-based real-time traffic monitoring system, in:Proceedings. IEEE Workshop on Applications of Computer Vision (Cat. No.92TH0446-5). Palm Springs, CA, USA.1992. Palm Springs, CA, USA:IEEE Comput. Soc, Press,1992.11-18.
    [53]. Friedman N., Russell S. Image segmentation in video sequences:a probabilistic approach. Proeeedings of the 13th Conference on Uncertainty in Artificial Intelligence.1997, pp.1-3.
    [54]. Toyama K., Kumm J., et al. Wallflower:principles and practive of background maintenance. Proceedings of 1999 IEEE International Conference on Computer Vision.1999, vol.1, pp.255-261.
    [55]. T. Bouwmans, F. El Baf, B. Vachon. Background Modeling using Mixture of Gaussians for Foreground Detection-A Survey.
    [56]. Gloyer B, Aghajan HK, Siu KY, Kailath T. Video-Based freeway monitoring system using recursive vehicle tracking. In:Proc.of the IS&T-SPIE Symp.on Electronic Imaging:Image and Video Processing, Vol 2421.1995.173-180.
    [57]. Sen-Ching S, Cheung, Chandrika Kamath. Robust techniques for background subtraction in urban traffic video. Video Communications and Image Processing, SPIE Electronic Imaging, San Jose, January 2004, UCRL-JC-153846-ABS,UCRL-CONF-200706.
    [58]. Kyungnam Kim, Thanarat H. Chalidabhongse, David Harwood, Larry Davis. Real-time foreground-background segmentation using codebook model. Real-Time Imaging.1077-2014/$-see front matter@2005 Elsevier Ltd.http://www.elsevier.com/locate/rti
    [59]. Dempster A, Laird N, Rubin D. Maximum likelihood from incomplete data via the EM algorithm. J Royal Statistical Society, Series B (Methodological) 1977; 39(1):1-38.
    [60]. Atev S, Masoud O, Papanikolopoulos N. Practical mixtures of gaussians with brightness monitoring. IEEE Conf on Intt Transportation Systems, Proceedings (ITS 2004),2004; 423-428.
    [61]. Zang Q, Klette R. Parameter analysis for Mixture of Gaussians. CITR Technical Report 188, Auckland University,2006.
    [62].邱茂林,马颂德,李毅.计算机视觉中摄像机定标综述[J],自动化学报,2000年1月.
    [63].吴毅红,胡占义,摄像机标定与三维重建[D],博士后学位论文,自动化研究所,2003年1月
    [64].陈小天,沈振康,摄像机标定技术研究[M],国防科技大学硕士论文,2003年12月
    [65].于泓.摄像机标定算法研究[M].山东大学硕士论文2006年
    [66]. Weng J, Cohen P and Henuou M. Camera calibration with distortion models and accuracy evaluation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,1992,14(10):965-980.
    [67]. H. S. Sawhney, R. Kumar, True Multi-Image Alignment and Its Application to Mosaicing and Lens Distortion Correction [J]. IEEE Transactions PAMI,1999,21(3):235-243.
    [68]. A bdel-A ziz YI, Karara HM., Direct linear transformation into object space coordinates in Close-Range Photogrammetry[A]. Proc. Symp[C]. Close-Range Photogrammetry.1971.1-18
    [69]. Tsai R Y. An efficient and accurate camera calibration technique for 3D machine vision[A]. In: Proceedings of International Conference on Computer Vision and Pattern Recognition[C],Miami Beach, FL, USA,1986,6:364-374.
    [70]. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses[J]. IEEE Journal of robotics and automation,1987,Vol. RA-3(4):323-344.
    [71]. R. Tai and R. K. Lenz, A technique for fully autonomous and efficient 3D robotics Hand/Eye calibration [J], IEEE Trans. Robotics and Automation,1989,5(3):345-358
    [72]. Zhang Zheng You. A Flexible Camera Calibration by Viewing a Plane from Unknown Orientations[A], ICCV99[C],1999:666-673.
    [73]. Zhang Zheng You. Camera Calibration with One-Dimensional Objects [J]. Pattern Analysis and Machine Intelligence, IEEE Trans On Pattern Analysis and Machine Intelligence,2004,7(26):892-899.
    [74]. Zhang Zheng You. A flexible new technique for camera calibration [J]. IEEE Trans On Pattern Analysis and Machine Intelligence,2002,11(22):1330-1334.
    [75]. S. J. Maybank and O. D. Faugeras, A theory of Self-calibration of a Moving Camera [J], International Journal of Computer Vision,8(2):123-151,1992.
    [76]. O. D. Faugeras, Q. Luong, and Maybank, Camera Self-calibration:Theory and Experiments[C], in Proceedings of Euorpean Conference on Computer Vision, LMCS 588, pp.321-334, Springer-Verlag, 1992.
    [77]. M. Armstrong, A. Zisserman, and R. Hartley. Self-calibration from Image Triplets [C], ECCV96,1996,pp: 3-16.
    [78]. Triggs B, Auto-calibration and absolute quadric [A]. Proceedings of Computer Vision and Pattern Recognition[C],1997,604-614.
    [79]. S. D. Ma, A Self-Calibration Technique for Active Vision System [J], IEEE Trans, on Robot Automation, 12(1), pp.114-120,1996.
    [80].吴福朝,胡占义.摄像机自定标的线性理论与算法[J],计算机学报,第24卷,第9期,pp.1121-1135,2001.
    [81].李华,吴福朝.胡占义一种新的线性摄像机自标定方法[J],计算机学报,2000,23(11):1121-1129.
    [82].吴福朝,李华,胡占义.基于主动视觉的摄像机自标定方法研究[J],自动化学报,2001,27(6):736-746.
    [83].雷成,吴福朝,胡占义.一种新的基于主动视觉系统的摄像机自标定方法[J],计算机学报,2000,23(11):1130-1139.
    [84]. Thomopson C J, Hahn S Y, Oskin M. Using modern graphics architectures for general purpose computing:a framework and analysis. In Proceedings of International Syposium on Microarchitecture, Istanbul.2002:306-317.
    [85].吴恩华,柳有权.基于图形处理器(GPU)的通用计算.计算机辅助设计与图形学学报,2004,16(5):601-612.
    [86]. J. Clark. The geometry engine:A VLSI geometry system for graphics. Proceedings of ACM SIGGRAPH'82,1982:127-133.
    [87]. D. Ebert, C. Morris, P. Rheingans, T. Yoo. Designing effective transfer functions for volume rendering from photographics volumes. IEEE Transactions on Visualization and Computer Graphics,2002,8(2): 183-197.
    [88]. J. Eyles, J. Austin, H. Fuchs, T. Greer, and J. Poulton. Pixel-Plane 4:A summary, advances in computer graphics hardware II. Proceedings of Eurographics Seminars Tutorials and Perspectives in Computer Graphics,1988:183-208.
    [89]. H. Fuchs, L. Israel, J. Poulton, J. Eyles etc. Pixel-Planes 5:A heterogeneous multiprocessor graphics system using processor-enhanced memories. Proceedings of ACM SIGGRAPH'89,1989:79-88.
    [90]. GPU:Changes Everything, http://www.nvidia.com/object/gpu.html.
    [91]. J. Bolz, I. Farmer, E. Grinspun etc. The GPU as numerical simulation engine. Proceedings of ACM SIGGRAPH'03,2003:917-924.
    [92]. E. Larsen, D.McAllister. Fast matrix multiplies using graphics hardware. Proceedings of Supercomputing'01,2001:43-49.
    [93]. J. Nickolls, I. Buck. NVIDIA CUD A software and GPU parallel computing architecture. Microprocessor Forum,2007.
    [94]. S. S. Stone, H. Yi, W. W. Hwu, J. P. Haldar, B. P. Sutton, and Z.-P. Liang. How GPUs can improve the quality of magnetic resonance imaging. In The First Workshop on General Purpose Processing on Graphics Processing Units,2007.
    [95]. K. Morel, E. Angel. The FFT on a GPU. Proceedings of SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware 2003:112-119.
    [96]. Horn D R, Houston M. ClawHMMER:A streaming HMMer-search implementation[C] 2005.
    [97]. LIU WEIGUO, SCHMIDT B. Bio-sequence database scanning on a GPU[C] 2006.
    [98]. GALOPPO N, LU-GPU:Efficient algorithms for solving dense linear systems on graphics hardware[C] 2005.
    [99]. FAN ZHE. GPU cluster for high performance computing[C] 2004.
    [100].Msrk J H, William V B. Simulation of cloud dynamics on graphics hardware. Proceedings of the ACM Siggraph/Eurographics Conference on Graphics Hardware.2003:92-101.
    [101]. Wu E H, Liu Y Q, Liu X H. An improved study of real-time fluid simulation on GPU. Computer Animation& Virtual World (CASA2004),2004,15(3,4):139-146.
    [102].柳有权,刘学慧,吴恩华.基于GPU带有复杂边界的三维实时流体模拟[J],软件学报,2006,17(3):568-578.
    [103].张明.GPU加速的实时三维海洋漫游系统,大连理工大学硕士学位论文,2006.
    [104].郑杰.基于GPU的高质量交互式可视化技术研究.西安电子科技大学博士学位论文,2007.
    [105]. J. Owens, Streaming architectures and technology trends. GPU Gems2,2005:457-470.
    [106].S. S. Stone, H. Yi, W. W. Hwu, J. P. Haldar, B. P. Sutton, and Z.-P. Liang. How GPUs can improve the quality of magnetic resonance imaging. In The First Workshop on General Purpose Processing on Graphics Processing Units,2007.
    [107].霍宏涛.数字图像处理.北京:机械工业出版社,2004.43-45.
    [108].C.Brauer-Burchardt. A simple new method for precise lens distortion correction of low cost camera systems. DAGM 2004, LNCS 3175,570-577.
    [109].王红霞.基于边缘和颜色特征的图像检索技术研究,中国石油大学硕士学位论文,2009.
    [110].IPPR Dataset [DB/OL]. http://archer.ee.nctu.edu.tw/contest/.
    [111].Fang X, Xiong W, Hu B, Wang L. A moving object detection algorithm based on color information. Int Symposium on Instrumentation Science and Technology (1ST 2006), J Physics 2006,48:384-387.
    [112].Pokrajac D, Latecki L. Spatiotemporal blocks-based moving objects identificat ion and tracking. IEEE Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS 2003), October 2003,70-77.
    [113].Bhaskar H, Mihaylova L, Maskell S. Automatic target detection based on background modeling using adaptive cluster density estimation. LNCS from the 3rd German Workshop on Sensor Data Fusion: Trends, Solutions, Applications, Universitat Bremen, Germany, September 2007.
    [114].Stijnman G, Van den Boomgaard R. Background estimation in video sequences. Technical Report 10, Intelligent Sensory Information Systems Group (ISIS Report 10), University of Amsterdam, January 2000.
    [115].Xu M, Ellis T. Illumination-invariant motion detection using color mixture models. British Machine Vision Conf (BMVA 2001), Manchester, September 2001.
    [116].Comaniciu D, Meer P. Robust analysis of feature space:color image segmentation. IEEE Conf on Computer Vision and Pattern Recognition (CVPR 1997),1997,750-755.
    [117].El Baf F, Bouwmans T, Vachon B. Type-2 fuzzy mixture of Gaussians model:Application to background
    modeling. International Symposium on Visual Computing, ISVC 2008, Las Vegas, USA, December 2008.
    [118].Harville M, Gordon G, Woodfill J. Foreground segmentation using adaptive mixture models in color and depth. Proc of the IEEE Workshop on Detection and Recognition of Events in Video, Vancouver, Canada, July 2001.
    [119]. Sun Y, Li B, Yuan B, Miao Z, Wan C. Better foreground segmentation for static cameras via new energy form and dynamic graph-cut.18th Int Conf on Pattern Recognition (ICPR 2006),2006,49-52.
    [120]. Wang W, Wu R. Fusion of luma and chroma GMMs for HMM-based object detection. First Pacific Rim Symposium on Advances in Image and Video Technology (PSIVT 2006), Hsinchu, Taiwan, December 2006; 573-581.
    [121].Setiawan N, Hong S, Kim J, Lee C. Gaussian mixture model in improved IHLS color space for human silhouette extraction.16th Int Conf on Artificial Reality and Telexistence (ICAT 2006), Hangzhou, China, 2006,732-741.
    [122].Javed O, Shafique K, Shah M. A hierarchical approach to robust background subtraction using color and gradient information. IEEE Workshop on Motion and Video Computing (WMVC 2002), Orlando, December 2002,22.
    [123].Lindstrom J, Lindgren F, Ltrstrom K, Holst J, Holst U. Background and foreground modeling using an online EM algorithm. IEEE Int Workshop on Visual Surveillance VS 2006 in conjunction with ECCV 2006, May 2006; 9-16.
    [124].Farag, A.A., El-Baz, A.:US20080002870 (2008).
    [125]. Jain V, Kimia B, Mundy J. Background modelling based on subpixel edges. ICIP 2007, San Antonio, USA, September 2007; 6:321-324.
    [126].Tian Y, Lu M, Hampapur A. Robust and efficient foreground analysis for real-time video surveillance. CVPR 2005, San Diego, USA, June 2005,1182-1187.
    [127]. Gordon G, Darrell T, Harville M, Woodfill J. Background estimation and removal based on range and color. Proc of the IEEE Conf on Computer Vision and Pattern Recognition (CVPR 1999), June 1999,2: 459-464.
    [128].Silvestre D. Video surveillance using a time-of-flight camera. PhD thesis, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, 2007.
    [129]. Yang S, Hsu C. Background modeling from GMM likelihood combined with spatial and color coherency. ICIP 2006, Atlanta, USA,2006, pages 2801-2804.
    [130].Dickinson P, Hunter A. Scene modeling using an adaptive mixture of gaussians in color and space. IEEE Conf on Advanced Video and Signal based Surveillance (AVSS 2005), Como, Italy, September 2005; 64-69.
    [131].O'Callaghan R J. European Patent EP1881454, 2008.
    [132].Tang P, Gao L, Liu Z. Salient moving object detection using stochastic approach filtering. Fourth Int Conf on Image and Graphics (ICIG 2007),2007; 530-535.
    [133]. Wang W, Gao W, Yang J, Chen D. Modeling background from compressed video. The Second Joint IEEE Int Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, in conjunction with the Tenth IEEE Int Conf on Computer Vision (ICCV 2005), Beijing, China, October 2005; 161-168.
    [134]. Jain V, Kimia B, Mundy J. Background modelling based on subpixel edges. ICIP 2007, San Antonio, USA, September 2007; 6:321-324.
    [135].Tian Y, Lu M, Hampapur A. Robust and efficient foreground analysis for real-time video surveillance. CVPR 2005, San Diego, USA, June 2005,1182-1187.
    [136].Gordon G, Darrell T, Harville M, Woodfill J. Background estimation and removal based on range and color. Proc of the IEEE Conf on Computer Vision and Pattern Recognition (CVPR 1999), June 1999,2: 459-464.
    [137]. Tang P, Gao L, Liu Z. Salient moving object detection using stochastic approach filtering. Fourth Int Conf on Image and Graphics (ICIG 2007),2007:530-535.
    [138]. Wang W, Gao W, Yang J, Chen D. Modeling background from compressed video. The Second Joint IEEE Int Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, in conjunction with the Tenth IEEE Int Conf on Computer Vision (ICCV 2005), Beijing, China, October 2005:161-168.
    [139].Yi Yang, Shawn Newsam. Comparing SIFT descriptors and Gabor texture features for classification of remote sensed imagery. ICIP 2008, 1852-1855.
    [140].D. K. Iakovidis, D. E. Maroulis, S. A. Karkanis, et al. Color texture recognition in video sequences using wavelet covariance features and support vector machines. Proceedings of the 29th EUROMICRO Conference, 2003:199-205.
    [141].Ahmet Latif Amet, Aysin Ertuzun, Aytul Ercil. An efficient method for texture defect detection: subband domain co-occurrence matrices. Image and Vision Computing, 18 (2000), pp.543-572.
    [142].O.G Sezer, A. Ertuzun and A. Ercil, Independent component analysis for texture defect detection, Pattern Recognition Image Anal.14 (2004), pp.303-307.
    [143].D.-M. Tsai, P.-C. Lin and C.-J. Lu, An independent component analysis-based filter design for defect detection in low-contrast surface images, Pattern Recognition 39 (2006), pp.1679-1694.
    [144].R. Jenssen and T. Eltoft, Independent component analysis for texture segmentation, Pattern Recognition 36 (2003), pp.2301-2315.
    [145].S.-S. Liu and M.E. Jernigan. Texture analysis and discrimination in additive noise. Computer Vision, Graphics, and Image Processing, 49 (1990), pp.52-67.
    [146].Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation-invariant texture classification with local binary patterns [J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2002, 24(7):971-986.
    [147].Wenchao Zhang, Shiguang Shan, Xilin Chen, et al. Local Gabor binary patterns based on Kullback-Leibler divergence for partially occluded face recognition [J]. IEEE Signal Processing Letters, 2007, 14(11):875-878.
    [148].Zhou H, Wang RS, Wang C. A novel extended local-binary-pattern operator for texture analysis [J], Information Sciences,2008,178 (22):4314-4325.
    [149].Nanni L, Lumini A. Local binary patterns for a hybrid fingerprint matcher [J], Pattern Recognition,2008, 41(11):3461-3466.
    [150].Savelonas Michalis A, Iakovidis Dimitris K, Dimitris Maroulis. LBP-guided active contours [J], Pattern Recognition Letters,2008,29 (9):1404-1415.
    [151].Friedman N, Russell S. Image segmentation in video sequences:A probabilistic approach. In:Proc. of the 13th Conf. on Uncertainty in Artificial Intelligence (UAI). San Francisco,1997.
    [152].M. Sugeno, S. Kwon. A new approach to time series modeling with fuzzy measures and the Choquet integral.4th IEEE International Conference on Fuzzy Systems, pp.799-804, Mar.1995.
    [153].M. Grabisch. Fuzzy integral in multicriteria decision making. Fuzzy Sets and Systems, 1995, 69:279-298.
    [154].Brodatz P. Texture: a photographic album for artists and designers [M]. New York: Dover, 1966.
    [155]. Wallflower Dataset:http://research.microsoft.com/users/jckrumm/VWallFlower/TestImages.htm
    [156]. PETS Dataset:http://pets2006.net.
    [157].Zhang Tong, Carlo Tomasi. Fast, robust, and consistent camera motion estimation [A]. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C], Los Alamitos, CA, USA, 1999, pp:164-170.
    [158].Tsaig Y, Averbuch A. Automatic segmentation of moving objects in video sequences: A region labeling approach [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2002, 12(7):597-612.
    [159].Neri A, Colonnese S, Russo G, et al. Automatic moving object and background separation [J]. Signal Processing,1998,66(2):219-232.
    [160].Chien S Y, Ma S Y, Chen L G. Efficient moving object segmentation algorithm using background registration technique [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2002, 12(7): 577-586.
    [161].David A Forsyth, Jean Ponce. Computer Vision: A Modern Approach [M]. New Jersey: Prentice Hall, 2002.
    [162]. Weiss Y, Adelson E H. A unified mixture framework for motion segmentation: incorporating spatial coherence and estimating the number of models [A]. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C], San Francisco, California, USA, 1996:321-326.
    [163].Etoh M, Shirai Y. Segmentation and 2D motion estimation by region fragments [A]. In: Proceedings of the International Conference on Computer Vision [C], Seattle, USA, 1994:192-199.
    [164].Dellaert F, Seitz S, Thorpe C, et al. Structure from motion without correspondence [A]. In:IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'00)[C], South Carolina, USA,2000:557-564.
    [165]. Wang J Y A, Adelson E H. Representing moving images with layers [J]. IEEE Transactions on Image Processing, 1994, 3(5):625-638.
    [166]. Baker S, Szeliski R, Anandan P. A layered approach to stereo reconstruction [A]. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [C], Washington, DC, USA, 1998:434-441.
    [167].Irani M, Anandan P. A unified approach to moving object detection in 2D and 3D scenes [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(6):577-589.
    [168]. Viola P, Wells W M III. Alignment by maximization of mutual information [C]. Proc Int Conf on Computer Vision, Cambridge, MA, 1995:16-23.
    [169].Collignon A, Maes F, Delaere D, et al. Automated multimodality medical image registration using information theory[C]. Information Processing in Medical Imaging:Computational Imaging and Vision, 1995:263-274.
    [170].Studholme C, Hill DLG, Hawkes DJ. An overlap invariant entropy measures of 3D medical image alignment. Pattern Recognition,1999,32(1):71-86.
    [171].Pluim J, Maintz J, Viergever M A. Image registration by maximization of mutual information and gradient information. IEEE Trans on Medical Imaging, 2000, 19(8):809-814.
    [172].Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A. f-Information measures in medical image registration. IEEE Trans. Med. Imag.2004,23(12),1508-1516.
    [173].Lu Zhentai, Chen Wufan. Medical image registration based on co-occurrence mutual information. Chinese Journal of Computers, 2007, 30(6):1022-1027. (in Chinese)
    [174].Zhang Erhu, Bian Zhengzhong. A feature point registration algorithm based on maximum entropy and mutual information maximization. Journal of Computer Research and Development, 2004, 41(7):1194-119. (in Chinese)
    [175]. Brown L. A survey of image registration techniques, ACM Computer Survey,1993,24(4):325-376.
    [176].Zitova B, Flusser J. Image registration methods: A survey. Image and Vision Computing, 2003, 21:977-1000.
    Goshtasby A A. 2-D and 3-D Image Registration for Medical, Remote Sensing and Industrial Applications. John Wiley and Sons, 2005.
    [177]. Yves Dufournaud, Cordelia Schmid, Radu Horaud. Image matching with scale adjustment [J]. Computer Vision and Image Understanding,2004,93(2):175-194.
    [178].Caner G, Tekalp A.M., Sharma, G, Heinzelman W. Local image registration by adaptive filtering. IEEE Transactions on Image Processing,2006,3053-3065.
    [179].Kelman A., Sofka M., Stewart C.V. Keypoint descriptors for matching across multiple image modalities and nonlinear intensity variations. Proceedings IEEE Conference on Computer Vision and Pattern Recognition,2007:1-7.
    [180]. Schmid C, Mohr R, Bauckhage C. Evaluation of interest point detectors [J]. International Journal of Computer Vision,2000,37(2):151-172.
    [181].Rui Gan, Albert C.S. Chung, Shu Liao. Maximum distance-gradient for robust image registration. Medical Image Analysis,2008,12:452-468.
    [182].Yang Jinbao, Liu Changchun, Hu Shunbo, Gu Jianjun. Medical image registration based on a mean distance measure. Acta Photonica Sinica, 2008, 37(5):1046-1051. (in Chinese)
    [183].Evans A C. BrainWeb: online simulated brain database [DB/OL]. http://www.bic.mni.mcgill.ca/brainweb.
    [184].Akber M Ali Dewan, Hossain M Julius, Oksam Chae. Background Independent Moving Object Segmentation For Video Surveillance, IEICE Transactions on Communications,2009, e92-b (2): 585-598.
    [185].Song Liuping. Research on detection algorithms for point targets with low signal-to-noise ratio in image sequences [D]. Changsha: National University of Defense Technology, 1992. (in Chinese)
    [186].Chen Zhaoyang, Zhang Guilin. Real-time detection method for small targets in infrared warning systems [J]. Journal of Infrared and Millimeter Waves, 1998, 17(4):283-286. (in Chinese)
    [187].Silverman J, Caefer C E, Vickers V E. Temporal filtering for point target detection in staring IR imagery: II. Recursive variance filter [C]. Proc SPIE, 1998, 3373:45-53.
    [188].Xiong Hui, Shen Zhenkang, et al. Detection of moving infrared point targets under low signal-to-noise ratio [J]. Acta Electronica Sinica, 1999, 27(12):26-29. (in Chinese)
    [189].Chapple P B, Bertilone D C, Caprari R S, et al. Stochastic model-based processing for detection of small targets in non-Gaussian natural imagery [J]. IEEE Trans on Image Processing, 2001, 10(4):554-564.
    [190].Li Meng. Research on new methods for detecting dim small moving targets in infrared image sequences [D]. PhD thesis, Huazhong University of Science and Technology, 2006. (in Chinese)
    [191].Zhuo Zhimin, et al. A moving object detection method for infrared imaging in complex environments [J]. Journal of Astronautics, 2008, 29(1):339-343. (in Chinese)
    [192].Ming Ying, et al. Moving object detection in infrared video based on the Cauchy distribution [J]. Journal of Infrared and Millimeter Waves, 2008, 27(1):65-71. (in Chinese)
    [193].Zong X, Laine A F, Geiser E A, et al. De-noising and contrast enhancement via wavelet shrinkage and non-linear adaptive gain [A]. Wavelet Applications III: Proceedings of SPIE [C]. Orlando, FL, 1996, 2726: 566-574.
    [194].Xu Y, Weaver J B, Healy D M, et al. Wavelet transform domain filters:A spatially selective noise filtration technique [J]. IEEE Transactions on Image Processing,1994,3(6):747-758.
    [195].Donoho D L. Nonlinear wavelet methods for recovery of signals, densities, and spectra from indirect and noisy data [A]. Proc. Symposia in Applied Mathematics [C], Rhode Island, American Mathematical Society, 1993, 47:173-205.
    [196].OTCBVS Dataset:http://www.cse.ohio-state.edu/otcbvs-bench/