Binocular-Vision Moving-Object Tracking Based on a Visible-Light Camera and a Thermal Infrared Imager
Abstract
With economic development and advances in science and technology, video surveillance networks covering residential communities, transportation networks, stations, and ports have grown rapidly. To keep these systems safe and cost-effective and to meet people's demand for livable environments, researchers at home and abroad are actively studying the detection, tracking, and behavior analysis of complex moving targets, aiming at highly intelligent video surveillance networks. Stereo-vision surveillance is a core and active topic within this field. Existing stereo surveillance research mainly relies on homologous sensors, i.e., stereo vision built from two visible-light cameras. Although this reduces the influence of illumination changes and shadows, the algorithms are relatively complex, and in complex low-visibility environments it is difficult to detect and track moving targets with visible-light vision alone. Some researchers and institutions have also explored heterogeneous visual sensors for surveillance, i.e., fused visible and thermal infrared video for moving-target detection; however, that work has not exploited the ability of binocular stereo vision to recover three-dimensional (3-D) information, so the 3-D position of targets in space remains unknown. Combining a thermal infrared imager and a visible-light camera into one stereo surveillance system makes full use of the gray-level information from the visible channel and the temperature information from the thermal channel to extract both the motion and the 3-D spatial information of a target, providing complementary information for continuous, all-weather tracking in complex environments. On this basis, this thesis studies moving-target detection and tracking with a heterogeneous binocular stereo system composed of a thermal infrared imager and a visible-light camera. The main work and results are summarized as follows:
     (1) The shortcomings of existing two-dimensional (2-D) image thresholding based on the four-quadrant partition are identified and verified through systematic experiments. On this basis, a 2-D thresholding method based on a threshold line is proposed. Taking 2-D entropy thresholding as an example, a stepwise procedure for determining the 2-D entropy threshold line is given: a second threshold point is sought within the quadrants conventionally assigned to edges and noise, further clarifying the attribution of edge and noise pixels. The method enriches 2-D segmentation theory, avoids the segmentation failures caused by discarding a large amount of useful information in traditional algorithms, and is easy to implement. Systematic experiments show that it substantially improves segmentation results.
     (2) For images whose 2-D histograms are severely unevenly distributed, the traditional four-quadrant method fails even when the pixels in the edge and noise quadrants are retained. This thesis therefore proposes a thresholding method based on the center of mass of the 2-D histogram. It uses not only the mass of each histogram bin but also its position, effectively improving the segmentation, particularly for images in which the gray-level distributions of target and background are barely separable in the 2-D histogram.
     (3) A moving-target detection method combining frame differencing and background subtraction exploits the strengths of both techniques and compensates for the weaknesses of using either alone. On top of the extracted targets, corner information is used to match moving targets between consecutive frames, improving the accuracy of monocular motion tracking.
     (4) Considering the particularities of disparity matching between heterogeneous images and their complementary information, a matching method based on target regions is proposed. It fuses the normalized moment of inertia (NMI) and normalized cross-correlation (NCC) features, which avoids the gray-level discrepancy caused by the different imaging mechanisms of thermal and visible images while still accounting for the spatial relationships among pixels. Moreover, target regions located in the visible image via the disparity-matching result of the thermal image are largely immune to smearing and background illumination, which greatly improves target extraction in the visible image. Matching succeeds even when an ideal foreground cannot be extracted from the visible and thermal images simultaneously. Real-world tests confirm the method's effectiveness.
     (5) A 3-D reconstruction model for spatial points is designed with the rotation centers of the pan-tilt units as reference points. Based on an analysis of the thermal imager's characteristics, the practical requirements of stereo surveillance, and the principles of binocular stereo vision, 3-D reconstruction is realized with a distortion-free pinhole model. To address the complexity of binocular parameter calibration, a simplified calibration procedure suited to this model is given: at most four reference points and a single image pair captured simultaneously by the two cameras suffice. Experiments show that, once calibrated, the system can still perform 3-D ranging in different scenes even after the two pan-tilt units are rotated or translated relative to each other, making it well suited to rapid calibration in the field.
     (6) On the experimental binocular stereo system composed of a thermal infrared imager and a visible-light camera, the 3-D coordinates of corresponding feature points on a moving object are obtained and used as the object's feature values across frames of the image sequence, realizing tracking and localization of the object in space and finally yielding both its motion information and its 3-D spatial information.
With the development of the social economy and the advance of science and technology, video monitoring systems covering living communities, stations, and terminals have been established rapidly. To improve quality of life, such systems must operate stably and effectively. Research on the detection, tracking, and behavior analysis of complex moving objects is therefore being carried out actively all over the world, with the goal of building highly intelligent video monitoring systems. Stereo vision has attracted great attention as one of the key technologies in these systems.
     Most existing stereo vision systems consist of two visible-light cameras, i.e., homologous sensors. Although this reduces the impact of illumination change and shadow, the algorithms are relatively complex, and detecting and tracking moving objects in poor visibility remains a major challenge. On the other hand, many researchers have attempted moving-object detection with visual-thermal fusion; however, they did not exploit the advantage of stereo vision, namely obtaining three-dimensional (3-D) information about the moving object. A stereo monitoring system consisting of a thermal infrared camera and a visible-light camera can make full use of both gray-scale and temperature information to obtain the motion and 3-D information of a moving object, contributing to continuous tracking in all weather conditions. This thesis therefore focuses on moving-object detection and tracking with visual-thermal fusion. The primary work and results are as follows:
     1) The shortcoming of existing two-dimensional (2-D) thresholding methods based on four quadrants is analyzed and verified through a series of tests. A novel 2-D threshold-line segmentation method is then proposed, with a two-phase strategy for determining the 2-D threshold line based on entropy: a second threshold point is located in the quadrants containing the edge and noise pixels, refining their attribution. By making full use of the previously ignored pixels, the proposed method not only extends 2-D thresholding theory but is also easy to carry out. Experiments on typical images demonstrate that it achieves very competitive segmentation results compared with existing representative methods.
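For orientation, the conventional four-quadrant 2-D maximum-entropy baseline that this contribution improves upon can be sketched as follows. This is an illustrative reimplementation of the classical scheme (not the thesis's threshold-line algorithm); the 3x3-mean second histogram axis and the entropy criterion follow the standard formulation.

```python
import numpy as np

def two_d_histogram(img, levels):
    """Joint histogram of (pixel gray level, 3x3 neighborhood mean)."""
    g = img.astype(np.int64)
    p = np.pad(g, 1, mode='edge')
    mean = sum(p[i:i + g.shape[0], j:j + g.shape[1]]
               for i in range(3) for j in range(3)) // 9
    h = np.zeros((levels, levels))
    np.add.at(h, (g, mean), 1.0)
    return h / h.sum()

def entropy_2d_threshold(img, levels=32):
    """Classical four-quadrant 2-D maximum-entropy threshold (s, t).
    Entropy is summed only over the 'object' quadrant [0..s]x[0..t] and
    the 'background' quadrant (s..)x(t..); the two off-diagonal quadrants
    (attributed to edges and noise) are simply discarded -- precisely the
    information loss that the thesis's threshold-line method addresses."""
    h = two_d_histogram(img, levels)
    P = h.cumsum(0).cumsum(1)          # P[s, t] = mass of [0..s] x [0..t]
    eps = 1e-12
    best, best_st = -np.inf, (0, 0)
    for s in range(levels - 1):
        for t in range(levels - 1):
            p_obj = P[s, t]
            p_bkg = 1.0 - P[s, -1] - P[-1, t] + P[s, t]
            if p_obj < eps or p_bkg < eps:
                continue
            q1 = h[:s + 1, :t + 1]; q1 = q1[q1 > 0] / p_obj
            q2 = h[s + 1:, t + 1:]; q2 = q2[q2 > 0] / p_bkg
            H = -np.sum(q1 * np.log(q1)) - np.sum(q2 * np.log(q2))
            if H > best:
                best, best_st = H, (s, t)
    return best_st
```

On a synthetic image with a bright block on a dark background, the boundary pixels fall into the off-diagonal quadrants and are ignored by this baseline, which is the behavior the threshold-line extension corrects.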
     2) When the 2-D histogram of an image is extremely unevenly distributed, four-quadrant 2-D thresholding cannot produce a successful segmentation even if the ignored pixels are fully utilized. To solve this problem, a thresholding method using the center of mass of the 2-D histogram is proposed for binarizing such images. It uses not only the mass of each histogram bin but also its location, effectively improving the segmentation result, especially for images whose 2-D histograms barely distinguish the object from the background.
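The core quantity here is the center of mass of the 2-D histogram. The abstract does not spell out the full thresholding rule, so the sketch below only shows the centroid computation itself, treating each histogram bin as a point mass at its (gray level, neighborhood mean) position:

```python
import numpy as np

def histogram_centroid(h):
    """Center of mass of a 2-D histogram: bin (i, j) is a point mass
    h[i, j] at position (i, j), so both the mass and the location of
    every bin contribute to the resulting threshold point."""
    h = h / h.sum()
    i = np.arange(h.shape[0], dtype=float)
    j = np.arange(h.shape[1], dtype=float)
    s = float(i @ h.sum(axis=1))   # gray-level coordinate
    t = float(j @ h.sum(axis=0))   # neighborhood-mean coordinate
    return s, t
```

Unlike a pure mass criterion, the centroid shifts toward heavily populated bins in proportion to both their weight and their distance, which is what makes it usable when the object and background modes nearly overlap.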
     3) A strategy combining the frame-difference method and the background-subtraction method is adopted to detect moving objects, since each method has shortcomings when used alone in motion detection. Corner matching is then implemented to match the moving object across frames, making the tracking more accurate.
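A minimal sketch of one such combination, assuming a simple per-pixel agreement rule (the thesis's exact fusion scheme is not specified in the abstract):

```python
import numpy as np

def detect_moving(prev, curr, background, tau=15):
    """Frame differencing responds quickly but leaves holes inside slowly
    moving objects; background subtraction gives complete silhouettes but
    produces ghosts when the background model is stale. Requiring the two
    cues to agree suppresses both failure modes."""
    c = curr.astype(np.int32)
    fd = np.abs(c - prev.astype(np.int32)) > tau
    bs = np.abs(c - background.astype(np.int32)) > tau
    return fd & bs

def update_background(background, curr, fg_mask, alpha=0.05):
    """Running-average background update, frozen under foreground pixels
    so moving objects are not absorbed into the model."""
    bg = background.astype(np.float64)
    idx = ~fg_mask
    bg[idx] = (1 - alpha) * bg[idx] + alpha * curr[idx]
    return bg
```

The detected foreground mask then serves as the region in which corners are extracted and matched between consecutive frames.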
     4) Considering the complementary information provided by different-source images of the same scene, a registration method for regions of interest is proposed to solve disparity correspondence in multimodal stereo systems. The method combines normalized cross-correlation (NCC) and the normalized moment of inertia (NMI): it avoids the gray-level discrepancy caused by the different imaging mechanisms of thermal and visible images while still using the positional information of the pixels. Fusing the object regions from the two modalities reduces the impact of light and shadow and greatly improves object detection in the visible images. The method remains applicable even when an ideal object cannot be extracted from the visible and thermal images simultaneously, and experiments prove its validity.
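The two region features can be sketched as follows. The NMI definition used here (root moment of inertia about the gray-weighted centroid, normalized by total mass) is one common formulation in the matching literature, and the fused score with weight `w` is a hypothetical combination, since the abstract does not give the exact weighting:

```python
import numpy as np

def nmi(region):
    """Normalized moment of inertia about the gray-weighted centroid:
    invariant to translation and rotation of the region, which helps
    when comparing thermal and visible renderings of the same object."""
    f = region.astype(np.float64)
    m = f.sum()
    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    cy, cx = (ys * f).sum() / m, (xs * f).sum() / m
    J = (((ys - cy) ** 2 + (xs - cx) ** 2) * f).sum()
    return np.sqrt(J) / m

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def region_score(vis, ir, w=0.5):
    """Hypothetical fused score: NCC captures pixel-wise structure,
    the NMI ratio captures modality-independent mass distribution."""
    r = min(nmi(vis), nmi(ir)) / (max(nmi(vis), nmi(ir)) + 1e-12)
    return w * ncc(vis, ir) + (1 - w) * r
```

Scanning a candidate region across disparities in one image and keeping the disparity with the highest score gives a region-level correspondence without relying on raw gray-level equality between modalities.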
     5) According to the characteristics of multimodal stereo vision, a 3-D point-reconstruction method is proposed on the basis of the pinhole model, with the rotation centers of the pan-tilt devices as the reference points. Since parameter identification is complicated in stereo vision, a simplified procedure is given for identifying the parameters used in this method: a single pair of images shot simultaneously by the two cameras, together with at most four reference points, is required for calibration. Even if the relative position of the two cameras changes, the calibrated multimodal stereo system can still recover the 3-D information of the object, which makes it appropriate for quickly obtaining the parameters of the stereo system in the field.
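As background for this contribution, the textbook special case of distortion-free pinhole triangulation with parallel optical axes can be sketched as below. The thesis's model instead references the pan-tilt rotation centers and allows relative rotation of the two heads, but the underlying back-projection geometry is the same:

```python
import numpy as np

def triangulate_parallel(xl, xr, y, f, B):
    """Ideal pinhole pair with parallel optical axes and baseline B along
    the X axis. Image coordinates are taken relative to the principal
    points; disparity d = xl - xr determines depth."""
    d = xl - xr
    Z = f * B / d          # depth from disparity
    X = xl * Z / f         # back-project the left-image point
    Y = y * Z / f
    return np.array([X, Y, Z])
```

A quick consistency check: projecting a known 3-D point into both cameras and triangulating the projections recovers the original point exactly in this noise-free model.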
     6) With the multimodal stereo vision system consisting of a thermal camera and a visible-light camera, the 3-D coordinates of feature points on the moving object are obtained and used as position information in the stereo tracking system. The motion information and 3-D information of the moving object can thus be obtained in all weather conditions.
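Using reconstructed 3-D points as the per-frame feature values implies an association step between frames. A minimal stand-in for that step (the thesis's actual tracker is not detailed in the abstract) is gated nearest-neighbour matching in 3-D:

```python
import numpy as np

def associate(last_pos, detections, gate=1.0):
    """Match a track's last reconstructed 3-D position to the current
    frame's reconstructed feature points by nearest neighbour, rejecting
    matches farther than `gate` (in scene units). Returns the index of
    the matched detection, or None if nothing falls inside the gate."""
    d = np.linalg.norm(np.asarray(detections, dtype=float)
                       - np.asarray(last_pos, dtype=float), axis=1)
    k = int(np.argmin(d))
    return k if d[k] <= gate else None
```

Chaining the matched 3-D points over the sequence yields the target's spatial trajectory, i.e., both its motion and its 3-D information.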
