Research on Several Problems in Active Video Surveillance
Abstract
With the rise of public security awareness, video surveillance has become widespread: more and more cameras are installed at airports, stores, parking lots, traffic intersections, and similar sites. To improve the accuracy and efficiency of surveillance, video surveillance needs to move toward so-called intelligent video surveillance. This technology automatically performs moving-object detection, object tracking, object classification, and behavior understanding on video image sequences; its goal is to build a mapping between images and image descriptions so that a computer can understand the content of the video. In this paper we introduce a movable, dynamic camera into a surveillance system composed of static cameras, and discuss the problems the combined system raises: camera calibration, hand-eye calibration, moving-object extraction and tracking, object matching between the two cameras, and servo control of the dynamic camera's motion platform.
     First, we discuss camera calibration and hand-eye calibration. The main contributions are: (i) A method for solving a camera's intrinsic parameters from known motion of a target object in the scene. The idea is to let the observed object perform three translations that do not lie in a common plane; from the correspondences of feature points between the two images captured before and after each translation, we build equations for the intrinsic parameters. Introducing intermediate variables avoids solving nonlinear equations: with several feature points on the object we obtain a system of linear equations, and, taking the constraints among the unknowns into account, we solve it with Lagrange multipliers. (ii) A method for measuring the camera's translated position when the intrinsic parameters differ between positions. Its core is a way to compute the epipole on the image plane when the focal length changes, from which the camera's direction of motion, and hence its translated position, can be determined; a simple method for measuring the focal length after it has been adjusted is also given. (iii) A new hand-eye calibration method. It is a self-calibration method: with two feature points in the scene and the camera platform performing two translations and two rotations, the rotation matrix and translation vector of the hand-eye relation are computed from the correspondences between images. It differs from earlier algorithms in that, to compute the translation vector of the hand-eye relation, the camera platform performs a pure rotation, and a virtual rotation of the camera coordinate system then converts the rotation into a translation problem. A method for computing the depth of scene points based on active vision is also given.
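The intrinsic-parameter method in (i) rests on the fact that, under a pure translation, the image motion of all feature points converges at the epipole e ≃ Kt. The following is a minimal numerical sketch of that idea with synthetic data; the intrinsic values, the translation directions, and the plain least-squares solve are illustrative assumptions, not the thesis's exact Lagrange-multiplier formulation.

```python
import numpy as np

# Ground-truth intrinsics (assumed values, used only to synthesize data).
fx, fy, s, cx, cy = 800.0, 780.0, 0.5, 320.0, 240.0
K = np.array([[fx, s, cx], [0, fy, cy], [0, 0, 1.0]])

# Three known translation directions of the target, not lying in one plane.
ts = [np.array([1.0, 0.2, 1.0]),
      np.array([0.3, 1.0, 1.0]),
      np.array([-0.5, 0.4, 1.0])]

rows, rhs = [], []
for t in ts:
    e = K @ t              # epipole (focus of expansion), known up to scale
    e = e / e[2]           # in practice this is measured from point tracks
    ex, ey, ez = e
    tx, ty, tz = t
    # e x (K t) = 0 gives two equations linear in (fx, s, cx, fy, cy)
    rows.append([0.0, 0.0, 0.0, ez * ty, ez * tz]); rhs.append(ey * tz)
    rows.append([ez * tx, ez * ty, ez * tz, 0.0, 0.0]); rhs.append(ex * tz)

# Six consistent equations in five unknowns: least squares recovers them.
sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(sol)  # ~ [fx, s, cx, fy, cy]
```

With noiseless synthetic epipoles the system is exactly consistent; with real measurements one would solve it in the constrained, Lagrange-multiplier form described above.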
     Second, complex environmental factors make background models increasingly complicated. To improve the efficiency of moving-object detection, we propose a way to build a multi-pixel background model on image blocks: the video image is partitioned into blocks, and the background model is built from block-level features. Block-based versions of the Gaussian mixture background model and the LOTS background model are given and validated experimentally.
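As a rough illustration of the block-based idea, the sketch below summarizes each block by its mean intensity and models it with a single running Gaussian; the actual method uses a Gaussian mixture (or LOTS) model per block, and the block size, learning rate, and threshold here are assumed values.

```python
import numpy as np

class BlockBackground:
    """Per-block background model: each BxB block is reduced to its mean
    intensity and modeled by one running Gaussian (a single-component
    simplification of the block-based mixture model)."""

    def __init__(self, shape, block=8, alpha=0.05, k=2.5):
        self.B, self.alpha, self.k = block, alpha, k
        self.gh, self.gw = shape[0] // block, shape[1] // block
        self.mean = None
        self.var = None

    def _features(self, frame):
        B = self.B
        f = frame[:self.gh * B, :self.gw * B].astype(np.float64)
        return f.reshape(self.gh, B, self.gw, B).mean(axis=(1, 3))

    def apply(self, frame):
        x = self._features(frame)
        if self.mean is None:                  # initialize from first frame
            self.mean, self.var = x.copy(), np.full_like(x, 25.0)
            return np.zeros_like(x, dtype=bool)
        d2 = (x - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var     # block-level foreground mask
        upd = ~fg                              # update only background blocks
        a = self.alpha
        self.mean[upd] += a * (x[upd] - self.mean[upd])
        self.var[upd] += a * (d2[upd] - self.var[upd])
        return fg
```

Working on one statistic per block rather than per pixel is what yields the efficiency gain the paragraph refers to: the model update and the foreground test both shrink by a factor of B².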
     Finally, to overcome the limitations of a static-camera surveillance system in tracking targets, we introduce a hand-eye system into video surveillance and propose an active video surveillance system: a real-time two-camera system consisting of one fixed static camera and one movable dynamic camera. Its purpose is to track an abnormal target in real time and to control the dynamic camera's motion platform so that the tracked target always appears at the center of the dynamic camera's image. On the basis of camera and hand-eye calibration, and exploiting the characteristics of the monitored environment, we derive an approximate homography between the two cameras' image planes and use it to match targets between the cameras. The system builds a 2D motion model of the target on the static camera's image plane and predicts the target's position with a Kalman filter; the homography then yields the corresponding predicted position on the dynamic camera's image plane, from which the rotation angles of the dynamic camera's platform are computed to realize motion control. Error compensation for the system is also discussed.
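The predict-transfer-rotate pipeline can be sketched as follows: a constant-velocity Kalman filter on the static camera's image plane, a homography transfer to the dynamic camera, and pan/tilt angles that re-center the target. The frame rate, noise covariances, intrinsics, and the pinhole angle formula are illustrative assumptions, not the system's calibrated values.

```python
import numpy as np

dt = 1.0 / 25.0                                  # frame interval (assumed 25 fps)
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)  # constant-velocity model
Hm = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float) # observe (u, v) only
Q, R = np.eye(4) * 1e-2, np.eye(2) * 1.0

x, P = np.zeros(4), np.eye(4) * 100.0            # state: (u, v, du, dv)

def kalman_step(z):
    """One predict/update cycle on the static camera's image plane;
    returns the one-step-ahead position prediction."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)
    x = x + K @ (z - Hm @ x)                     # update with measurement z
    P = (np.eye(4) - K @ Hm) @ P
    return (F @ x)[:2]

def transfer(H, p):
    """Map a predicted point to the dynamic camera via the homography H."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def pan_tilt(q, fx=800.0, fy=800.0, c=(320.0, 240.0)):
    """Rotation angles that bring image point q to the principal point,
    assuming a pinhole model with these (hypothetical) intrinsics."""
    return np.arctan((q[0] - c[0]) / fx), np.arctan((q[1] - c[1]) / fy)
```

Predicting in the static view and transferring the prediction, rather than tracking directly in the moving view, lets the servo command be issued before the dynamic camera's own image has been processed, which is what keeps the target centered.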