Real-Time Image Tracking of Ground Targets Based on an Ultra-Small Unmanned Aerial Vehicle
Abstract
Research on a low-altitude real-time image tracking system for ground moving targets is an application-oriented, multidisciplinary study of considerable significance. The main task of this work is to research and develop such a system based on an ultra-small rotorcraft. Suspended beneath the rotorcraft, the system surveils and tracks ground targets from the air; it can lock onto the tracked target, reject the influence of distracting targets, and supply control parameters for the aircraft's tracking flight. This thesis covers the image processing algorithms, system hardware and software, and operation and control involved in the system. The main research work is as follows:
     1. The research background and significance of the low-altitude real-time image tracking system are analyzed, and the progress of several large domestic and international research projects on visual surveillance is reviewed. Based on the general framework of image surveillance and the characteristics of the system's operating environment, the main problems in building a rotorcraft-based real-time ground-target image tracking system are identified, and solutions and a technical route are proposed.
     2. An experimental low-altitude real-time image tracking system for ground moving targets is built, and the best hardware architecture for the tracking system is chosen by weighing the strengths and weaknesses of the experimental system. The tracking system uses parallel control loops to switch seamlessly between manual surveillance and automatic tracking. An indoor experimental platform is also constructed to simulate the severe motion of the flying carrier during tracking.
     3. Threshold segmentation of digital images is studied. Given the characteristics of the system's actual operating environment, a histogram-based multi-threshold segmentation method with adaptive tolerance is used to segment the motion regions of dynamic image sequences. On the histogram, the average pixel frequency of the gray levels serves as a reference line that limits the influence of interfering gray-level subsets, and a hill-climbing search with a restricted range further determines the tolerance of each gray-level subset in every frame, ensuring that segmentation is both effective and real-time. The result is a binary image in which the target may be incompletely segmented.
     4. Because the segmented motion regions often contain both the target and distracting objects, two methods that exploit the relative clustering of the objects are used to reject the distractors. From the digital image processing point of view, mathematical morphology or a pyramid structure increases the connectivity of the target region, region labeling then marks the target and the distractors, and the target region is identified from the positions of the labeled regions inside the target box. From the pattern recognition point of view, a dynamic clustering method based on a double sub-window computes the target's centroid: a sub-window is sized according to the area of the motion region inside the target box, and clustering within that sub-window finally locates the target centroid, eliminating the influence of the distractors. Dynamic clustering is simpler than region labeling, but post-processing is needed if the target's shape is required.
     5. Since incomplete segmentation can affect the estimate of the target's centroid, the system computes the target's convex hull by two methods, one based on mathematical morphology and one on the convex-hull theory of computational geometry, so that the target is segmented completely. Taking the centroid of the area enclosed by the hull partially removes the effect of incomplete segmentation. The morphological method yields only an approximate hull at a high computational cost, whereas the computational-geometry method yields a more accurate hull at a lower cost.
     6. The motion characteristics of the tracking system's pan-tilt unit are analyzed, and its tracking motion is unified with target tracking on the image plane. The two control modes of the pan-tilt unit, velocity control and position control, are discussed in terms of their characteristics and operating conditions, and the attitude of the pan-tilt unit relative to the aircraft during tracking provides the control parameters needed for the aircraft's flight control.
     7. From the standpoint of ergonomics and with full consideration of human factors, a well-designed human-computer interface concentrates almost all of the system's operations on the mouse within the tracking view, so that the operator can stay focused and capture the tracked target quickly and easily; a sensible arrangement of operations reduces the operator's workload.
     The work above provides the necessary theoretical foundation and feasibility verification for building a practical image tracking system.
The study of a low-altitude real-time surveillance and tracking system for ground moving objects is an important, multidisciplinary applied research topic. The primary purpose of this study is to develop a practical surveillance and tracking system based on a small unmanned helicopter. The system, suspended under the helicopter, watches and tracks objects moving on the ground. It can lock onto the tracked object steadily by eliminating the disturbance of other objects, and it provides control parameters for the helicopter to fly after the object. The thesis discusses the key techniques involved in the surveillance and tracking system, including image processing algorithms, hardware, software, and control operation. The main contents are as follows:
     1. The background and significance of the research on the surveillance and tracking system are analyzed, and several international research projects on visual surveillance are reviewed. According to the general framework and the operating conditions of the surveillance system, reasonable approaches to constructing the tracking system are presented.
     2. The best hardware architecture is chosen to construct an experimental surveillance and tracking system. With parallel control loops, the system can switch smoothly between manual surveillance and automatic tracking, as sketched below. In addition, an indoor experimental platform carrying the surveillance and tracking system is constructed to simulate the worst-case motion of the helicopter during tracking.
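To illustrate the parallel-loop idea in point 2, here is a minimal Python sketch of bumpless switching between manual surveillance and automatic tracking: both commands are computed every cycle and a selector picks one, so the hand-over is seamless. The data types, gains, and I/O stand-ins are hypothetical, not the thesis's implementation.

from dataclasses import dataclass

@dataclass
class Command:
    pan_rate: float   # deg/s
    tilt_rate: float  # deg/s

def manual_command() -> Command:
    # stand-in for operator joystick/mouse input
    return Command(pan_rate=0.0, tilt_rate=0.0)

def auto_command(err_x: float, err_y: float, k: float = 0.05) -> Command:
    # proportional tracking law on the image-plane error (pixels -> deg/s)
    return Command(pan_rate=k * err_x, tilt_rate=k * err_y)

def control_cycle(auto_enabled: bool, target_err):
    """Run one cycle; target_err is (err_x, err_y) in pixels, or None if the target is lost."""
    cmd = manual_command()                      # manual loop always available
    if auto_enabled and target_err is not None:
        cmd = auto_command(*target_err)         # automatic loop takes over seamlessly
    return cmd                                  # same pan-tilt interface in either mode

if __name__ == "__main__":
    print(control_cycle(auto_enabled=True, target_err=(32.0, -8.0)))
    print(control_cycle(auto_enabled=False, target_err=(32.0, -8.0)))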
     3. Threshold segmentation of digital images is studied. A histogram-based multi-threshold segmentation method with adaptive tolerance is used to segment the motion regions of dynamic images. Disturbance subsets are suppressed using the average pixel frequency of the gray levels, and the tolerance of each gray-level subset is determined by a hill-climbing search with a confined scope. The image is segmented into a binary image in which the object may be incomplete, and the segmentation is both effective and real-time.
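The following Python sketch illustrates the kind of histogram-based multi-threshold segmentation described in point 3, under stated assumptions rather than as the thesis's exact algorithm: gray-level subsets are seeded (for example from the target box), a range-limited hill climb finds each local histogram peak, and the subset's tolerance grows until the histogram falls below the average pixel-frequency line. The seeding step and parameter names are illustrative.

import numpy as np

def segment_multithreshold(img: np.ndarray, seeds, max_width: int = 20):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    avg = hist.mean()                              # average pixel-frequency line
    mask = np.zeros(img.shape, dtype=bool)
    for seed in seeds:                             # seed gray levels (illustrative)
        # hill climb from the seed to a local histogram peak, range-limited
        peak = seed
        while True:
            nbrs = [g for g in (peak - 1, peak + 1)
                    if 0 <= g <= 255 and abs(g - seed) <= max_width]
            best = max(nbrs, key=lambda g: hist[g], default=peak)
            if hist[best] <= hist[peak]:
                break
            peak = best
        # grow the tolerance until the histogram drops below the average line
        lo = hi = peak
        while lo > 0 and hist[lo - 1] >= avg and peak - lo < max_width:
            lo -= 1
        while hi < 255 and hist[hi + 1] >= avg and hi - peak < max_width:
            hi += 1
        mask |= (img >= lo) & (img <= hi)          # union of gray-level subsets
    return mask                                    # binary image; target may be incomplete

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 60, size=(120, 160), dtype=np.uint8)
    frame[40:80, 60:100] = 200                     # bright synthetic target
    print(int(segment_multithreshold(frame, seeds=[200]).sum()), "target pixels")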
     4. To deal with the disturbance of other objects, two methods are used to remove them from the motion region. From the point of view of digital image processing, the connectivity of the object is enhanced by mathematical morphology or a pyramid data structure, each region is labeled with a unique number in a single scan of the image, and the object region is identified from its position in the target window. From the point of view of pattern recognition, the centroid of the tracked object is located by dynamic clustering within a double sub-window. Dynamic clustering is simpler than region labeling, but it cannot recover the object's boundary without post-processing.
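As one plausible rendering of the sub-window clustering in point 4 (a sketch, not the thesis's exact procedure), the centroid search below repeatedly re-centres a sub-window, sized from the motion area in the target box, on the centroid of the foreground pixels it contains; this pulls the window onto the dominant target cluster and away from isolated distractors. The window size, tolerance, and start point are illustrative.

import numpy as np

def centroid_by_subwindow(mask: np.ndarray, start, half: int,
                          iters: int = 20, tol: float = 0.5):
    """Locate the target centroid by clustering inside a moving sub-window.

    mask  : binary motion image (True = foreground)
    start : initial (row, col) guess, e.g. the target-box centre
    half  : half-size of the sub-window, chosen from the motion area
    """
    cy, cx = float(start[0]), float(start[1])
    for _ in range(iters):
        r0, r1 = max(0, int(cy) - half), min(mask.shape[0], int(cy) + half + 1)
        c0, c1 = max(0, int(cx) - half), min(mask.shape[1], int(cx) + half + 1)
        ys, xs = np.nonzero(mask[r0:r1, c0:c1])
        if ys.size == 0:
            break                                  # no foreground inside the window
        ny, nx = ys.mean() + r0, xs.mean() + c0    # centroid of pixels in the window
        if abs(ny - cy) < tol and abs(nx - cx) < tol:
            return ny, nx                          # converged on the target cluster
        cy, cx = ny, nx
    return cy, cx

if __name__ == "__main__":
    m = np.zeros((100, 100), dtype=bool)
    m[40:60, 50:70] = True                         # target blob
    m[10:14, 10:14] = True                         # small distractor, ignored
    print(centroid_by_subwindow(m, start=(45, 45), half=15))   # -> (49.5, 59.5)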
     5. To handle the incomplete segmentation of the object, two methods, one based on mathematical morphology and one on computational geometry, construct the convex hull of the object so that it can be segmented completely; the centroid of the object pixels is then replaced by the centroid of the region enclosed by the hull. The hull produced by morphology is only approximate and its computational cost is high, whereas computational geometry produces a more precise convex hull at a lower cost.
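For the computational-geometry route in point 5, the sketch below uses Andrew's monotone-chain algorithm (an assumption; the abstract does not name the specific hull algorithm) to build the convex hull of the segmented object pixels, then takes the centroid of the polygon enclosed by the hull instead of the centroid of the possibly incomplete pixel set.

def convex_hull(points):
    """Return hull vertices in counter-clockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_centroid(hull):
    """Centroid of the area enclosed by the hull (shoelace formula)."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(hull, hull[1:] + hull[:1]):
        w = x0*y1 - x1*y0
        a += w; cx += (x0 + x1)*w; cy += (y0 + y1)*w
    return cx/(3*a), cy/(3*a)

if __name__ == "__main__":
    # an incompletely segmented square: only its corners and part of one edge survive
    obj = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 0), (3, 0)]
    hull = convex_hull(obj)
    print(hull, polygon_centroid(hull))   # hull of the square, centroid (5.0, 5.0)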
     6. The motion characteristics of the pan-tilt head are analyzed, and its movement is made consistent with object tracking on the image plane. Two control modes of the pan-tilt head, speed control and position control, are discussed, and the control parameters of the aircraft's flight control system are calculated from the pose of the pan-tilt head relative to the aircraft during tracking.
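The sketch below illustrates the two pan-tilt control modes discussed in point 6, with hypothetical gains and camera geometry rather than the thesis's actual control law: speed mode commands angular rates proportional to the target's image-plane error, while position mode converts the pixel error into an absolute angle command through a pinhole-camera model.

import math

def pixel_error_to_angles(err_x: float, err_y: float, focal_px: float):
    """Angular offset (deg) of the target from the optical axis (pinhole model)."""
    return (math.degrees(math.atan2(err_x, focal_px)),
            math.degrees(math.atan2(err_y, focal_px)))

def speed_command(err_x: float, err_y: float, k: float = 0.08):
    """Rate commands (deg/s): smooth, suited to continuous tracking."""
    return k * err_x, k * err_y

def position_command(pan_now: float, tilt_now: float,
                     err_x: float, err_y: float, focal_px: float = 800.0):
    """Absolute angle commands (deg): suited to large, discrete corrections."""
    d_pan, d_tilt = pixel_error_to_angles(err_x, err_y, focal_px)
    return pan_now + d_pan, tilt_now + d_tilt

if __name__ == "__main__":
    err = (64.0, -20.0)                      # target offset from image centre (px)
    print(speed_command(*err))               # -> (5.12, -1.6) deg/s
    print(position_command(10.0, -5.0, *err))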
     7. Taking human factors into account, a friendly human-computer interface is designed from the point of view of ergonomics. All the key operations are gathered on the mouse within the tracking view so that the operator can concentrate on the view and capture the object easily. A reasonable arrangement of operations reduces the operator's workload.
     The thesis presents the theoretical foundation for the surveillance and tracking system and validates its feasibility.
