Research on Scene Image Registration and Target Surveillance Techniques for Unmanned Aerial Vehicle Imagery
Abstract
To meet the demand for real-time surveillance of moving targets by unmanned aerial vehicles (UAVs) operating in dynamic environments, and with the goal of acquiring target positions and scene attributes quickly and accurately, this dissertation addresses the characteristics of visible-light and infrared image sequences and studies UAV imagery in depth, covering invariant feature matching, matching and association of affinely distorted targets, scene image registration, and moving-target detection and tracking. The main contributions are:
     1. The components and workflow of UAV target surveillance are analyzed, and a target surveillance system architecture is designed.
     2. The principles of the main sparse local invariant feature matching methods are analyzed. For the application characteristics of UAV image sequences, a spatial-distribution control method is proposed to set the acceptance parameters of SIFT matching, so that matched points are distributed more uniformly; an improved k-d tree is adopted as the search strategy to speed up processing; and the strengths and weaknesses of the methods are compared experimentally.
     3. A ground-target association algorithm for large-tilt imagery, based on ASIFT latitude/longitude simulation, is introduced. At the low-resolution level, the tilt latitude and rotation longitude of the imaging geometry are simulated, and a scale-invariant algorithm selects the simulated image most similar to the search image; the selected simulated image is then matched precisely against the corresponding high-resolution image. This overcomes the affine distortion caused by large-tilt UAV photography and keeps target framing stable across the image sequence.
     4. A scene image registration algorithm based on dense SIFT flow is proposed. Sparse features are extended to per-pixel dense correspondence while spatial discontinuities are preserved, and a multi-scale information measure is introduced into the 2-D imagery, establishing correspondences between different scenes from coarse to fine. Registration experiments across different scenes show that the method accomplishes tasks that traditional pixel-level registration cannot.
     5. A target tracking algorithm combining spatial structure with MeanShift is proposed. For the noise characteristics of infrared imagery, wavelet-based filtering is used to suppress noise. Moving targets are detected by SURF-based registration of adjacent frames and predicted with a Kalman filter; a center-point spatial-structure descriptor extracted from the search region is concatenated with the color histogram, and MeanShift then corrects the tracked target position. This handles occlusion and scale change during tracking.
     6. The principle and procedure of particle filtering are studied, and moving-target detection and tracking for UAVs is accomplished by combining Adaboost with a mixture particle filter. A learned Adaboost proposal distribution detects targets in the scene quickly, producing detection results; the mixture particle filter then builds its proposal distribution from a mixture of the Adaboost detection hypotheses and the target dynamic model, achieving real-time tracking of moving targets.
Based on the urgent requirements of real-time object surveillance in dynamic environments for unmanned aerial vehicles (UAVs), and aiming at fast, accurate acquisition of object locations and scene properties, this dissertation studies several key techniques for visible-light and infrared image sequences, such as invariant feature matching, association of affinely distorted objects, image registration across scenes, and moving-object detection and tracking. The main contributions of this dissertation are as follows:
     1. The components and workflow of the UAV object surveillance technique are analyzed, and the framework of the surveillance system is designed.
     2. The principles of the main sparse local invariant feature matching methods are analyzed. For the application characteristics of UAV image sequences, a spatial-distribution control method is proposed to adjust the acceptance parameters of SIFT matching, which makes the matched points more uniformly distributed. An improved k-d tree is adopted to accelerate processing. Each matching method is implemented, and their advantages and disadvantages are analyzed with the experimental results.
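The improved k-d tree search strategy and the spatial-distribution control in item 2 are only named, not specified, in the abstract. A minimal sketch of the two ideas, with an exact backtracking search and a hypothetical grid-cell size in place of the dissertation's actual variant, might look like:

```python
import numpy as np

def build_kdtree(points, indices=None, depth=0):
    """Recursively build a k-d tree over feature vectors.
    Node layout: (split axis, point index, left subtree, right subtree)."""
    if indices is None:
        indices = list(range(len(points)))
    if not indices:
        return None
    axis = depth % points.shape[1]
    indices.sort(key=lambda i: points[i][axis])
    mid = len(indices) // 2
    return (axis, indices[mid],
            build_kdtree(points, indices[:mid], depth + 1),
            build_kdtree(points, indices[mid + 1:], depth + 1))

def nearest(tree, points, query, best=None):
    """Standard k-d tree nearest-neighbour descent with backtracking."""
    if tree is None:
        return best
    axis, idx, left, right = tree
    d = np.linalg.norm(points[idx] - query)
    if best is None or d < best[1]:
        best = (idx, d)
    near, far = (left, right) if query[axis] < points[idx][axis] else (right, left)
    best = nearest(near, points, query, best)
    # Backtrack across the splitting plane only if it could hide a closer point.
    if abs(query[axis] - points[idx][axis]) < best[1]:
        best = nearest(far, points, query, best)
    return best

def spatially_uniform(matches, keypoints, cell=32):
    """Keep only the best match (smallest distance m[1]) per grid cell, so
    accepted matches spread evenly over the image; the cell size is
    a hypothetical parameter standing in for the SIFT acceptance settings."""
    best = {}
    for m, (x, y) in zip(matches, keypoints):
        key = (int(x // cell), int(y // cell))
        if key not in best or m[1] < best[key][0][1]:
            best[key] = (m, (x, y))
    return [m for m, _ in best.values()]
```

For real 128-D SIFT descriptors the usual speed-up is an approximate best-bin-first variant of the backtracking step; the sketch above does exact search for clarity.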
     3. A method of simulating longitude and latitude based on ASIFT is introduced to associate objects in oblique images. The procedure selects the affine transforms that yield matches at low resolution, then applies the selected transforms to the original query and search images, and finally compares the simulated images with SIFT. This overcomes the affine distortion caused by large tilts of the camera axis on a UAV and stabilizes object framing across the image sequence.
     4. SIFT flow is introduced to register an image to its nearest neighbors in a large image corpus containing a variety of scenes. The algorithm matches densely sampled, pixel-wise SIFT features between two images while preserving spatial discontinuities, establishing dense correspondence across scenes from coarse to fine. Registration experiments across scenes of UAV sequential imagery show that this method completes tasks that traditional pixel-level methods cannot.
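Item 4's coarse-to-fine strategy (not SIFT flow itself) can be illustrated with a toy pyramid-based registration: estimate a match at the coarsest level, then double the estimate and refine it with a small search at each finer level. This sketch matches raw intensities and recovers only a global translation, whereas the dissertation's method matches dense per-pixel SIFT descriptors under a smoothness term; all parameters here are illustrative:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (a crude pyramid level)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def estimate_shift(a, b, radius=2):
    """Exhaustive integer search for the translation that minimises SSD."""
    best, best_err = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            err = np.mean((a - np.roll(b, (-dy, -dx), axis=(0, 1))) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def coarse_to_fine_shift(a, b, levels=3):
    """Estimate at the coarsest pyramid level, then double the result and
    refine with a +-1 pixel search at each finer level."""
    if levels == 0 or min(a.shape) < 8:
        return estimate_shift(a, b)
    cdy, cdx = coarse_to_fine_shift(downsample(a), downsample(b), levels - 1)
    dy, dx = 2 * cdy, 2 * cdx
    rdy, rdx = estimate_shift(a, np.roll(b, (-dy, -dx), axis=(0, 1)), radius=1)
    return dy + rdy, dx + rdx
```

The payoff of the pyramid is that a large displacement becomes a small one at the coarsest level, so each level only needs a tiny local search; dense SIFT flow exploits the same principle per pixel.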
     5. A method combining a center-point feature descriptor with MeanShift is proposed. According to the noise characteristics of infrared imagery, wavelet filtering is used to reduce the noise of the infrared image. After adjacent frames are registered with SURF matching, the moving object is predicted by Kalman filtering. A new histogram is formed by concatenating the color histogram with the center-point descriptor extracted in the search area; MeanShift then corrects the position, and object tracking is achieved. The problems of occlusion and size change during tracking are also addressed.
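Item 5 predicts the moving object with a Kalman filter. A generic linear predict/update step (with illustrative noise covariances, not the dissertation's tuning) might look like:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # corrected state
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P
```

With state [position, velocity], F = [[1, 1], [0, 1]] and H = [[1, 0]] give a constant-velocity model: repeated calls track a target moving at a steady rate and supply the predicted position that MeanShift then refines.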
     6. The principle of particle filtering is studied, and an approach combining two well-developed algorithms, mixture particle filters and Adaboost, is adopted. The learned Adaboost proposal distribution allows objects to be detected quickly, while the filtering process keeps track of the moving objects. The proposal distribution is constructed as a mixture model that incorporates information from the object dynamic models and the detection hypotheses generated by Adaboost. Interleaving Adaboost with mixture particle filters yields a simple yet powerful, fully automatic multiple-object tracking system.
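The mixture proposal of item 6, blending detector hypotheses with the dynamic model, can be sketched in one dimension. The blend weight, noise levels, and multinomial resampling are illustrative simplifications, and the importance-weight correction that a mixture proposal formally requires is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_proposal(particles, detection, alpha=0.3, motion_std=0.5, det_std=0.2):
    """Sample new particle positions from a mixture: with probability alpha
    around the detector's hypothesis, otherwise from the random-walk
    dynamic model."""
    n = len(particles)
    use_det = rng.random(n) < alpha
    moved = particles + rng.normal(0.0, motion_std, n)
    detected = detection + rng.normal(0.0, det_std, n)
    return np.where(use_det, detected, moved)

def particle_filter_step(particles, observation, detection, obs_std=0.3):
    """Propose, weight by a Gaussian observation likelihood, then resample
    (multinomial resampling) to concentrate particles on likely states."""
    particles = mixture_proposal(particles, detection)
    w = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

The detector-driven component lets the filter lock onto newly appearing targets immediately, while the dynamics component keeps existing tracks alive between detections, which is the point of interleaving Adaboost with the mixture particle filter.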
