Research on Multi-Camera Object Tracking in Coordination with Geographic Scenes
Abstract
Geographic scenes are characterized by high dynamism, multiple scales, and uncertainty. Developing real-time perception methods for dynamic objects, rapidly and intelligently sensing their spatio-temporal characteristics in geographic scenes, and on that basis exploring their behavioral patterns, has become an urgent problem for both academia and government administration.
     At present, video surveillance systems play an increasingly important role in the security domain thanks to their high-definition real-time imagery, intelligent functions, and low cost. However, existing dynamic-object tracking based on surveillance video is confined to the two-dimensional image space and cannot perceive an object's spatio-temporal characteristics in the real geographic scene; in regions equipped with many surveillance cameras, cooperative multi-camera tracking of dynamic objects cannot be achieved. In view of this, this thesis takes the cooperative analysis of surveillance video and geographic scenes as its approach, targets the scientific problem of perceiving the state of dynamic objects in geographic scenes, and develops methods for extracting and analyzing the spatio-temporal information of dynamic objects under geographic-scene constraints. The main results are as follows:
     (1) A geometric mutual mapping model between surveillance video and 2D geospatial data under multi-plane constraints is proposed. Existing models for mapping surveillance video to geographic space assume a single ground elevation; when the monitored area contains several elevation planes, the mutual mapping matrix must be re-solved for each plane, which is cumbersome. By comparing the camera models of photogrammetry and computer vision, this thesis constructs a geometric mutual mapping model between surveillance video and 2D geospatial data that applies to both fixed and PTZ cameras; its parameters have clear physical meaning, its theory is rigorous, and it is highly flexible.
     (2) A depth-based geometric mutual mapping model between surveillance video and 3D geospatial data is proposed. The transformation from 3D coordinates to image coordinates can be realized with the pinhole camera model, but the inverse transformation from image coordinates to 3D coordinates is mainly realized in existing work by intersecting the viewing ray with the 3D model, which is computationally expensive and cumbersome. This thesis builds a mutual mapping model between surveillance video and 3D geospatial data from the depth values buffered during 3D visualization. Compared with the traditional approach, the model is simpler and more efficient, and it enables real-time, synchronized dynamic mapping between surveillance video and 3D geospatial data.
     (3) A semi-automatic mutual mapping method between surveillance video and geospatial data is proposed. The traditional homography-based geometric mapping is systematically analyzed, together with its uncertainty. Given the ever-improving accuracy of geospatial data, this thesis uses the constraints of structured geographic scenes and adopts two matching indicators, vanishing-point similarity and feature-line similarity, to explore semi-automatic matching between surveillance video and 2D/3D geospatial data views.
     (4) A method for extracting the spatio-temporal information of foreground objects in surveillance video is designed. Based on the mutual mapping models between surveillance video and geospatial data, the method extracts the spatio-temporal information of foreground objects in the geographic scene, including object direction, geometric attributes, trajectories, and foreground images. A corresponding spatio-temporal data model is built following object-oriented principles, enabling the management of foreground-object data and its integration with GIS.
     (5) A road-network-constrained model for estimating object trajectories in blind areas is constructed. Surveillance cameras are usually deployed at relatively important locations and are independent and scattered, so continuous object trajectories cannot be perceived from surveillance video alone. Existing multi-camera continuous tracking mostly assumes overlapping fields of view between video scenes and is unsuitable for continuous tracking over large scenes. Taking the spatio-temporal information of dynamic objects and base geographic information as the data foundation, this thesis proposes a continuous tracking method for dynamic objects constrained by the road network (and object behavior rules).
     (6) A prototype system for multi-camera cooperative continuous tracking of dynamic objects in geographic scenes is developed. Based on the above results, the system provides mutual mapping between surveillance video and geospatial data, extraction of dynamic-object spatio-temporal information, object tracking and visualization, and trajectory estimation over large scenes.
With the development of society and the economy, human activities have become more and more frequent, forming a highly dynamic, multi-scale, uncertain, and complex system. Fast perception and monitoring of dynamic objects has therefore become an urgent problem for both academia and government administration.
     Surveillance video is a real-time, high-definition data source containing a wealth of spatial and attribute information, and it plays an important role not only in security but also in GIS. At present, the integration of surveillance video and geospatial data is mostly static, one-way, and interactive, which makes it difficult to meet the need for dynamic, two-way, automatic mapping between surveillance videos and 2D/3D geospatial data. Moreover, object-tracking algorithms mainly focus on a single surveillance video, making it difficult to form a continuous object trajectory over a large area, which in turn prevents understanding of the dynamic object's behavior.
     It is necessary to develop an efficient, macroscopic analysis technology that integrates intelligent video analysis and spatial analysis in order to obtain a whole-area view and behavior understanding of dynamic objects. Therefore, we combine videos with 2D/3D geospatial data and propose methods for extracting and analyzing object information in geographic scenes. The contributions are as follows:
     (1) A mutual mapping model between surveillance videos and 2D geospatial data under multi-planar constraints is proposed. Current 2D mutual mapping models assume the ground surface has a single elevation; when multiple elevations exist, existing methods must solve a separate matrix for each plane, and the process is cumbersome. Through a comparative analysis of the camera models used in photogrammetry and computer vision, a new mutual mapping model is proposed whose parameters have clear physical meaning. The model applies not only to fixed cameras but also to dynamic cameras such as PTZ cameras. Based on the model, we analyze the mapping deviation over undulating ground and the ground resolution of surveillance videos.
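At its core, the per-plane video-to-map mapping is a plane-to-plane homography. As a minimal illustration (not the thesis's photogrammetric model, whose parameters carry physical meaning), the sketch below fits one 3x3 homography per elevation plane from ground control points using the standard DLT, then maps a pixel onto that plane:

```python
import numpy as np

def fit_homography(img_pts, map_pts):
    """Estimate the 3x3 homography H with map ~ H @ img (DLT, >= 4 point pairs)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, map_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the null vector of A (smallest singular vector).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def img_to_map(H, u, v):
    """Project an image pixel onto the ground plane encoded by this homography."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

With several elevation planes, one would fit one `H` per plane and select the homography of the plane on which the object's footprint lies.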
     (2) A mutual mapping model between surveillance videos and 3D geospatial data based on the depth buffer is proposed. With the pin-hole camera model, 3D coordinates can be converted to the corresponding image coordinates, but the inverse transformation from image coordinates to 3D coordinates is difficult: existing methods mainly compute the intersection of the viewing ray with the 3D scene. We propose a new model based on the depth buffer produced when 3D data are rendered in a 3D GIS. With this model, the surveillance video and the corresponding 3D GIS view achieve real-time synchronization. Based on the model, we analyze the resolution characteristics of 3D-mapped surveillance videos and the dynamic mapping.
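The depth-buffer mapping can be sketched with a standard graphics unprojection: read the depth value stored at a pixel and push it back through the inverse view-projection matrix. The OpenGL depth convention used here is an assumption; the thesis's 3D GIS engine may use a different one:

```python
import numpy as np

def unproject(u, v, depth, inv_view_proj, width, height):
    """Recover world coordinates from a pixel and its depth-buffer value.

    depth is the [0, 1] value read back from the depth buffer (OpenGL
    convention); inv_view_proj is the inverse of projection @ view.
    """
    # Pixel -> normalized device coordinates in [-1, 1].
    ndc = np.array([
        2.0 * u / width - 1.0,
        1.0 - 2.0 * v / height,   # image v grows downward
        2.0 * depth - 1.0,
        1.0,
    ])
    world = inv_view_proj @ ndc
    return world[:3] / world[3]   # perspective divide
```

Because the depth buffer is refreshed every frame, this inversion stays synchronized with the 3D view at no extra ray-casting cost, which is the efficiency argument made above.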
     (3) A semi-automatic mapping method between surveillance video and geospatial data is proposed. The uncertainty of the mutual mapping is discussed through an analysis of the homography-matrix method. To achieve automatic mapping between surveillance videos and 3D GIS views, a new method based on the features of vanishing points and lines is proposed.
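Vanishing-point similarity presupposes estimating a vanishing point from image line segments. A common least-squares formulation (an illustration, not necessarily the detector used in the thesis, which builds on LSD segments) represents each segment as a homogeneous line via the cross product and takes the smallest singular vector of the stacked lines:

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point of roughly parallel image segments.

    segments: list of ((x1, y1), (x2, y2)) endpoints. Each segment yields a
    homogeneous line l = p1 x p2; the vanishing point minimizes the algebraic
    distance to all lines (smallest right singular vector of the stack).
    """
    lines = []
    for (x1, y1), (x2, y2) in segments:
        lines.append(np.cross([x1, y1, 1.0], [x2, y2, 1.0]))
    _, _, vt = np.linalg.svd(np.asarray(lines))
    vp = vt[-1]
    # A point at infinity stays homogeneous; otherwise dehomogenize.
    return vp / vp[2] if abs(vp[2]) > 1e-12 else vp
```

Vanishing points estimated this way in the video frame and rendered from the GIS view could then be compared to score candidate matches.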
     (4) A method for extracting dynamic-object information based on the geographic scene is proposed, and the corresponding data model is designed. The extracted information includes location and direction, geometric size, trajectories, and foreground images under a geospatial reference. The data model follows an object-oriented design and is easily integrated with GIS.
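Foreground extraction of this kind starts from a background model. A deliberately simple running-average subtractor (a placeholder for whatever detector is actually used, not the thesis's method) illustrates the first step; the resulting mask blobs are what would then be mapped into geographic coordinates:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model for grayscale frames."""
    return (1.0 - alpha) * bg + alpha * frame.astype(float)

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels deviating from the background by more than thresh are foreground."""
    return np.abs(frame.astype(float) - bg) > thresh
```

Each connected component of the mask gives a foreground image patch whose footprint pixel, pushed through the video-to-map model, yields the object's geographic location, size, and (frame to frame) trajectory.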
     (5) An object-trajectory estimation model is proposed. To obtain complete trajectories in a large area not fully covered by surveillance cameras, we combine geospatial data with the movement parameters, geometric size, and color information of the object. On the one hand, the object's location can be estimated from the road network and object behavior rules; on the other hand, similarities can be computed, including location probability, similarity of geometric size and movement parameters, and image similarity.
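Estimating where the object went in the blind area amounts to routing over the road-network graph. A plain Dijkstra search (a sketch under assumed data structures; the behavior rules and the similarity scores described above are left out) finds the most plausible road path between the point where the object left one camera's view and a candidate reappearance in another:

```python
import heapq

def shortest_road_path(graph, start, goal):
    """Dijkstra over a road-network graph: {node: [(neighbor, length_m), ...]}.

    Returns (total_length, [nodes]) for the shortest road route between the
    node where the object exited one camera's view and the node where a
    candidate object reappears in another camera.
    """
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:
            return dist, path
        for nbr, length in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (dist + length, nbr, path + [nbr]))
    return float("inf"), []
```

Dividing the path length by the object's estimated speed gives an expected travel time, from which a location probability along the route can be derived and combined with the appearance similarities.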
     (6) We design and develop a prototype system for continuous object tracking. The system provides mapping between videos and geospatial data, object tracking, information extraction, and trajectory estimation.
