A method for automatically measuring streetlight coordinates from spherical panoramic images
  • English title: Automatically measuring the coordinates of streetlights in vehicle-borne spherical images
  • Authors: 王志旋; 钟若飞; 谢东海
  • English authors: Wang Zhixuan; Zhong Ruofei; Xie Donghai
  • Keywords: Faster R-CNN; deep learning; streetlight detection; panorama; forward intersection; epipolar constraint
  • English keywords: faster region convolutional neural network (Faster R-CNN); deep learning; street light pole detection; panorama; forward intersection; epipolar geometry
  • Chinese journal title: 中国图象图形学报
  • English journal title: Journal of Image and Graphics
  • Affiliations: Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University; College of Resources Environment and Tourism, Capital Normal University; Key Lab of 3D Information Acquisition and Application, Capital Normal University
  • Publication date: 2018-09-16
  • Year: 2018
  • Volume/Issue: Vol. 23, No. 9 (cumulative No. 269)
  • Funding: National Natural Science Foundation of China (41371434)
  • Language: Chinese
  • Pages: 103-113 (11 pages)
  • ISSN: 1006-8961
  • CN: 11-3758/TB
  • Database record ID: ZGTB201809010
Abstract
Objective: A growing number of cities are implementing coding projects for streetlight poles, in which the coordinates of the streetlamps are measured and serial numbers are assigned to them. The coordinates can be obtained in several ways, such as RTK surveying and laser scanning, but traditional surveying consumes considerable labor and material and has a long working cycle, and laser measurement, although accurate, is also expensive. Because a city contains tens of thousands of streetlamps, a fast and inexpensive approach is needed, and vehicle-borne panoramic measurement is preferred. Most current panoramic measurement, however, relies on human-computer interaction: homologous image points are selected manually and forward intersection yields the coordinates, which costs substantial time and effort. This paper therefore proposes a method that combines deep-learning-based object detection with panoramic measurement to obtain streetlight coordinates automatically.

Method: Streetlight poles are rod-shaped and offer no distinctive feature points, lamp heads vary with the design, and panoramic distortion strongly affects the appearance of the pole top, so the pole bottom is used as the detection target. A Faster R-CNN (faster region convolutional neural network) model is trained to detect pole bottoms in panoramic images and to output the coordinates of the upper-left and lower-right corners of each detection box; its results are compared with those of a detector combining HOG (histogram of oriented gradient) features with an SVM (support vector machine). The diagonal intersection of each detection box is taken as the foot point of the pole. Because one panorama may contain several streetlights, epipolar matching is used to find the homologous image points of each pole in two panoramic images, and the spatial coordinates of the poles are then obtained by forward intersection of the panoramas, completing the preparatory work for the coding project.
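The geometric core of the method can be made concrete with a short sketch. The following Python fragment is not the authors' code: the equirectangular pixel-to-ray convention, the pose representation (rotation R and baseline t of the second panorama relative to the first), and all function names are assumptions for illustration. It takes the foot point as the detection-box center, maps a panorama pixel to a unit ray, checks a candidate match with the epipolar (coplanarity) constraint, and triangulates the foot point by least-squares forward intersection.

import numpy as np

def foot_pixel(box):
    # Diagonal intersection (i.e., center) of a detection box (x1, y1, x2, y2).
    x1, y1, x2, y2 = box
    return 0.5 * (x1 + x2), 0.5 * (y1 + y2)

def pixel_to_ray(u, v, width, height):
    # Map an equirectangular panorama pixel to a unit ray in the camera frame.
    lon = u / width * 2.0 * np.pi - np.pi     # longitude in [-pi, pi)
    lat = np.pi / 2.0 - v / height * np.pi    # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.cos(lat) * np.cos(lon),
                     np.sin(lat)])

def epipolar_residual(r1, r2, R, t):
    # Coplanarity condition r2 . (t x R r1) = 0 for calibrated spherical images;
    # a correct pair of homologous unit rays gives a residual near zero.
    return float(np.dot(r2, np.cross(t / np.linalg.norm(t), R @ r1)))

def forward_intersection(c1, r1, c2, r2):
    # Least-squares 3D point closest to both rays x = c_i + s_i * r_i:
    # minimizing sum_i |(I - r_i r_i^T)(x - c_i)|^2 gives a 3x3 linear system.
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, r in ((c1, r1), (c2, r2)):
        P = np.eye(3) - np.outer(r, r)        # projection orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

A candidate pair of rays is accepted when the epipolar residual is close to zero, and forward_intersection returns the point that minimizes the summed squared distance to the two rays, i.e., the streetlamp foot in the chosen world frame.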
Result: Both detectors were applied to 100 panoramic images containing 162 streetlights. The HOG-plus-SVM detector produced 1,826 detections, of which only 142 correctly located a pole bottom, whereas Faster R-CNN produced 149 detections, of which 137 were correct. Faster R-CNN therefore has a clear advantage and was combined with panoramic measurement to obtain the streetlight coordinates automatically. Because the distance from the pole bottom to the two imaging centers and the intersection angle formed by the three points strongly affect measurement accuracy, measurements were compared at distances of approximately 7, 11, and 18 m over intersection angles from 0° to 180°. The comparison shows that for intersection angles between 30° and 150°, the shorter the distance, the smaller its influence on measurement accuracy. Based on this rule, 102 of the 120 automatically measured streetlight coordinates, namely those with an intersection angle greater than 30° and less than 150° and a distance of less than 20 m, were selected for accuracy verification. The maximum error of the measured spatial coordinates does not exceed 0.6 m and the root-mean-square error is below 0.3 m, which satisfies the project requirement that streetlight coordinates be accurate to within 1 m.
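The two screening criteria can be evaluated directly from each triangulated point and the two imaging centers. The sketch below continues the assumptions of the previous fragment; the thresholds come from the results just described, and the helper names are illustrative.

import numpy as np

def intersection_angle_deg(x, c1, c2):
    # Angle at the lamp foot x between the rays back to the two imaging centers.
    v1, v2 = c1 - x, c2 - x
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def keep_point(x, c1, c2, max_dist=20.0, ang_lo=30.0, ang_hi=150.0):
    # Accept a triangulated lamp only if both imaging distances are below
    # max_dist and the intersection angle lies strictly between the bounds.
    near = max(np.linalg.norm(x - c1), np.linalg.norm(x - c2)) < max_dist
    return near and ang_lo < intersection_angle_deg(x, c1, c2) < ang_hi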
Conclusion: This paper presents a method for obtaining streetlight coordinates automatically by applying deep-learning-based object detection to panoramic measurement. The method avoids the manual selection of homologous image points for two-image measurement and thus saves considerable labor and material resources. It is suited to road sections or periods with low traffic, where occlusion by vehicles is limited; for street panoramas in which the lights are heavily occluded, the method has certain limitations.
