A Target Detection Method of Moving Objects at Airport Based on Streak Flow and Deep Learning
  • Authors: ZHAN Zhaohuan (詹昭焕); HAN Songchen (韩松臣); LI Wei (李炜); YU Lisha (余丽莎)
  • Affiliations: School of Aeronautics and Astronautics, Sichuan University; National Key Laboratory of Air Traffic Control Automation System Technology, Sichuan University
  • Keywords: intelligent transportation; moving target; airport surface surveillance; streak flow; convolutional neural network (CNN); object detection
  • Journal: Journal of Transport Information and Safety (交通信息与安全); journal code JTJS
  • Publication date: 2019-02-28
  • Year and issue: 2019, Vol. 37, Issue 1 (No. 216 overall)
  • Pages: 55-63 (9 pages)
  • CN: 42-1781/U
  • Record ID: JTJS201901009
  • Funding: National Natural Science Foundation of China (61403065); Fundamental Research Funds for the Central Universities (YJ201450); Sichuan Province Applied Basic Research Project (2015JY0084)
  • Language: Chinese
Abstract
Existing video-based surface surveillance methods for airports suffer from large localization errors and low recognition accuracy. To address these problems, a moving-target detection method for the airport surface that incorporates target motion information is proposed. The streak flow method is used to extract candidate regions of moving targets from the image, a corner-pooling operation is applied to each candidate region to determine the boundaries of the region proposals, and a shallow convolutional neural network built with Inception modules is used to classify the aircraft, vehicles, and people within the proposals. An airport target dataset of 4,938 images was constructed from surveillance video of domestic airports for training and testing. The results show that the extraction accuracy for moving targets exceeds 94%, the Top-1 recognition accuracy reaches 97.23%, and the mean average precision reaches 86.23%. Compared with three deep-learning object detection algorithms, the detection precision for moving targets is improved by 39% on average.
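To make the two-stage pipeline concrete, the following is a minimal Python sketch of the idea described above; it is an illustration, not the authors' implementation. Dense Farneback optical flow accumulated over a short frame window stands in for the streak-flow field of Mehran et al. [18], the proposal boundary is reduced to the extreme corner coordinates of each motion blob, and the channel widths of the Inception-style block, the three-class head, and all thresholds are illustrative choices.

# Sketch of the two-stage pipeline: motion-based region proposals followed by a
# shallow Inception-style classifier. All parameter values are illustrative.
import cv2
import numpy as np
import torch
import torch.nn as nn

CLASSES = ["aircraft", "vehicle", "person"]   # categories named in the abstract

def candidate_regions(frames, mag_thresh=1.0, min_area=400):
    """Accumulate dense optical-flow magnitude over a frame window (a stand-in
    for streak flow), threshold the averaged magnitude, and take the extreme
    corner coordinates of each motion blob as a region proposal."""
    acc = np.zeros(frames[0].shape[:2], dtype=np.float32)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc += np.linalg.norm(flow, axis=2)
        prev = gray
    mask = (acc > mag_thresh * (len(frames) - 1)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        xs, ys = c[:, 0, 0], c[:, 0, 1]
        boxes.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return boxes

class InceptionBlock(nn.Module):
    """One Inception-style block: parallel 1x1, 3x3, 5x5 and pooled branches
    concatenated along the channel dimension."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, 32, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, 16, kernel_size=5, padding=2)
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 16, kernel_size=1))
    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

class ShallowClassifier(nn.Module):
    """Shallow CNN that labels each cropped proposal as aircraft, vehicle, or person."""
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            InceptionBlock(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(80, num_classes)  # 16 + 32 + 16 + 16 concatenated channels
    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

In such a sketch, a crop of each proposed box, resized to a fixed input size and converted to a float tensor of shape [1, 3, H, W], would be scored by ShallowClassifier, with the argmax over the three logits giving the predicted category.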
References
[1] SCHMIDT M,RUDOLPH M,WERTHER B,et al.Remote airport tower operation with augmented vision video panorama HMI[C]. International Conference for Research in Air Transportation, Belgrade, Serbia and Montenegro: DLR, 2006.
    [2] SCHMIDT M,RUDOLPH M,WERTHER B,et al. Development of an augmented vision video panorama human-machine interface for remote airport tower operation[M]. Orlando, USA: Springer Berlin Heidelberg, 2007.
    [3] SCHMIDT M, RUDOLPH M, PAPENFUSS A, et al. Remote airport traffic control center with augmented vision video panorama[C]. IEEE/AIAA Digital Avionics Systems Conference, Orlando, USA: IEEE, 2009.
    [4] LUO Xiao, LU Yu, WU Honggang. A novel airport surface surveillance method using multi-video fusion[J]. Telecommunication Engineering, 2011, 51(7): 128-132. (in Chinese)
    [5] LU Yu, WU Honggang, XU Zili. A novel airport surface surveillance method based on video MLAT[J]. Telecommunication Engineering, 2013, 53(7): 854-858. (in Chinese)
    [6] TANG Yong, HU Minghua, WU Honggang, et al. An automatic aircraft labeling method for airport video monitoring[J]. Journal of Jiangsu University (Natural Science Edition), 2013, 34(6): 681-686. (in Chinese)
    [7] YUE Rentian, JIAO Yang, ZHAO Yifei. Aircraft flight path detection and risk identification in landing phase[J]. Journal of Transportation Systems Engineering and Information Technology, 2015, 15(6): 133-139. (in Chinese)
    [8] WANG Yuedong, QIAN Xiaoyan, ZHANG Tianci, et al. A tracking algorithm for fusion image based on joint bilateral filter[J]. Aeronautical Computing Technique, 2016, 46(2): 37-41. (in Chinese)
    [9] GU Jiayu, BAO Danwen, JIA Junhua. Research on pedestrian traffic characteristics of terminal departure hall[J]. Journal of Wuhan University of Technology (Transportation Science & Engineering Edition), 2018, 42(2): 318-322. (in Chinese)
    [10] WU Miao, TANG Xinmin, SHEN Zhiyuan, et al. A method of moving object feature extraction at airport[J]. Journal of Transport Information and Safety, 2015, 33(3): 16-22. (in Chinese)
    [11] XING Jian, TANG Xinmin, HAN Songchen, et al. A method of target tracking on airport surface on event-driven sensor network[J]. Journal of Transport Information and Safety, 2014, 32(6): 48-52. (in Chinese)
    [12] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. Imagenet classification with deep convolutional neural networks[C]. International Conference on Neural Information Processing Systems, Lake Tahoe, USA: Curran Associates Inc, 2012.
    [13] ZEILER M D, FERGUS R. Visualizing and understanding convolutional networks[C]. European Conference on Computer Vision, Zurich, Switzerland: Springer, 2014.
    [14] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]. Computer Vision and Pattern Recognition, Boston, USA: IEEE, 2015.
    [15] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[C]. International Conference on Neural Information Processing Systems, Montréal, Canada: MIT, 2015.
    [16] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]. European Conference on Computer Vision, Amsterdam, Netherlands: Springer, 2016.
    [17] REDMON J, FARHADI A. YOLOv3: An incremental improvement[J/OL]. (2018-04-08)[2018-08-14]. https://arxiv.org/abs/1804.02767.
    [18] MEHRAN R, MOORE B E, SHAH M. A streakline representation of flow in crowded scenes[C]. European Conference on Computer Vision. Crete, Greece: Springer, 2010.
    [19] LAW H, DENG J. CornerNet: Detecting objects as paired keypoints[C]. European Conference on Computer Vision, Munich, Germany: Springer, 2018.
    [20] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. Conference on Computer Vision and Pattern Recognition, Columbus, USA: IEEE, 2014.
    [21] EVERINGHAM M, ESLAMI S M A, VAN GOOL L, et al. The pascal visual object classes challenge: A retrospective[J]. International journal of computer vision, 2015,111(1):98-136.
    [22] RUSSAKOVSKY O, DENG J, SU H, et al. Imagenet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015,115(3):211-252.
    [23] KAEWTRAKULPONG P, BOWDEN R. An improved adaptive background mixture model for real-time tracking with shadow detection[M]. Boston, USA: Springer, 2002.
    [24] BARNICH O, VAN DROOGENBROECK M. ViBe: A universal background subtraction algorithm for video sequences[J]. IEEE Transactions on Image Processing, 2011,20(6):1709-1724.
    [25] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998,86(11):2278-2324.
