Indoor Target Anomaly Detection based on Dense_YOLO
  • Authors: JIA Shijie (贾世杰); HU Siping (胡斯平); YANG Mingzhu (杨明珠); LIU Siting (刘舒婷)
  • Affiliation: School of Electrical and Information Engineering, Dalian Jiaotong University
  • Keywords: indoor locations; video surveillance; anomaly detection; object detection; Dense_YOLO
  • Journal: Journal of Dalian Jiaotong University (大连交通大学学报), journal code DLTD
  • Publication Date: 2019-06-15
  • Year: 2019
  • Issue: Vol. 40, No. 183 (Issue 3)
  • Funding: Natural Science Foundation of Liaoning Province (201602118); Scientific Research Program of the Liaoning Provincial Department of Education (JDL2016024)
  • Language: Chinese
  • Record ID: DLTD201903023
  • Pages: 105-110 (6 pages)
  • CN: 21-1550/U
Abstract
An object detection algorithm is employed to perform real-time anomaly detection on surveillance video in indoor locations. To improve detection performance, the YOLO v2 model is improved in three respects: the network structure is modified using the feature-fusion scheme of densely connected networks; the network parameters are improved by clustering the ground-truth bounding boxes with K-means++; and the network is trained by transfer learning. The resulting detector is named Dense_YOLO. Experimental results show that Dense_YOLO reaches an accuracy of 93.66%, which is 7.06% higher than that of YOLO v2. For common monitoring targets such as people, pets, and valuables, Dense_YOLO is used to detect anomalies in the target state, and tests are conducted in general scenes as well as under adverse conditions such as strong light, weak light, occluded targets, and small targets. For the two specific anomaly-detection functions, region intrusion detection and object movement/removal detection, the average accuracies reach 92.73% and 90.07%, respectively.
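The record does not include implementation details. As a rough, hypothetical sketch of the anchor-clustering step mentioned in the abstract, the Python snippet below clusters ground-truth box widths and heights with K-means++ seeding and a 1 − IoU distance, in the spirit of the YOLO v2 dimension-clustering recipe; the function names, the choice of k = 5, and the iteration count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def iou_wh(box, centroids):
    """IoU between one (w, h) pair and each centroid, with all boxes anchored at the origin."""
    w, h = box
    inter = np.minimum(w, centroids[:, 0]) * np.minimum(h, centroids[:, 1])
    union = w * h + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union

def kmeans_pp_anchors(boxes, k=5, iters=100, seed=0):
    """Cluster (w, h) pairs into k anchor priors using K-means++ seeding and a 1 - IoU distance."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    # K-means++ seeding: first centroid uniform at random, the rest proportional to squared distance.
    centroids = [boxes[rng.integers(len(boxes))]]
    while len(centroids) < k:
        dists = np.array([(1.0 - iou_wh(b, np.array(centroids))).min() for b in boxes])
        probs = dists ** 2 / np.sum(dists ** 2)
        centroids.append(boxes[rng.choice(len(boxes), p=probs)])
    centroids = np.array(centroids)
    # Standard k-means iterations using the same 1 - IoU distance.
    for _ in range(iters):
        assign = np.array([(1.0 - iou_wh(b, centroids)).argmin() for b in boxes])
        new_centroids = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                                  else centroids[i] for i in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids

# Usage (hypothetical): boxes is an (N, 2) array of ground-truth widths and heights.
# anchors = kmeans_pp_anchors(boxes, k=5)
```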
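The dense feature-fusion idea borrowed from densely connected networks can likewise be sketched as a small PyTorch module in which every layer receives the channel-wise concatenation of all earlier feature maps. The layer count, growth rate, and activation below are generic DenseNet-style choices for illustration and are not taken from the Dense_YOLO architecture itself.

```python
import torch
import torch.nn as nn

class DenseFusionBlock(nn.Module):
    """DenseNet-style block: each layer consumes the concatenation of all earlier feature maps.

    A generic sketch of the feature-fusion idea; channel counts and depth are assumptions,
    not the Dense_YOLO configuration reported in the paper.
    """
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.LeakyReLU(0.1, inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate  # concatenation grows the input of the next layer

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # fuse all earlier maps along the channel axis
            features.append(out)
        return torch.cat(features, dim=1)
```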
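Finally, the region-intrusion check can be illustrated on top of any detector's output. The sketch below assumes detections arrive as (label, confidence, box) tuples and raises an alarm when a target of the watched class overlaps a user-defined restricted zone; the 0.3 overlap threshold and the tuple layout are assumptions for illustration only, not the paper's criterion.

```python
def box_region_overlap(box, region):
    """Fraction of the detection box's area that falls inside the restricted region."""
    x1 = max(box[0], region[0]); y1 = max(box[1], region[1])
    x2 = min(box[2], region[2]); y2 = min(box[3], region[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (box[2] - box[0]) * (box[3] - box[1])
    return inter / area if area > 0 else 0.0

def check_intrusion(detections, region, target_class="person", thresh=0.3):
    """Collect intrusion alarms for detections of target_class that overlap the region enough.

    detections: list of (label, confidence, (x1, y1, x2, y2)) produced by the detector.
    region:     (x1, y1, x2, y2) restricted zone in the same image coordinates.
    """
    alarms = []
    for label, conf, box in detections:
        if label == target_class and box_region_overlap(box, region) >= thresh:
            alarms.append((label, conf, box))
    return alarms
```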
