Two-stream CNN for action recognition based on video segmentation
  • Original title (Chinese): 基于视频分段的空时双通道卷积神经网络的行为识别
  • Authors: WANG Ping; PANG Wenhao
  • Affiliation: School of Electronic and Information Engineering, Xi'an Jiaotong University
  • Keywords: two-stream Convolutional Neural Network (CNN); action recognition; video segmentation; transfer learning; feature fusion
  • Journal: Journal of Computer Applications (计算机应用; journal code JSJY)
  • Publication date: 2019-04-15
  • Year: 2019
  • Volume/Issue: Vol. 39, Issue 07 (No. 347)
  • Pages: 219-224 (6 pages)
  • CN: 51-1307/TP
  • Record ID: JSJY201907036
  • Funding: National Natural Science Foundation of China (61671365)
  • Language: Chinese
Abstract
To address the low accuracy of the original spatial-temporal two-stream Convolutional Neural Network (CNN) model for action recognition in long and complex videos, a two-stream CNN for action recognition based on video segmentation was proposed. Firstly, a video was split into multiple non-overlapping segments of equal length. From each segment, one frame was randomly sampled to represent its static features, and stacked optical flow images were computed to represent its motion features. Secondly, these two kinds of images were fed into the spatial CNN and the temporal CNN respectively for feature extraction, and the per-segment features in each stream were fused to obtain the spatial and temporal class prediction features. Finally, the predictions of the two streams were integrated to obtain the action recognition result for the video. In a series of experiments, data augmentation techniques and transfer learning schemes were explored to alleviate the over-fitting caused by insufficient training samples, and the effects of the number of segments, the pre-trained network, the segment feature fusion scheme and the two-stream integration strategy on recognition performance were analyzed. The experimental results show that the proposed model achieves an accuracy of 91.80% on the UCF101 dataset, 3.8 percentage points higher than the original two-stream CNN model, and raises the accuracy on the HMDB51 dataset to 61.39%, also higher than the original model. These results indicate that the proposed model can better learn and represent human action features in long and complex videos.
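The pipeline described in the abstract (equal-length segmentation, random per-segment sampling, per-stream segment fusion, two-stream integration) can be illustrated with a short sketch. The following Python code is a minimal, hypothetical illustration and not the authors' implementation: the function names, the averaging consensus and the 1:1.5 stream weighting are assumptions borrowed from common two-stream practice, with torch tensors standing in for the CNN class scores.

# Hypothetical sketch of the segment-based two-stream pipeline; the
# averaging consensus and 1:1.5 weighting are illustrative assumptions,
# not the paper's reported settings.
import random
import torch

def sample_segment_indices(num_frames: int, num_segments: int) -> list:
    """Split a video into equal-length, non-overlapping segments and
    randomly sample one frame index from each segment."""
    seg_len = num_frames // num_segments
    return [s * seg_len + random.randrange(seg_len) for s in range(num_segments)]

def segment_consensus(scores: torch.Tensor) -> torch.Tensor:
    """Fuse per-segment class scores of one stream into a video-level
    prediction; scores has shape (num_segments, num_classes)."""
    return scores.mean(dim=0)

def two_stream_fusion(spatial: torch.Tensor, temporal: torch.Tensor,
                      w_s: float = 1.0, w_t: float = 1.5) -> torch.Tensor:
    """Weighted average of the two streams' video-level class scores."""
    return (w_s * spatial + w_t * temporal) / (w_s + w_t)

if __name__ == "__main__":
    K, C = 3, 101  # 3 segments; UCF101 has 101 action classes
    frame_idx = sample_segment_indices(num_frames=300, num_segments=K)
    # Stand-ins for per-segment softmax scores from the RGB and flow CNNs.
    spatial = segment_consensus(torch.randn(K, C))
    temporal = segment_consensus(torch.randn(K, C))
    video_scores = two_stream_fusion(spatial, temporal)
    print(frame_idx, int(video_scores.argmax()))

Averaging is only one possible consensus; the paper's experiments compare several segment fusion schemes and two-stream integration strategies, so this sketch fixes one plausible combination for concreteness.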
References
[1] SHAN Y H, ZHANG Z, HUANG K Q. Review, current situation and prospect of human visual behavior recognition[J]. Journal of Computer Research and Development, 2016, 53(1): 93-112. (in Chinese)
[2] FORSYTH D A. Computer Vision: A Modern Approach[M]. 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 2011: 1-2.
[3] CAI Z, WANG L, PENG X, et al. Multi-view super vector for action recognition[C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2014: 596-603.
[4] WANG H, SCHMID C. Action recognition with improved trajectories[C]// Proceedings of the 2013 IEEE International Conference on Computer Vision. Washington, DC: IEEE Computer Society, 2013: 3551-3558.
[5] PENG X, WANG L, WANG X, et al. Bag of visual words and fusion methods for action recognition: comprehensive study and good practice[J]. Computer Vision and Image Understanding, 2016, 150: 109-125.
[6] WANG L, QIAO Y, TANG X. MoFAP: a multi-level representation for action recognition[J]. International Journal of Computer Vision, 2016, 119(3): 254-271.
[7] KARPATHY A, TODERICI G, SHETTY S, et al. Large-scale video classification with convolutional neural networks[C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2014: 1725-1732.
[8] TRAN D, BOURDEV L, FERGUS R, et al. Learning spatiotemporal features with 3D convolutional networks[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Washington, DC: IEEE Computer Society, 2015: 4489-4497.
[9] VAROL G, LAPTEV I, SCHMID C. Long-term temporal convolutions for action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(6): 1510-1517.
[10] SIMONYAN K, ZISSERMAN A. Two-stream convolutional networks for action recognition in videos[C]// Proceedings of the 2014 Conference on Neural Information Processing Systems. New York: Curran Associates, 2014: 568-576.
[11] NG Y H, HAUSKNECHT M, VIJAYANARASIMHAN S, et al. Beyond short snippets: deep networks for video classification[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2015: 4694-4702.
[12] WANG L M, XIONG Y J, WANG Z, et al. Temporal segment networks: towards good practices for deep action recognition[C]// Proceedings of the 2016 European Conference on Computer Vision. Berlin: Springer, 2016: 22-36.
[13] WANG L, QIAO Y, TANG X. Action recognition with trajectory-pooled deep-convolutional descriptors[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2015: 4305-4314.
[14] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2016: 2818-2826.
[15] MURPHY K P. Machine Learning: A Probabilistic Perspective[M]. Cambridge: MIT Press, 2012: 22.
[16] HORN B K P, SCHUNCK B G. Determining optical flow[J]. Artificial Intelligence, 1981, 17(1/2/3): 185-203.
[17] ZHOU Z H. Machine Learning[M]. Beijing: Tsinghua University Press, 2016: 171-173. (in Chinese)
[18] JIANG Y G, LIU J, ZAMIR A, et al. Competition track evaluation setup, the first international workshop on action recognition with a large number of classes[EB/OL]. [2018-05-20]. http://www.crcv.ucf.edu/ICCV13-Action-Workshop/index.files/Competition_Track_Evaluation.pdf.
