Specific video identification via joint learning of latent semantic concept, scene and temporal structure
Abstract
In this paper, based on three typical characteristics of specific videos, i.e., theme, scene, and temporal structure, a novel data-driven identification architecture for specific videos is proposed. Concretely, at the frame level, semantic features and scene features are extracted from two independent Convolutional Neural Networks (CNNs). At the video level, Vector of Locally Aggregated Descriptors (VLAD) encoding is first adopted to build a spatial representation, and multiple-layer Long Short-Term Memory (LSTM) networks are then introduced to model temporal information. Additionally, a large-scale specific video dataset (SVD) is built for evaluation. The experimental results show that our method achieves an impressive 98% mAP on this dataset. Moreover, to validate the generalization capability of the proposed architecture, extensive experiments are conducted on two public datasets, Columbia Consumer Videos (CCV) and Unstructured Social Activity Attribute (USAA). Comparison results indicate that our approach outperforms state-of-the-art methods on USAA and achieves comparable results on CCV.
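
As a rough illustration of the pipeline the abstract describes (two frame-level CNNs for semantic and scene features, VLAD spatial encoding, and a multiple-layer LSTM for temporal structure), the following PyTorch sketch wires those pieces together. All concrete choices here (ResNet-50 backbones for both branches, NetVLAD-style soft-assignment pooling, 64 clusters, a 2-layer LSTM, and late fusion by concatenation) are assumptions made for illustration, not the paper's reported configuration.

```python
# Minimal sketch, assuming PyTorch/torchvision; hyperparameters are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models

class VLAD(nn.Module):
    """Soft-assignment VLAD pooling (NetVLAD-style) over per-frame descriptors."""
    def __init__(self, dim, num_clusters=64):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))
        self.assign = nn.Linear(dim, num_clusters)

    def forward(self, x):                               # x: (T, D) frame descriptors
        a = torch.softmax(self.assign(x), dim=1)        # (T, K) soft assignments
        residuals = x.unsqueeze(1) - self.centroids     # (T, K, D) residuals to centroids
        vlad = (a.unsqueeze(-1) * residuals).sum(0)     # (K, D) aggregated residuals
        vlad = nn.functional.normalize(vlad, dim=1)     # intra-normalization per cluster
        return nn.functional.normalize(vlad.flatten(), dim=0)  # (K*D,) video-level code

class SpecificVideoNet(nn.Module):
    def __init__(self, num_classes, num_clusters=64, hidden=512, lstm_layers=2):
        super().__init__()
        # Two independent frame-level CNN trunks: one for semantic content, one
        # for scene. Both are ResNet-50 here; the scene branch would plausibly be
        # initialized from scene-centric (e.g., Places-style) weights -- an assumption.
        self.semantic_cnn = nn.Sequential(*list(models.resnet50(weights=None).children())[:-1])
        self.scene_cnn = nn.Sequential(*list(models.resnet50(weights=None).children())[:-1])
        self.vlad = VLAD(dim=4096, num_clusters=num_clusters)   # 2048 + 2048 concatenated
        # Multiple-layer LSTM over the per-frame descriptors for temporal structure.
        self.lstm = nn.LSTM(4096, hidden, num_layers=lstm_layers, batch_first=True)
        self.classifier = nn.Linear(num_clusters * 4096 + hidden, num_classes)

    def forward(self, frames):                          # frames: (T, 3, H, W), one video
        sem = self.semantic_cnn(frames).flatten(1)      # (T, 2048) semantic features
        sce = self.scene_cnn(frames).flatten(1)         # (T, 2048) scene features
        feats = torch.cat([sem, sce], dim=1)            # (T, 4096) per-frame descriptors
        spatial = self.vlad(feats)                      # video-level spatial representation
        _, (h, _) = self.lstm(feats.unsqueeze(0))       # temporal summary (last LSTM layer)
        fused = torch.cat([spatial, h[-1].squeeze(0)], dim=0)
        return self.classifier(fused)                   # (num_classes,) class scores
```

A single forward pass takes the sampled frames of one video. In practice the CNN trunks would be pretrained and the VLAD and LSTM branches trained jointly with the classifier, but the abstract does not specify those training details.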
