Dynamic scene understanding by improved sparse topical coding
Abstract
The explosive growth of cameras in public areas demands techniques for fully automated surveillance and monitoring. In this paper, we propose a novel unsupervised approach that automatically explores the motion patterns occurring in dynamic scenes under an improved sparse topical coding (STC) framework. An input video is first segmented into a sequence of non-overlapping clips. Optical flow features are extracted from each pair of consecutive frames and quantized into discrete visual flow words. Each video clip is interpreted as a document and the visual flow words as the words within that document. The improved STC is then applied to discover latent patterns that represent the common motion distributions of the scene. Finally, each video clip is represented as a weighted sum of these patterns with only a few non-zero coefficients. The proposed approach is purely data-driven and scene-independent, which makes it suitable for a wide range of application scenarios such as rule mining and abnormal event detection. Experimental results and comparisons on various public datasets demonstrate the promise of the proposed approach.
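To make the described pipeline concrete, the sketch below walks through the three stages in the abstract: dense optical flow extraction per frame pair, quantization of flow vectors into a codebook of visual flow words with one word-count "document" per clip, and sparse coding of those clip histograms over a learned dictionary of motion patterns. It is only a minimal sketch under stated assumptions: OpenCV's Farneback flow and scikit-learn's DictionaryLearning stand in for the authors' improved STC model, and CLIP_LEN, N_WORDS, and N_PATTERNS are illustrative values rather than settings from the paper.

```python
# Minimal sketch of the clip -> visual-flow-word -> sparse-code pipeline,
# assuming OpenCV (Farneback dense flow) and scikit-learn. DictionaryLearning
# is a generic sparse-coding stand-in for the authors' improved STC;
# CLIP_LEN, N_WORDS, and N_PATTERNS are illustrative values, not paper settings.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import DictionaryLearning

CLIP_LEN = 75      # frames per non-overlapping clip (assumed)
N_WORDS = 500      # size of the visual flow-word codebook (assumed)
N_PATTERNS = 20    # number of latent motion patterns (assumed)

def flow_features(video_path):
    """Per-clip arrays of (x, y, dx, dy) flow vectors sampled on a coarse grid."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    clips, current, n_frames = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        ys, xs = np.mgrid[0:gray.shape[0]:10, 0:gray.shape[1]:10]
        current.append(np.stack([xs.ravel(), ys.ravel(),
                                 flow[ys, xs, 0].ravel(),
                                 flow[ys, xs, 1].ravel()], axis=1))
        prev_gray, n_frames = gray, n_frames + 1
        if n_frames % CLIP_LEN == 0:        # close the current clip
            clips.append(np.vstack(current))
            current = []
    if current:
        clips.append(np.vstack(current))
    cap.release()
    return clips

def clips_to_documents(clips):
    """Quantize flow vectors into visual words; one word-count vector per clip."""
    codebook = MiniBatchKMeans(n_clusters=N_WORDS, random_state=0).fit(np.vstack(clips))
    docs = np.zeros((len(clips), N_WORDS))
    for i, feats in enumerate(clips):
        docs[i] = np.bincount(codebook.predict(feats), minlength=N_WORDS)
    return docs

def learn_motion_patterns(docs):
    """Learn a dictionary of motion patterns and sparse per-clip weights."""
    model = DictionaryLearning(n_components=N_PATTERNS, alpha=1.0,
                               transform_algorithm="lasso_lars", random_state=0)
    codes = model.fit_transform(docs)   # only a few non-zero coefficients per clip
    return model.components_, codes     # (patterns, per-clip weights)
```

Chaining the three functions on a surveillance video yields the learned motion patterns and, for each clip, a sparse weight vector that a downstream task such as abnormal event detection could score.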
