Action recognition new framework with robust 3D-TCCHOGAC and 3D-HOOFGAC
Abstract
Action recognition has high academic research value, potential commercial value, and wide application prospects in computer vision. To improve action recognition accuracy, this paper proposes two dynamic descriptors based on dense trajectories. First, to capture the local positions where an action occurs, dense sampling is restricted to motion regions obtained by constraining and clustering the optical flow. Second, motion corners of the object are selected as feature points and tracked to obtain motion trajectories. Finally, gradient information and optical-flow gradient information are extracted from the video cube centered at each trajectory, and auto-correlation and normalization are applied to both, yielding two dynamic descriptors: 3D histograms of oriented gradients in trajectory-centered cube auto-correlation (3D-TCCHOGAC) and 3D histograms of oriented optical flow gradients auto-correlation (3D-HOOFGAC). These descriptors resist a certain degree of interference from camera motion and complex backgrounds. However, because realistic videos are highly diverse, dynamic or static descriptors alone cannot achieve accurate action classification, so a new framework is proposed in which dynamic and static descriptors are fused and complement each other to further improve recognition accuracy. Using leave-one-out cross-validation on the Weizmann and UCF Sports datasets, the method achieves accuracies of 100% and 96.00%, and using four-fold cross-validation on the KTH and YouTube datasets it achieves 97.17% and 88.23%, outperforming the referenced methods.
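The abstract outlines a pipeline of optical-flow-constrained dense sampling, corner tracking, and auto-correlated gradient histograms. The sketch below, which is not the authors' implementation, illustrates these steps with OpenCV; the flow threshold, corner count, bin count, and cube handling are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of:
#  1) restricting dense sampling to motion regions via optical-flow magnitude,
#  2) selecting motion corners as feature points inside those regions,
#  3) building an auto-correlated, normalized gradient-orientation histogram
#     over a trajectory-centered cube of grayscale patches.
import cv2
import numpy as np

def motion_mask(prev_gray, gray, flow_threshold=1.0):
    """Return a binary mask of motion regions and the dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return (mag > flow_threshold).astype(np.uint8), flow

def sample_motion_corners(gray, mask, max_corners=200):
    """Sample corner feature points only inside the motion regions."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    return pts if pts is not None else np.empty((0, 1, 2), np.float32)

def hog_autocorrelation(cube, n_bins=8):
    """Gradient-orientation histogram over a trajectory-centered cube
    (list of grayscale patches), then auto-correlation and normalization."""
    hists = []
    for patch in cube:
        gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
        mag, ang = cv2.cartToPolar(gx, gy)          # angles in radians
        hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                               weights=mag)
        hists.append(hist)
    h = np.concatenate(hists).astype(np.float64)
    auto = np.correlate(h, h, mode="full")           # auto-correlation
    return auto / (np.linalg.norm(auto) + 1e-8)      # normalization
```

In the full method, the sampled corners would be tracked over several frames (for example with pyramidal Lucas–Kanade flow) and the cube assembled from patches around each trajectory point; a 3D-HOOFGAC-style descriptor would apply the same histogram and auto-correlation step to gradients of the optical-flow field rather than of the image.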

