Mode-Driven Volume Analysis Based on Correlation of Time Series
  • Authors: Chengcheng Jia (1)
    Wei Pang (2)
    Yun Fu (1) (3)

    1. Electrical and Computer Engineering, Northeastern University, Boston, USA
    2. School of Natural and Computing Sciences, University of Aberdeen, Aberdeen, UK
    3. Computer and Information Science, Northeastern University, Boston, USA
  • Journal: Lecture Notes in Computer Science
  • Year: 2015
  • Volume: 8925
  • Issue: 1
  • Pages: 818-833
  • Full text size: 959 KB
  • References: 1. Ballani, J, Grasedyck, L (2013) A projection method to solve linear systems in tensor format. Numerical Linear Algebra with Applications 20: pp. 27-43 CrossRef
    2. Belhumeur, P, Hespanha, J, Kriegman, D (1997) Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE TPAMI 19: pp. 711-720 CrossRef
    3. Birnbaum, A, Johnstone, IM, Nadler, B, Paul, D (2013) Minimax bounds for sparse pca with noisy high-dimensional data. The Annals of Statistics 41: pp. 1055-1084 CrossRef
    4. Biswas, S, Aggarwal, G, Flynn, PJ, Bowyer, KW (2013) Pose-robust recognition of low-resolution face images. TPAMI 35: pp. 3037-3049 CrossRef
    5. Fukunaga, K (1990) Introduction to statistical pattern recognition. Pattern Recognition 22: pp. 833-834
    6. Gong, D., Medioni, G.: Dynamic manifold warping for view invariant action recognition. In: ICCV, pp. 571-578. IEEE (2011)
    7. Gong, W, Sapienza, M, Cuzzolin, F (2013) Fisher tensor decomposition for unconstrained gait recognition. Training 2: pp. 3
    8. Grasedyck, L, Kressner, D, Tobler, C (2013) A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen 36: pp. 53-78 CrossRef
    9. Guo, K, Ishwar, P, Konrad, J (2013) Action recognition from video using feature covariance matrices. IEEE TIP 22: pp. 2479-2494
    10. Ho, HT, Gopalan, R (2014) Model-driven domain adaptation on product manifolds for unconstrained face recognition. IJCV 109: pp. 110-125 CrossRef
    11. Hu, H (2013) Enhanced gabor feature based classification using a regularized locally tensor discriminant model for multiview gait recognition. IEEE Transactions on Circuits and Systems for Video Technology 23: pp. 1274-1286 CrossRef
    12. Huang, C-H, Yeh, Y-R, Wang, Y-CF: Recognizing actions across cameras by exploring the correlated subspace. In: Fusiello, A, Murino, V, Cucchiara, R eds. (2012) Computer Vision - ECCV 2012. Springer, Heidelberg, pp. 342-351
    13. Jia, C, Wang, S, Peng, X, Pang, W, Zhang, C, Zhou, C, Yu, Z (2012) Incremental multi-linear discriminant analysis using canonical correlations for action recognition. Neurocomputing 83: pp. 56-63 CrossRef
    14. Jia, C., Zhong, G., Fu, Y.: Low-rank tensor learning with discriminant analysis for action classification and image recovery. In: AAAI (2014)
    15. Kim, T., Cipolla, R.: Canonical correlation analysis of video volume tensors for action categorization and detection. IEEE T. Pattern Anal. 1415-1428 (2008)
    16. Kolda, T, Bader, B (2009) Tensor decompositions and applications. SIAM Review 51: pp. 455-500 CrossRef
    17. Laat, KF, Norden, AG, Gons, RA, Oudheusden, LJ, Uden, IW, Norris, DG, Zwiers, MP, Leeuw, FE (2011) Diffusion tensor imaging and gait in elderly persons with cerebral small vessel disease. Stroke 42: pp. 373-379 CrossRef
    18. Leibe, B., Schiele, B.: Analyzing appearance and contour based methods for object categorization. In: CVPR, vol. 2, pp. II-409 (2003)
    19. Lui, Y.M., Beveridge, J.R.: Tangent bundle for human action recognition. In: FG, pp. 97-102. IEEE (2011)
    20. Lykou, A, Whittaker, J (2010) Sparse cca using a lasso with positivity constraints. Computational Statistics & Data Analysis 54: pp. 3144-3157 CrossRef
    21. Miyamoto, K, Adachi, Y, Osada, T, Watanabe, T, Kimura, HM, Setsuie, R, Miyashita, Y (2014) Dissociable memory traces within the macaque medial temporal lobe predict subsequent recognition performance. The Journal of Neuroscience 34: pp. 1988-1997 CrossRef
    22. Goud Tandarpally, M, Nagendar, G, Ganesh Bandiatmakuri, S, Jawahar, CV: Action recognition using canonical correlation kernels. In: Lee, KM, Matsushita, Y, Rehg, JM, Hu, Z eds. (2013) Computer Vision - ACCV 2012. Springer, Heidelberg, pp. 479-492 CrossRef
    23. Perez, E.A., Mota, V.F., Maciel, L.M., Sad, D., Vieira, M.B.: Combining gradient histograms using orientation tensors for human action recognition. In: ICPR, pp. 3460-3463. IEEE (2012)
    24. Schuldt, C., Laptev, I., Caputo, B.: Recognizing human actions: a local SVM approach. In: ICPR, vol. 3, pp. 32-36 (2004)
    25. Tao, D, Li, X, Wu, X, Maybank, S (2007) General tensor discriminant analysis and gabor features for gait recognition. IEEE T. Pattern Anal. 29: pp. 1700-1715 CrossRef
    26. Tian, C, Fan, G, Gao, X, Tian, Q (2012) Multiview face recognition: From tensorface to v-tensorface and k-tensorface. IEEE T. Syst. Man Cy. B 42: pp. 320-333 CrossRef
    27. Wu, X., Wang, H., Liu, C., Jia, Y.: Cross-view action recognition over heterogeneous feature spaces. In: ICCV, pp. 609-616 (2013)
    28. Xue, G, Mei, L, Chen, C, Lu, ZL, Poldrack, R, Dong, Q (2011) Spaced learning enhances subsequent recognition memory by reducing neural repetition suppression. Journal of Cognitive Neuroscience 23: pp. 1624-1633 CrossRef
    29. Yan, S., Xu, D., Yang, Q., Zhang, L., Tang, X., Zhang, H.: Discriminant analysis with tensor representation. In: CVPR, vol. 1, pp. 526-532 (2005)
    30. Yang, F., Bourdev, L., Shechtman, E., Wang, J., Metaxas, D.: Facial expression editing in video using a temporally-smooth factorization. In: CVPR, pp. 861-868. IEEE (2012)
    31. Youn, J, Cho, JW, Lee, WY, Kim, GM, Kim, ST, Kim, HT (2012) Diffusion tensor imaging of freezing of gait in patients with white matter changes. Movement Disorders 27: pp. 760-764 CrossRef
    32. Yu, ZZ, Jia, CC, Pang, W, Zhang, CY, Zhong, LH (2012) Tensor discriminant analysis with multiscale features for action modeling and categorization. IEEE Signal Processing Letters 19: pp. 95-98 CrossRef
    33. Zafeiriou, S (2009) Discriminant nonnegative tensor factorization algorithms. IEEE TNN 20: pp. 217-235
  • Book: Computer Vision - ECCV 2014 Workshops
  • ISBN: 978-3-319-16177-8
  • Category: Computer Science
  • Subjects: Artificial Intelligence and Robotics
    Computer Communication Networks
    Software Engineering
    Data Encryption
    Database Management
    Computation by Abstract Devices
    Algorithm Analysis and Problem Complexity
  • Publisher: Springer Berlin / Heidelberg
  • ISSN:1611-3349
Abstract
Tensor analysis is widely used for face recognition and action recognition. In this paper, a mode-driven discriminant analysis (MDA) in tensor subspace is proposed for visual recognition. For training, we treat each sample as an N-order tensor, of which the first N-1 modes capture the spatial information of the images while the N-th mode captures their sequential patterns. We apply the Fisher criterion to the first N-1 modes to extract discriminative features of the visual information. After that, to exploit the correlation of adjacent frames in the sequence, i.e., the current frame with its preceding and succeeding ones, we update the sequence by computing the correlation of each triple of adjacent frames, and then perform discriminant analysis on the N-th mode. The alternating projection procedure of MDA converges and is convex under different initializations of the transformation matrices. This hybrid tensor subspace learning scheme preserves both the discrete and the continuous distribution information of action videos in lower-dimensional spaces, boosting discriminant power. Experiments on the MSR Action 3D, KTH, and ETH databases show that MDA outperforms other tensor-based methods in accuracy and is competitive in time efficiency. Moreover, it is robust to damaged and self-occluded action silhouettes and to RGB object images captured from various viewing angles.
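The two steps the abstract describes, a per-mode Fisher discriminant projection on the spatial modes and a correlation-based update of the temporal (N-th) mode over triples of adjacent frames, can be sketched roughly as follows. This is only an illustrative reading of the abstract, not the authors' implementation; the function names (`unfold`, `fisher_projection`, `smooth_temporal_mode`), the neighbor-weighting scheme, and the regularization constant are our own assumptions.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fisher_projection(samples, labels, mode, out_dim):
    """Fisher-criterion projection for one tensor mode (illustrative sketch).
    Unfolds each sample along `mode`, builds within/between-class scatter
    over the mode-n fibers, and keeps the top generalized eigenvectors."""
    X = [unfold(s, mode) for s in samples]          # each: d_mode x (prod of other dims)
    mean_all = np.mean([x.mean(axis=1) for x in X], axis=0)
    d = X[0].shape[0]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        Xc = [x for x, l in zip(X, labels) if l == c]
        mc = np.mean([x.mean(axis=1) for x in Xc], axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        for x in Xc:
            D = x - mc[:, None]
            Sw += D @ D.T
    # Generalized eigenproblem Sb u = lambda Sw u (small ridge for stability).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:out_dim]]            # d_mode x out_dim

def smooth_temporal_mode(sample):
    """Update the last (temporal) mode: each frame becomes a correlation-
    weighted average of itself and its former/latter neighbors, one
    plausible reading of the triple-adjacent-frame correlation step."""
    T = sample.shape[-1]
    flat = sample.reshape(-1, T)                     # frames as columns
    out = sample.copy()
    for t in range(1, T - 1):
        w_prev = abs(np.corrcoef(flat[:, t], flat[:, t - 1])[0, 1])
        w_next = abs(np.corrcoef(flat[:, t], flat[:, t + 1])[0, 1])
        blend = (flat[:, t] + w_prev * flat[:, t - 1] + w_next * flat[:, t + 1])
        out[..., t] = blend.reshape(sample.shape[:-1]) / (1.0 + w_prev + w_next)
    return out
```

In a full alternating-projection scheme one would cycle over the N-1 spatial modes with `fisher_projection` until convergence, applying `smooth_temporal_mode` before the N-th-mode discriminant step.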