Exploiting multi-channels deep convolutional neural networks for multivariate time series classification
  • Authors: Yi Zheng ; Qi Liu ; Enhong Chen ; Yong Ge ; J. Leon Zhao
  • Keywords: convolutional neural networks ; time series classification ; feature learning ; deep learning
  • Journal: Frontiers of Computer Science in China
  • Publication year: 2016
  • Publication date: February 2016
  • Volume: 10
  • Issue: 1
  • Pages: 96-112
  • Full-text size: 994 KB
  • References: 1.Xing Z, Pei J, Keogh E. A brief survey on sequence classification. ACM SIGKDD Explorations Newsletter, 2010, 12(1): 40–48
    2.Ding H, Trajcevski G, Scheuermann P, Wang X, Keogh E. Querying and mining of time series data: experimental comparison of representations and distance measures. Proceedings of the VLDB Endowment, 2008, 1(2): 1542–1552
    3.Orsenigo C, Vercellis C. Combining discrete SVM and fixed cardinality warping distances for multivariate time series classification. Pattern Recognition, 2010, 43(11): 3787–3794
    4.Batal I, Sacchi L, Bellazzi R, Hauskrecht M. Multivariate time series classification with temporal abstractions. In: Proceedings of FLAIRS Conference. 2009
    5.Haselsteiner E, Pfurtscheller G. Using time-dependent neural networks for EEG classification. IEEE Transactions on Rehabilitation Engineering, 2000, 8(4): 457–463
    6.Kampouraki A, Manis G, Nikou C. Heartbeat time series classification with support vector machines. IEEE Transactions on Information Technology in Biomedicine, 2009, 13(4): 512–518
    7.Reiss A, Stricker D. Introducing a modular activity monitoring system. In: Proceedings of IEEE Annual International Conference on Engineering in Medicine and Biology Society. 2011, 5621–5624
    8.Batista G E A P A, Wang X, Keogh E J. A complexity-invariant distance measure for time series. In: Proceedings of SIAM Conference on Data Mining. 2011
    9.Rakthanmanon T, Campana B, Mueen A, Batista G, Westover B, Zhu Q, Zakaria J, Keogh E. Searching and mining trillions of time series subsequences under dynamic time warping. In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2012, 262–270
    10.Xi X, Keogh E J, Shelton C R, Wei L, Ratanamahatana C A. Fast time series classification using numerosity reduction. In: Proceedings of the 23rd International Conference on Machine Learning. 2006, 1033–1040
    11.Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(8): 1798–1828
    12.LeCun Y, Bengio Y. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 1995, 3361(10)
    13.LeCun Y, Kavukcuoglu K, Farabet C. Convolutional networks and applications in vision. In: Proceedings of IEEE International Symposium on Circuits and Systems. 2010, 253–256
    14.Zheng Y, Liu Q, Chen E, Ge Y, Zhao J. Time series classification using multi-channels deep convolutional neural networks. In: Proceedings of the 15th International Conference on Web-Age Information Management. 2014, 298–310
    15.Hu B, Chen Y, Keogh E. Time series classification under more realistic assumptions. In: Proceedings of SIAM International Conference on Data Mining. 2013, 578
    16.Goldberger A L, Amaral L A N, Glass L, Hausdorff J M, Ivanov P C, Mark R G, Mietus J E, Moody G B, Peng C K, Stanley H E. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation, 2000, 101(23): e215–e220
    17.Ye L, Keogh E. Time series shapelets: a new primitive for data mining. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2009, 947–956
    18.Ratanamahatana C A, Keogh E. Making time-series classification more accurate using learned constraints. In: Proceedings of SIAM International Conference on Data Mining. 2004
    19.Ratanamahatana C A, Keogh E. Three myths about dynamic time warping data mining. In: Proceedings of SIAM International Conference on Data Mining. 2005, 506–510
    20.Yu D, Yu X, Hu Q, Liu J, Wu A. Dynamic time warping constraint learning for large margin nearest neighbor classification. Information Sciences, 2011, 181(13): 2787–2796
    21.LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278–2324
    22.Simard P Y, Steinkraus D, Platt J C. Best practices for convolutional neural networks applied to visual document analysis. In: Proceedings of the 7th International Conference on Document Analysis and Recognition. 2003, 958–962
    23.Nair V, Hinton G E. Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning. 2010, 807–814
    24.Zeiler M D, Ranzato M, Monga R, Mao M, Yang K, Le Q, Nguyen P, Senior A, Vanhoucke V, Dean J, Hinton G E. On rectified linear units for speech processing. In: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing. 2013, 3517–3521
    25.Scherer D, Müller A, Behnke S. Evaluation of pooling operations in convolutional architectures for object recognition. In: Proceedings of the 20th International Conference on Artificial Neural Networks. 2010, 92–101
    26.Nagi J, Ducatelle F, Di Caro G A, Ciresan D, Meier U, Giusti A, Nagi F, Schmidhuber J, Gambardella L M. Max-pooling convolutional neural networks for vision-based hand gesture recognition. In: Proceedings of IEEE International Conference on Signal and Image Processing Applications. 2011, 342–347
    27.LeCun Y, Bottou L, Orr G B, Müller K R. Efficient backprop. Lecture Notes in Computer Science, 2012, 7700: 9–48
    28.Bouvrie J. Notes on convolutional neural networks. Technical Report. 2006
    29.Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks. In: Proceedings of Advances in Neural Information Processing Systems. 2012, 1106–1114
    30.Sutskever I, Martens J, Dahl G, Hinton G. On the importance of initialization and momentum in deep learning. In: Proceedings of the 30th International Conference on Machine Learning. 2013, 1139–1147
    31.Erhan D, Bengio Y, Courville A, Manzagol P A, Vincent P, Bengio S. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 2010, 11: 625–660
    32.Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks. Science, 2006, 313(5786): 504–507
    33.Masci J, Meier U, Cireşan D, Schmidhuber J. Stacked convolutional auto-encoders for hierarchical feature extraction. Lecture Notes in Computer Science, 2011, 6791: 52–59
    34.Pinto N, Cox D D, DiCarlo J J. Why is real-world visual object recognition hard? PLoS Computational Biology, 2008, 4(1): e27
    35.Cireşan D C, Meier U, Masci J, Gambardella L M, Schmidhuber J. Flexible, high performance convolutional neural networks for image classification. In: Proceedings of the 22nd International Joint Conference on Artificial Intelligence. 2011, 1237–1242
    36.Cireşan D, Meier U, Masci J, Schmidhuber J. Multi-column deep neural network for traffic sign classification. Neural Networks, 2012, 32: 333–338
    37.Lines J, Davis L M, Hills J, Bagnall A. A shapelet transform for time series classification. In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2012, 289–297
    38.Nanopoulos A, Alcock R O B, Manolopoulos Y. Feature-based classification of time-series data. International Journal of Computer Research, 2001, 10(3)
    39.Lee H, Grosse R, Ranganath R, Ng A Y. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In: Proceedings of the 26th Annual International Conference on Machine Learning. 2009, 609–616
    40.Lee H, Largman Y, Pham P, Ng A Y. Unsupervised feature learning for audio classification using convolutional deep belief networks. In: Proceedings of Advances in Neural Information Processing Systems. 2009, 1096–1104
    41.Waibel A, Hanazawa T, Hinton G, Shikano K, Lang K J. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech and Signal Processing, 1989, 37(3): 328–339
  • Author affiliations: Yi Zheng (1) (3)
    Qi Liu (1)
    Enhong Chen (1)
    Yong Ge (2)
    J. Leon Zhao (3)

    1. School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230027, China
    2. Department of Computer Science, University of North Carolina at Charlotte, Charlotte, 28223, USA
    3. Department of Information Systems, City University of Hong Kong, Hong Kong, China
  • Journal category: Computer Science
  • Journal subject: Computer Science, general
    Chinese Library of Science
  • Publisher: Higher Education Press, co-published with Springer-Verlag GmbH
  • ISSN: 1673-7466
Abstract
Time series classification is relevant to many different domains, such as health informatics, finance, and bioinformatics. Due to its broad applications, researchers have developed many algorithms for this kind of task, e.g., multivariate time series classification. Among the classification algorithms, k-nearest neighbor (k-NN) classification (particularly 1-NN) combined with dynamic time warping (DTW) achieves state-of-the-art performance. The deficiency is that, when the data set grows large, 1-NN with DTW becomes computationally very expensive. In contrast to 1-NN with DTW, feature-based classification methods are more efficient but less effective, since their performance usually depends on the quality of hand-crafted features. In this paper, we aim to improve the performance of traditional feature-based approaches through feature learning techniques. Specifically, we propose a novel deep learning framework, multi-channels deep convolutional neural networks (MC-DCNN), for multivariate time series classification. This model first learns features from individual univariate time series in each channel, and combines information from all channels as the feature representation at the final layer. Then, the learnt features are fed into a multilayer perceptron (MLP) for classification. Finally, extensive experiments on real-world data sets show that our model is not only more efficient than the state of the art but also competitive in accuracy. This study implies that feature learning is worth investigating for the problem of time series classification.
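To make the architecture described in the abstract concrete, below is a minimal sketch of an MC-DCNN-style model written in PyTorch. It follows only the structure stated above (a separate convolutional feature extractor per univariate channel, concatenation of the per-channel features, and an MLP classifier on top); the layer sizes, kernel widths, filter counts, and the class names ChannelCNN and MCDCNN are illustrative assumptions, not the configuration or training procedure reported in the paper.

# Hypothetical sketch of an MC-DCNN-style model; hyperparameters are illustrative.
import torch
import torch.nn as nn


class ChannelCNN(nn.Module):
    """Feature extractor applied independently to one univariate channel."""

    def __init__(self, seq_len: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),   # filter stage 1
            nn.ReLU(),
            nn.MaxPool1d(2),                             # pooling stage 1
            nn.Conv1d(8, 4, kernel_size=5, padding=2),   # filter stage 2
            nn.ReLU(),
            nn.MaxPool1d(2),                             # pooling stage 2
        )
        self.out_dim = 4 * (seq_len // 4)                # flattened feature size

    def forward(self, x):                                # x: (batch, 1, seq_len)
        return self.features(x).flatten(start_dim=1)     # (batch, out_dim)


class MCDCNN(nn.Module):
    """Per-channel CNNs -> concatenated features -> MLP classifier."""

    def __init__(self, n_channels: int, seq_len: int, n_classes: int):
        super().__init__()
        self.channel_nets = nn.ModuleList(
            ChannelCNN(seq_len) for _ in range(n_channels)
        )
        feat_dim = sum(net.out_dim for net in self.channel_nets)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                                # x: (batch, n_channels, seq_len)
        feats = [net(x[:, i : i + 1, :]) for i, net in enumerate(self.channel_nets)]
        return self.mlp(torch.cat(feats, dim=1))         # class logits


# Example usage on a toy batch: 16 samples, 3 channels, length 128, 4 classes.
model = MCDCNN(n_channels=3, seq_len=128, n_classes=4)
logits = model(torch.randn(16, 3, 128))
print(logits.shape)  # torch.Size([16, 4])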
