Predicting Financial Time Series Based on Gated Recurrent Unit Neural Network
  • Title (Chinese): 基于门控循环单元神经网络的金融时间序列预测
  • Authors: ZHANG Jinlei; LUO Yuling; FU Qiang (College of Electronic Engineering, Guangxi Normal University)
  • Keywords: recurrent neural network; gated recurrent unit (GRU); differencing operation; financial time series prediction; deep learning
  • Journal: Journal of Guangxi Normal University (Natural Science Edition) (广西师范大学学报(自然科学版); database code GXSF)
  • Publication date: 2019-04-25
  • Year/Volume/Issue: 2019, Vol. 37, No. 02
  • Pages: 86-93 (8 pages)
  • CN: 45-1067/N
  • Record number: GXSF201902010
  • Funding: National Natural Science Foundation of China (61603104); Guangxi Natural Science Foundation (2015GXNSFFBA139256, 2016GXNSFCA380017, 2017GXNSFAA198180); Innovation Project of Guangxi Graduate Education (XYCSZ2018080)
  • Language: Chinese
Abstract
To address the long-term dependency problem of recurrent neural networks (RNN), the gated recurrent unit (GRU) neural network was proposed as a variant of the RNN. While inheriting the RNN's strong ability to memorize time series, the GRU overcomes the long-term dependency problem. Targeting the dependencies present in financial time series data, this paper extends the GRU to financial time series prediction and proposes a prediction model that combines a differencing operation with a GRU neural network. The model can handle complex characteristics of financial time series data such as non-linearity, non-stationarity, and serial correlation. It is evaluated by predicting the adjusted closing price of the Standard & Poor's (S&P) 500 stock index. Experimental results show that the differencing operation improves the generalization ability and prediction accuracy of the GRU neural network, and that the proposed model outperforms conventional prediction models on financial time series while incurring relatively low computational overhead.
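The abstract credits the GRU's gating mechanism with overcoming the long-term dependency problem. For reference, the standard GRU formulation (the widely used one from the literature, not reproduced from this paper) computes, at each time step t, an update gate, a reset gate, a candidate state, and the new hidden state:

    z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)
    r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)
    \tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)
    h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t

The update gate z_t lets the network carry h_{t-1} forward almost unchanged when appropriate, which is what mitigates the vanishing-gradient form of the long-term dependency problem. The differencing operation placed in front of the GRU is presumably the usual first difference d_t = x_t - x_{t-1}, which removes the trend and brings the price series closer to stationarity; the forecast is then inverted via \hat{x}_{t+1} = x_t + \hat{d}_{t+1}.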
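As a concrete illustration of the pipeline the abstract describes (difference the series, train a GRU on the differenced values, invert the differencing to recover a price forecast), below is a minimal Python/Keras sketch. The synthetic price series, lookback window, and layer sizes are assumptions made for illustration; the paper's actual architecture and hyperparameters are not given in this record.

# Minimal sketch of the differencing + GRU pipeline described in the abstract.
# The synthetic data, window length, and layer sizes are illustrative
# assumptions, not the authors' actual configuration.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback):
    """Slice a 1-D series into (samples, lookback, 1) inputs and next-step targets."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., np.newaxis], y

# Hypothetical stand-in for S&P 500 adjusted close prices.
prices = 2000.0 + np.cumsum(np.random.randn(1000))

# First-order differencing removes the trend, bringing the series
# closer to stationarity before it is fed to the GRU.
diffed = np.diff(prices)

lookback = 20
X, y = make_windows(diffed, lookback)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(lookback, 1)),
    tf.keras.layers.GRU(32),          # gated recurrent layer
    tf.keras.layers.Dropout(0.2),     # regularization against overfitting
    tf.keras.layers.Dense(1),         # predicts the next differenced value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# Predict the next difference, then invert the differencing to get a price.
next_diff = model.predict(diffed[-lookback:].reshape(1, lookback, 1), verbose=0)
next_price = prices[-1] + float(next_diff[0, 0])
print(next_price)

Training on differences rather than raw prices is what the abstract credits with improving generalization and accuracy; the final lines convert the predicted difference back into the price scale.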
