Research on Optimizing the Random Forest Algorithm
Abstract
The random forest algorithm (Random Forests) is an ensemble classifier grounded in statistical learning theory that combines bootstrap resampling with decision-tree induction. In essence, it builds a collection of tree classifiers $\{h_k(\mathbf{x}),\ k = 1, 2, \dots\}$ and then classifies and predicts by majority vote over that collection. Because the algorithm largely overcomes the performance ceiling of a single classifier, it performs well and is applicable to a wide range of classification, screening, and prediction tasks. It still has room for improvement, however, and research has concentrated on three directions: introducing new algorithms, integrating data preprocessing into the algorithm, and optimizing the forest-construction process itself. Building on a thorough review of the domestic and international literature, this thesis carries out optimization research on the latter two directions.
     I. In terms of data preprocessing, two optimized algorithms are proposed to improve random forests.
     First, to address random forests' poor handling of imbalanced data, this thesis draws on clustering ideas and the physical notion of a center of gravity and proposes the C_SMOTE algorithm, which substantially reduces the imbalance of a data set and thereby improves the classification performance of random forests. The SMOTE algorithm selects its "synthetic" samples somewhat blindly and tends to push them toward the class boundary. C_SMOTE instead starts from the center of gravity of the negative-class (minority) samples and constructs synthetic samples purposefully, so that newly generated samples tend to converge toward that center. This remedies SMOTE's defects: the information in the original data set is preserved while the class imbalance is largely resolved, which considerably improves the classification performance of random forests on imbalanced data sets.
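The abstract describes C_SMOTE only at this level of detail. The following is a minimal, hypothetical sketch of the centroid-directed oversampling idea: the function name `c_smote`, the choice of interpolating between a minority sample and the class centroid, and the uniform random interpolation factor are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

def c_smote(X_min, n_new, rng=None):
    """Centroid-directed oversampling (assumed reading of C_SMOTE).

    X_min : (n, d) array of minority-class samples.
    n_new : number of synthetic samples to create.
    Each synthetic point lies on the segment between a randomly chosen
    minority sample and the class centroid, so new points drift toward
    the centroid rather than toward the class boundary (the defect the
    thesis attributes to SMOTE).
    """
    rng = np.random.default_rng(rng)
    centroid = X_min.mean(axis=0)              # "center of gravity" of the class
    idx = rng.integers(0, len(X_min), size=n_new)
    lam = rng.random((n_new, 1))               # random position on the segment
    return X_min[idx] + lam * (centroid - X_min[idx])

# Usage: enlarge a toy minority class.
X_min = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.0]])
X_syn = c_smote(X_min, n_new=5, rng=0)
```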
     Second, random forests commonly use the C4.5 algorithm for node splitting, but when handling continuous variables C4.5 relies on binary discretization, whose running time depends on the number of distinct values a continuous variable takes: the more values, the longer the random forest takes to execute. To address this, the thesis proposes a new algorithm that reduces the number of distinct values of continuous variables, supplying C4.5 with a compact data set and thus raising its execution efficiency. The new algorithm builds on the Chi2 family of algorithms but targets a problem they ignore, namely the deviation between the χ² statistic and its true value, which it handles with a χ² correction formula. Using three common UCI data sets, the thesis compares the new algorithm against Chi2-family algorithms that do not correct this deviation, in terms of how much each improves random forest performance. The empirical results show that, compared with the Chi2 family, the new algorithm prunes redundant information from the data set more effectively and greatly reduces the number of distinct continuous values, thereby improving the execution efficiency of the random forest algorithm.
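The abstract names a χ² correction formula but does not reproduce it. For orientation: the Chi2 family decides whether to merge two adjacent intervals using the standard statistic below, and one common way to correct the deviation between the statistic and its true value, assuming a Yates-style continuity correction is what the thesis applies, is the second form; the thesis's exact formula appears in the full text.

$$
\chi^2=\sum_{i=1}^{2}\sum_{j=1}^{k}\frac{(A_{ij}-E_{ij})^2}{E_{ij}},
\qquad
E_{ij}=\frac{R_i\,C_j}{N},
\qquad
\chi^2_{\mathrm{corr}}=\sum_{i=1}^{2}\sum_{j=1}^{k}\frac{\bigl(\lvert A_{ij}-E_{ij}\rvert-0.5\bigr)^2}{E_{ij}},
$$

where $A_{ij}$ is the number of class-$j$ samples in adjacent interval $i$, $R_i$ and $C_j$ are the corresponding row and column totals, $N$ is the total number of samples in the two intervals, and $k$ is the number of classes.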
     II. In terms of optimizing the construction of the forest itself, the thesis analyzes the factors that influence random forest classification performance and, observing that different node-splitting algorithms yield different classification performance during forest generation, proposes a hybrid node-splitting algorithm based on a linear combination. The algorithm linearly combines the node-splitting criteria of C4.5 and CART; by varying the coefficients of the combined function it exploits the strengths of both algorithms and effectively optimizes the classification performance of random forests. The stability, correlation, and strength of the hybrid algorithm are also analyzed in detail. First, an F statistic is constructed for an analysis of variance to test the stability of the hybrid algorithm. The statistical results show that although the hybrid random forest exhibits some instability as the number of trees varies, the algorithm reaches a stable state once the forest contains 800 trees. The correlation and strength of the hybrid algorithm are then derived and discussed theoretically, and the mean correlation and mean strength of the forest are computed. Empirical analysis verifies that mean correlation is negatively related to classification accuracy while mean strength is positively related to it, and that the hybrid algorithm has a clear advantage over other algorithms in raising the forest's mean strength and lowering its mean correlation, confirming its superiority from another angle.
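The abstract does not state the exact combined criterion. Below is a minimal sketch, assuming the hybrid score for a candidate binary split is the convex combination score = α · GainRatio + (1 − α) · GiniDecrease; the weight α, the helper names, and the normalization are illustrative assumptions rather than the thesis's formulation.

```python
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def hybrid_split_score(y, y_left, y_right, alpha=0.5):
    """Linear combination of C4.5's gain ratio and CART's Gini decrease."""
    if len(y_left) == 0 or len(y_right) == 0:
        return 0.0                              # degenerate split: no score
    n = len(y)
    wl, wr = len(y_left) / n, len(y_right) / n
    gain = entropy(y) - (wl * entropy(y_left) + wr * entropy(y_right))
    split_info = -(wl * np.log2(wl) + wr * np.log2(wr))
    gain_ratio = gain / split_info              # C4.5 criterion
    gini_decrease = gini(y) - (wl * gini(y_left) + wr * gini(y_right))  # CART
    return alpha * gain_ratio + (1.0 - alpha) * gini_decrease

# Usage: score one candidate split of a toy label vector.
y = np.array([0, 0, 1, 1, 1, 0])
print(hybrid_split_score(y, y[:3], y[3:], alpha=0.6))
```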
     In the practical application of selecting a pool of high-quality stocks, the data set contains a large number of continuous variables and the application places strict demands on classification accuracy. The optimized random forest algorithms proposed in this thesis handle continuous variables well and raise the classification accuracy of the forest. Building on a stock-selection indicator system for a value-growth investment strategy, the thesis preprocesses the data with wavelet analysis and the COR_CHI2 algorithm; a random forest built with the hybrid node-splitting algorithm then successfully selects the pool of high-quality stocks, providing statistical support for investors constructing targeted portfolios.
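As an illustration only, the pipeline described above might be approximated with off-the-shelf components: PyWavelets for the wavelet step and scikit-learn's RandomForestClassifier standing in for the thesis's hybrid-split forest. The COR_CHI2 discretization step is omitted here, and all data and parameter choices are placeholders; the 800-tree setting echoes the stability result reported above.

```python
import numpy as np
import pywt                                     # PyWavelets, assumed installed
from sklearn.ensemble import RandomForestClassifier

def wavelet_denoise(series, wavelet="db4", level=2):
    """Soft-threshold wavelet denoising; the thesis's exact wavelet
    preprocessing is not specified in the abstract."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(series)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

# Placeholder indicator matrix (rows: stocks, columns: indicators) and
# placeholder "high quality" labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)

X_smooth = np.column_stack([wavelet_denoise(X[:, j]) for j in range(X.shape[1])])
clf = RandomForestClassifier(n_estimators=800, random_state=0).fit(X_smooth, y)
print(clf.score(X_smooth, y))
```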
