Research on Support Vector Regression Algorithms and Their Applications
Abstract
Over the past few decades, many algorithms in pattern recognition and machine learning have been troubled by overfitting, local minima, and excessively large training sets. The support vector machine (SVM), founded on statistical learning theory, overcomes these problems to a considerable extent, and the introduction of the ε-insensitive loss function extended the SVM successfully to regression. Most existing SVM research, however, targets classification, and many classification algorithms cannot be applied directly to regression. This thesis studies support vector regression (SVR) in depth; the main contributions are as follows:
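For context, the ε-insensitive loss mentioned above, and the standard ε-SVR primal problem it induces, can be written in textbook form (this is the common formulation, not one specific to this thesis):

$$ L_\varepsilon\bigl(y, f(x)\bigr) = \max\bigl(0,\ |y - f(x)| - \varepsilon\bigr), $$

$$ \min_{w,\,b,\,\xi,\,\xi^*}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{\ell}\bigl(\xi_i + \xi_i^*\bigr)
\quad\text{s.t.}\quad
\begin{cases}
 y_i - w^\top\phi(x_i) - b \le \varepsilon + \xi_i,\\[2pt]
 w^\top\phi(x_i) + b - y_i \le \varepsilon + \xi_i^*,\\[2pt]
 \xi_i,\ \xi_i^* \ge 0,\qquad i = 1,\dots,\ell .
\end{cases} $$

Only deviations larger than ε are penalized, which is what allows the SVM machinery to be carried over from classification to regression.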
1. Improved algorithms based on twin support vector regression (Twin SVR, TSVR)
Improvements to regression algorithms aim at either higher efficiency or better performance. TSVR obtains two non-parallel planes by solving two SVM-type problems, converting one large optimization problem with two groups of constraints into two smaller problems, each with one group of constraints, and thereby shortening training time. Building on TSVR, this thesis proposes three improved algorithms. ① The original TSVR formulation contains no regularization term; adding one implements the structural risk minimization principle, and tuning the regularization parameter allows the regression function to fit the data better. The resulting dual problems can be solved with the SOR algorithm, which is much faster than TSVR; alternatively, a gradient method can be applied directly in the primal space without forming the dual. ② Following the idea of the proximal support vector machine, the inequality constraints are replaced by equality constraints, and the original linear penalty on the slack variables in each of the two small quadratic programming problems is replaced by a combination of linear and quadratic terms, which fits the data better; the resulting optimization problems are solved with a penalty function method. ③ The linear penalty on the slack variables in the TSVR optimization problems is replaced by a quadratic penalty, so the dual problems become optimization problems with only non-negativity constraints that can be solved by an iterative algorithm. The iteration converges quickly from any starting point and avoids solving quadratic programs, which significantly raises training speed. Numerical experiments on artificial and benchmark data sets show the effectiveness of the proposed algorithms.
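As a sketch of improvement ①, the first of the two TSVR problems with an added regularization term might take the following form, written for the down-bound regressor in the notation common in the TSVR literature (the exact formulation and parameter names in the thesis may differ):

$$ \min_{w_1,\,b_1,\,\xi}\ \frac{c_2}{2}\bigl(\|w_1\|^2 + b_1^2\bigr) + \frac{1}{2}\bigl\|Y - e\varepsilon_1 - (A w_1 + e b_1)\bigr\|^2 + c_1\, e^\top \xi
\quad\text{s.t.}\quad Y - (A w_1 + e b_1) \ge e\varepsilon_1 - \xi,\ \ \xi \ge 0, $$

where A holds the training inputs row-wise, Y is the target vector, and e is a vector of ones. The up-bound regressor (w_2, b_2) comes from the symmetric problem, and the final estimate is the mean of the two bounding functions; the first term is the added regularization that realizes structural risk minimization.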
2. Incremental support vector regression algorithms
When training samples arrive in batches or the training set is too large, traditional learning algorithms no longer apply and incremental learning is needed. Based on Lagrangian support vector regression (LSVR), two incremental regression algorithms are proposed: an online incremental algorithm that adds one new sample at a time and a batch incremental algorithm that adds several new samples at a time. The unconstrained optimization problem of LSVR can be solved by a fast iterative algorithm in which only a matrix inversion is required. In the two incremental algorithms proposed here, the matrix inverse needed during incremental training is computed from the inverse obtained in the previous step, so historical learning results are fully reused, many repeated computations are avoided, the inversion after an increment is greatly simplified, and the running time is reduced. Comparisons on several data sets show that, relative to earlier algorithms, the proposed algorithms are faster while retaining good fitting accuracy.
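A minimal sketch of how a previously computed inverse can be reused when one new sample arrives, using the standard block matrix inversion identity; the actual update formula, matrix structure, and parameterization in the proposed LSVR increments may differ, and all names below are illustrative only.

```python
import numpy as np

def block_inverse_update(K_inv, b, d):
    """Given the inverse of an n-by-n matrix K, return the inverse of the
    (n+1)-by-(n+1) matrix [[K, b], [b.T, d]] without re-inverting from scratch.
    This is the block matrix inversion identity, shown only to illustrate how
    a previously computed inverse can be reused for an online increment."""
    b = b.reshape(-1, 1)
    s = float(d - b.T @ K_inv @ b)              # Schur complement (a scalar here)
    u = K_inv @ b
    top_left = K_inv + (u @ u.T) / s
    top_right = -u / s
    return np.block([[top_left, top_right],
                     [top_right.T, np.array([[1.0 / s]])]])

# Usage: verify against direct inversion on a small positive definite matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
K = X @ X.T + np.eye(5)                          # "kernel-like" matrix
K_inv = np.linalg.inv(K[:4, :4])                 # inverse before the new sample
updated = block_inverse_update(K_inv, K[:4, 4], K[4, 4])
print(np.allclose(updated, np.linalg.inv(K)))    # True
```

A batch increment of several samples follows the same identity with b replaced by a block of columns and s by a small Schur-complement matrix.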
3. A Newton algorithm for Lagrangian support vector regression
LSVR is a fast regression algorithm that avoids solving quadratic programs, but it requires many iterations before termination. A finite Newton iteration with Armijo step sizes (NLSVR) is used to solve the LSVR optimization problem; only a system of linear equations has to be solved a finite number of times. The algorithm is globally convergent and terminates in a finite number of steps. Experiments on several synthetic and benchmark data sets show that the proposed algorithm is effective and fast.
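A generic sketch of a Newton iteration with the Armijo step-size rule named above, written for a smooth unconstrained convex objective; the real NLSVR objective is only piecewise quadratic, so the thesis would work with a generalized Hessian there, and all names below are illustrative assumptions.

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=50):
    """Newton iteration with Armijo backtracking for an unconstrained smooth
    convex objective (a sketch of the step-size rule, not the NLSVR solver)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)              # Newton direction
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta                                  # Armijo backtracking
        x = x + t * d
    return x

# Usage on a toy least-squares objective f(x) = ||Ax - b||^2 / 2.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
hess = lambda x: A.T @ A
print(newton_armijo(f, grad, hess, np.zeros(2)))       # ≈ [0.2, 0.6]
```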
4. A robust weighted support vector regression algorithm in the primal space and its application to stock price prediction
Outliers in the data seriously degrade regression performance, so they need to be removed. This thesis proposes a robust regression algorithm that removes outliers deviating from the model through a soft rejection technique implemented by weighting: the further a sample deviates from the model, the smaller the weight of its loss term and the smaller its influence on the estimation of the model parameters. Support vector regression problems are usually solved in the dual space, but they can also be solved efficiently in the primal space; the weighted support vector regression problem obtained here is solved in the primal space with a recursive finite Newton method. Numerical experiments, including stock price prediction, confirm the effectiveness of the algorithm.
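To make the soft rejection idea concrete, here is a hypothetical weighting rule that keeps full weight for samples with small residuals and shrinks the weight as the residual grows; it only illustrates the idea described above and is not the weight function used in the thesis.

```python
import numpy as np

def soft_rejection_weights(residuals, c=2.5):
    """Hypothetical soft-rejection weights: a sample keeps weight 1 while its
    residual stays within c robust standard deviations and is down-weighted
    beyond that, so far-off outliers contribute little to the fit."""
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = 1.4826 * np.median(np.abs(r - med)) + 1e-12    # robust scale estimate
    z = np.abs(r - med) / mad
    w = np.ones_like(z)
    w[z > c] = c / z[z > c]                               # weight decays with residual
    return w

# Example: the last residual is a gross outlier and receives a very small weight.
print(soft_rejection_weights([0.1, -0.2, 0.05, 0.15, 5.0]))
```

In a weighted SVR the loss of sample i is simply multiplied by w_i, so heavily down-weighted samples barely affect the primal Newton updates.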
In the past few decades, many pattern recognition and machine learning algorithms have been troubled by overfitting, local minima, and excessively large training sets. Based on statistical learning theory, the support vector machine (SVM) partly overcomes these problems, and it was successfully applied to regression through the introduction of the ε-insensitive loss function. Most SVM research addresses classification problems, and many classification methods cannot be used directly to solve regression problems. This thesis focuses on support vector regression (SVR); the main contents are as follows:
     1. Improved twin support vector regression algorithms.
Twin SVR (TSVR) converts one large quadratic programming problem (QPP) with two groups of constraints into two smaller QPPs, each with one group of constraints, which shortens training time. This thesis proposes three modified algorithms based on TSVR. ① We add a regularization term to the QPPs of TSVR and thereby implement the structural risk minimization principle; tuning the regularization parameter lets the regression function fit the data better. The SOR algorithm is used to solve the dual problems of the regularized TSVR, and a gradient algorithm can also solve its QPPs directly in the primal space. ② We change the inequality constraints of the QPPs of TSVR to equality constraints, following the technique used in the proximal support vector machine, add the unit-norm constraint, and replace the original linear penalty on the slack variables with a combination of linear and quadratic terms; the resulting problems are solved with a penalty function approach. ③ We change the linear penalty on the slack variables to a quadratic penalty, so the small QPPs of TSVR can be solved by an iterative algorithm that converges from any starting point and needs no quadratic optimization package; the algorithm is therefore very fast. Experiments on artificial and benchmark data sets show that the proposed methods are competitive with previously published methods.
     2. Incremental support vector regression
Incremental learning algorithms are needed when the training samples are too numerous or arrive gradually. This thesis proposes two incremental regression algorithms based on Lagrangian support vector regression (LSVR): online incremental learning, which adds one new sample at a time, and batch incremental learning, which adds several at once. LSVR leads to the minimization of an unconstrained differentiable convex program solved by an iterative algorithm with simple linear convergence; the iteration converges from any starting point and needs no quadratic optimization package. LSVR has the advantage that its solution is obtained by inverting, at the start of the iteration, a matrix whose order equals the number of input samples. The proposed algorithms reuse previously computed information, so this inversion does not have to be repeated from scratch. The effectiveness of the proposed methods is illustrated on several UCI data sets; the experiments show that they are competitive with previously published methods.
3. Finite Newton method for Lagrangian support vector regression
Lagrangian support vector regression (LSVR) is an effective algorithm, but it needs many iterations to converge from a starting point. We use a finite Armijo-Newton algorithm to solve the LSVR optimization problem: the solution is obtained by solving a system of linear equations a finite number of times rather than by solving a quadratic optimization problem. The method has global convergence and finite termination properties. Experimental results on several synthetic and benchmark data sets indicate that the proposed NLSVR is fast and generalizes well.
     4. Primal weighted support vector regression.
We propose a robust weighted regression algorithm that eliminates outliers through a weighting approach: the further a sample deviates from the model, the smaller the weight of its loss term and the smaller its effect on the estimation of the model parameters. The weighted support vector regression problem is solved in the primal space using an iterative Newton algorithm. Experiments on artificial data, benchmark data sets, and stock price prediction show that the proposed method is competitive with previously published methods.