Kernel Selection for Support Vector Machines
Abstract
The Support Vector Machine (SVM), proposed by Vapnik and his co-workers, offers strong generalization ability, freedom from local minima, and powerful nonlinear processing capability. Over the past decade it has developed rapidly on all fronts, has been applied successfully in many domains, and has become one of the most active research areas in pattern recognition.
     At present, choosing a suitable kernel function and its parameters (kernel selection) has become both a key point and a major difficulty in the further development of SVMs. The kernel function determines the nonlinear processing ability of an SVM as well as the construction of the separating function, yet for a concrete problem the selection of a suitable kernel and its parameters still faces many practical difficulties.
     Focusing on kernel selection in SVMs, this thesis investigates the SVM model itself, the structure of linear separability in the feature space, the choice of base kernels in kernel learning, and evaluation criteria for kernel functions and their parameters. The main contributions are as follows (a short illustrative sketch for each item follows the list):
     1. On SVM modeling, the bisecting-closest-points principle under the L2-norm is introduced. The relation between its solution and that of the maximum-margin principle is then derived, establishing the equivalence of the two principles. The new formulation is shown to have better model properties, a more intuitive geometric meaning, and the option of reusing existing algorithms for computing the distance between convex hulls. Finally, an SMO (Sequential Minimal Optimization) algorithm for solving it is presented.
     2. On the structure of linear separability in the feature space, using the bisecting-closest-points model and a detailed analysis of the null space of the kernel matrix, a necessary and sufficient condition is obtained relating linear separability of the samples in the feature space to the null space of the kernel matrix.
     3. On the selection of base-kernel matrices, the concept of the rank space diversity (RSD) of matrices is first proposed and adopted as a diversity measure for base-kernel matrices, leading to a quantitative selection rule: the rank space diversity of the base-kernel matrices should be as large as possible. A kernel-learning model based on the bisecting-closest-points principle under the L2-norm is then given, together with a solution algorithm, and experiments verify the effectiveness of the selection rule.
     4. On evaluation criteria for kernel functions and their parameters, starting from the generalization behavior of a separating function under sample perturbations, the shortcomings of the traditional maximum-margin principle are analyzed and the concept of the robustness of a separating function is proposed. Its properties are explored, and maximum robustness is put forward as a criterion for kernel selection. Experimental comparisons with classical k-fold cross-validation and with the minimum-number-of-support-vectors method show that the maximum-robustness criterion avoids both the high time cost of cross-validation and the unstable test accuracy of the minimum-support-vector method, achieving very good results.
     5. On kernel learning, a method is proposed in which the base kernels are designed per attribute and the robustness of the separating function is maximized as the optimization objective. The gradient formula of the robustness measure and a solution algorithm for the model are given, and experiments demonstrate the effectiveness and advantages of the method.
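     To make contribution 1 concrete, the classical geometric duality behind it (cf. Bennett and Bredensteiner) can be stated as follows; this is the standard hard-margin form, of which the thesis studies an L2-norm variant:

        \min_{w,b}\ \tfrac{1}{2}\|w\|^2 \quad \text{s.t.}\quad y_i(\langle w, x_i\rangle + b) \ge 1,\quad i = 1,\dots,n

        \min_{\alpha \ge 0}\ \Big\|\sum_{i:\,y_i=+1}\alpha_i x_i - \sum_{j:\,y_j=-1}\alpha_j x_j\Big\|^2 \quad \text{s.t.}\quad \sum_{i:\,y_i=+1}\alpha_i = \sum_{j:\,y_j=-1}\alpha_j = 1

     The two problems are equivalent for linearly separable data: the optimal hyperplane perpendicularly bisects the segment joining the closest points of the two convex hulls, which is exactly the geometric picture behind the bisecting-closest-points principle.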
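     Contribution 2 turns on the null space of the kernel (Gram) matrix. The following minimal Python sketch shows how that null space can be inspected numerically, assuming an RBF kernel and a singular-value tolerance; it illustrates the object under study, not the thesis's actual separability condition:

        import numpy as np

        def rbf_gram(X, gamma=1.0):
            # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
            sq = np.sum(X ** 2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
            return np.exp(-gamma * np.maximum(d2, 0.0))

        def null_space_dim(K, tol=1e-10):
            # Estimate dim(null(K)) by counting near-zero singular values.
            s = np.linalg.svd(K, compute_uv=False)
            return int(np.sum(s < tol * s.max()))

        X = np.random.default_rng(0).normal(size=(20, 5))
        K = rbf_gram(X, gamma=0.5)
        print(null_space_dim(K))  # 0: an RBF Gram matrix on distinct points has full rank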
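     In contribution 3, the base-kernel matrices enter through the standard multiple-kernel-learning construction, in which the learned kernel is a nonnegative combination of base kernels; the RSD rule concerns how the base kernels K_k themselves should be chosen:

        K(x, x') = \sum_{k=1}^{m} \mu_k\, K_k(x, x'), \qquad \mu_k \ge 0,\ k = 1,\dots,m

     Since each K_k is positive semidefinite, any such combination is again a valid kernel, so the learning problem reduces to choosing the weights \mu_k.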
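     Contribution 4 compares against classical k-fold cross-validation for tuning kernel parameters. A minimal scikit-learn sketch of that baseline follows; the dataset and parameter grid are illustrative, not those used in the thesis:

        # k-fold cross-validation baseline for kernel/parameter selection.
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)
        grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
        search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)  # 5-fold CV over the grid
        search.fit(X, y)
        print(search.best_params_, search.best_score_)

     The grid search trains one SVM per parameter pair and fold, which is the high time cost that the maximum-robustness criterion is designed to avoid.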
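     For contribution 5, a common way to realize "one base kernel per attribute" is to apply a one-dimensional kernel to each feature column and then combine the results. The sketch below assumes RBF base kernels and uniform initial weights; the robustness objective and its gradient are not reproduced here:

        import numpy as np

        def per_attribute_kernels(X, gamma=1.0):
            # One RBF base kernel per attribute (feature column).
            Ks = []
            for j in range(X.shape[1]):
                col = X[:, j]
                d2 = (col[:, None] - col[None, :]) ** 2
                Ks.append(np.exp(-gamma * d2))
            return Ks

        def combine(Ks, mu):
            # Nonnegative combination sum_k mu_k * K_k (mu >= 0 assumed).
            return sum(m * K for m, K in zip(mu, Ks))

        X = np.random.default_rng(1).normal(size=(10, 4))
        Ks = per_attribute_kernels(X)
        mu = np.ones(len(Ks)) / len(Ks)  # uniform starting weights
        K = combine(Ks, mu)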
