Research on Semi-Supervised Support Vector Machine Learning Algorithms
Abstract
The support vector machine (SVM) is a machine learning method for small-sample problems, developed by Vapnik and others on the basis of statistical learning theory. Owing to its strong generalization ability and its convenient handling of high-dimensional data, it has been studied and applied ever more widely. Traditional supervised classification methods can solve many practical problems effectively, but they require large numbers of samples to be labeled by hand in order to obtain sufficient training data, which is costly and inefficient. Classification methods based on semi-supervised learning were therefore proposed: they classify mixed sets of labeled and unlabeled samples automatically (or semi-automatically), improving efficiency while widening the range of problems the algorithms can address. However, semi-supervised support vector machine (S3VM) learning is a comparatively young branch of machine learning; in many respects it remains immature and calls for further study and improvement. This thesis studies S3VM learning algorithms from three directions, namely two-class learning algorithms, benchmark learning algorithms, and multi-class learning algorithms, so as to bring out the strengths and potential of semi-supervised support vector machines.
First, to address the high training cost of S3VM learning, a semi-supervised learning algorithm based on the least squares support vector machine (LS-SVM) is proposed. The iterative procedure takes the LS-SVM as its learning model, exploiting the LS-SVM's fast training and high efficiency to speed up semi-supervised training. A region-labeling scheme then labels the unlabeled samples iteratively, raising labeling efficiency, and at each iteration the labeled and semi-labeled sample sets are trained together. Simulation results show that the algorithm reduces training time effectively.
Second, local optimization makes S3VM learning algorithms parameter-sensitive on a given data set and causes large differences among the optimal solutions they find, while benchmark algorithms based on global optimization have high time complexity. An improved branch-and-bound S3VM learning algorithm is therefore proposed. It redefines the lower bound of each node, taking the value of a pseudo-dual function as the bound; this avoids an expensive 0-1 quadratic program and lowers the time complexity of computing the bounds. At the same time, branching nodes are chosen according to the credibility of the unlabeled samples, which avoids repeated SVM training and speeds up the algorithm. Simulation analysis shows that, compared with other S3VM learning algorithms, it achieves higher accuracy, is less sensitive to parameters, and trains faster. The algorithm is further parallelized through cooperative training across multiple hosts, yielding a parallel branch-and-bound S3VM learning algorithm; simulations show that it attains good speedup and a clear improvement in training speed.
Finally, because labeled samples are scarce in semi-supervised learning, multi-class classification is hard to carry out and its accuracy is low. A semi-supervised support vector data description (SVDD) multi-class learning algorithm is therefore proposed. It defines a membership degree for non-target samples, from which each non-target sample receives an accept label or a reject label; on this basis, a semi-supervised SVDD learning algorithm constructs several hyperspheres, transforming a k-class problem into k one-class problems. Simulation results show that the algorithm effectively improves multi-class classification accuracy when labeled samples are few.
The support vector machine, a novel machine learning method aimed at small samples, was developed by Vapnik and others on the basis of statistical learning theory. It has recently been widely researched and applied because of its strong generalization ability and its convenience for high-dimensional data. Although the traditional classification methods based on supervised learning can solve many practical problems effectively, they must label masses of unlabeled data in order to obtain enough training samples, which makes them costly and inefficient. Classification methods based on semi-supervised learning have therefore been proposed to meet practical needs. These methods classify mixed sets of labeled and unlabeled data automatically (or semi-automatically), improving efficiency while expanding the scope of the algorithms. However, the semi-supervised support vector machine is a young theory within machine learning; in many respects it is not yet mature and needs further study and improvement. In this thesis, semi-supervised support vector machine algorithms are studied from three angles, two-class learning algorithms, benchmark learning algorithms, and multi-class learning algorithms, so as to exploit the strengths and potential of S3VMs fully.
First, a semi-supervised learning algorithm based on the least squares support vector machine (SLS-SVM) is proposed to address the heavy training cost of S3VM algorithms. SLS-SVM takes the LS-SVM as its learning model and combines it with semi-supervised learning, exploiting the LS-SVM's fast training speed and high efficiency. An area-labeling principle is used to label the unlabeled samples iteratively, and the algorithm is trained on both the labeled and the semi-labeled data in each iteration. Experiments on artificial and real data sets show that SLS-SVM's training accuracy is higher than that of the standard SVM training algorithm, which reflects, to a certain extent, the benefit of semi-supervised learning from unlabeled training samples.
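The LS-SVM model and the self-training loop described above can be sketched as follows. This is a minimal illustration, not the thesis's SLS-SVM: the RBF kernel, the regularization constant `gamma`, and the confidence test `|f(x)| >= tau` standing in for the area-labeling principle are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the row-sample sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # The LS-SVM replaces the SVM's QP with one linear system:
    # [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    # Omega_ij = y_i y_j K(x_i, x_j) -- this is the speed advantage
    # the SLS-SVM exploits.
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = np.outer(y, y) * rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], np.ones(n)]))
    return sol[1:], sol[0]          # alpha, b

def lssvm_decision(X_train, y, alpha, b, X_test, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ (alpha * y) + b

def sls_svm(X_lab, y_lab, X_unl, rounds=5, tau=0.8):
    # Self-training loop: each round trains an LS-SVM on the labeled
    # plus semi-labeled data, then moves the confidently scored
    # unlabeled points (|f(x)| >= tau) into the training set.
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        if len(X_unl) == 0:
            break
        alpha, b = lssvm_train(X, y)
        f = lssvm_decision(X, y, alpha, b, X_unl)
        conf = np.abs(f) >= tau
        if not conf.any():
            break
        X = np.vstack([X, X_unl[conf]])
        y = np.concatenate([y, np.sign(f[conf])])
        X_unl = X_unl[~conf]
    alpha, b = lssvm_train(X, y)
    return alpha, b, X, y
```

With two well-separated clusters and only three labeled points per class, the loop absorbs the confident unlabeled points and the final model separates both clusters.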
Second, an improved branch-and-bound learning algorithm for semi-supervised support vector machines (IBBS3VM) is proposed, motivated by the large differences in the optimal solutions that locally optimized S3VMs find on the same data set. The IBBS3VM algorithm redefines the lower bound of each node, taking a pseudo-dual function value as the bound; this avoids the expensive 0-1 quadratic programming and reduces the time complexity of computing the bound at each node. At the same time, branching nodes are determined solely by the credibility of the unlabeled samples, with no need to retrain support vector machines repeatedly, which raises the training speed of the algorithm. Simulation analysis shows that IBBS3VM trains faster than the BBS3VM algorithm and achieves higher precision and stronger robustness than other semi-supervised support vector machines. A parallel branch-and-bound semi-supervised support vector machine algorithm (PBBS3VM) is then presented to extend the scope of IBBS3VM; simulation results show that PBBS3VM attains good speedup and improved learning efficiency.
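The search skeleton behind a branch-and-bound S3VM can be sketched as follows. This is a deliberately simplified stand-in, not IBBS3VM: the node cost is a partial sum of hinge losses under a fixed decision function `f_vals` (a valid lower bound only because every remaining term is nonnegative), whereas the thesis uses a pseudo-dual function value. The branching order by `|f|`, mirroring sample credibility, is the one element taken directly from the description above.

```python
import heapq
import numpy as np

def hinge(z):
    return max(0.0, 1.0 - z)

def bb_label_unlabeled(f_vals):
    # Best-first branch and bound over {-1,+1} assignments for the
    # unlabeled points.  f_vals[i] is a fixed decision value f(x_i);
    # a node's cost is the hinge loss of the labels assigned so far,
    # which lower-bounds every completion of that node.
    n = len(f_vals)
    order = np.argsort(-np.abs(f_vals))   # most credible samples first
    best_cost, best_labels = np.inf, None
    heap = [(0.0, 0, ())]                 # (lower bound, depth, labels)
    while heap:
        lb, depth, labels = heapq.heappop(heap)
        if lb >= best_cost:
            continue                      # prune: bound beats nothing
        if depth == n:
            best_cost, best_labels = lb, labels
            continue
        i = order[depth]
        for y in (+1.0, -1.0):
            child = lb + hinge(y * f_vals[i])
            if child < best_cost:
                heapq.heappush(heap, (child, depth + 1, labels + (y,)))
    out = np.empty(n)
    out[order] = np.array(best_labels)    # undo the credibility ordering
    return out, best_cost
```

Because the bound never overestimates a node's completions, the first complete assignment popped from the heap is globally optimal; in a real S3VM the cost would come from retraining, which is exactly what the credibility-based branching rule is meant to avoid.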
Finally, a semi-supervised support vector data description multi-classification algorithm (S3VDD-MC) is presented, aimed at learning from little labeled data and at the difficulty and poor accuracy of semi-supervised multi-class classification; it makes full use of the distribution information in the non-target samples. The S3VDD-MC algorithm defines a degree of membership for the non-target samples in order to obtain their accepted or refused labels; on this basis, several hyperspheres are constructed, and a k-classification problem is transformed into k SVDD problems. The simulation results verify the effectiveness of the algorithm.
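The hypersphere-per-class decision scheme can be sketched as follows. This is a crude illustration in the spirit of S3VDD-MC, not the thesis's algorithm: each class's data description is a centroid plus covering-radius sphere rather than the real SVDD quadratic program, and the membership function (1 at the center, 0 on the boundary, negative outside) is an assumed stand-in for the thesis's membership degree.

```python
import numpy as np

class SphereMultiClassifier:
    """One data-description sphere per class; a k-class decision is
    the argmax over k one-class memberships."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centers_, self.radii_ = [], []
        for c in self.classes_:
            Xc = X[y == c]
            center = Xc.mean(axis=0)
            # Covering radius: smallest sphere around the centroid
            # that contains all target samples of class c.
            radius = np.linalg.norm(Xc - center, axis=1).max()
            self.centers_.append(center)
            self.radii_.append(radius)
        return self

    def membership(self, X):
        # Membership of each sample in each class's sphere: 1 at the
        # center, 0 on the boundary, negative outside (reject label).
        d = np.stack([np.linalg.norm(X - c, axis=1) / r
                      for c, r in zip(self.centers_, self.radii_)], axis=1)
        return 1.0 - d

    def predict(self, X):
        return self.classes_[self.membership(X).argmax(axis=1)]
```

A sample falling outside every sphere has all memberships negative, which is where the accept/reject labeling of non-target samples would enter in the full semi-supervised algorithm.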
