Research on Graph-Based Semi-Supervised Learning Models and Classifier Design
Abstract
In machine learning, supervised and unsupervised learning are the two commonly used families of algorithms, but neither performs well on classification problems where labeling is difficult, so that labeled data are extremely scarce while unlabeled data are plentiful. Semi-supervised learning has recently been proposed for such problems and has attracted extensive research. By combining the strengths of the two traditional paradigms, it can construct a classifier from labeled and unlabeled data together, and it generally learns better than either traditional approach alone. This thesis presents an in-depth study of semi-supervised classification algorithms; the main work is as follows:
     First, the thesis analyzes typical semi-supervised learning algorithms and compares them with supervised ones, finding that a semi-supervised classifier's accuracy is closely tied to its model assumptions. Only when those assumptions match the true structure of the data can unlabeled data help improve accuracy; otherwise, unlabeled data may be useless or even harmful.
     Second, experiments with the label propagation (LP) algorithm show that randomly chosen training sets cause large fluctuations in its accuracy, which suggests that LP can be improved by actively selecting a better training set. In active learning, a classifier can, based on its current state, query the unlabeled data whose labels would most improve its performance. Borrowing this idea, the thesis proposes an active-learning-based LP algorithm (AL-LP) and studies both its model and its query-selection strategy, so that the algorithm dynamically selects the data that most reduce LP's current classification risk and thereby improves the quality of the training set. Experiments on UCI and other data sets show that, given the same number of training examples, AL-LP achieves higher accuracy than LP with a randomly selected training set. In these experiments we also observed that AL-LP tends to query data near cluster centers, which suggests that such data are well suited to serve as LP's training set.
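To make the two ingredients concrete, here is a minimal Python sketch of label propagation with an active query step. The RBF affinity, the symmetric normalization (the iteration F ← αSF + (1−α)Y), and the entropy-based query rule are illustrative assumptions for the sketch; in particular, the entropy criterion is a simple stand-in for the expected-risk-reduction strategy studied in the thesis, not its exact risk measure.

```python
import numpy as np

def rbf_graph(X, sigma=1.0):
    # Dense affinity matrix: W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def label_propagation(W, y, labeled, n_classes, alpha=0.99, iters=200):
    # Iterate F <- alpha * S F + (1 - alpha) * Y with S = D^{-1/2} W D^{-1/2},
    # a standard label propagation formulation; Y clamps the known labels.
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    Y = np.zeros((len(y), n_classes))
    Y[labeled, y[labeled]] = 1.0          # one-hot rows for the labeled points
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F / F.sum(axis=1, keepdims=True).clip(1e-12)

def query_most_uncertain(F, labeled):
    # Hypothetical query rule: pick the unlabeled point with the
    # highest-entropy label distribution (an assumption of this sketch,
    # standing in for the thesis's risk-reduction criterion).
    H = -(F * np.log(F + 1e-12)).sum(axis=1)
    H[labeled] = -np.inf                  # never re-query labeled points
    return int(np.argmax(H))

# Toy usage: two Gaussian blobs, one labeled point per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
labeled = np.array([0, 30])
F = label_propagation(rbf_graph(X, sigma=0.5), y, labeled, n_classes=2)
print("next query:", query_most_uncertain(F, labeled))
```

In an AL-LP-style loop one would alternate these steps: propagate labels, query the selected point's true label, add it to the labeled set, and repeat until the labeling budget is exhausted.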
     Third, graph-based semi-supervised classification must first construct a graph whose vertices are the data points and whose edge weights are pairwise similarities. Under this construction, the similarity function and its parameters are hard to control, and the number of nearest neighbors is hard to choose. Studying the locally linear embedding (LLE) algorithm, we find that it uses no similarity function when constructing linear neighborhoods, and that by estimating the local manifold one can judge whether a point lies near a class margin and adjust its number of neighbors dynamically, reducing connections between points of different classes and hence the probability of mis-propagated labels. Combining these two advantages, the thesis proposes an LP algorithm whose graph is built by LLE. Experiments on UCI and other data sets show that this graph is easier to use than traditional graphs, and that LP on this graph attains higher accuracy than typical LP algorithms.
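The sketch below illustrates the graph-construction idea under the same hedged terms: LLE-style reconstruction weights (after Roweis and Saul) replace the similarity function, so no bandwidth parameter needs to be tuned. A fixed neighbor count k is assumed here for brevity; the adaptive adjustment of k near class margins described above is omitted.

```python
import numpy as np

def lle_graph(X, k=5, reg=1e-3):
    # For each point x_i, solve for weights w minimizing
    # ||x_i - sum_j w_j x_j||^2 over its k nearest neighbors, subject to
    # sum_j w_j = 1; the solved weights become the edge weights.
    n = X.shape[0]
    W = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]              # skip the point itself
        Z = X[nbrs] - X[i]                             # neighbors centered on x_i
        G = Z @ Z.T                                    # local Gram matrix
        G += reg * (np.trace(G) + 1e-12) * np.eye(k)   # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                       # enforce sum-to-one constraint
    return W
```

The resulting row-stochastic W can then drive the propagation loop from the previous sketch in place of the RBF graph, symmetrized first if the chosen LP variant requires a symmetric affinity matrix.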
