基于最小化训练误差的子空间分类算法研究 (Research on Subspace Classification Algorithms Based on Minimizing the Training Error)
摘要
子空间方法是模式识别领域一个重要的研究方向,很多年来一直受到该领域学者们的广泛关注。Fisher线性判别分析方法(Fisher Linear Discriminant Analysis,FLD或LDA)及以其为代表的其他一些子空间分类方法,在分类问题中有着突出的作用。然而,这些子空间算法也存在一定的缺陷。其中最主要的问题是,大部分传统子空间算法的特征提取准则并不与训练误差直接相关联,而是根据某种准则由样本数据分布(通常假设为高斯分布)的统计特征得出。所以当统计准则不能正确反映样本分布情况时,算法往往会失效。这个问题导致传统子空间算法应用于某些数据分布较为复杂的情形时,难以取得理想的效果。本文所提出的方法正是围绕这个问题而展开的。
     本文第3章首先指出,传统的LDA方法由于其固有的缺陷,在处理多分类问题时,即使各类数据都满足高斯同方差分布,也可能无法找到最优分类子空间。接着通过分析数据样本分布与LDA算法得到的投影向量之间的关系,讨论了LDA投影向量与类间散布矩阵和类内散布矩阵特征值之间存在的关联,并以此提出一种基于遗传算法的LDA算法。该算法以子空间上的训练误差最小为目标,通过遗传算法调整LDA算法中类间矩阵特征值的大小,达到搜索最佳特征子空间的效果。通过模拟数据和真实数据的实验,表明这种方法的分类正确率比现有的线性子空间方法有所提高。
     集成学习理论中的AdaBoost(Adaptive Boosting)算法是一类以最小训练误差为准则构建分类器的学习算法。本文在第4章中通过结合AdaBoost算法与LDA子空间方法提出了基于提升自举LDA投影的特征提取算法,完成两类问题中的特征提取与组合。AdaBoost算法是一种将若干分类性能仅好于随机猜测的弱分类器提升为强分类器的算法框架,要求各弱分类器具有较大的差异性和不稳定性。所以,本文提出的算法首先借助Bagging(Bootstrap Aggregating)算法中的自举采样(Bootstrap Sampling)原理对训练样本进行随机抽样形成若干训练样本自举子集,再通过结合LDA算法和最近邻分类器由这些自举子集得出若干弱分类器,并由AdaBoost算法提升为强分类器。该算法克服了传统子空间方法特征提取准则不与训练误差相关联的弱点,生成的分类器有较好的泛化性能,能够很好地解决数据分布复杂的分类问题。文章通过复杂分布的两类问题实验证明了该算法的可行性和优越性。
     由于多类问题的研究,特别是人脸识别问题,具有更加广泛的应用价值,本文第5章在第4章的基础上,借助AdaBoost.M2算法与LDA子空间方法的结合将以上算法推广到多类问题中,提出了基于提升自举LDA子空间的分类算法。第5章通过改善的自举采样方法,使AdaBoost.M2算法在原有基础上更注重难分样本的分类,同时兼顾弱分类器的多样性,从而更好地提升和组合基于LDA子空间的弱分类器。通过手写数字图像和人脸图像识别的实验,比较了该算法与传统子空间方法及其他基于集成学习的分类算法的性能,证明了该算法的效果达到或超越了其他算法。
Subspace methods are a significant research direction in pattern recognition and have attracted wide attention from scholars in the field for many years. Fisher Linear Discriminant Analysis (FLD, or LDA) and related subspace methods play an outstanding role in classification problems. However, these subspace methods have shortcomings. The main one is that traditional subspace methods such as LDA do not relate their feature-extraction criteria directly to the training error; instead, the criteria are derived from statistical features of the training-data distribution, which is generally assumed to be Gaussian. Thus, when those statistical features fail to reflect the actual data distribution, the methods are likely to fail. As a result, traditional subspace methods often perform poorly on problems with complex data distributions. All of the methods proposed in this dissertation address this problem.
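To make the criterion discussed above concrete, the following is a minimal NumPy sketch of classical Fisher LDA: it maximizes the ratio of between-class to within-class scatter via a generalized eigenproblem. It is an illustrative reimplementation, not the code used in the thesis.

```python
import numpy as np

def lda_projection(X, y, n_components=1):
    """Classical Fisher LDA sketch: return the top discriminant directions."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real
```

Note that the criterion depends only on class means and scatter (second-order statistics of the assumed Gaussian model), not on the resulting training error, which is exactly the weakness the thesis targets.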
     In Chapter 3 we first point out that, owing to its inherent limitations, LDA may fail to find the optimal classification subspace in multi-class problems even when every class follows a Gaussian homoscedastic distribution. By analyzing the relationship between the data distribution and the projection directions produced by LDA, we then show that the LDA projections are tied to the eigenvalues of the between-class and within-class scatter matrices. Based on this observation, we propose a modified LDA method built on a genetic algorithm: aiming at the minimum training classification error, it adjusts the eigenvalues of the between-class scatter matrix to search for the optimal feature subspace. Experiments on both synthetic and real data show that the proposed method achieves higher classification accuracy than existing linear subspace methods.
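The idea of tuning the between-class eigenvalues against the training error can be sketched as follows. As a much simplified stand-in for the genetic algorithm in the thesis, this uses a (1+1) evolutionary search over rescalings of the between-class eigenvalues and a nearest-class-mean rule to score training error; all parameter choices here (mutation scale, iteration count) are hypothetical.

```python
import numpy as np

def scatter_matrices(X, y):
    classes, d = np.unique(y), X.shape[1]
    m = X.mean(axis=0)
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - m, mc - m)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def nearest_mean_error(W, X, y):
    """Training error of a nearest-class-mean rule in the projected space."""
    Z = X @ W
    classes = np.unique(y)
    means = np.array([Z[y == c].mean(axis=0) for c in classes])
    pred = classes[np.argmin(((Z[:, None, :] - means) ** 2).sum(-1), axis=1)]
    return float(np.mean(pred != y))

def evolve_lda(X, y, n_components=1, n_iter=200, seed=0):
    """(1+1) evolutionary search over between-class eigenvalue scalings,
    minimizing training error -- a simplified stand-in for the GA."""
    rng = np.random.default_rng(seed)
    Sb, Sw = scatter_matrices(X, y)
    lam, U = np.linalg.eigh(Sb)            # eigen-decompose Sb once
    def project(scale):
        Sb_s = U @ np.diag(lam * scale) @ U.T   # rescaled between-class scatter
        vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb_s)
        return vecs[:, np.argsort(-vals.real)[:n_components]].real
    scale = np.ones_like(lam)              # genome: one scale per eigenvalue
    best_W = project(scale)
    best_err = nearest_mean_error(best_W, X, y)
    for _ in range(n_iter):
        cand = np.clip(scale + rng.normal(0, 0.3, lam.shape), 0, None)
        W = project(cand)
        err = nearest_mean_error(W, X, y)
        if err <= best_err:                # keep the mutated genome if no worse
            scale, best_W, best_err = cand, W, err
    return best_W, best_err
```

A full genetic algorithm would maintain a population with crossover and selection; the single-parent mutation loop above only illustrates how the eigenvalue scalings serve as the search variables.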
     The AdaBoost (Adaptive Boosting) algorithm from ensemble learning constructs a classifier by directly minimizing the training error. In Chapter 4 we combine AdaBoost with the LDA subspace method and propose a feature-extraction algorithm based on boosting bootstrap LDA projections for two-class problems. AdaBoost is a framework that boosts a number of weak hypotheses, each only slightly better than random guessing, into a strong classifier, and it requires the weak hypotheses to be unstable and diverse. We therefore first apply the bootstrap sampling of Bagging (Bootstrap Aggregating) to draw a number of bootstrap subsets from the training data, then combine LDA with a nearest-neighbor (NN) classifier to obtain a weak hypothesis from each subset, and finally boost these hypotheses into a strong classifier with AdaBoost. The method overcomes the weakness of traditional subspace methods noted above, generalizes well, and handles classification problems with complex data distributions. Experiments on two-class problems with complex distributions demonstrate its feasibility and superiority.
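The two-class pipeline described above can be sketched in a few dozen lines. This is an illustrative sketch, not the thesis implementation: the weak hypothesis is an LDA direction fit on a weighted bootstrap subset followed by a nearest-class-mean rule (the thesis uses a nearest-neighbor rule), and it assumes each bootstrap subset contains both classes.

```python
import numpy as np

class WeakLDA:
    """Weak hypothesis: 2-class LDA direction plus a nearest-mean rule."""
    def fit(self, Xb, yb):
        m0, m1 = Xb[yb == 0].mean(0), Xb[yb == 1].mean(0)
        Sw = sum((Xb[yb == c] - m).T @ (Xb[yb == c] - m)
                 for c, m in ((0, m0), (1, m1)))
        self.w = np.linalg.pinv(Sw) @ (m1 - m0)   # Fisher direction
        self.t0, self.t1 = m0 @ self.w, m1 @ self.w
        return self
    def predict(self, X):
        z = X @ self.w
        return (np.abs(z - self.t1) < np.abs(z - self.t0)).astype(int)

def boost_bootstrap_lda(X, y, n_rounds=10, seed=0):
    """Discrete AdaBoost over bootstrap-LDA weak hypotheses (sketch)."""
    rng = np.random.default_rng(seed)
    D = np.full(len(X), 1 / len(X))               # AdaBoost sample weights
    hs, alphas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(len(X), size=len(X), p=D)  # weighted bootstrap
        h = WeakLDA().fit(X[idx], y[idx])
        pred = h.predict(X)
        err = max(D[pred != y].sum(), 1e-10)      # avoid log(0) on a perfect round
        if err >= 0.5:                            # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        hs.append(h); alphas.append(alpha)
        D *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))
        D /= D.sum()                              # re-normalize weights
    def strong(Xq):
        votes = sum(a * np.where(h.predict(Xq) == 1, 1, -1)
                    for h, a in zip(hs, alphas))
        return (votes > 0).astype(int)
    return strong
```

Sampling each round's bootstrap subset according to the current AdaBoost weights is one way to combine the Bagging and AdaBoost ingredients the paragraph describes; the thesis may organize the two steps differently.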
     Because multi-class problems, and face recognition tasks in particular, have wide practical value, Chapter 5 builds on Chapter 4 and generalizes the method to multi-class problems by combining the AdaBoost.M2 algorithm with LDA subspaces, yielding a classification algorithm based on boosting bootstrap LDA subspaces. We refine the bootstrap sampling step so that AdaBoost.M2 concentrates more on hard-to-classify samples while preserving the diversity of the weak hypotheses, allowing the LDA-based hypotheses to be boosted and combined more effectively. In experiments on handwritten-digit and face-image recognition, we compare the algorithm with traditional subspace methods and with other ensemble-learning classifiers; the results show that it matches or surpasses the other methods.
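The refined sampling step, emphasizing hard samples while keeping subsets diverse, can be sketched as a mixture of weight-proportional and uniform sampling. The mixing coefficient `eps` below is a hypothetical choice for illustration, not a value from the thesis.

```python
import numpy as np

def mixed_bootstrap(weights, n, eps=0.3, rng=None):
    """Draw a bootstrap subset that follows the boosting weights but mixes
    in a uniform component: hard (heavily weighted) samples are drawn more
    often, yet every sample retains at least eps/len(weights) probability,
    which keeps the subsets diverse."""
    rng = rng or np.random.default_rng()
    p = (1 - eps) * weights / weights.sum() + eps / len(weights)
    return rng.choice(len(weights), size=n, p=p)
```

With pure weight-proportional sampling, later boosting rounds would draw almost nothing but the hardest samples, making the LDA weak hypotheses nearly identical; the uniform component is one simple way to trade concentration against diversity, in the spirit of the improvement described above.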
