Research on Several Key Issues of Kernel Methods and Their Applications in Face Image Analysis
Abstract
As a nonlinear approach, kernel methods possess a sound theoretical foundation and extensive application potential for nonlinear pattern classification tasks. They are characterized by two merits. First, they build a bridge between linearity and nonlinearity; second, they introduce a kernel function to avoid the curse of dimensionality without increasing computational complexity. At present, sample reduction for support vector machines (SVMs), kernel construction, and multiple kernel learning are all key research topics in the field of kernel methods. In SVM sample reduction, some research efforts aim at developing sample reduction approaches based on kernel clustering. In kernel construction, few kernels have been successfully constructed for given data from specific application backgrounds. In multiple kernel learning, the field was developed under the framework of SVMs, and so far there have been hardly any reports on multiple kernel learning for subspace analysis methods.
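A toy sketch (not from the dissertation) of the "bridge between linearity and nonlinearity" described above: a degree-2 polynomial kernel evaluates an inner product in a six-dimensional feature space without ever constructing that space explicitly. All names are illustrative.

```python
import numpy as np

def poly_kernel(x, y, c=1.0):
    # Degree-2 polynomial kernel: k(x, y) = (x . y + c)^2
    return (np.dot(x, y) + c) ** 2

def explicit_map(x, c=1.0):
    # Explicit feature map for the same kernel on 2-D input:
    # phi(x) = (x1^2, x2^2, sqrt(2) x1 x2, sqrt(2c) x1, sqrt(2c) x2, c)
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2 * c) * x1, np.sqrt(2 * c) * x2, c])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
k1 = poly_kernel(x, y)                              # kernel trick: O(d) work
k2 = np.dot(explicit_map(x), explicit_map(y))       # explicit 6-D inner product
assert np.isclose(k1, k2)  # identical values, no explicit mapping needed
```

The same identity is what lets SVMs and kernel LDA operate in very high-dimensional (even infinite-dimensional) feature spaces at the cost of one kernel evaluation per pair of samples.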
     The contributions of this dissertation include the following three aspects.
     A top-down hierarchical kernel clustering approach named the kernel bisecting k-means (KBK) clustering algorithm is proposed, which tends to quickly produce balanced clusters of similar sizes in the kernel feature space. On this basis, we present the KBK-SR algorithm for SVM sample reduction, which integrates a modified version of KBK with a subsequent sample removal procedure as a sampling preprocessing step for SVM training to improve scalability. Theoretical analysis and experimental results both show that, with very short sampling time, our algorithm dramatically accelerates SVM training while maintaining high test accuracy.
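The KBK algorithm itself is specified in the dissertation body, not in this abstract. The following is only a minimal sketch of the general idea — kernel 2-means applied top-down to the largest remaining cluster — with assumed details (Gaussian RBF Gram matrix, random initialization) that need not match the thesis's version.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    # Gaussian RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0))

def kernel_2means(K, idx, n_iter=20, seed=0):
    # Bisect the points in `idx` with kernel k-means (k = 2). Distances to a
    # cluster mean in feature space are computed from Gram entries only:
    # ||phi(x_i) - m_c||^2 = K_ii - (2/n) sum_j K_ij + (1/n^2) sum_jl K_jl.
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=len(idx))
    Ks = K[np.ix_(idx, idx)]
    for _ in range(n_iter):
        dist = np.empty((len(idx), 2))
        for c in (0, 1):
            mask = labels == c
            if not mask.any():                      # guard: empty cluster
                mask[rng.integers(len(idx))] = True
            n = mask.sum()
            # K_ii is constant across clusters, so it can be dropped
            dist[:, c] = (-2 * Ks[:, mask].sum(1) / n
                          + Ks[np.ix_(mask, mask)].sum() / n**2)
        new = dist.argmin(1)
        if (new == labels).all():
            break
        labels = new
    return [idx[labels == c] for c in (0, 1)]

def kernel_bisecting_kmeans(K, n_clusters):
    # Top-down: repeatedly bisect the currently largest cluster.
    clusters = [np.arange(K.shape[0])]
    while len(clusters) < n_clusters:
        clusters.sort(key=len, reverse=True)
        clusters.extend(kernel_2means(K, clusters.pop(0)))
    return clusters
```

Always splitting the largest cluster is what tends to keep cluster sizes balanced; a KBK-SR-style preprocessor would then discard samples far from cluster boundaries before SVM training.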
     A systematic method to construct a new kernel for kernel-based LDA methods is proposed, which is well suited to handling the illumination problem. The proposed method first learns a kernel matrix by maximizing the difference between inter-class and intra-class similarities under the Lambertian model, and then generalizes the kernel matrix to our proposed ILLUM kernel using the scattered data interpolation technique. Experiments on face images under varying illumination show that kernel-based LDA methods with our ILLUM kernel deal well with the illumination problem in face image recognition; in this sense, the ILLUM kernel outperforms popular kernels such as the linear kernel and the Gaussian RBF kernel.
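The Lambertian kernel-matrix learning step is derived in the dissertation body and is not reproduced here. The sketch below only illustrates the second step — extending a Gram matrix that is defined on the training points to arbitrary inputs by scattered-data (radial basis) interpolation. The Gaussian basis, the function names, and the use of a linear Gram as a stand-in for the learned matrix are all assumptions for illustration.

```python
import numpy as np

def gaussian_rbf(r, c=1.0):
    # Radial basis for the interpolant; the Gaussian form is an assumption.
    return np.exp(-c * r**2)

def fit_interpolant(X, K0, c=1.0, reg=1e-8):
    # Solve Phi @ A = K0, with Phi[j, l] = phi(||x_j - x_l||), so that each
    # column k(., x_i) of the learned Gram matrix K0 is reproduced at the
    # training points. `reg` is a small ridge term for numerical stability.
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = gaussian_rbf(r, c)
    return np.linalg.solve(Phi + reg * np.eye(len(X)), K0)

def kernel_section(x, X, A, c=1.0):
    # Evaluate [k(x, x_1), ..., k(x, x_n)] for a new sample x -- exactly the
    # vector a kernel-based LDA projection needs at test time.
    phi = gaussian_rbf(np.linalg.norm(X - x, axis=1), c)
    return phi @ A
```

By construction the interpolant agrees with the learned matrix on the training set, so the extended kernel changes nothing for training samples while giving a well-defined value for unseen faces.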
     Multiple kernel discriminant analysis (MKDA) is proposed. We first present a multiple kernel construction method for kernel-based LDA; then, by optimizing the maximum margin criterion (MMC) with the Lagrange multiplier method, we derive an iterative weight optimization scheme for MKDA. In the experiments, on one hand, MKDA with optimized weights shows discriminant power superior to KDDA with individual kernels on several UCI benchmark datasets; on the other hand, the weight optimization scheme for MKDA is used to effectively select the kernel with the most discriminant power for recognition of face images.
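The thesis derives an iterative Lagrange-multiplier update for the kernel weights; that derivation is not reproduced here. As a simplified stand-in, the sketch below evaluates the maximum margin criterion tr(S_b) − tr(S_w) directly from Gram-matrix entries and searches the weight of a two-kernel convex combination K = β·K₁ + (1−β)·K₂ on a grid. All function names are illustrative.

```python
import numpy as np

def mmc_score(K, y):
    # MMC tr(S_b) - tr(S_w) in the feature space induced by Gram matrix K.
    n = len(y)
    mean_all = K.mean()
    tr_sb = tr_sw = 0.0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        nc = len(idx)
        Kcc = K[np.ix_(idx, idx)]
        # ||m_c - m||^2 expressed with Gram entries only
        tr_sb += (nc / n) * (Kcc.mean() - 2 * K[idx].mean() + mean_all)
        # average within-class spread: mean_i ||phi(x_i) - m_c||^2
        tr_sw += (nc / n) * (np.diag(Kcc).mean() - Kcc.mean())
    return tr_sb - tr_sw

def pick_weights(K_list, y, n_grid=21):
    # Grid search over beta in [0, 1] for K = beta*K1 + (1-beta)*K2.
    best = (None, -np.inf)
    for beta in np.linspace(0.0, 1.0, n_grid):
        score = mmc_score(beta * K_list[0] + (1 - beta) * K_list[1], y)
        if score > best[1]:
            best = (beta, score)
    return best
```

Because both scatter traces are linear in the Gram entries, the criterion is linear in β here; with more kernels and constrained weights, the iterative Lagrange-multiplier scheme of the thesis replaces this brute-force search. Used with β restricted to the unit vectors, the same score also performs the kernel selection mentioned above.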
