Manifold-Learning-Based Dimensionality Reduction Methods and Their Application to Face Recognition
Abstract
In recent years, with the development of science and technology, it has become much easier and more convenient to acquire data of all kinds. However, the data sets collected in real applications are often high-dimensional and nonlinear. On the one hand, these characteristics give rise to the "curse of dimensionality"; on the other hand, they make it difficult to understand the data and to discover the intrinsic structure of the data set. As a result, processing high-dimensional data with dimensionality reduction techniques becomes extremely important. Although traditional dimensionality reduction methods (such as principal component analysis, independent component analysis, and linear discriminant analysis) can effectively deal with data sets that have linear structure and Gaussian distributions, they cannot discover the intrinsic low-dimensional information hidden in nonlinear high-dimensional data sets. Dimensionality reduction methods based on manifold learning assume that the high-dimensional observations can be modeled as data points residing on a low-dimensional nonlinear manifold embedded in a high-dimensional Euclidean space. Therefore, manifold learning methods can effectively discover and preserve the intrinsic geometric structure of data sets that appear highly curved in the high-dimensional space. At present, manifold learning has become a hot research topic in data mining, pattern recognition, machine learning, and related fields.
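As a concrete illustration of the linear baseline discussed above, the following minimal PCA sketch (not the thesis's own code; function names and data are illustrative) projects data onto its top principal components. It works well precisely in the case the abstract describes: data with a linear, roughly Gaussian structure.

```python
import numpy as np

def pca(X, d):
    """Project the rows of X onto the top-d principal components.

    A minimal sketch of classical PCA via the eigendecomposition of
    the sample covariance matrix; illustrative only.
    """
    Xc = X - X.mean(axis=0)               # center the data
    C = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    vals, vecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :d]              # top-d eigenvectors
    return Xc @ W

# Points lying near a 1-D linear subspace embedded in 3-D:
# PCA recovers the underlying one-dimensional coordinate.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(200, 3))
Y = pca(X, 1)
print(Y.shape)  # (200, 1)
```

When the data instead lies on a curved manifold (the case the thesis targets), a single global linear projection of this kind cannot recover the intrinsic low-dimensional coordinates, which motivates the manifold learning methods above.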
     In this thesis, we study manifold learning in depth, propose three novel manifold-learning-based methods for dimensionality reduction and feature extraction, and apply them to the face recognition problem. The effectiveness of the proposed algorithms is demonstrated by extensive experiments and comparisons with other algorithms. The main work and contributions of this thesis are summarized as follows:
     1. A survey of existing linear and nonlinear dimensionality reduction methods is given, together with a brief introduction to the definition, research status, and applications of manifold learning. Through an analysis of the face recognition problem, the rationality and feasibility of applying manifold learning methods to face recognition are justified.
     2. A supervised manifold learning algorithm based on patch alignment is proposed. In this algorithm, we first choose a set of overlapping patches that cover all data points, using a minimum set cover algorithm with a geodesic distance constraint. Then principal component analysis (PCA) is applied to each patch to obtain the data's local low-dimensional representations. Finally, a patch alignment technique combined with a modified maximum margin criterion (MMC) is used to yield the discriminant global embedding. The proposed method takes both label information and the manifold structure into account, so it can maximize the dissimilarity between different classes while preserving the data's intrinsic structure. Experimental results show that the proposed algorithm achieves better recognition rates than several existing face recognition methods.
     3. An adaptively weighted sub-pattern locality preserving projection (Aw-SpLPP) algorithm is proposed. Unlike the traditional LPP algorithm, which operates directly on the whole input patterns and obtains global features that best capture the essential nonlinear manifold structure, Aw-SpLPP operates on sub-patterns partitioned from the original whole patterns and separately extracts the corresponding local sub-features from them. Furthermore, the contribution of each sub-pattern to recognition is computed adaptively by Aw-SpLPP in order to enhance the robustness of the method. By applying Aw-SpLPP to face recognition, we show that the proposed method not only reduces the computational complexity of traditional LPP but also improves recognition accuracy over other sub-pattern-based methods.
     4. A novel local matching method called structure-preserved projections (SPP) is proposed. Most existing local matching methods neglect the interactions between different sub-pattern sets during feature extraction, i.e., they assume the sub-pattern sets are independent. In contrast, SPP takes the holistic context of the original whole patterns into account and preserves the configural structure of each input pattern in the subspace, while also preserving the intrinsic manifold structure of the sub-pattern sets. Like the two aforementioned algorithms, SPP is applied to the face recognition problem, and its effectiveness is demonstrated by extensive experiments on three standard face databases (Yale, Extended YaleB, and PIE). Experimental results show that SPP outperforms other holistic and local matching methods.
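The sub-pattern idea shared by Aw-SpLPP and SPP can be sketched in a few lines. This toy example is an assumption-laden illustration, not the thesis's method: it substitutes PCA for LPP for brevity, uses made-up function names, and works on random "images". It partitions each image into non-overlapping sub-patterns, gathers the sub-patterns at each location across all images into a set, reduces each set separately, and concatenates the resulting local features:

```python
import numpy as np

def partition(img, ph, pw):
    """Split an h x w image into non-overlapping ph x pw sub-patterns,
    each flattened to a vector (illustrative; ph, pw must divide h, w)."""
    h, w = img.shape
    return [img[i:i + ph, j:j + pw].ravel()
            for i in range(0, h, ph) for j in range(0, w, pw)]

def subpattern_features(images, ph, pw, d):
    """For each sub-pattern location, collect that sub-pattern across all
    images and reduce the set to d dimensions with PCA via SVD
    (a stand-in for the LPP step used in the thesis)."""
    sets = [np.array(s) for s in zip(*(partition(im, ph, pw) for im in images))]
    feats = []
    for S in sets:                        # S: n_images x (ph*pw)
        Sc = S - S.mean(axis=0)           # center this sub-pattern set
        _, _, Vt = np.linalg.svd(Sc, full_matrices=False)
        feats.append(Sc @ Vt[:d].T)       # local low-dimensional coordinates
    return np.hstack(feats)               # concatenate all local features

rng = np.random.default_rng(1)
imgs = [rng.normal(size=(8, 8)) for _ in range(20)]
F = subpattern_features(imgs, 4, 4, 2)
print(F.shape)  # 20 images x (4 sub-patterns * 2 dims) = (20, 8)
```

What distinguishes the thesis's SPP from a naive per-set reduction like this sketch is exactly the coupling between sub-patterns of the same sample, which the loop above ignores by treating each sub-pattern set independently.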
