Research on Feature Extraction Based on Correlation Projection Analysis and Its Application in Image Recognition
Abstract
Feature extraction is one of the fundamental problems in pattern recognition. For image recognition, extracting effective image features is the key to accomplishing the recognition task. Linear and nonlinear projection analysis, as the most classical and widely used feature extraction techniques, have been studied extensively and applied successfully in image recognition. However, linear and nonlinear projection analysis methods mainly operate on a single feature set of a pattern and are not suited to fusing and extracting features from multi-representation data. Correlation projection analysis, including canonical correlation analysis and partial least squares, has been widely applied to the fusion and extraction of multiple feature sets and has achieved good experimental results in image recognition. This dissertation takes correlation projection analysis as its research subject and develops in-depth extensions of it, aiming to enhance the discriminative power of the extracted features. The main work and research contributions are as follows:
     (1) From the perspective of pattern classification, a supervised locality preserving canonical correlation analysis (SLPCCA) is proposed. By maximizing the weighted correlation between within-class paired samples and their nearest neighbors, it makes effective use of class label information while preserving the local manifold structure of the data, improving the stability and robustness of the algorithm. It also incorporates the discriminative information of discriminant canonical correlation analysis (DCCA) without being subject to DCCA's restriction that the number of extracted features cannot exceed the total number of classes. Furthermore, to handle the many nonlinear problems arising in image recognition, a kernelized SLPCCA (KSLPCCA) is developed on the basis of the kernel trick to extract nonlinear features of the data.
     (2) An orthogonal regularized canonical correlation analysis (ORCCA) is proposed. The projection vectors obtained by canonical correlation analysis (CCA) satisfy mutual conjugate orthogonality. However, conjugate orthogonality depends on the total covariance matrix of the samples, so in small-sample-size problems the algorithm's performance may degrade because the covariance matrix is poorly estimated. In addition, conjugate orthogonality is concerned with an optimal low-dimensional representation of the features rather than with their discriminative power; when samples from different classes have markedly different distributions, classification performance may suffer. To address these two problems, the proposed ORCCA imposes orthogonality constraints on the projection vectors and introduces regularization parameters, so that features with stronger discriminative power can be extracted.
     (3) Sparsity preserving canonical correlation analysis (SPCCA) and sparsity regularized discriminant canonical correlation analysis (SrDCCA) are proposed. Sparsity preserving projections (SPP) preserve the sparse reconstructive relationships among samples during dimensionality reduction and can therefore capture natural discriminative information even without class labels. Inspired by this, the proposed SPCCA not only fuses the discriminative information of two feature sets effectively but also constrains the sparse reconstructive relationships among the extracted features, enhancing their representational and discriminative power. Building on SPCCA, sparsity regularized discriminant canonical correlation analysis is further proposed through supervised learning on the partially labeled samples. Experimental results on handwritten character and face recognition show that both proposed methods achieve good recognition performance.
     (4) For the feature extraction problem of multi-representation data, a multiple component analysis (MCA) method is proposed. Canonical correlation analysis and partial least squares are classical methods for fusing and extracting two feature sets, and how to generalize them to multiple feature sets has long attracted wide attention. The proposed MCA constructs a higher-order tensor so that the correlation information among the feature sets is fused into a covariance tensor, and then applies higher-order singular value decomposition to obtain the projection matrix corresponding to each feature set, accomplishing dimensionality reduction and feature fusion simultaneously. Compared with the subspace-based feature fusion method (MFFSL), MCA can fuse multiple feature sets using fewer feature dimensions, ensuring that the extracted features are more discriminative. Moreover, principal component analysis and partial least squares can be regarded as special cases of the proposed method for one and two feature sets, respectively. Detailed experiments on handwritten character and face recognition verify the effectiveness and robustness of MCA.
     (5) To avoid converting image matrices into image vectors in image set classification, a two-dimensional mutual subspace method and a two-dimensional multiple principal angles embedding method are proposed. First, following the basic idea of two-dimensional principal component analysis (2DPCA), a two-dimensional mutual subspace method is proposed so that the structural information of the two-dimensional image matrices is not destroyed by vectorization in image set classification. In addition, by jointly considering the "global" and "local" canonical correlations among multiple image sets, a two-dimensional multiple principal angles embedding method is proposed, which iteratively optimizes a global discriminative subspace so that the principal angles among subspaces of the same class are minimized while those among subspaces of different classes are maximized. The proposed methods not only enhance the discriminative power of the subspace representation but also reduce storage requirements and the classification time for new test samples.
Feature extraction is one of the most basic problems in pattern recognition. For image recognition tasks, extracting effective image features is a crucial step. As classical and popular techniques for feature extraction, linear and nonlinear projection analysis methods have been studied in depth and verified to be effective in image recognition applications. However, linear and nonlinear projection analysis methods mainly process a single feature set of the patterns and are unsuitable for directly fusing and extracting the features of multi-view data. Correlation projection analysis, including canonical correlation analysis (CCA) and partial least squares (PLS), has been widely employed in multiple-feature fusion and extraction and has achieved good recognition results in image classification. In this dissertation, we focus on correlation projection analysis and its variants, with the goal of increasing the discriminative ability of the extracted features. Our work mainly includes the following parts:
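As background for the methods summarized below, the classical CCA criterion (stated here in its standard textbook form, not in any formulation specific to this dissertation) seeks a pair of directions that maximize the correlation between the projected features:

    \max_{w_x,\, w_y} \; \rho \;=\; \frac{w_x^{\top} S_{xy}\, w_y}{\sqrt{w_x^{\top} S_{xx}\, w_x}\;\sqrt{w_y^{\top} S_{yy}\, w_y}}

where S_{xy} is the between-set covariance matrix of the two feature sets and S_{xx}, S_{yy} are the within-set covariance matrices; PLS replaces the correlation above with the covariance w_x^{\top} S_{xy} w_y.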
     (1) To apply locality preserving canonical correlation analysis (LPCCA) to pattern classification and obtain good results, a supervised LPCCA (SLPCCA) is proposed to incorporate class label information. By maximizing the weighted correlation between corresponding samples and their near neighbors belonging to the same class, it effectively utilizes the class label information while preserving the local manifold structure of the data, which improves the stability and effectiveness of the algorithm. In addition, the proposed algorithm can effectively fuse the discriminative information of DCCA without the restriction that the number of extracted features cannot exceed the total number of classes. Furthermore, based on the kernel trick, a kernel SLPCCA (KSLPCCA) is also proposed in order to extract nonlinear features of the data.
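To make the flavor of such a construction concrete, the following Python fragment is a minimal sketch of one plausible way to combine within-class nearest-neighbor weights with a CCA-style generalized eigenproblem; the weighting scheme, the regularization term, and the function name are illustrative assumptions and do not reproduce the exact SLPCCA formulation.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def supervised_locality_cca_sketch(X, Y, labels, k=5, reg=1e-3):
        # X: (n, dx) and Y: (n, dy) are the two paired feature sets; labels: (n,).
        labels = np.asarray(labels)
        W = kneighbors_graph(X, k, mode='connectivity', include_self=True).toarray()
        W = W * (labels[:, None] == labels[None, :])     # keep only within-class neighbors
        W = 0.5 * (W + W.T)                              # symmetrize the weight matrix
        D = np.diag(W.sum(axis=1))
        Cxy = X.T @ W @ Y                                # neighbor-weighted cross-covariance
        Cxx = X.T @ D @ X + reg * np.eye(X.shape[1])     # regularized within-set scatter
        Cyy = Y.T @ D @ Y + reg * np.eye(Y.shape[1])
        M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
        vals, vecs = np.linalg.eig(M)                    # leading pair of the CCA eigenproblem
        wx = np.real(vecs[:, np.argmax(np.real(vals))])
        wy = np.linalg.solve(Cyy, Cxy.T @ wx)
        return wx / np.linalg.norm(wx), wy / np.linalg.norm(wy)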
     (2) CCA has been extensively studied and aims to extract statistically uncorrelated features via conjugate orthogonality constraints on the projection directions. However, two problems exist. First, directions formulated under conjugate orthogonality are unreliable when training samples are few and the covariance matrix cannot be estimated accurately. Second, this widely pursued property focuses on data representation rather than on discrimination, and it is ill-suited to classification problems in which samples from different classes do not share the same distribution type. An orthogonal regularized CCA (ORCCA) is proposed to avoid these two problems and extract more discriminative features via orthogonality constraints and regularization parameters.
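One common way to write down such a modification (an illustrative form consistent with the description above, not necessarily the exact ORCCA objective) is to regularize the within-set covariance matrices and to require plain orthogonality, rather than conjugate orthogonality, between successive projection directions:

    \max_{w_x^{(k)},\, w_y^{(k)}} \; w_x^{(k)\top} S_{xy}\, w_y^{(k)}
    \quad \text{s.t.} \quad
    w_x^{(k)\top}\big(S_{xx} + \lambda_x I\big) w_x^{(k)} = 1,\;
    w_y^{(k)\top}\big(S_{yy} + \lambda_y I\big) w_y^{(k)} = 1,\;
    w_x^{(k)\top} w_x^{(j)} = w_y^{(k)\top} w_y^{(j)} = 0 \;\;(j < k)

Here the parameters \lambda_x and \lambda_y stabilize the estimated covariance matrices in the small-sample case, and the orthogonality constraints on earlier directions replace the conjugate-orthogonality requirement.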
     (3) Sparsity preserving projections (SPP) aim to preserve the sparse reconstructive relationships of the data, which contain natural discriminating information even without class labels. Enlightened by this, we propose sparsity preserving canonical correlation analysis (SPCCA). It not only fuses the discriminative information of two feature sets efficiently but also constrains the sparse reconstructive relationships within each feature set, increasing the representational power and discrimination capability of the extracted features. Based on SPCCA, sparsity regularized discriminant canonical correlation analysis (SrDCCA) is further proposed through semi-supervised learning on partially labeled samples. Extensive experiments on both handwritten numeral classification and face recognition demonstrate that the proposed methods effectively enhance recognition performance.
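For readers unfamiliar with the sparse reconstructive relationships that SPP preserves, the following Python fragment sketches the standard construction in which each sample is coded over the remaining samples with an l1 penalty; the solver, the penalty value, and the function name are assumptions for illustration and are not tied to the SPCCA or SrDCCA formulations.

    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_reconstruction_weights(X, alpha=0.01):
        # X: (n, d) sample matrix; returns an (n, n) coefficient matrix S with zero diagonal,
        # where row i codes sample x_i over all the other samples with an l1 penalty.
        n = X.shape[0]
        S = np.zeros((n, n))
        for i in range(n):
            others = np.delete(np.arange(n), i)
            coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
            coder.fit(X[others].T, X[i])           # the atoms are the other samples
            S[i, others] = coder.coef_
        return S

The resulting coefficient matrix plays the role of an unsupervised affinity matrix whose reconstructive structure a projection can then be asked to preserve.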
     (4) Multiple component analysis (MCA) is proposed for feature extraction from multi-view data. CCA and PLS are typically used to fuse two feature sets; how to extend them to fuse multiple feature sets in a generalized way has remained an open problem. In this dissertation, a novel feature fusion method called MCA is proposed. By constructing a higher-order tensor, the correlation information among all feature sets is fused into a covariance tensor. Orthogonal subspaces corresponding to each feature set are then learned through higher-order singular value decomposition, which couples dimensionality reduction and feature fusion together. Compared with multiple feature fusion by subspace learning (MFFSL), our method represents the fused data more efficiently and discriminatively with very few components. It is also shown that principal component analysis (PCA) and PLS are special cases of our method when there are only one and two sets of features, respectively. Extensive experiments on both handwritten numeral classification and face recognition demonstrate the effectiveness and robustness of the proposed method.
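The higher-order SVD step referred to above is standard; a minimal Python sketch of extracting an orthogonal projection matrix for each mode of a given tensor is shown below. How the covariance tensor is assembled from the multiple feature sets follows the MCA construction summarized above and is not reproduced here; the function name and rank arguments are illustrative.

    import numpy as np

    def hosvd_factors(T, ranks):
        # T: a tensor with one mode per feature set; ranks[m]: dimensions kept for mode m.
        factors = []
        for mode, r in enumerate(ranks):
            # mode-n unfolding: bring 'mode' to the front and flatten the remaining modes
            unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
            factors.append(U[:, :r])        # orthogonal projection matrix for this mode
        return factors

Each factor can then be used to project the corresponding feature set, so dimensionality reduction and fusion are obtained from a single decomposition.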
     (5) A two-dimensional mutual subspace method (2DMSM) and two-dimensional multiple principal angles embedding (2DMPE) are proposed for image set classification. Based on two-dimensional principal component analysis, 2DMSM is proposed to retain the underlying structure of the images, which is destroyed when image matrices are vectorized into high-dimensional vectors in image set classification. Additionally, 2DMPE jointly considers both 'local' and 'global' canonical correlations by iteratively learning a global discriminative subspace, on which the principal angles among multiple subspaces of the same class are minimized while those of different classes are maximized. The proposed methods not only enhance the discriminative ability of the subspaces but also decrease storage requirements and the time needed to classify new test samples.
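The principal angles that subspace-based image set methods compare can be computed from the singular values of the product of two orthonormal bases; the short Python sketch below shows this standard computation (the function name is illustrative, and the discriminative embedding itself is not reproduced).

    import numpy as np

    def principal_angles(U1, U2):
        # U1: (d, k1) and U2: (d, k2) are orthonormal bases of two image-set subspaces.
        cosines = np.linalg.svd(U1.T @ U2, compute_uv=False)
        cosines = np.clip(cosines, 0.0, 1.0)      # guard against round-off
        return np.arccos(cosines)                 # smallest angles come first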