Research on Local Dimensionality Reduction Algorithms Based on Manifold Learning
Abstract
Dimensionality reduction algorithms have broad application prospects in image, audio, and video analysis, and in recent years they have attracted increasing attention from researchers. Manifold learning dimensionality reduction algorithms fall into two classes: linear and nonlinear. Principal Component Analysis (PCA) is the representative linear algorithm: it analyzes the existing data samples to learn a mapping matrix and then linearly projects the high-dimensional data into a low-dimensional space. Linear algorithms have low computational complexity, but they perform poorly on data with an intrinsically nonlinear distribution, that is, some important features of the data distribution are lost during dimensionality reduction. Isomap, Locally Linear Embedding (LLE), and Laplacian Eigenmaps are representative nonlinear algorithms: they map the data from the high-dimensional space to the low-dimensional space through a nonlinear mapping, and they can be further divided into global and local algorithms. Isomap is a typical global algorithm: it tries to preserve the global structure of the data distribution during dimensionality reduction; its advantage is that the global structure is retained, but its high computational complexity makes it unsuitable for real-time applications. LLE and Laplacian Eigenmaps are typical local algorithms: they preserve only the local structure of the data distribution, so the overall distribution of the reduced data may be somewhat distorted, but their low computational complexity makes them suitable for real-time applications. This thesis first introduces several classical manifold learning algorithms, analyzes the existing algorithms, and points out the common problems and major research topics in current manifold learning; on this basis the following research is carried out:
     First, to address the similarity measurement problem in traditional manifold learning dimensionality reduction algorithms, two Mahalanobis-distance-based manifold learning algorithms are proposed, which apply the Mahalanobis metric to the neighborhood selection and new-sample recognition steps of LLE and Laplacian Eigenmaps, respectively. The algorithms first derive the coefficient matrix of the Mahalanobis metric by analyzing the existing data samples, then use the resulting metric to determine the neighbors of each sample, and finally perform the nonlinear mapping of the data from the high-dimensional space to the low-dimensional space. The Mahalanobis metric is likewise used to determine the neighbors and the final class of new samples during their dimensionality reduction and recognition.
     Second, to address the problem of discovering the intrinsic structure of high-dimensional data, an adaptive locally linear embedding algorithm is proposed. Traditional manifold learning algorithms usually rely on the K-nearest-neighbor method to determine the number of neighbors of each sample, but few methods exist for choosing K, and the commonly used trial-and-error approach is time-consuming. The proposed algorithm therefore assigns each sample an appropriate threshold automatically according to the distribution of the data: another sample is regarded as a neighbor if its distance to the sample is smaller than this threshold, and is not a neighbor otherwise. The algorithm not only solves the problem of selecting K in manifold learning dimensionality reduction, but also assigns each sample its own K value according to the data distribution, which is more reasonable than the traditional practice of assigning the same K value to every sample.
     Third, to address the problem that image dimensionality reduction algorithms must convert image matrices into image vectors, a linear two-dimensional dimensionality reduction algorithm, Modified Modular Principal Component Analysis, is proposed. It improves the Modular PCA algorithm by changing the way the image mean is computed and the way new samples are recognized, and a theoretical analysis proves that Two-Dimensional PCA is a special case of the modified algorithm.
     Finally, to address the same problem of converting image matrices into image vectors, a nonlinear two-dimensional dimensionality reduction algorithm, Two-Dimensional Locally Linear Embedding, is proposed. Theoretical analysis shows that Two-Dimensional PCA is essentially row-based PCA, and that LLE can be regarded as a nonlinear extension of PCA; on this basis a row-based (or column-based) LLE algorithm, Two-Dimensional Locally Linear Embedding, is proposed.
Dimensionality reduction algorithms have been widely applied in image, audio, and video analysis, and have received more and more attention from researchers in recent years. In the dimensionality reduction field, manifold learning algorithms are divided into linear and nonlinear algorithms. Principal Component Analysis (PCA) is a classical linear dimensionality reduction algorithm. Its characteristic is to learn a mapping matrix by analyzing the existing samples; the algorithm then maps the samples from the high-dimensional space to a low-dimensional space. Low complexity is one advantage of linear algorithms, but they do not work well on nonlinearly distributed data, which means that some important features are lost during dimensionality reduction. Isomap, Locally Linear Embedding, and Laplacian Eigenmaps are classical nonlinear dimensionality reduction algorithms. Their characteristic is to reduce the data dimensionality through a nonlinear mapping, and they are further divided into global and local algorithms. Isomap is a classical global algorithm, which tries to preserve the global structure of the data distribution during dimensionality reduction. Its advantage is that the global structure is preserved, but its complexity is high and it is not suitable for real-time applications. Locally Linear Embedding and Laplacian Eigenmaps are classical local algorithms, which preserve only the local structure during dimensionality reduction. The data in the low-dimensional space suffer some deformation, but the low complexity makes these algorithms suitable for real-time applications. This thesis first introduces and analyzes some classical algorithms, then points out the existing problems and research focuses, and carries out the research as follows:
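As a minimal, illustrative sketch only (not the thesis's experiments), the following scikit-learn snippet contrasts linear PCA with the local method LLE and the global method Isomap on the standard Swiss-roll data set; all function names below come from scikit-learn, not from the thesis.

```python
# Minimal sketch: a linear method (PCA) versus a local nonlinear method (LLE)
# and a global nonlinear method (Isomap) on the classic Swiss-roll manifold.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding, Isomap

# 1000 points sampled from a 2-D manifold embedded in 3-D space.
X, color = make_swiss_roll(n_samples=1000, random_state=0)

# Linear projection: fast, but the roll is flattened and distant parts overlap.
Y_pca = PCA(n_components=2).fit_transform(X)

# Local nonlinear embedding: preserves neighborhood structure and "unrolls" the manifold.
Y_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)

# Global nonlinear embedding: preserves geodesic distances, at higher computational cost.
Y_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

print(Y_pca.shape, Y_lle.shape, Y_iso.shape)  # each (1000, 2)
```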
     Firstly, to address the problem of similarity measurement, we present two Mahalanobis-distance-metric-based manifold learning algorithms. Both algorithms use the Mahalanobis distance metric when searching for neighborhoods and when recognizing new samples. The algorithms first obtain the coefficient matrix of the Mahalanobis metric by analyzing the existing samples, then search for the neighborhoods of each sample, and finally map the samples nonlinearly from the high-dimensional space to the low-dimensional space. The Mahalanobis distance metric is also used to determine the neighborhoods and the categories of new samples.
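A minimal sketch of the neighborhood-selection step under a Mahalanobis metric is given below. The thesis learns the metric's coefficient matrix from the training samples; since that learning procedure is not spelled out in the abstract, the sketch substitutes the classical choice, the inverse sample covariance, and the function name mahalanobis_neighbors is illustrative.

```python
# Sketch of Mahalanobis-based neighborhood selection (illustrative only).
# The inverse sample covariance stands in for the learned coefficient matrix.
import numpy as np
from scipy.spatial.distance import cdist

def mahalanobis_neighbors(X, k=10):
    """Return the indices of the k Mahalanobis-nearest neighbors of every sample."""
    cov = np.cov(X, rowvar=False)
    VI = np.linalg.pinv(cov)                      # coefficient matrix of the metric
    D = cdist(X, X, metric="mahalanobis", VI=VI)  # pairwise Mahalanobis distances
    order = np.argsort(D, axis=1)
    return order[:, 1:k + 1]                      # drop column 0 (the sample itself)

X = np.random.RandomState(0).randn(200, 5)
neighbors = mahalanobis_neighbors(X, k=10)
print(neighbors.shape)  # (200, 10): neighbor lists fed into LLE / Laplacian Eigenmaps
```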
     Secondly, to address the problem of finding the intrinsic structure of high-dimensional data, we present an adaptive-neighborhood locally linear embedding (ANLLE) algorithm. The K-nearest-neighbor (KNN) method is usually used to search for the neighborhoods of each sample in manifold-learning-based dimensionality reduction algorithms, but how should the value of K be chosen? The usual trial-and-error approach can cost much time. For these reasons we propose the ANLLE algorithm, which automatically gives every sample its own threshold. A sample s2 is regarded as a neighbor of sample s1 if the distance from s1 to s2 is smaller than the threshold of s1; otherwise s2 is not a neighbor of s1. The ANLLE algorithm not only solves the problem of choosing K, but also gives every sample a different K value according to the distribution of the samples, which makes it more reasonable than the classical algorithms.
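The abstract does not give the exact rule used to set each sample's threshold, so the following sketch uses a hypothetical local-density heuristic (a multiple of the mean distance to a sample's few closest points) purely to illustrate how per-sample thresholds yield a different neighbor count K for every sample.

```python
# Sketch of per-sample adaptive neighborhoods (illustrative heuristic, not the
# thesis's rule): each sample's threshold is a multiple of its mean distance to
# its m closest points, so dense regions get tight thresholds and sparse ones loose.
import numpy as np
from scipy.spatial.distance import cdist

def adaptive_neighbors(X, m=5, scale=1.5):
    D = cdist(X, X)                                      # Euclidean pairwise distances
    np.fill_diagonal(D, np.inf)                          # a sample is not its own neighbor
    sorted_D = np.sort(D, axis=1)
    thresholds = scale * sorted_D[:, :m].mean(axis=1)    # one threshold per sample
    return [np.where(D[i] < thresholds[i])[0] for i in range(len(X))]

X = np.random.RandomState(0).randn(300, 3)
nbrs = adaptive_neighbors(X)
print([len(n) for n in nbrs[:10]])  # the per-sample "K" varies with local density
```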
     Thirdly, to address the problem that dimensionality reduction algorithms must transform an image into an image vector, we present a linear two-dimensional dimensionality reduction algorithm, Modified Modular Principal Component Analysis, which improves the Modular Principal Component Analysis (MPCA) algorithm. The algorithm changes the way the image mean is computed and the way new samples are recognized. We also prove that the Two-Dimensional Principal Component Analysis (2DPCA) algorithm is a special case of the Modified Modular PCA algorithm.
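For orientation, here is a sketch of the baseline Modular PCA scheme that the proposed algorithm modifies: every image is split into non-overlapping blocks and a single PCA is trained on the pooled blocks. The thesis's changes to the image-mean computation and to new-sample recognition are not reproduced, and the block size and component count below are arbitrary.

```python
# Sketch of baseline Modular PCA: pool non-overlapping sub-image blocks from all
# training images and fit one PCA on them (the thesis's modifications are omitted).
import numpy as np
from sklearn.decomposition import PCA

def image_to_blocks(img, block=(8, 8)):
    h, w = img.shape
    bh, bw = block
    return np.array([img[i:i + bh, j:j + bw].ravel()
                     for i in range(0, h - bh + 1, bh)
                     for j in range(0, w - bw + 1, bw)])

rng = np.random.RandomState(0)
images = rng.rand(100, 32, 32)                    # 100 toy 32x32 "images"

blocks = np.vstack([image_to_blocks(img) for img in images])   # pooled sub-image samples
pca = PCA(n_components=10).fit(blocks)

# A test image is represented by the projections of all of its blocks.
features = pca.transform(image_to_blocks(images[0]))
print(blocks.shape, features.shape)               # (1600, 64) and (16, 10)
```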
     Finally, to address the same problem of transforming an image into an image vector, we present a nonlinear two-dimensional dimensionality reduction algorithm named Two-Dimensional Locally Linear Embedding (2DLLE). Through theoretical analysis we show that 2DPCA is essentially line-based PCA and that Locally Linear Embedding can be regarded as a nonlinear extension of PCA. On this basis we propose a line-based (or column-based) Locally Linear Embedding algorithm, namely Two-Dimensional Locally Linear Embedding.
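Because the abstract does not give the 2DLLE formulation itself, the sketch below shows only standard 2DPCA, the row-based linear method whose nonlinear analogue the thesis proposes; it illustrates how a two-dimensional method works directly on image matrices without vectorizing them.

```python
# Sketch of standard 2DPCA, the row-based linear method that motivates 2DLLE
# (the proposed 2DLLE algorithm itself is not reproduced here).
import numpy as np

def two_d_pca(images, n_components=8):
    """images: array of shape (N, h, w). Returns the mean image and a (w, n_components) projection matrix."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance matrix built directly from image matrices (no vectorization):
    # G = (1/N) * sum_i (A_i - mean)^T (A_i - mean), a w x w matrix over the rows.
    G = np.einsum("nij,nik->jk", centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    W = eigvecs[:, ::-1][:, :n_components]        # top eigenvectors as projection axes
    return mean, W

rng = np.random.RandomState(0)
images = rng.rand(100, 32, 32)
mean, W = two_d_pca(images)
Y = (images[0] - mean) @ W                        # each image maps to a 32 x 8 feature matrix
print(W.shape, Y.shape)
```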
