Research on the Application of Manifold Learning Methods in Image Processing
Abstract
With the development of information technology, the data people face have become increasingly complex. Such data are often high-dimensional, and the complexity of their underlying structure exceeds human perceptual ability, making them hard to inspect visually. Dimensionality reduction is an important means of addressing this problem: it maps the original data from a high-dimensional space into a low-dimensional one while preserving, as far as possible, the geometric relationships and distance measures among the data points. This not only reduces the amount of data in subsequent computations but also extracts the data's principal features.
     Dimensionality reduction techniques fall into two classes, linear and nonlinear. Linear methods are relatively mature; the main ones, principal component analysis (PCA) and multidimensional scaling (MDS), rest on a solid mathematical foundation and are simple to implement, but they cannot capture the intrinsic structure of the data. Manifold learning is a nonlinear approach and a current research focus; its main algorithms include isometric mapping (Isomap), locally linear embedding (LLE), Laplacian eigenmaps (LE), and local tangent space alignment (LTSA). Compared with traditional linear methods, manifold learning can effectively discover the intrinsic dimensionality of nonlinear high-dimensional data, which facilitates dimensionality reduction and data analysis.
     This thesis studies the application of manifold learning algorithms to image processing. Three nonlinear dimensionality reduction algorithms (Isomap, LLE, and LE) are each studied in simulation, and the characteristics and corresponding conclusions of each method are analyzed and verified. The three methods are also compared in terms of their algorithmic ideas, computational complexity, and dimensionality reduction performance.
     After analyzing LLE's inability to discriminate between classes of samples, this thesis introduces a supervised locally linear embedding method (SLLE). Simulations comparing the original LLE with SLLE show that SLLE has better classification ability. In addition, to address the sensitivity of LLE and SLLE to the choice of neighborhood size when sample points are sparse, this thesis proposes an improved algorithm that changes how the distance between samples is measured, making the result less sensitive to the number of neighbors. Finally, SLLE is applied to face recognition, where the study shows it achieves a better recognition rate than the original LLE.
With the development of information technology, the data people must process have become increasingly complex. Such data are usually high-dimensional, so their inner structure can hardly be grasped by direct visual inspection. Dimensionality reduction is an important technique for dealing with high-dimensional data: it maps the original data from a high-dimensional space into a lower-dimensional one so that the geometric relationships and distance measures among the data are kept as unchanged as possible. The amount of data in subsequent computations is thus reduced, and the main features of the data are obtained.
     Dimensionality reduction methods can be divided into two classes, linear and nonlinear. Linear methods, represented by Principal Component Analysis (PCA) and Multidimensional Scaling (MDS), rest on a substantial mathematical foundation, are simple to implement, and have matured considerably; they cannot, however, reveal the intrinsic structure of the data. Manifold learning, which includes Isometric Mapping (Isomap), Locally Linear Embedding (LLE), Laplacian Eigenmaps (LE), and Local Tangent Space Alignment (LTSA), is a class of nonlinear methods and a current research focus. Compared with traditional linear methods, manifold learning can effectively discover the intrinsic dimensionality of nonlinear high-dimensional data, helping researchers reduce dimensionality and analyze data better.
     This thesis studies image processing with manifold learning algorithms. It conducts simulations of three nonlinear dimensionality reduction techniques (Isomap, LLE, and LE), analyzing and verifying the characteristics and conclusions of each method, and it compares the three algorithms in terms of their algorithmic ideas, computational complexity, and dimensionality reduction performance.
     Having analyzed LLE's inability to classify samples, this thesis introduces Supervised Locally Linear Embedding (SLLE). Simulations show that SLLE has a stronger ability to separate different classes of samples. Both the original LLE and SLLE, however, are sensitive to the number of nearest neighbors; this thesis therefore improves the algorithms by changing the way the distance between two samples is measured, and the results show that the improved algorithms are much less sensitive to the number of nearest neighbors. Finally, SLLE is applied to face recognition, where it achieves a higher recognition rate than the original LLE.
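As a concrete sketch of the linear baseline described above, PCA can be written in a few lines of NumPy: center the data, eigendecompose the covariance matrix, and project onto the leading eigenvectors. The function name and toy data below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def pca(X, d):
    """Project an n x D data matrix X onto its top-d principal components."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (len(X) - 1)          # D x D sample covariance
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:d]      # indices of the top-d eigenvalues
    return Xc @ vecs[:, order]              # n x d linear embedding

# Toy example: 3-D points lying near a 2-D plane, plus a little noise
rng = np.random.default_rng(0)
plane = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 3))
X = plane + 0.01 * rng.normal(size=(100, 3))
Y = pca(X, 2)
print(Y.shape)  # (100, 2)
```

Because the projection is linear, the first embedding coordinate always carries the most variance; this is exactly the property that fails to capture curved (nonlinear) structure, which motivates the manifold methods below.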
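To make the comparison concrete, one of the three algorithms can be sketched end to end. The following NumPy-only Isomap follows its three standard steps (k-nearest-neighbor graph, shortest-path approximation of geodesic distances, classical MDS); the function name, the Floyd-Warshall shortest-path step, and the toy arc data are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

def isomap(X, n_neighbors, d):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise Euclidean
    # Step 1: keep only each point's k nearest neighbors (symmetrized graph)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]
    # Step 2: Floyd-Warshall shortest paths approximate geodesic distances
    for k in range(n):
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    # Step 3: classical MDS on the squared geodesic distance matrix
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:d]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy example: points along a circular arc in 3-D, unrolled to one dimension
t = np.linspace(0, 3, 60)
X = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
Y = isomap(X, n_neighbors=5, d=1)
print(Y.shape)  # (60, 1)
```

The arc has total length 3, and the 1-D embedding recovers it: the two endpoints land roughly 3 apart, even though their straight-line distance in 3-D is only about 2. This geodesic-versus-Euclidean gap is precisely what separates Isomap from PCA and MDS.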
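The supervised step in SLLE can be illustrated by its modified distance matrix. In the common formulation (de Ridder et al.), distances between points of different classes are inflated by a fraction alpha of the largest pairwise distance, so nearest neighborhoods stay within a class; alpha = 0 recovers unsupervised LLE. The sketch below shows only this standard adjustment, not the thesis's own improved metric, and the function name and toy data are assumptions.

```python
import numpy as np

def supervised_distances(X, labels, alpha=0.5):
    """SLLE-style distances: D' = D + alpha * max(D) * Delta,
    where Delta_ij = 1 iff labels i and j differ, else 0."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # Euclidean distances
    Delta = (labels[:, None] != labels[None, :]).astype(float)
    return D + alpha * D.max() * Delta

# Two interleaved classes on a line: supervision separates their neighborhoods
X = np.array([[0.0], [0.1], [0.2], [0.3]])
y = np.array([0, 1, 0, 1])
Dp = supervised_distances(X, y, alpha=0.5)
# The nearest neighbor of point 0 is now point 2 (same class), not point 1
print(np.argsort(Dp[0])[1])  # 2
```

Neighborhoods chosen from the adjusted matrix are class-pure whenever alpha is large enough, which is why SLLE separates classes that plain LLE mixes together.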
