Research on Tensor Decomposition Methods Based on Graph and Low-Rank Representation and Their Applications
Abstract
Most existing high-dimensional image and video data either has a natural tensor structure or can be organized into one. The tensor structure offers good representational power and favorable computational properties. On this basis, and building on previous research results, this thesis studies tensor-based algorithms. The main contributions are as follows:
     (1) An image-content-based initialization method for the support tensor machine classifier is proposed. Traditional initialization of the support tensor machine, and likewise of non-negative matrix factorization for dimensionality reduction, is random, which has two drawbacks. On one hand, in the absence of data one must assume a distribution, such as a Gaussian or uniform distribution, whose parameters are hard to set and can only be fixed by repeated validation on test data. On the other hand, random initialization can hardly capture the characteristics of the images themselves, so it ultimately degrades both the classification results and the effectiveness of the dimensionality reduction.
     To address these two problems of random initialization, this thesis proposes an image-content-based initialization method that uses the image content to initialize both the support tensor machine and the non-negative matrix factorization. First, the input data are organized into tensor form: each image is represented as a third-order feature tensor, and the image collection as a fourth-order tensor. Second, a weighted higher-order singular value decomposition (HOSVD) algorithm is proposed to initialize the support tensor machine; it combines spectral graph theory with manifold learning and initializes the classifier from the image data set itself, avoiding the influence of randomness. Third, for subspace dimensionality reduction, non-negative matrix factorization is applied to the third-order image feature tensors, and a two-dimensional principal component analysis (2DPCA) based method is proposed to initialize the factorization, making full use of the correlated information in the image content. Finally, the input data of the support tensor machine are reduced with the improved non-negative matrix factorization, the classifier is trained in the reduced subspace, and image classification is performed by combining the dimensionality reduction with the improved support tensor machine classifier. Experiments show that the proposed method achieves better classification results than related algorithms.
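The thesis abstract gives no pseudocode, so the following is only a minimal numpy sketch of the data-construction and HOSVD steps described above: a collection of images, each represented as a third-order (height × width × feature) tensor, is stacked into a fourth-order tensor, and truncated HOSVD factors are computed from the mode unfoldings. The weighting scheme and the 2DPCA-based NMF initialization are omitted; all sizes and ranks are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_factors(T, ranks):
    """Truncated HOSVD: leading left singular vectors of each mode unfolding."""
    return [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
            for n, r in enumerate(ranks)]

# Toy image collection: 20 "images" of size 8x8 with 3 feature maps each,
# i.e. one third-order feature tensor per image, stacked into a 4th-order tensor.
rng = np.random.default_rng(0)
images = rng.random((20, 8, 8, 3))   # (samples, height, width, features)

# Mode-wise factor matrices that could seed a tensor classifier's projections.
U = hosvd_factors(images, ranks=(10, 4, 4, 2))
```

The factors `U[1]`, `U[2]`, `U[3]` span data-driven subspaces of the image modes, which is the sense in which an HOSVD-style initialization is "content-based" rather than random.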
     (2) A non-negative tensor decomposition algorithm based on graph and low-rank representation is proposed. In image processing, applying non-negative matrix factorization to an image data set requires straightening each image into a vector; this conversion loses the structural information of the image and destroys its spatial geometry. To avoid these problems, two improved non-negative tensor decomposition algorithms are proposed and used as subspace dimensionality reduction methods in image classification experiments. First, a graph-based non-negative tensor decomposition algorithm is proposed: extending graph-regularized non-negative matrix factorization, it draws on the strengths of spectral graph theory and manifold learning to introduce the structural information of the data set into the tensor decomposition. Second, since constructing a nearest-neighbor graph is too time-consuming and computationally expensive for large data sets, a non-negative tensor decomposition algorithm based on low-rank representation is proposed. As an extension and development of compressed sensing theory, low-rank representation treats the rank of a matrix as a sparsity measure; because the rank reflects the intrinsic properties of a matrix, low-rank representation can effectively analyze and process matrix data. This thesis introduces low-rank representation into the tensor model, that is, into the non-negative tensor decomposition algorithm, extending it further. Experimental results show that both proposed algorithms achieve better classification accuracy than other existing algorithms.
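The graph-regularized idea that the tensor method extends can be illustrated in the matrix case. The sketch below shows standard graph-regularized NMF multiplicative updates (minimizing ||X − UVᵀ||² + λ·Tr(VᵀLV) with Laplacian L = D − W); it is not the thesis's tensor algorithm, and the dense similarity matrix stands in for the k-nearest-neighbor graph whose construction cost motivates the low-rank-representation variant.

```python
import numpy as np

def gnmf(X, W, rank, lam=0.1, iters=200, eps=1e-9):
    """Graph-regularized NMF via multiplicative updates:
    min ||X - U V^T||_F^2 + lam * Tr(V^T L V),  L = D - W (graph Laplacian)."""
    rng = np.random.default_rng(1)
    m, n = X.shape
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    D = np.diag(W.sum(axis=1))
    for _ in range(iters):
        # Both updates keep U, V elementwise non-negative.
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# Toy data: 30 non-negative samples in 10 dimensions; a Gaussian similarity
# matrix plays the role of the neighborhood graph for illustration.
rng = np.random.default_rng(2)
X = rng.random((10, 30))
d2 = np.square(np.linalg.norm(X.T[:, None] - X.T[None], axis=-1))
W = np.exp(-d2)
np.fill_diagonal(W, 0.0)

U, V = gnmf(X, W, rank=4)
```

The λ·Tr(VᵀLV) term pulls the codes of similar samples together, which is how the graph carries the data set's structural information into the factorization.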
     (3) A multistage non-negative low-rank and sparse matrix decomposition algorithm based on higher-order singular value decomposition is proposed. First, the computation of low-rank and sparse matrix decomposition is introduced in detail. Then the tensor representation of video image sequence data and its necessity are explained, together with the way HOSVD is combined with low-rank and sparse matrix decomposition; the importance of how the video frames are arranged, and the effect of HOSVD on that ordering, are pointed out. On this basis, a multistage non-negative low-rank and sparse matrix decomposition algorithm under HOSVD is proposed. To ensure that the characteristics of the video image sequence data are not weakened, and to obtain a purely additive description of the original video data, the method introduces non-negativity constraints and decomposes the data stage by stage into temporal and spatial information. Because the decomposition proceeds stage by stage, the non-negativity constraints are particularly important. The second and higher stages decompose only the low-rank matrix; in each decomposition the sparse matrix corresponds to temporal information (motion) and the low-rank matrix to spatial information (background). Experiments on two video image sequences show that the proposed method is effective at extracting both foreground and background information.
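The single-stage building block, splitting a frame matrix into a low-rank background plus a sparse foreground, can be sketched with the classic robust-PCA alternation (singular-value thresholding plus soft shrinkage under an augmented Lagrangian). This is a generic sketch with illustrative parameters, not the thesis's multistage non-negative HOSVD variant, which would repeat such a split on the low-rank part at each stage.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink M's singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Elementwise soft thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_sparse(D, iters=100):
    """D ~ L + S: L is the low-rank (background) part, S the sparse
    (moving foreground) part, via a fixed-penalty augmented Lagrangian."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-9)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)       # dual update enforcing D = L + S
    return L, S

# Toy "video": 50 frames, each vectorized into a column of 64 pixels,
# built from a rank-1 static background plus sparse bright spots.
rng = np.random.default_rng(3)
bg = np.outer(rng.random(64), np.ones(50))      # static background
fg = (rng.random((64, 50)) > 0.95) * 1.0        # sparse moving foreground
D = bg + fg
L, S = low_rank_sparse(D)
```

Stacking frames as columns is one concrete instance of the frame-arrangement choice the abstract emphasizes; a different ordering changes which structure the low-rank term can capture.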