Research on Automatic Semantic Classification of Chinese Painting Images Based on Visual Perception
Abstract
In recent years, with the rapid digitization of Chinese painting images, the need to build and manage digital museums and digital libraries of Chinese paintings has become increasingly pressing, and image-processing techniques for Chinese paintings have become a key open problem. Research on low-level feature extraction, data compression, automatic semantic annotation, retrieval, and automatic classification of Chinese painting images is therefore expanding rapidly. Two difficulties stand out. First, because of the "semantic gap", low-level global visual features alone rarely yield good classification performance on Chinese paintings. Second, unlike natural scene images, Chinese paintings aim to "convey the spirit through the form": their semantic content is more abstract and richer, so the low-level visual descriptors commonly used for natural images are of limited use on them. This thesis studies perception-oriented semantic classification algorithms for Chinese painting images. The main contributions, their underlying principles, and the experimental results are as follows.
     1. For extracting semantically salient regions from Chinese painting images, a salient-region extraction algorithm based on low-rank matrix decomposition is proposed. The idea is to partition the semantic content of an image into a salient part and a non-salient part. Because the non-salient part (the background and distractors) is highly redundant, it can in theory be modeled as a low-rank structure, whereas a salient target differs strongly from its surroundings in one or more features and can therefore be modeled as a sparse component. The salient regions extracted in this way also provide an effective representation for subsequent semantic annotation. The saliency maps produced by the proposed algorithm are compared with those of seven other algorithms on the MIT and Bruce eye-tracking datasets and on the MSRA dataset: the algorithm performs especially well on low-entropy images, clearly outperforms the other methods, and agrees more closely with the human visual attention process.
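The low-rank-plus-sparse split described above matches the standard robust PCA formulation min ||L||_* + λ||S||_1 subject to F = L + S, where the columns of F are patch feature vectors. The Python sketch below is a plausible reconstruction using the common inexact augmented Lagrange multiplier solver, not the thesis's exact procedure; the feature matrix and parameter choices are illustrative assumptions.

```python
import numpy as np

def rpca_ialm(F, lam=None, tol=1e-7, max_iter=500):
    """Robust PCA via the inexact augmented Lagrange multiplier method:
    split F into a low-rank part L (redundant background) and a sparse
    part S (salient regions), i.e. F ~= L + S."""
    m, n = F.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # common default weight
    norm_two = np.linalg.norm(F, 2)           # spectral norm
    Y = F / max(norm_two, np.abs(F).max() / lam)   # dual variable init
    mu, rho = 1.25 / norm_two, 1.5
    L = np.zeros_like(F)
    S = np.zeros_like(F)
    f_norm = np.linalg.norm(F, 'fro')
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(F - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: element-wise soft thresholding
        T = F - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Dual ascent and penalty growth
        R = F - L - S
        Y = Y + mu * R
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(R, 'fro') / f_norm < tol:
            break
    return L, S

# Example with hypothetical data: columns of F are patch feature vectors;
# each patch's saliency score is the magnitude of its sparse component.
# F = np.random.rand(64, 400)          # 400 patches, 64-D features
# L, S = rpca_ialm(F)
# saliency = np.linalg.norm(S, axis=0)
```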
     2. For semantic-category classification of Chinese painting images, a classification algorithm based on a semantic bag-of-visual-words model is proposed. The algorithm works as follows. First, reflecting the characteristics of Chinese paintings, the input image is divided into regular sub-regions on a simple spatial grid; Scale-Invariant Feature Transform (SIFT) descriptors are extracted from every color channel of each sub-region, and the per-channel SIFT descriptors are linearly fused into a Color-SIFT descriptor that captures the color and shape of the sub-region. Second, the bag-of-words (BoW) mid-level representation used for natural scene images is introduced into the semantic representation of Chinese paintings. For the bottom-up pathway, a simple and effective computational model of visual attention analyzes the saliency of the image; for the top-down pathway, a supervised strategy uses the semantic category labels to weight each visual word by how often it occurs in each category, yielding category-dependent visual-word frequency histograms. Finally, a support vector machine classifier is trained on the constructed Chinese painting database. Experiments show that the algorithm reaches an overall accuracy of 74.4% on a three-class semantic classification task.
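A compact view of how the category-dependent weighting might be assembled is sketched below. It assumes grayscale dense SIFT (the thesis's Color-SIFT fusion over color channels is omitted), a k-means vocabulary, and OpenCV/scikit-learn APIs, so it illustrates the pipeline rather than reproducing the thesis's exact implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def dense_sift(img_gray, step=16):
    """SIFT descriptors on a regular spatial grid (one channel only; the thesis
    fuses per-channel SIFT into a Color-SIFT descriptor, omitted here)."""
    sift = cv2.SIFT_create()
    h, w = img_gray.shape
    grid = [cv2.KeyPoint(float(x), float(y), float(step))
            for y in range(step // 2, h, step)
            for x in range(step // 2, w, step)]
    _, desc = sift.compute(img_gray, grid)
    return desc                                   # shape (n_patches, 128)

def bow_histogram(desc, vocab):
    """Normalized visual-word frequency histogram for one image."""
    words = vocab.predict(desc.astype(np.float64))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def class_word_weights(train_hists, train_labels, n_classes):
    """Top-down semantic weighting: relative frequency of each visual word
    within each semantic category, estimated on the training set."""
    H, y = np.asarray(train_hists), np.asarray(train_labels)
    freq = np.stack([H[y == c].mean(axis=0) for c in range(n_classes)])
    return freq / (freq.sum(axis=0, keepdims=True) + 1e-12)   # (C, K)

def weighted_feature(hist, weights):
    """Concatenate the category-weighted histograms into one feature vector."""
    return (weights * hist[None, :]).ravel()

# Typical use with hypothetical data: build the vocabulary from all training
# descriptors, weight the histograms, then train an SVM classifier.
# vocab = KMeans(n_clusters=200, n_init=4, random_state=0).fit(np.vstack(all_train_desc))
# W = class_word_weights(train_hists, train_labels, n_classes=3)
# clf = SVC(kernel='rbf').fit([weighted_feature(h, W) for h in train_hists], train_labels)
```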
     3. For classification based on structural information, a Chinese painting classification algorithm using multi-task joint sparse representation is proposed. Chinese paintings carry rich structural information, which motivates a structural analysis algorithm. The principle is as follows: first decompose a painting into four parts, namely the main painted subject, the inscription, the blank space (liubai), and the seals; then extract a set of color and texture features tailored to the visual and compositional characteristics of each part; finally, fuse the four parts' features with a multi-task joint sparse representation model and classify the painting. Experiments on a large collection of Chinese paintings show that the structural analysis effectively decomposes the images, and that the multi-task joint sparse representation strategy outperforms classification based on global features.
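The fusion step can be illustrated with a simplified sketch: each of the four structural parts codes the test painting sparsely over a dictionary built from that part's training features, and the per-class reconstruction residuals are summed across parts. This independent-coding, residual-fusion variant only approximates the joint l1/l2-coupled model described in the thesis; the dictionaries, features, and Lasso solver are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(D, labels, x, alpha=0.01):
    """Sparse-representation coding for ONE part (modality): represent the test
    feature x over the dictionary D whose columns are training features, then
    measure the reconstruction residual using only each class's atoms.
    Columns of D are assumed to be l2-normalized."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)                      # solves min ||x - D w||^2 + alpha*||w||_1
    w = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        w_c = np.where(labels == c, w, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ w_c)
    return residuals

def classify_painting(part_dicts, labels, part_feats):
    """Fuse the four structural parts (main body, inscription, liubai, seals):
    sum each class's residual over all parts and pick the smallest total.
    (The thesis couples the parts through a joint l1/l2 penalty; coding each
    part independently is a simplification.)"""
    classes = np.unique(labels)
    total = {c: 0.0 for c in classes}
    for D, x in zip(part_dicts, part_feats):
        r = src_residuals(D, labels, x)
        for c in classes:
            total[c] += r[c]
    return min(total, key=total.get)

# Hypothetical use: part_dicts[i] is (feature_dim_i, n_train) for part i,
# labels is (n_train,), part_feats[i] is the test painting's feature for part i.
# predicted_class = classify_painting(part_dicts, labels, part_feats)
```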