Research on Painting Image Classification Based on Artistic Style
Abstract
Painting is an important form of cultural and artistic expression in the development of human civilization, and a vast number of paintings have been produced over the centuries. Studying these works has long been an important means of understanding human history, culture, art, and the development of science and technology, and thereby of further advancing human civilization. With the development and wide application of digital technology, more and more paintings have been digitized, making large-scale analysis of painting art possible. However, while these massive digitized painting images provide researchers with rich material, they also raise a new problem: how to use such resources effectively. In particular, how to use computers to classify massive painting images, so as to facilitate further research, is an important problem that must be solved first.
     Unlike ordinary natural images, paintings are human artifacts with distinctive attributes such as artistic style. Traditional image classification methods do not account for these stylistic characteristics and are therefore not suitable for direct application to painting images. This thesis studies how to exploit the artistic-style characteristics peculiar to paintings to classify large-scale digitized painting images.
     The main contributions of this thesis are:
     1) A method for constructing artistic-style descriptors based on painting techniques. Building on an analysis of a large body of art literature, two classes of painting-image samples are studied: Chinese and Western paintings, and Dunhuang murals from different dynasties. From the perspective of painting technique, three features are proposed to describe the artistic style of Chinese and Western paintings, and three broad categories of attributes comprising sixteen features are proposed for Dunhuang murals of different dynasties; painting image classification is then implemented on these descriptors. Experimental results verify the feasibility of the method.
     2) A method for classifying paintings of different artistic styles based on artistic-style similarity rules. Drawing on the cognitive mechanisms of the human brain and the principle of similarity, the attributes of artistic style are analyzed and a set of style-similarity rules is established. Following these rules, style features that are widely accepted in the art domain are quantified, a self-similarity descriptor of each image's style is computed, and similarity coefficients between each image and all other sample images are computed to form a similarity matrix; finally, the AdaBoost algorithm is used to predict the class of unknown paintings. Experimental results demonstrate the effectiveness of classifying painting images by artistic-style similarity rules.
     3) A saliency-based method for classifying paintings of different artistic styles. Motivated by the selective visual attention that human vision applies to information from the perceived world, a visual saliency detection method is first proposed that combines the strengths of a color-enhancement algorithm and a global region-contrast method to obtain more accurate saliency maps, and this detector is evaluated on public databases. Classification based on a probabilistic model is then performed on the saliency maps. Experiments on a database of Dunhuang murals of different artistic styles, as well as on the general-purpose Caltech101 and Caltech256 databases, show that the saliency-based method is effective for artistic-style painting classification and also extends to generic image classification with good performance, indicating that the method is robust.
