Research on Quality Perception Models and Assessment Methods for Visual Information
Abstract
The digitization of visual information has reached every corner of the world, and people will keep pursuing higher definition and higher fidelity. However, distortions may be introduced during the acquisition, compression, processing, transmission, and restoration of visual information; they hinder its processing, analysis, and interpretation, and limit how accurately people can perceive the objective world. Reasonable and reliable assessment methods are therefore needed to measure the perceived quality of visual information precisely, so as to guide the optimization and improvement of visual information processing systems and deliver the best visual quality at minimum cost.
     This dissertation carries out a systematic, in-depth study of the fundamental problem of quality perception models and assessment methods for natural-scene images. Starting from the basic characteristics of the human visual system, it explores the perceptual properties of visual information, analyzes the statistical regularities of natural scenes, models the hierarchical semantics of quality, and constructs objective quality assessment methods that measure the fidelity and intelligibility of visual information, providing a sound basis for the design and optimization of visual information processing systems. The main contributions are summarized as follows:
     (1) Considering that the human visual system perceives different morphological components differently, and combining this with the just noticeable difference (JND) model, a full-reference image quality assessment method based on morphological component analysis is proposed. First, the reference and distorted images are each sparsely decomposed into texture and cartoon components by morphological component analysis; then the JND perceptual characteristics of the two components are modeled and computed; finally, the perceptual differences between the reference and distorted images are calculated, and the errors of the two components are fused into the quality of the distorted image. Experimental results show that the method evaluates images with various distortion types effectively and outperforms classical image quality assessment algorithms.
     (2) A full-reference image quality assessment method based on the S-CIELAB color model is proposed. Natural-scene images are rich in color and carry a large amount of information, and the loss of color information caused by distortion strongly affects perceived image quality. Accordingly, based on the color-vision characteristics of human perception, a full-reference method built on the perceptually motivated S-CIELAB color model is proposed. The reference and test images are first transformed into the S-CIELAB color space; the structural similarity between the test and reference images is then computed in each of the three channels; finally, the three channels are fused to obtain the overall image quality. Experimental results show good consistency between the objective and subjective scores.
     (3) A reduced-reference image quality assessment method based on color fractal-structure features is proposed. Color information plays a crucial role in quality perception, but in many situations it cannot be fully obtained. A color fractal-structure model is therefore used to extract color and structure information from the reference and test images, modeling the locality and self-similarity of color in natural images; the feature differences between the reference and distorted images are then compared; finally, support vector regression maps the feature differences into image quality. The method effectively reduces the dependence on reference-image information and thus applies more broadly, and experiments show that it outperforms existing reduced-reference quality assessment methods.
     (4) A no-reference image quality assessment method based on sparse representation of natural scene statistics features is proposed. Most existing no-reference methods can only assess images with one or a few specific distortion types. The proposed method evaluates images with different distortions effectively and is general-purpose. The test image is first decomposed with wavelets to extract natural scene statistics features; the features are then encoded by sparse representation; finally, the differential mean opinion scores are weighted and summed by the sparse coding coefficients to obtain the perceived quality. Experimental results show high consistency with subjective quality and better performance than existing no-reference methods and classical full-reference metrics.
     (5) A truly no-reference image quality assessment method based on visual-quality topics is proposed. Because visual saliency occupies a very important position in quality assessment, a visual-saliency-weighted hierarchical Dirichlet process mixture model is proposed, in which saliency serves as a prior that governs the distribution of latent visual topics. A no-reference method is built on this model: quality-aware features are first extracted from a training set to build a visual vocabulary; the quality-related topic distributions are then computed with the proposed model; finally, the image quality is obtained from the difference between the topic distributions of the distorted image and those of the original images. Experiments on existing public databases confirm the method's advantages in consistency and robustness.
     From full-reference to no-reference, the above studies depend progressively less on reference information and become progressively more practical, advancing from elementary to deep theory in the study of quality perception models and assessment methods for visual information. The results open up new directions for objective quality assessment of visual information and have important theoretical and practical value.
Visual information digitization has spread to every corner of the world, and people will continue to pursue high definition and high fidelity of visual information. However, the processes of acquisition, compression, processing, transmission, and restoration may introduce various distortions into visual information. These distortions pose a great obstacle to the processing, analysis, and interpretation of visual information, preventing human beings from correctly understanding the objective world. Therefore, it is necessary to design reasonable and reliable methods to measure the perceived quality of visual information. Such methods guide the optimization and improvement of visual information processing systems, so that a system can provide the best visual quality at minimum cost.
     In this dissertation, a systematic study of the fundamental issues of quality perception models and assessment metrics for visual information is carried out. Based upon the fundamental characteristics of the human visual system, this study explores the perceptual characteristics of visual information, analyzes the statistical regularities of natural scenes, models the hierarchical semantics of quality, and finally constructs several objective quality assessment methods. These methods measure the fidelity and intelligibility of visual information and provide a reasonable basis for the design and optimization of visual information processing systems. The major contributions are outlined as follows:
     (1) Considering that the human visual system (HVS) has different perceptual characteristics for different morphological components, a novel image quality metric is proposed by incorporating morphological component analysis (MCA) with HVS properties; it is capable of assessing images with different kinds of distortion. First, the reference and distorted images are each decomposed by MCA into linearly combined texture and cartoon components. Then these components are turned into perceptual features by a just noticeable difference (JND) model that integrates masking effects, luminance adaptation, and the contrast sensitivity function (CSF). Finally, the difference between the features of the reference image and those of the distorted image is quantified using a pooling strategy to obtain the final image quality. Experimental results demonstrate that the proposed method prevails over several existing methods.
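The JND-weighted pooling described above can be sketched as follows. This is a minimal, hypothetical illustration: tiny 1-D lists stand in for the texture and cartoon component images, a single per-pixel threshold stands in for the full JND model (which in the dissertation integrates masking, luminance adaptation, and the CSF), and the function names and fusion weights are assumptions rather than the dissertation's actual implementation.

```python
# Hypothetical sketch of JND-thresholded error pooling over MCA components.

def jnd_pooled_error(ref, dist, jnd):
    """Average error, counting only differences that exceed the JND threshold."""
    visible = [max(abs(r - d) - t, 0.0) for r, d, t in zip(ref, dist, jnd)]
    return sum(visible) / len(visible)

def mca_quality(ref_parts, dist_parts, jnd_parts, weights=(0.5, 0.5)):
    """Fuse texture and cartoon errors into a single quality score in (0, 1]."""
    err = sum(w * jnd_pooled_error(r, d, j)
              for w, (r, d, j) in zip(weights,
                                      zip(ref_parts, dist_parts, jnd_parts)))
    return 1.0 / (1.0 + err)   # higher = better quality

# Identical reference and "distorted" components -> maximal quality 1.0.
texture_ref = [0.2, 0.5, 0.1]
cartoon_ref = [0.8, 0.7, 0.9]
jnd_t = [0.05] * 3
jnd_c = [0.02] * 3
q_same = mca_quality((texture_ref, cartoon_ref),
                     (texture_ref, cartoon_ref),
                     (jnd_t, jnd_c))
```

Any sub-threshold difference is treated as invisible, which is the core intuition of JND-based pooling.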
     (2) Image quality assessment based on the S-CIELAB model. Most natural-scene images captured by digital devices are color images, which carry a huge amount of information, and the loss of color information caused by distortion has a great impact on perceived image quality. However, most existing IQA metrics are designed only for gray-scale images. Hence, the S-CIELAB color model, which excels at mimicking the perceptual processing of human color vision, is combined with a geometric distortion measurement to assess image quality. First, the reference and distorted images are transformed into the S-CIELAB perceptual color space, and the transformed images are evaluated by an existing metric in each of the three perceptual color channels. The fidelity factors of the three channels are then weighted to obtain the image quality. Experimental results show that the proposed method is in good consistency with subjective quality scores.
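The per-channel evaluation and weighted fusion can be sketched as below. A toy MSE-based fidelity replaces whatever metric is actually applied in each S-CIELAB channel, the S-CIELAB transform itself is not reproduced, and the channel weights are illustrative assumptions.

```python
# Hypothetical sketch of per-channel fidelity and weighted fusion.

def channel_fidelity(ref, dist):
    """Toy fidelity in one channel: 1 / (1 + mean squared error)."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    return 1.0 / (1.0 + mse)

def scielab_quality(ref_channels, dist_channels, weights=(0.6, 0.2, 0.2)):
    """Weight the three opponent-channel fidelity scores into one quality value."""
    scores = [channel_fidelity(r, d)
              for r, d in zip(ref_channels, dist_channels)]
    return sum(w * s for w, s in zip(weights, scores))
```

Giving the luminance-like channel a larger weight reflects the common assumption that achromatic errors are more visible, but the exact weights would have to be fit to subjective data.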
     (3) A color fractal-structure model for reduced-reference color image quality assessment. Most existing methods fail to take color information into consideration, even though color distortion matters increasingly as color images proliferate. To solve this problem, a novel IQA method focusing on color distortion is proposed. In particular, color features are extracted based on a color fractal-structure model; the color and structure features are then mapped into visual quality using support vector regression. Experimental results demonstrate that the proposed method is highly consistent with human perception, especially on images with color distortion.
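As one hedged illustration of the fractal-structure idea, the classic box-counting estimate of fractal dimension can be computed for a set of occupied pixel coordinates. The dissertation's color fractal model is richer than this single-channel toy, and all names below are hypothetical.

```python
# Hypothetical sketch: box-counting fractal dimension of a binary structure map.
import math

def box_count(points, box):
    """Number of boxes of side `box` occupied by at least one point."""
    return len({(x // box, y // box) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4)):
    """Least-squares slope of log N(box) versus log(1/box)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

grid = [(x, y) for x in range(8) for y in range(8)]  # a filled 8x8 region
dim = fractal_dimension(grid)  # a filled planar region has dimension ~2
```

In a reduced-reference setting, only a few such scalars per image would be transmitted alongside the test image, which is what cuts the dependence on the full reference.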
     (4) Sparse representation for no-reference image quality assessment. Since most existing no-reference image quality assessment (IQA) methods are designed for images with one or a few specific distortions, a universal no-reference IQA metric is introduced. It is a simple yet effective algorithm based upon the sparse representation of natural scene statistics (NSS) features, and it can predict the quality of images with different distortions. The algorithm consists of three key steps: extracting NSS features in the wavelet domain, representing the features via sparse coding, and weighting differential mean opinion scores by the sparse coding coefficients to obtain the final visual quality value. Thorough experiments show that the proposed algorithm is consistent with subjective perception and outperforms representative blind IQA algorithms as well as some full-reference metrics.
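The third step, weighting differential mean opinion scores (DMOS) by sparse-coding coefficients, reduces to a normalized weighted sum. A minimal sketch with hypothetical names, assuming the sparse code of the test image's NSS features over the training dictionary has already been computed:

```python
# Hypothetical sketch: predict quality as a sparse-code-weighted sum of the
# DMOS values attached to the training images behind each dictionary atom.

def quality_from_sparse_code(coeffs, dmos):
    """Normalized weighted sum of training DMOS, weighted by |coefficient|."""
    w = [abs(c) for c in coeffs]
    total = sum(w)
    if total == 0.0:          # degenerate all-zero code: no evidence
        return 0.0
    return sum(wi * d for wi, d in zip(w, dmos)) / total
```

The intuition is that a test image resembling (in NSS feature space) training images with low DMOS should itself receive a low predicted DMOS.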
     (5) A "completely" no-reference image quality assessment method based on visual-quality-related topics. Because visually salient areas are critical in subjective quality assessment, a visual-saliency-weighted hierarchical Dirichlet process model (visual-saliency wHDP) is proposed, introducing visual saliency as a prior over observations to dominate the construction of latent topics. Based on this model, a no-reference IQA metric is proposed that includes three parts: constructing a visual vocabulary by extracting quality-aware features from a training set; obtaining the distributions of visual-quality-related topics by training the visual-saliency wHDP; and estimating the quality of a test image by computing the difference between the visual-quality-related topics found in the test image and those found in the original images of the training set. Thorough experiments show that the proposed visual-saliency wHDP robustly produces quality-related topics and achieves promising performance.
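The final step, comparing the test image's topic distribution with that of the pristine images, can be sketched with a symmetric KL divergence mapped to a quality score. The divergence choice and all names here are illustrative assumptions, not necessarily the dissertation's exact measure.

```python
# Hypothetical sketch: quality from the divergence between topic distributions.
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (eps avoids log 0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def topic_quality(test_topics, pristine_topics):
    """Smaller symmetric divergence from pristine topics -> higher quality."""
    d = 0.5 * (kl(test_topics, pristine_topics)
               + kl(pristine_topics, test_topics))
    return 1.0 / (1.0 + d)
```

An undistorted image whose topic mixture matches the pristine one scores 1.0; the score decays as distortion shifts probability mass onto quality-degradation topics.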
     This research progresses from full-reference to no-reference IQA, with decreasing dependence on reference information and increasing practical applicability, advancing from elementary to profound theory. The results presented in this dissertation open up a new way for visual information quality assessment and carry important theoretical significance and practical value.
    [27] G. J. Burton and I. R. Moorhead,“Color and spatial structure in natural scenes,” Appl. Opt.,26(1):157-170,1987.
    [28] C. A. Párraga, G. Brelstaff, T. Troscianko, and I. R. Moorehead,“Color and luminanceinformation in natural scenes,” J. Opt. Soc. Am. A,15(3):563-569,1998.
    [29] A. Turiel, N. Parga, D. L. Ruderman, and T. W. Cronin,“Multiscaling and information contentof natural color images,” Phys Rev E,62(1):1138-1148,2000.
    [30] C.-C. Chang and C.-J. Lin,“LIBSVM--A Library for Support Vector Machines”,http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
    [1] Z. Wang and A. C. Bovik, Modern Image Quality Assessment. New York: Morgan and Claypool, 2006.
    [2] A. M. Rohaly, J. Libert, P. Corriveau, and A. Webster (eds.), “Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment,” Phase I, ftp://ftp.crc.ca/crc/vqeg, 2000.
    [3] VQEG, “Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment,” Phase II, http://www.vqeg.org/, 2003.
    [4] VQEG, “Validation of reduced-reference and no-reference objective models for standard definition television (VQEG Final Report of RRNR-TV Phase I Validation Test),” Phase I, http://www.vqeg.org/, 2009.
    [5] Z. Wang, A. C. Bovik, and B. L. Evans, “Blind measurement of blocking artifacts in images,” in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 981-984, 2000.
    [6] C.-M. Liu, J.-Y. Lin, and C.-N. Wang, “Objective image quality measure for block-based DCT coding,” IEEE Trans. Consumer Electronics, 43(3):511-516, 1997.
    [7] L. Meesters and J.-B. Martens, “A single-ended blockiness measure for JPEG-coded images,” Signal Processing, 82(3):369-387, 2002.
    [8] K. T. Tan and M. Ghanbari, “Frequency domain measurement of blockiness in MPEG-2 coded video,” in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 977-980, 2000.
    [9] A. C. Bovik and S. Liu, “DCT-domain blind measurement of blocking artifacts in DCT-coded images,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 3, pp. 1725-1728, 2001.
    [10] H. R. Sheikh, A. C. Bovik, and L. Cormack, “No-reference quality assessment using natural scene statistics: JPEG2000,” IEEE Trans. Image Processing, 14(11):1918-1927, 2005.
    [11] A. K. Moorthy and A. C. Bovik, “A two-step framework for constructing blind image quality indices,” IEEE Signal Processing Letters, 17(5):513-516, 2010.
    [12] M. A. Saad, A. C. Bovik, and C. Charrier, “A DCT statistics-based blind image quality index,” IEEE Signal Processing Letters, 17(6):583-586, 2010.
    [13] A. K. Moorthy and A. C. Bovik, “Blind image quality assessment: from natural scene statistics to perceptual quality,” IEEE Trans. Image Processing, 20(12):3350-3364, 2011.
    [14] M. A. Saad, A. C. Bovik, and C. Charrier, “Blind image quality assessment: a natural scene statistics approach in the DCT domain,” IEEE Trans. Image Processing, 21(8):3339-3352, 2012.
    [15] F. Gao, X. Gao, W. Lu, D. Tao, and X. Li, “An image quality assessment metric with no reference using hidden Markov tree model,” in Visual Communications and Image Processing, Proc. SPIE vol. 7744, pp. 774410-1-7, 2010.
    [16] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing, 13(4):600-612, 2004.
    [17] H. R. Sheikh, A. C. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Processing, 14(12):2117-2128, 2005.
    [18] H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Processing, 15(2):430-444, 2006.
    [19] Z. Wang, G. Wu, H. R. Sheikh, E. P. Simoncelli, E.-H. Yang, and A. C. Bovik, “Quality-aware images,” IEEE Trans. Image Processing, 15(6):1680-1689, 2006.
    [20] Y. Weiss and W. T. Freeman, “What makes a good model of natural images?” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1-8, 2007.
    [21] J. K. Romberg, H. Choi, and R. G. Baraniuk, “Bayesian tree-structured image modeling using wavelet-domain hidden Markov models,” IEEE Trans. Image Processing, 10(7):1056-1068, 2001.
    [22] A. Srivastava, A. B. Lee, E. P. Simoncelli, and S.-C. Zhu, “On advances in statistical modeling of natural images,” Journal of Mathematical Imaging and Vision, 18:17-33, 2003.
    [23] R. W. Buccigrossi and E. P. Simoncelli, “Image compression via joint statistical characterization in the wavelet domain,” IEEE Trans. Image Processing, 8(12):1688-1701, 1999.
    [24] A. van der Schaaf and J. van Hateren, “Modeling the power spectra of natural images: statistics and information,” Vision Research, 36(17):2759-2770, 1996.
    [25] B. A. Olshausen and D. J. Field, “Natural image statistics and efficient coding,” Network, 7:333-339, 1996.
    [26] E. P. Simoncelli and B. A. Olshausen, “Natural image statistics and neural representation,” Annual Review of Neuroscience, 24:1193-1216, 2001.
    [27] B. Olshausen and D. Field, “Sparse coding with an overcomplete basis set: a strategy employed by V1?” Vision Research, 37:3311-3325, 1997.
    [28] T. Serre, “Learning a dictionary of shape-components in visual cortex: comparison with neurons, humans and machines,” PhD dissertation, MIT, 2006.
    [29] J. Yang, J. Wright, Y. Ma, and T. Huang, “Image super-resolution as sparse representation of raw image patches,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1-8, 2008.
    [30] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Trans. Pattern Analysis and Machine Intelligence, 31(2):1-18, 2009.
    [31] D. L. Donoho, “For most large underdetermined systems of linear equations, the minimal l1-norm solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, 59(6):797-829, 2006.
    [32] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
    [33] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale l1-regularized least squares,” IEEE Journal of Selected Topics in Signal Processing, 1(4):606-617, 2007.
    [34] S. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Analysis and Machine Intelligence, 11(7):674-693, 1989.
    [35] H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, LIVE Image Quality Assessment Database, 2003. [Online]. Available: http://live.ece.utexas.edu/research/quality.
    [36] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti, “TID2008 - A database for evaluation of full-reference visual quality assessment metrics,” Advances of Modern Radioelectronics, 10:30-45, 2009.
    [37] E. C. Larson and D. M. Chandler, “Categorical Image Quality (CSIQ) Database,” [Online]. Available: http://vision.okstate.edu/csiq.
    [38] P. Le Callet and F. Autrusseau, “Subjective quality assessment - IVC database,” [Online]. Available: http://www.irccyn.ec-nantes.fr/ivcdb/.
    [39] Y. Horita, K. Shibata, Y. Kawayoke, and Z. M. P. Sazzad, “MICT Image Quality Evaluation Database,” [Online]. Available: http://mict.eng.u-toyama.ac.jp/mictdb.html.
    [40] Recommendation ITU-R BT.500-12, “Methodology for the subjective assessment of the quality of television pictures,” ITU-R, Geneva, 1974-2009.
    [1] Z. Wang and A. C. Bovik, Modern Image Quality Assessment. New York: Morgan and Claypool, 2006.
    [2] Z. Wang and A. C. Bovik, “Reduced- and no-reference image quality assessment,” IEEE Signal Processing Magazine, 28(6):29-40, 2011.
    [3] H. R. Sheikh, A. C. Bovik, and L. Cormack, “No-reference quality assessment using natural scene statistics: JPEG2000,” IEEE Trans. Image Processing, 14(11):1918-1927, 2005.
    [4] Z. Wang, A. C. Bovik, and B. L. Evans, “Blind measurement of blocking artifacts in images,” in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 981-984, 2000.
    [5] L. Li and Z.-S. Wang, “Compression quality prediction model for JPEG2000,” IEEE Trans. Image Processing, 19(2):384-398, 2010.
    [6] A. K. Moorthy and A. C. Bovik, “A two-step framework for constructing blind image quality indices,” IEEE Signal Processing Letters, 17(5):513-516, 2010.
    [7] A. K. Moorthy and A. C. Bovik, “Blind image quality assessment: from natural scene statistics to perceptual quality,” IEEE Trans. Image Processing, 20(12):3350-3364, 2011.
    [8] H. Tang, N. Joshi, and A. Kapoor, “Learning a blind measure of perceptual image quality,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 305-312, 2011.
    [9] M. A. Saad, A. C. Bovik, and C. Charrier, “A DCT statistics-based blind image quality index,” IEEE Signal Processing Letters, 17(6):583-586, 2010.
    [10] M. A. Saad, A. C. Bovik, and C. Charrier, “Blind image quality assessment: a natural scene statistics approach in the DCT domain,” IEEE Trans. Image Processing, 21(8):3339-3352, 2012.
    [11] P. Ye, J. Kumar, L. Kang, and D. Doermann, “Unsupervised feature learning framework for no-reference image quality assessment,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1098-1105, 2012.
    [12] P. Ye and D. Doermann, “No-reference image quality assessment using visual codebooks,” IEEE Trans. Image Processing, 21(7), 2012.
    [13] L. He, D. Tao, X. Li, and X. Gao, “Sparse representation for blind image quality assessment,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1146-1153, 2012.
    [14] M. Narwaria and W. Lin, “Objective image quality assessment based on support vector regression,” IEEE Trans. Neural Networks, 21(3):515-519, 2010.
    [15] C. Li, A. C. Bovik, and X. Wu, “Blind image quality assessment using a general regression neural network,” IEEE Trans. Neural Networks, 22(5):793-799, 2011.
    [16] A. Mittal, G. S. Muralidhar, J. Ghosh, and A. C. Bovik, “Blind image quality assessment without human training using latent quality factors,” IEEE Signal Processing Letters, 19(2):75-78, 2012.
    [17] A. Mittal, A. K. Moorthy, and A. C. Bovik, “Blind/referenceless image spatial quality evaluator,” in Proc. Asilomar Conf. Signals, Systems and Computers, pp. 723-727, 2011.
    [18] U. Engelke, H. Kaprykowsky, H.-J. Zepernick, and P. Ndjiki-Nya, “Visual attention in quality assessment,” IEEE Signal Processing Magazine, 28(6):50-59, 2011.
    [19] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, “Hierarchical Dirichlet processes,” Journal of the American Statistical Association, 101:1566-1581, 2006.
    [20] Y. W. Teh and M. I. Jordan, “Hierarchical Bayesian nonparametric models with applications,” in Bayesian Nonparametrics, Cambridge University Press, 2010.
    [21] F. Perazzi, P. Krahenbuhl, Y. Pritch, and A. Hornung, “Saliency filters: contrast based filtering for salient region detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 733-740, 2012.
    [22] L. Itti and C. Koch, “Computational modelling of visual attention,” Nature Reviews Neuroscience, 2(3):194-203, 2001.
    [23] A. Srivastava, A. Lee, E. Simoncelli, and S. Zhu, “On advances in statistical modeling of natural images,” Journal of Mathematical Imaging and Vision, 18(1):17-33, 2003.
    [24] M. J. Wainwright and E. P. Simoncelli, “Scale mixtures of Gaussians and the statistics of natural images,” in Advances in Neural Information Processing Systems (NIPS), 12:855-861, 2000.
    [25] R. W. Buccigrossi and E. P. Simoncelli, “Image compression via joint statistical characterization in the wavelet domain,” IEEE Trans. Image Processing, 8(12):1688-1701, 1999.
    [26] H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, LIVE Image Quality Assessment Database Release 2 (LIVE II), 2003. [Online]. Available: http://live.ece.utexas.edu/research/quality.
    [27] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti, “TID2008 - A database for evaluation of full-reference visual quality assessment metrics,” Advances of Modern Radioelectronics, 10:30-45, 2009.
    [28] E. C. Larson and D. M. Chandler, Categorical Image Quality (CSIQ) Database, 2009. [Online]. Available: http://vision.okstate.edu/csiq.
    [29] P. Le Callet and F. Autrusseau, Subjective Quality Assessment - IVC Database, 2006. [Online]. Available: http://www.irccyn.ec-nantes.fr/ivcdb/.
    [30] Y. Horita, K. Shibata, Y. Kawayoke, and Z. M. P. Sazzad, MICT Image Quality Evaluation Database, 2000. [Online]. Available: http://mict.eng.u-toyama.ac.jp/mictdb.html.
    [31] VQEG, “Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment,” Phase II, 2003. [Online]. Available: http://www.vqeg.org/.
    [32] VQEG, “Validation of reduced-reference and no-reference objective models for standard definition television,” Phase I, 2009. [Online]. Available: http://www.vqeg.org/.
    [33] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Processing, 13(4):600-612, 2004.
