Perceptual Model Based on GLCM Combined with Depth
  • Title (Chinese): 一种融合深度基于灰度共生矩阵的感知模型
  • Authors: YE Peng; WANG Yong-fang; XIA Yu-meng; AN Ping
  • Affiliations: Shanghai Institute for Advanced Communication and Data Science, Shanghai University; School of Communication and Information Engineering, Shanghai University
  • Keywords: just noticeable distortion (JND) model; image decomposition; gray-level co-occurrence matrix (GLCM); contrast masking (CM) model; depth information
  • Journal: Computer Science (计算机科学); journal code JSJA
  • Publication date: 2019-03-15
  • Year: 2019; Volume: v.46; Issue: 03
  • Pages: 98-102 (5 pages)
  • CN: 50-1075/TP
  • Record ID: JSJA201903012
  • Funding: National Natural Science Foundation of China grants "QoE-driven perceptual 3D video coding based on content analysis" (61671283) and "Research on sparse coding theory and techniques for perceptual HD/UHD 3D video" (61301113)
  • Language: Chinese
Abstract
The just noticeable distortion (JND) model is a perceptual model of the human visual system and one of the most effective tools for removing visual redundancy in image/video compression. Because existing JND models compute the contrast masking (CM) effect imperfectly and treat depth information inaccurately, this paper proposes a JND model that combines depth with the gray-level co-occurrence matrix (GLCM). First, the image is decomposed into a structural part and a textural part by total-variation (TV) decomposition; the structural part is processed with the Canny operator and the textural part with the GLCM, and the two together form a more accurate CM model. Combined with the background-luminance masking effect, this yields a pixel-domain JND model based on the GLCM. Then, building on a study of human depth perception, a new depth-weighting model is introduced. Finally, a new perceptual model combining depth with the GLCM is established. Experimental results show that the proposed model is more consistent with human visual perception: compared with existing JND models, it tolerates more distortion while maintaining better perceptual quality.
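The core pieces the abstract names — a GLCM texture measure and a background-luminance masking threshold fused into a pixel-domain JND value — can be sketched in NumPy. This is an illustrative sketch under our own assumptions, not the authors' implementation: the 8-level quantization, the use of the single Haralick contrast feature, the classic Chou-Li luminance formula, and the NAMM-style fusion constant c = 0.3 are choices made here for clarity; the TV decomposition, Canny step, and depth weighting from the paper are omitted.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy).

    img is a 2-D uint8 array; intensities are quantized into `levels` bins.
    """
    q = np.clip((img.astype(np.int64) * levels) // 256, 0, levels - 1)
    h, w = q.shape
    a = q[:h - dy, :w - dx].ravel()   # reference pixels
    b = q[dy:, dx:].ravel()           # neighbours at offset (dx, dy)
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (a, b), 1.0)         # unbuffered accumulation of pair counts
    return m / m.sum()

def glcm_contrast(img, levels=8):
    """Haralick contrast feature, averaged over horizontal and vertical offsets."""
    i, j = np.indices((levels, levels))
    w = (i - j) ** 2
    return 0.5 * ((w * glcm(img, levels, 1, 0)).sum() +
                  (w * glcm(img, levels, 0, 1)).sum())

def luminance_adaptation(bg):
    """Chou-Li style visibility threshold as a function of background luminance."""
    bg = np.asarray(bg, dtype=np.float64)
    low = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0   # dark backgrounds mask more
    high = (3.0 / 128.0) * (bg - 127.0) + 3.0        # slow rise for bright regions
    return np.where(bg <= 127.0, low, high)

def jnd_pixel(la, cm, c=0.3):
    """NAMM-style fusion: add the two masking effects, subtract their overlap once."""
    return la + cm - c * np.minimum(la, cm)
```

As a sanity check, a flat image yields zero GLCM contrast (all co-occurring pairs fall in one bin), while a black/white checkerboard yields the maximum squared level difference, matching the intuition that the CM term should grow with local texture activity.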
