Spatial-Temporal Active Visual Attention Based Network Video Quality Assessment
  • Authors: ZHANG Jie; FENG Xin; LIU Zhi
  • Keywords: wireless network; video quality assessment; active attention; saliency detection
  • Journal: Journal of Chongqing University of Technology (Natural Science) (journal code: CGGL)
  • Affiliations: College of Electrical and Electronic Engineering, Chongqing University of Technology; College of Computer Science and Engineering, Chongqing University of Technology
  • Publication date: 2019-06-12 15:46
  • Year: 2019
  • Volume/Issue: Vol. 33 (No. 408 overall), Issue 07
  • Funding: Youth Project of Humanities and Social Sciences Research, Ministry of Education (17YJCZH043); Science and Technology Research Project of the Chongqing Municipal Education Commission (KJ1600937); General Project of Chongqing Basic Science and Frontier Technology Research (cstc2017jcyjAX0339); Chongqing Basic Research and Frontier Exploration Project (cstc2018jcyjAX0287); Spark Support Plan for Young Researchers of Chongqing University of Technology (2014XH12)
  • Language: Chinese
  • Record ID: CGGL201907017
  • Pages: 138-145 (8 pages)
  • CN: 50-1205/T
Abstract
To meet the real-time requirements of quality assessment for video over wireless networks, this paper presents a reduced-reference objective quality assessment method based on spatial-temporal active visual attention. The method accounts for the local saliency of packet-loss distortion and its influence on visual attention in both the spatial and temporal domains of a video, and proposes a video saliency detection algorithm based on spatial-temporal salient events (sudden stimuli to human visual attention). By extracting spatial-temporal visual attention information from both the reference video and the distorted video, and by taking into account the multi-loss, multi-distortion characteristics of network video, a reduced-reference objective quality assessment method based on the variation of spatial-temporal active visual attention is further derived. Comparative experiments were carried out on video-conferencing data collected in a wireless network environment. The results show that the proposed method reflects human visual perceptual quality better than traditional quality assessment methods and satisfies the real-time requirements of network video quality assessment.
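The abstract outlines the general pipeline: compute spatial-temporal saliency (attention) maps for the reference and the distorted video, then pool the change in attention into a reduced-reference quality score. The Python sketch below illustrates that idea only; the spectral-residual-style spatial term, the frame-difference temporal term, the blending weight, and all function names (spatial_saliency, st_saliency, rr_quality_score) are illustrative assumptions, not the authors' actual salient-event model or pooling scheme.

```python
# Minimal sketch of a reduced-reference, saliency-variation-based score.
# Illustrative only: the paper's spatial-temporal salient-event model and
# pooling strategy are not reproduced here.
import numpy as np

def spatial_saliency(frame: np.ndarray) -> np.ndarray:
    """Crude spectral-residual-style spatial saliency for one grayscale frame."""
    f = np.fft.fft2(frame.astype(np.float64))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Local mean of the log-amplitude spectrum via a 3x3 box filter (circular conv).
    kernel = np.ones((3, 3)) / 9.0
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(log_amp) * np.fft.fft2(kernel, s=log_amp.shape)))
    residual = log_amp - smoothed
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-12)

def temporal_saliency(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Frame-difference energy as a stand-in for temporal (motion/event) saliency."""
    diff = np.abs(curr.astype(np.float64) - prev.astype(np.float64))
    return diff / (diff.max() + 1e-12)

def st_saliency(prev: np.ndarray, curr: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Blend spatial and temporal saliency; alpha is an assumed weight."""
    return alpha * spatial_saliency(curr) + (1 - alpha) * temporal_saliency(prev, curr)

def rr_quality_score(ref_frames, dist_frames, alpha: float = 0.6) -> float:
    """Pool per-frame change in spatial-temporal saliency between reference and
    distorted video. Higher score = larger attention disruption = lower quality."""
    changes = []
    for t in range(1, min(len(ref_frames), len(dist_frames))):
        s_ref = st_saliency(ref_frames[t - 1], ref_frames[t], alpha)
        s_dst = st_saliency(dist_frames[t - 1], dist_frames[t], alpha)
        # Weight saliency differences by reference saliency so that distortions
        # hitting attended regions count more.
        w = s_ref / (s_ref.sum() + 1e-12)
        changes.append(np.sum(w * np.abs(s_ref - s_dst)))
    return float(np.mean(changes)) if changes else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = [rng.random((72, 96)) for _ in range(10)]   # toy grayscale frames
    dist = [f.copy() for f in ref]
    dist[5][20:40, 30:60] = 0.0                       # simulate a packet-loss patch
    print("attention-change score:", rr_quality_score(ref, dist))
```

In a real evaluation, such a score would be correlated against subjective ratings (e.g., MOS/DMOS) to compare it with traditional full-reference and no-reference metrics, as the paper's experiments do for video-conferencing sequences with packet loss.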