Integrating Phase Congruency and Two-dimensional Principal Component Analysis for Visual Saliency Prediction
  • Chinese title: 融合相位一致性与二维主成分分析的视觉显著性预测
  • Authors: Xu Wei; Tang Zhen-min
  • Affiliation: School of Computer Science and Engineering, Nanjing University of Science and Technology
  • Keywords: Image processing; Visual saliency; Eye fixation prediction; Phase congruency; Two-Dimensional Principal Component Analysis (2DPCA)
  • Journal: Journal of Electronics & Information Technology (电子与信息学报); CNKI journal code: DZYX
  • Publication date: 2015-09-15
  • Year / Volume / Issue: 2015, Vol. 37, No. 09
  • Pages: 61-68 (8 pages)
  • CN: 11-4494/TN
  • CNKI record number: DZYX201509009
  • Funding: National Natural Science Foundation of China (Grant No. 61473154)
  • Language: Chinese
Abstract
To predict the image regions that attract visual attention more effectively, this paper proposes a saliency method that fuses phase congruency with Two-Dimensional Principal Component Analysis (2DPCA). Unlike conventional phase-spectrum-based approaches, the method uses phase congruency (PC) to extract the most important feature points and edge information in the image. After refinement with quick shift superpixels, these features are combined with local and global color contrast to generate a low-level feature saliency map. Next, 2DPCA is applied to extract the principal components of image patches, and the local and global distinctness of the patches in the principal-component space is computed to obtain a pattern saliency map. Finally, the two complementary maps are fused with weights assigned by a spatial dispersion (spatial variance) measure, yielding the salient regions. Comparative experiments against five state-of-the-art methods on two benchmark eye-tracking databases show that the proposed method predicts human eye fixations more accurately.
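The pattern-saliency step summarized above (2DPCA over image patches, local/global distinctness scoring in the principal-component space, and a dispersion-based fusion weight) can be illustrated with a short sketch. This is not the authors' implementation: the function names, patch size, number of components, neighbourhood size `k_local`, Euclidean distance metric, and the exponential form of `spatial_dispersion_weight` are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of patch-based pattern saliency with 2DPCA.
import numpy as np
from scipy.spatial.distance import cdist


def extract_patches(gray, patch=16):
    """Tile a 2-D grayscale array into non-overlapping patch x patch blocks."""
    h, w = gray.shape
    h, w = h - h % patch, w - w % patch
    blocks, corners = [], []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            blocks.append(gray[y:y + patch, x:x + patch].astype(np.float64))
            corners.append((y, x))
    return np.stack(blocks), np.array(corners, dtype=np.float64)


def two_dpca_features(blocks, n_components=4):
    """2DPCA: project each block A_i onto the top eigenvectors of the image
    scatter matrix G = mean_i (A_i - Abar)^T (A_i - Abar)."""
    centered = blocks - blocks.mean(axis=0)
    G = np.einsum('kij,kil->jl', centered, centered) / blocks.shape[0]
    _, eigvecs = np.linalg.eigh(G)                  # eigenvalues in ascending order
    X = eigvecs[:, ::-1][:, :n_components]          # top-d projection axes (n x d)
    return (centered @ X).reshape(len(blocks), -1)  # feature matrices A_i X, flattened


def pattern_saliency(gray, patch=16, n_components=4, k_local=8):
    """Patch saliency = local distinctness * global distinctness in 2DPCA space."""
    blocks, corners = extract_patches(gray, patch)
    feats = two_dpca_features(blocks, n_components)

    feat_dist = cdist(feats, feats)            # dissimilarity in the principal-component space
    global_d = feat_dist.mean(axis=1)          # distinctness vs. all other patches

    sp_dist = cdist(corners, corners)          # spatial distance between patches
    nearest = np.argsort(sp_dist, axis=1)[:, 1:k_local + 1]
    local_d = np.take_along_axis(feat_dist, nearest, axis=1).mean(axis=1)

    sal = local_d * global_d
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

    out = np.zeros(gray.shape)
    for s, (y, x) in zip(sal, corners.astype(int)):
        out[y:y + patch, x:x + patch] = s      # paint patch score back into an image-sized map
    return out


def spatial_dispersion_weight(sal_map):
    """Weight a saliency map by its spatial dispersion: a more spatially
    concentrated map (low dispersion) receives a larger fusion weight."""
    h, w = sal_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    p = sal_map / (sal_map.sum() + 1e-12)
    my, mx = (p * ys).sum(), (p * xs).sum()
    dispersion = (p * ((ys - my) ** 2 + (xs - mx) ** 2)).sum() / (h * h + w * w)
    return np.exp(-dispersion)                 # assumed exponential mapping, for illustration
```

In the paper's pipeline, a map like the one returned by `pattern_saliency` would be fused with the low-level map built from phase congruency and color contrast, with each map weighted by a dispersion measure in the spirit of `spatial_dispersion_weight`; the exact weighting formula used in the paper may differ.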
References
[1] Li W T, Chang H S, Lien K C, et al. Exploring visual and motion saliency for automatic video object extraction[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2600-2610.
[2] Chen D Y and Luo Y S. Preserving motion-tolerant contextual visual saliency for video resizing[J]. IEEE Transactions on Multimedia, 2013, 15(7): 1616-1627.
[3] Borji A, Sihite D N, and Itti L. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study[J]. IEEE Transactions on Image Processing, 2013, 22(1): 55-69.
[4] Itti L, Koch C, and Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.
[5] Harel J, Koch C, and Perona P. Graph-based visual saliency[C]. Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, Canada, 2007: 545-552.
[6] Bruce N D and Tsotsos J K. Saliency based on information maximization[C]. Proceedings of the Annual Conference on Neural Information Processing Systems, Whistler, Canada, 2006: 155-162.
[7] Borji A and Itti L. Exploiting local and global rarities for saliency detection[C]. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 478-485.
[8] Judd T, Ehinger K, and Durand F. Learning to predict where humans look[C]. Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 2009: 2106-2113.
[9] Vig E, Dorr M, and David C. Large-scale optimization of hierarchical features for saliency prediction in natural images[C]. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 2798-2805.
[10] Hou X and Zhang L. Saliency detection: a spectral residual approach[C]. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Minneapolis, USA, 2007: 1-8.
[11] Li J, Levine M D, An X J, et al. Visual saliency based on scale-space analysis in the frequency domain[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(4): 996-1010.
[12] Yang J, Zhang D, Frangi A F, et al. Two-dimensional PCA: a new approach to appearance-based face representation and recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(1): 131-137.
[13] Jiang H Z, Wu Y, and Yuan Z J. Probabilistic salient object contour detection based on superpixels[C]. Proceedings of the IEEE International Conference on Image Processing, Melbourne, Australia, 2013: 3069-3072.
[14] Kovesi P. Phase congruency detects corners and edges[C]. Proceedings of the Australian Pattern Recognition Society Conference, Sydney, Australia, 2003: 309-318.
[15] Vedaldi A and Soatto S. Quick shift and kernel methods for mode seeking[C]. Proceedings of the European Conference on Computer Vision, Marseille, France, 2008: 705-718.
[16] Wei Y C, Wen F, and Zhu W J. Geodesic saliency using background priors[C]. Proceedings of the European Conference on Computer Vision, Florence, Italy, 2012: 29-42.
[17] Cheng M M, Warrell J, Lin W Y, et al. Efficient salient region detection with soft image abstraction[C]. Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 2013: 1529-1536.
[18] Shi T L, Liang M, and Hu X L. A reverse hierarchy model for predicting eye fixations[C]. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 23-28.
