Saliency detection via object enhancement and sparse reconstruction (融合目标增强与稀疏重构的显著性检测)
  • Authors: Guo Pengfei; Jin Qiu; Liu Wanjun (郭鹏飞; 金秋; 刘万军)
  • Affiliation: School of Software, Liaoning Technical University (辽宁工程技术大学软件学院)
  • Keywords: saliency detection; global color contrast; sparse reconstruction; error propagation; object enhancement
  • Journal: Journal of Image and Graphics (中国图象图形学报); CNKI journal code ZGTB
  • Publication date: 2017-09-16
  • Year/Issue: 2017, v.22, No.257 (Issue 09)
  • Pages: 70-80 (11 pages)
  • Record ID: ZGTB201709008
  • CN: 11-3758/TB
  • Funding: National Natural Science Foundation of China (61172144); General Scientific Research Project of the Education Department of Liaoning Province (L2015216)
  • Language: Chinese
Abstract
Objective: To address the blurred boundaries and insufficient accuracy of image saliency detection, we propose a saliency detection algorithm guided by object enhancement and based on sparse reconstruction (OESR). Method: Working on superpixels, we first compute, from the foreground perspective, a center-weighted color spatial distribution map over the superpixels and take it as the foreground saliency map. A background template is then built from the superpixels on the image boundary and preprocessed; the optimized template serves as the dictionary for sparse representation, sparse reconstruction errors are computed against it, and the errors are corrected by error propagation to produce the background difference map. Finally, a fast object detection method generates a number of proposal windows, an object-enhancement coefficient is computed from their objectness scores, and this coefficient guides the fusion of the two saliency maps into the final detection result. Result: Compared with 12 popular algorithms on public data sets, the proposed algorithm detects salient regions fairly accurately on images of varying background complexity and extracts salient objects fairly completely; on the evaluation metrics, average recall improves by 4.1% on the MSRA10K data set, while average recall and F-measure improve by 18.5% and 3.1%, respectively, on the VOC2007 data set. Conclusion: We propose a new saliency detection method that builds saliency maps from color distribution and contrast, respectively, and applies an object-enhancement coefficient when fusing them, improving the accuracy of the saliency map. Experimental results show that the algorithm detects salient regions that better match human visual characteristics and delineates them more accurately, making it suitable for salient object detection in natural images, object segmentation, and saliency-based image annotation.
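The abstract names the fusion step but not its formula; that is given only in the full text. Purely as a hypothetical illustration, the sketch below accumulates objectness scores of proposal windows (e.g., BING-style proposals [23]) into a per-pixel enhancement coefficient and uses it to combine the two maps; both expressions are assumptions, not the authors' definitions.

```python
# Hypothetical sketch of the object-enhancement fusion step; the
# coefficient and fusion formulas below are illustrative assumptions.
import numpy as np

def enhancement_coefficient(shape, windows):
    """windows: iterable of (x0, y0, x1, y1, score) integer-coordinate
    proposals. Pixels covered by many high-scoring windows get a large
    coefficient, matching the intuition described in the abstract."""
    acc = np.zeros(shape, dtype=float)
    for x0, y0, x1, y1, score in windows:
        acc[y0:y1, x0:x1] += score
    return acc / (acc.max() + 1e-12)

def fuse(s_fg, s_bg, alpha):
    """One plausible fusion rule: the coefficient alpha boosts regions
    where the foreground map and the background difference map agree."""
    fused = (1.0 + alpha) * s_fg * s_bg
    return fused / (fused.max() + 1e-12)
```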
Objective: The human visual system acquires regions of interest in different scenes through the visual attention mechanism, and each image contains one or more salient objects. Saliency detection imitates this mechanism to extract the important information in an image, thereby improving the efficiency and accuracy of image processing. Saliency detection methods are useful not only for detecting a target object but also for image annotation and retrieval, object recognition, image cropping, image segmentation, image compression, and other fields, and saliency detection remains a research hot spot in computer vision. Although existing methods achieve good results, several problems remain, such as the blurring of salient boundaries caused by foreground and background noise, so the accuracy of saliency detection still needs improvement. Methods based on pixels or regions such as superpixels can describe the features of salient regions effectively, but these pixels or regions exist in isolation and have no real object-level meaning; that is, a complete description of objects is lacking. Objectness detection obtains object information through sliding windows. We propose a saliency detection algorithm via object enhancement and sparse reconstruction (OESR) that introduces object descriptions while preserving an effective description of salient features, thereby resolving fuzzy boundaries and improving the accuracy of image saliency detection. The objectness detector is not used to take windows directly as the final salient objects; instead, window information serves as an object description that strengthens the salient features.

Method: The input image is segmented into a number of superpixel regions, and a center-weighted color spatial distribution model is adopted, based on the idea that a color spread over a wide range is unlikely to belong to a salient region. The model exploits the color information of the image, but a purely pixel-based version lacks structured information, so we compute the color spatial distribution feature on superpixels to introduce structure. First, from the foreground point of view, a Gaussian mixture model is fitted to all colors, and the probability of each pixel belonging to each color component c is calculated; the probability of each superpixel belonging to component c is aggregated from the pixels within it, and the superpixel-level color spatial distribution is computed from these probabilities together with location information (a simplified sketch follows this paragraph). The resulting superpixel color spatial distribution map serves as the foreground saliency map. Second, from the background point of view, we introduce a sparse reconstruction error, based on the idea of contrast, to describe the feature difference between a superpixel and its surroundings. The background template is constructed from the superpixel features on the image boundaries and preprocessed with the k-means clustering algorithm, which jointly clusters the boundary superpixels to obtain representative boundary features.
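A minimal sketch of the foreground cue just described, assuming scikit-learn's GaussianMixture and, for brevity, computing the distribution on raw pixels rather than superpixels; the center weighting and normalization are illustrative choices rather than the paper's exact formulas.

```python
# Sketch of a center-weighted color spatial distribution map; a
# simplified stand-in for the paper's superpixel-level computation.
import numpy as np
from sklearn.mixture import GaussianMixture

def color_spatial_distribution(image, n_components=5):
    """image: (H, W, 3) float array; returns an (H, W) map in [0, 1]."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels)
    p = gmm.predict_proba(pixels)             # p[i, c]: pixel i in component c

    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    wgt = p / p.sum(axis=0, keepdims=True)    # spatial weights per component

    centroid = wgt.T @ coords                 # (C, 2) spatial mean of each color
    spread = ((coords[:, None, :] - centroid[None]) ** 2).sum(-1)
    var = (wgt * spread).sum(axis=0)          # spatial variance per component

    # Widely distributed colors are unlikely to be salient; components
    # whose mass lies near the image center receive an extra boost.
    center = np.array([w / 2.0, h / 2.0])
    center_dist = np.linalg.norm(centroid - center, axis=1)
    score = (1.0 - var / var.max()) * np.exp(-center_dist / max(h, w))

    sal = (p @ score).reshape(h, w)           # back-project component scores
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```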
Superpixels with similar features are merged in each direction, yielding a good background template. The optimized template is used as the sparse representation dictionary for computing sparse reconstruction errors (sketched after the abstract). The reconstruction error of each superpixel is then corrected within its 8-neighborhood to counter the region discontinuity caused by oversegmentation; after correction the salient region becomes smooth, and the sparse reconstruction error is taken as the salient value, yielding the background difference map. Finally, from the object point of view, the object-enhancement coefficient is computed by objectness detection. A fast object detection method produces a number of proposal windows at various scales, and each window is assigned an objectness score according to how likely it is to contain an object. If a pixel belongs to a salient region, then the more high-scoring windows contain it, the greater its significance. The object-enhancement coefficient is computed from the objectness scores of the proposal windows, and the foreground saliency map and the background difference map are fused under its guidance, producing a high-contrast saliency map with a highlighted foreground and a suppressed background.

Result: The proposed algorithm is compared with 12 methods on two public data sets (MSRA10K and VOC2007) using precision, recall, F-measure, and mean absolute error (MAE). Visual comparison shows that the salient objects detected by OESR are complete and accurate and that the method handles images with complex backgrounds effectively. Because OESR applies the object-enhancement coefficient in the fusion step, the final salient region has high brightness and the background is suppressed efficiently. The P-R curves and the average precision, recall, and F-measure values show that OESR has clear advantages on these evaluation indexes: compared with the other methods, recall improves by 4.1% on the MSRA10K data set, while on VOC2007 recall improves by 18.5% and F-measure by 3.1%. The improvement in recall indicates that the color features and sparse reconstruction features describe salient features effectively and that the introduction of object information preserves the integrity of salient regions. The MAE results likewise reflect the overall advantage of OESR.

Conclusion: A new saliency detection method is proposed that uses color spatial distribution and sparse reconstruction error to produce saliency maps and adopts object-enhancement coefficients when combining the two, improving the accuracy of the final saliency map. Experimental results show that the algorithm detects accurate salient regions that agree with human visual characteristics, and the method is suitable for saliency detection, target segmentation, and saliency-based image annotation.
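Returning to the Method paragraph's background cue: the sketch below reconstructs each superpixel feature from a dictionary of boundary superpixels and takes the residual as its salient value. Ordinary least squares stands in for the paper's sparse coding, and a simple neighborhood blend stands in for its 8-neighborhood error propagation, so both are simplifications under stated assumptions rather than the authors' implementation.

```python
# Sketch of the background difference map via reconstruction error;
# least squares replaces the paper's sparse coding for brevity.
import numpy as np

def background_difference(features, boundary_idx):
    """features: (N, d) superpixel features; boundary_idx: indices of the
    boundary superpixels forming the (optimized) background template."""
    D = features[boundary_idx]                                 # (K, d) dictionary
    codes, *_ = np.linalg.lstsq(D.T, features.T, rcond=None)   # (K, N) codes
    recon = (D.T @ codes).T                                    # reconstructions
    err = np.linalg.norm(features - recon, axis=1)             # residual per node
    return err / (err.max() + 1e-12)

def propagate_errors(err, neighbors, blend=0.5):
    """neighbors: dict {i: list of 8-connected superpixel indices}.
    Simple stand-in for the paper's error propagation: blend each
    superpixel's error with the mean error of its neighborhood."""
    out = err.copy()
    for i, nb in neighbors.items():
        if len(nb) > 0:
            out[i] = (1 - blend) * err[i] + blend * err[np.asarray(nb)].mean()
    return out
```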
References
[1]Treisman A M,Gelade G.A feature-integration theory of attention[J].Cognitive Psychology,1980,12(1):97-136.[DOI:10.1016/0010-0285(80)90005-5]
    [2]Zhang J,Hu W W,Chen Z H,et al.Multi-model fused framework for image annotation[J].Journal of Computer-Aided Design&Computer Graphics,2014,26(3):472-478.[张静,胡微微,陈志华,等.多模型融合的多标签图像自动标注[J].计算机辅助设计与图形学学报,2014,26(3):472-478.]
    [3]Chen T,Cheng M M,Tan P,et al.Sketch2Photo:internet image montage[J].ACM Transactions on Graphics,2009,28(5):#124.[DOI:10.1145/1618452.1618470]
    [4]Makovski T,Jiang Y V.Feature binding in attentive tracking of distinct objects[J].Visual Cognition,2009,17(1-2):180-194.[DOI:10.1080/13506280802211334]
    [5]Luo P,Tian Y L,Wang X G,et al.Switchable deep network for pedestrian detection[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition.Columbus,USA:IEEE,2014:899-906.[DOI:10.1109/CVPR.2014.120]
    [6]Kim W,Kim C.A novel image importance model for contentaware image resizing[C]//Proceedings of the 2011 18th IEEE International Conference on Image Processing.Brussels:IEEE,2011:2469-2472.[DOI:10.1109/ICIP.2011.6116161]
    [7]Zhao S Y,Li F X,Shen J B,et al.Image saliency detection using red-black wavelet[J].Journal of Computer-Aided Design&Computer Graphics,2014,26(10):1789-1793.[赵三元,李凤霞,沈建冰,等.基于红黑小波的图像显著性检测[J].计算机辅助设计与图形学学报,2014,26(10):1789-1793.]
    [8]Itti L.Automatic foveation for video compression using a neurobiological model of visual attention[J].IEEE Transactions on Image Processing,2004,13(10):1304-1318.[DOI:10.1109/TIP.2004.834657]
    [9]Koch C,Ullman S.Shifts in selective visual attention:towards the underlying neural circuitry[J].Human Neurobiology,1985,4(4):219-227.
    [10]Itti L,Koch C,Niebur E.A model of saliency-based visual attention for rapid scene analysis[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,1998,20(11):1254-1259.[DOI:10.1109/34.730558]
    [11]Ma Y F,Zhang H J.Contrast-based image attention analysis by using fuzzy growing[C]//Proceedings of the Eleventh ACM International Conference on Multimedia.Berkeley,CA,USA:ACM,2003:374-381.[DOI:10.1145/957013.957094]
    [12]Zhai Y,Shah M.Visual attention detection in video sequences using spatiotemporal cues[C]//Proceedings of the 14th ACM International Conference on Multimedia.Santa Barbara,CA,USA:ACM,2006:815-824.[DOI:10.1145/1180639.1180824]
    [13]Liu T,Yuan Z J,Sun J,et al.Learning to detect a salient object[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2011,33(2):353-367.[DOI:10.1109/TPAMI.2010.70]
    [14]Achanta R,Estrada F,Wils P,et al.Salient region detection and segmentation[C]//Proceedings of the 6th International Conference on Computer Vision Systems.Santorini,Greece:Springer,2008:66-75.[DOI:10.1007/978-3-540-79547-6_7]
    [15]Goferman S,Zelnik-Manor L,Tal A.Context-aware saliency detection[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2012,34(10):1915-1926.[DOI:10.1109/TPAMI.2011.272]
    [16]Cheng M M,Zhang G X,Mitra N J,et al.Global contrast based salient region detection[C]//Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition.Providence,RI:IEEE,2011:409-416.[DOI:10.1109/CVPR.2011.5995344]
    [17]Yan Q,Xu L,Shi J P,et al.Hierarchical saliency detection[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition.Portland,Oregon,USA:IEEE,2013:1155-1162.[DOI:10.1109/CVPR.2013.153]
    [18]Harel J,Koch C,Perona P.Graph-based visual saliency[C]//Proceedings of the 2006 Advances in Neural Information Processing Systems.Vancouver,British Columbia,Canada:Bradford Book,2006:545-552.
    [19]Li X H,Lu H C,Zhang L H,et al.Saliency detection via dense and sparse reconstruction[C]//Proceedings of the 2013 IEEE International Conference on Computer Vision.Sydney,Australia:IEEE,2013:2976-2983.[DOI:10.1109/ICCV.2013.370]
    [20]Qian S,Chen Z H,Lin M Q,et al.Saliency detection based on conditional random field and image segmentation[J].Acta Automatica Sinica,2015,41(4):711-724.[钱生,陈宗海,林名强,等.基于条件随机场和图像分割的显著性检测[J].自动化学报,2015,41(4):711-724.][DOI:10.16383/j.aas.2015.c140328]
    [21]Zhang Q R.Saliency detection algorithm based on background prior[J].Journal of Image and Graphics,2016,21(2):165-173.[张巧荣.利用背景先验的显著性检测算法[J].中国图象图形学报,2016,21(2):165-173.][DOI:10.11834/jig.20160205]
    [22]Jiang J,Lu P,Zhu H L,et al.Salient object detection using contrast and background priors[J].Journal of Computer-Aided Design&Computer Graphics,2016,28(1):82-89.[蒋娇,陆平,朱恒亮,等.融合对比度与背景先验的显著目标检测算法[J].计算机辅助设计与图形学学报,2016,28(1):82-89.][DOI:10.3969/j.issn.1003-9775.2016.01.011]
    [23]Cheng M M,Zhang Z,Lin W Y,et al.BING:binarized normed gradients for objectness estimation at 300fps[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition.Columbus,OH,USA:IEEE,2014:3286-3293.[DOI:10.1109/CVPR.2014.414]
    [24]Achanta R,Shaji A,Smith K,et al.SLIC superpixels,EPFL-REPORT-149300[R].Lausanne,Switzerland:Ecole Polytechnique Federale de Lausanne,2010.
    [25]Hou X D,Zhang L Q.Saliency detection:a spectral residual approach[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Minneapolis,Minnesota,USA:IEEE,2007:1-8.[DOI:10.1109/CVPR.2007.383267]
    [26]Zhang L Y,Tong M H,Marks T K,et al.SUN:a Bayesian framework for saliency using natural statistics[J].Journal of Vision,2008,8(7):#32.[DOI:10.1167/8.7.32]
    [27]Achanta R,Hemami S,Estrada F,et al.Frequency-tuned salient region detection[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Miami,Florida,USA:IEEE,2009:1597-1604.[DOI:10.1109/CVPR.2009.5206596]
    [28]Seo H J,Milanfar P.Static and space-time visual saliency detection by self-resemblance[J].Journal of Vision,2009,9(12):#15.[DOI:10.1167/9.12.15]
    [29]Rahtu E,Kannala J,Salo M,et al.Segmenting salient objects from images and videos[C]//Proceedings of the 11th European Conference on Computer Vision.Heraklion,Crete,Greece:Springer,2010:366-379.[DOI:10.1007/978-3-642-15555-0_27]
    [30]Jiang H Z,Wang J D,Yuan Z J,et al.Automatic salient object segmentation based on context and shape prior[C]//Proceedings of the 22nd British Machine Vision Conference.Dundee,UK:University of Dundee,2011:110.1-110.12.[DOI:10.5244/C.25.110]
    [31]Duan L J,Wu C P,Miao J,et al.Visual saliency detection by spatially weighted dissimilarity[C]//Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition.Providence,RI:IEEE,2011:473-480.[DOI:10.1109/CVPR.2011.5995676]
    [32]Zhu W J,Liang S,Wei Y C,et al.Saliency optimization from robust background detection[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition.Columbus,USA:IEEE,2014:2814-2821.[DOI:10.1109/CVPR.2014.360]
    [33]Otsu N.A threshold selection method from gray-level histograms[J].IEEE Transactions on Systems,Man,and Cybernetics,1979,9(1):62-66.[DOI:10.1109/TSMC.1979.4310076]
