Saliency Detection Based on Improved Manifold Ranking via Convex Hull
Details
  • English Title: Saliency Detection Based on Improved Manifold Ranking via Convex Hull
  • Authors: 林晓; 刘祖祥; 郑晓妹; 黄继风; 马利庄
  • Authors (English): Lin Xiao; Liu Zuxiang; Zheng Xiaomei; Huang Jifeng; Ma Lizhuang
  • Keywords: convex hull; manifold ranking; saliency detection
  • Journal Code: JSJF
  • Journal Title (English): Journal of Computer-Aided Design & Computer Graphics
  • Affiliations: College of Information, Mechanical and Electrical Engineering, Shanghai Normal University; College of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University
  • Publication Date: 2019-05-15
  • Publisher: Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报)
  • Year: 2019
  • Volume: v.31
  • Issue: 05
  • Fund: National Natural Science Foundation of China (61872242, 61502220, 61775139)
  • Language: Chinese
  • Record ID: JSJF201905009
  • Pages: 75-84
  • Page Count: 10
  • CN: 11-2925/TP
Abstract
Traditional graph-based manifold-ranking saliency detection relies solely on the boundary background prior saliency map to extract foreground seeds, which degrades the final ranking and produces poor detection results. To address this, this paper proposes an algorithm that uses a convex hull to extract more accurate foreground seeds for manifold ranking. First, the image boundary nodes are taken as background seeds and ranked to obtain a background-estimate saliency map, which is binarized to yield a rough foreground region. Second, image corners are detected with a color-boosted Harris corner detector and used to construct a convex hull that roughly encloses the salient object. Finally, the convex hull and the rough foreground region are combined to extract more accurate foreground seeds, and manifold ranking with these seeds produces the final saliency map. Compared with other classic algorithms on three public datasets, the proposed algorithm achieves improvements in PR curves, MAE, and F-measure.
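The two-stage seeding scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the superpixel graph weights `W`, the boundary-node indices, and the convex-hull mask are assumed to be precomputed (e.g. from SLIC superpixels and color-boosted Harris corners), and the unnormalized closed-form ranking function of Zhou et al. is used.

```python
import numpy as np

def manifold_ranking(W, seed_idx, alpha=0.99):
    """Closed-form manifold ranking: f = (D - alpha * W)^(-1) y,
    where y is the indicator vector of the seed nodes."""
    D = np.diag(W.sum(axis=1))          # degree matrix of the graph
    y = np.zeros(W.shape[0])
    y[seed_idx] = 1.0                   # mark the query (seed) nodes
    return np.linalg.solve(D - alpha * W, y)

def foreground_seeds(W, boundary_idx, hull_mask, alpha=0.99):
    """Select foreground seeds by intersecting the binarized
    background-prior saliency estimate with a convex-hull mask."""
    # Stage 1: rank all nodes against the boundary (background) seeds;
    # the complement of the ranking is a background-prior saliency estimate.
    bg_rank = manifold_ranking(W, boundary_idx, alpha)
    sal = 1.0 - bg_rank / bg_rank.max()
    # Binarize at the mean saliency to get a rough foreground region.
    rough_fg = sal > sal.mean()
    # Keep only nodes that also fall inside the convex hull built from
    # color-boosted Harris corners (hull_mask is assumed precomputed).
    return np.flatnonzero(rough_fg & hull_mask)
```

The final saliency map would then be `manifold_ranking(W, foreground_seeds(...))`, normalized to [0, 1] per node.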
