Research and Applications of Correspondence Constraints in Images
Abstract
Correspondence constraints between objects are an active research topic in computer vision. The core idea is that when processing a single image hits a limitation or bottleneck, one can introduce other images or reference sources, analyse the relationships among them, and build a global correspondence constraint that helps improve the processing result. However, most existing correspondence-constraint algorithms concentrate on relating similar objects across multiple images, and they tend to be structurally complex and computationally expensive. In this thesis we study the core idea of correspondence constraints and how they are formed. On the one hand, we further exploit correspondence constraints among multiple images to improve their joint processing. On the other hand, borrowing the idea of co-segmentation, we extend correspondence constraints to single-image applications, relating different regions within one image as well as relating an image to its associated sources, thereby improving image-processing results. The main contributions are:
     1) Correspondence constraints between multiple images: we build correspondence constraints across multiple images through clustering. Based on principles of biological visual saliency and object correspondence among images, we propose a cluster-based co-saliency detection algorithm for image groups. Co-saliency refers to identical or similar visually salient objects that recur across multiple images. Our algorithm builds a global object-correspondence structure via clustering and measures co-saliency at the cluster level using three bottom-up cues: feature contrast, centre bias, and cross-image distribution; the cue responses are then fused to extract the co-salient objects from the image group. The algorithm avoids heavy learning and training, making it simple and efficient. It achieves excellent results on co-salient object detection over multiple images and also performs well on salient-object detection in single images. In addition, we explore applications of co-saliency detection and present four typical ones: co-segmentation, robust image distance measurement, weakly supervised learning, and video foreground extraction, which further demonstrate the potential of co-saliency detection in image processing.
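The cluster-level cue fusion described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the abstract only names the three cues (feature contrast, centre bias, cross-image distribution), so the concrete formulas, the Gaussian falloff constant, and the multiplicative fusion are assumptions.

```python
import numpy as np

def cluster_cosaliency(features, labels, image_ids, centers, K, n_images):
    """Sketch of cluster-level co-saliency: fuse three bottom-up cues.
    features:  (N, D) per-pixel features over the whole image group
    labels:    (N,)   cluster index of each pixel (e.g. from k-means)
    image_ids: (N,)   which image each pixel came from
    centers:   (N, 2) pixel offset from its image centre, normalised to [0, 1]
    """
    cue_contrast = np.zeros(K)      # feature contrast against other clusters
    cue_spatial = np.zeros(K)       # centre bias: near-centre clusters score high
    cue_distribution = np.zeros(K)  # clusters spread over many images score high
    mus = np.array([features[labels == k].mean(axis=0) for k in range(K)])
    weights = np.bincount(labels, minlength=K) / len(labels)
    for k in range(K):
        # contrast cue: size-weighted distance of this cluster's mean to all others
        cue_contrast[k] = np.sum(weights * np.linalg.norm(mus - mus[k], axis=1))
        # centre-bias cue: Gaussian falloff with distance from the image centre
        d = np.linalg.norm(centers[labels == k], axis=1)
        cue_spatial[k] = np.exp(-(d ** 2) / 0.2).mean()
        # distribution cue: normalised entropy of the cluster's spread over images
        hist = np.bincount(image_ids[labels == k], minlength=n_images) + 1e-9
        p = hist / hist.sum()
        cue_distribution[k] = -np.sum(p * np.log(p)) / np.log(n_images)
    def norm(c):
        return (c - c.min()) / (c.max() - c.min() + 1e-9)
    # multiplicative fusion of the normalised cues (one plausible fusion rule)
    return norm(cue_contrast) * norm(cue_spatial) * norm(cue_distribution)
```

The returned vector assigns one co-saliency score per cluster; pixel-level maps follow by broadcasting each score back onto the pixels of its cluster.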
     2) Correspondence constraints between regions of a single image: we exploit the geometric structure of objects to relate different regions within one image. Objects in natural images often obey certain geometric structures, which makes it possible to build correspondence constraints between regions. Using this self-geometry and borrowing the idea of co-segmentation, we propose a segmentation algorithm with geometric-structure constraints between related regions of an image. To capture the correspondence between regions, we first use the object's own geometry to build a pixel-level dense geometric-structure map, which describes the spatial distribution of the foreground while also encoding the object's geometric structure. We then introduce this map into a graph-based energy function, yielding a new geometry-constrained graph-cut model; the model is submodular and can therefore be optimised exactly by graph cuts. We further extend the model to a part-based segmentation framework to handle objects with weak or complex geometric constraints. Experiments show that this inter-region correspondence constraint provides high-level semantic structure: compared with segmentation algorithms that rely only on low-level features, our algorithm achieves better results.
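In the assumed notation of standard graph-based segmentation (the abstract does not give the formula, so the symbols below are illustrative), the geometry-constrained energy has the usual unary-plus-pairwise form with the dense geometry map entering as an extra unary term:

```latex
E(\mathbf{x}) \;=\; \sum_{p \in \mathcal{P}} \Bigl[\, D_p(x_p) \;+\; \lambda\, G_p(x_p) \,\Bigr]
\;+\; \sum_{(p,q) \in \mathcal{N}} V_{pq}(x_p, x_q),
\qquad x_p \in \{0, 1\},
```

where $D_p$ is the appearance data term, $G_p$ is derived from the pixel-level geometric-structure map, $\lambda$ weights the geometry constraint, and $V_{pq}$ is the smoothness term over neighbouring pixels $\mathcal{N}$. Submodularity, $V_{pq}(0,0) + V_{pq}(1,1) \le V_{pq}(0,1) + V_{pq}(1,0)$, is what permits exact minimisation by an s-t min-cut, as the text states.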
     3) Correspondence constraints between related sources of a single image: beyond relating regions within one image, we find that different processing sources of the same image can also be linked by correspondence constraints. By studying the constraints between an image and its lens distortion, and between an image and its different processing layers, we propose a blind detection-and-segmentation algorithm for forged regions in wide-angle images based on the degree of image distortion. Radial lens distortion is usually regarded as an undesirable artefact to be removed, but we argue that it reflects the intrinsic structure of the lens and imposes a global geometric constraint on the image. We therefore use radial distortion as a measurement cue in blind image-forgery forensics. First, we propose a radial-distortion projection model in which a straight world line projects onto a great circle of the viewing hemisphere. Second, based on this geometric constraint, we design two low-level measures that yield a forensic likelihood map locating forged objects in the target image. Finally, by constructing a graph-cut energy function, we link the target image and its forensic likelihood map as two processing layers of the same image, building a correspondence that extracts a pixel-level forensic result.
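The great-circle constraint above can be sketched numerically. The thesis describes its model only as a modified spherical projection, so the equidistant fisheye model $r = f\theta$ used here is an assumption for illustration; the check itself (back-projected points of a straight world line lie on a plane through the sphere centre) is model-independent.

```python
import numpy as np

def backproject_equidistant(uv, f):
    """Back-project fisheye image points onto the unit viewing sphere,
    assuming the equidistant model r = f * theta (an illustrative choice)."""
    r = np.linalg.norm(uv, axis=1)
    theta = r / f                          # angle from the optical axis
    phi = np.arctan2(uv[:, 1], uv[:, 0])   # azimuth in the image plane
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)

def great_circle_residual(sphere_pts):
    """A straight world line projects to a great circle, i.e. the sphere
    points lie on a plane through the centre; the smallest singular value
    of the point matrix measures deviation from that constraint."""
    return np.linalg.svd(sphere_pts, compute_uv=False)[-1]
```

In a forensic setting, candidate lines whose residual is large violate the lens's global distortion constraint and become untrustworthy, feeding the likelihood map described above.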
     Our study shows that correspondence constraints exist not only across multiple images but also within a single image, expressed through its structural regions and its related sources. Such constraints can break through the limits of single-image processing, improve single- and multi-image results, provide new structural semantic descriptions, and offer effective reference criteria for verification.
Correspondence constraint is a hot issue in computer vision. Most existing correspondence-constraint methods focus on similar objects in multiple images, and they are often computationally demanding. In this thesis, we study the theory of the correspondence constraint and demonstrate its potential usages. On the one hand, we discover the correspondence constraint between multiple images and improve its performance. On the other hand, we extend the idea of the correspondence constraint to single-image processing, generating correspondences between the multiple regions and processing sources of a single image. Our work includes:
     1) Correspondence constraint between multiple images: we employ clustering to generate the correspondence constraint between multiple images. We introduce a new cluster-based algorithm for co-saliency detection, based on laws of visually salient stimuli and the correspondence constraint. The global correspondence between the images is implicitly learned during the clustering process. Three visual attention cues are devised to effectively measure cluster saliency. Our method is mostly bottom-up without heavy learning, and it is simple, general, efficient, and effective. Experimental results demonstrate its advantages over competing co-saliency methods; on single images it also outperforms most state-of-the-art methods. Furthermore, we apply the co-saliency method to four applications to demonstrate the potential usages of the co-saliency map.
     2) Correspondence constraint between multiple regions of a single image: we employ geometric structure to build the correspondence constraint between regions of one image. We bring the geometry structure constraint into foreground extraction and propose a novel geometry-constrained segmentation method. First, a geometry foreground map is used to represent the geometric structure of the image, combining the geometry matching magnitude and a foreground location prior. Then, the geometry-constraint model is built by introducing this structure into a graph-based segmentation function. Finally, the segmentation result is obtained via graph cut. Moreover, the geometry-constrained segmentation is extended to weak-geometry objects under a part-based framework. Experiments demonstrate that the high-level geometry constraint significantly improves low-level segmentation results.
     3) Correspondence constraint between the multiple processing sources of a single image: we build the correspondence constraint between a single image and its lens distortion and processing layers, and propose a novel forensic method for detecting forged objects. We employ radial distortion as an intrinsic property of the lens, which offers a global constraint. A modified spherical projection model is adopted, which is equivalent to other captured-ray models of the fisheye lens with only one free parameter. In this model, a straight world line is projected into a great circle on the viewing sphere, which provides a unique geometric constraint. Two saliency measure cues are provided to compute the untrustworthy likelihoods of candidate lines. Finally, a fake saliency map is obtained from the untrustworthy likelihoods and used to segment the fake region.
     Above all, we find that the correspondence constraint is valid not only across multiple images but also within single-image processing, where it can be provided by multiple regions and processing sources. This correspondence constraint breaks the limits of the single image: it improves the performance of image processing, provides new semantic structure, and offers an effective reference.
