Research on the Theory and Application of Human-Eye-Inspired Automatic Scale Selection
Abstract
An important property of human vision is its ability to recognize objects in a scene at multiple scales, taking in the overall structure and the fine details at the same time. Research on the multi-scale perception of human vision is therefore a central and difficult topic in the study of human-eye-inspired (bionic) vision. At the same time, the rapid progress of image analysis and recognition places higher demands on image interpretation, and the growing resolution of images, together with the richer detail it reveals, makes it feasible to understand image patterns through their local details. These factors have made the theory and application of local image features a distinctive and fast-growing branch of machine vision. Starting from an analysis of the scale-selection mechanism of the human visual cortex, this thesis proposes a series of novel automatic scale selection methods that give the scale of a local image feature a clearer meaning and offer a new approach to image segmentation based on characteristic scales. The main contents are as follows:
In Chapter 1, the research background, objectives, and significance of this work are introduced. Drawing on the domestic and international literature, the mechanisms of human visual perception and the state of the art in local feature detection and description are reviewed, the key problems facing automatic scale selection for bionic vision are identified, and the main research contents of the thesis are outlined.
In Chapter 2, the relationship between models of human visual perception and scale space is analyzed; the theory, properties, and limitations of Gaussian scale space are expounded; and the fundamental principles and methods of automatic scale selection in Gaussian scale space are introduced, laying the foundation for the subsequent chapters.
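For reference, the textbook formulation of Gaussian scale space and Lindeberg-style scale selection that this chapter reviews can be summarized as follows (this is the standard formulation from the literature, not a result specific to this thesis):

```latex
% Linear (Gaussian) scale space: the image f is embedded in a one-parameter
% family of smoothed images L, indexed by the scale parameter t = sigma^2.
\[
  L(x, y; t) = g(\cdot, \cdot; t) * f(x, y), \qquad
  g(x, y; t) = \frac{1}{2\pi t}\, e^{-(x^{2}+y^{2})/(2t)} .
\]
% L satisfies the diffusion equation \partial_t L = \tfrac{1}{2}\nabla^{2} L.
% Automatic scale selection (Lindeberg): the characteristic scale of a feature
% is a local extremum over t of a scale-normalized differential response,
% for instance the normalized Laplacian used for blob detection:
\[
  t^{*}(x, y) = \operatorname*{arg\,max}_{t}\;
    \bigl|\, t\,\bigl(L_{xx}(x, y; t) + L_{yy}(x, y; t)\bigr) \bigr| .
\]
```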
In Chapter 3, an extreme-path-based DoG scale optimization method (EP-DoG) is proposed to address the drift of feature points from one scale level to another. By analyzing how a feature point behaves as the analysis scale changes, an extreme path is searched that describes the point's drift route through scale space, and the characteristic scale is chosen as the level along this path at which the DoG response reaches its extremum. Comparative experiments show that EP-DoG selects characteristic scales more effectively than plain DoG scale selection and removes duplicated, redundant features.
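As a rough illustration only (the abstract does not specify the exact EP-DoG search, so the pyramid parameters, the search radius, and the track_extreme_path helper below are assumptions), following a feature's extremum from level to level and keeping the level of strongest DoG response might look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigmas):
    """Difference-of-Gaussian responses for a list of increasing sigmas."""
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    return [blurred[i + 1] - blurred[i] for i in range(len(sigmas) - 1)]

def track_extreme_path(dog, start_xy, radius=2):
    """Follow the local DoG extremum from level to level around start_xy."""
    path, (x, y) = [], start_xy
    for level in dog:
        h, w = level.shape
        y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
        x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
        window = np.abs(level[y0:y1, x0:x1])
        dy, dx = np.unravel_index(np.argmax(window), window.shape)
        y, x = y0 + dy, x0 + dx            # the feature "drifts" to the new extremum
        path.append((x, y, level[y, x]))
    return path

def ep_dog_scale(image, start_xy, sigmas):
    """Pick the scale along the extreme path where |DoG| peaks."""
    path = track_extreme_path(dog_stack(image, sigmas), start_xy)
    best = int(np.argmax([abs(r) for _, _, r in path]))
    return sigmas[best], path[best][:2]    # characteristic scale and drifted position

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0                # a toy blob
    sigmas = [1.6 * (2 ** (k / 3)) for k in range(8)]
    print(ep_dog_scale(img, (31, 31), sigmas))
```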
In Chapter 4, two scale selection methods are proposed. The first, SMM (Second Moment Matrix), selects scales for corner points detected in scale space: the autocorrelation of a corner's neighborhood is represented as an ellipse derived from its second moment matrix, a measure of the ellipse's deformation across scale levels is constructed, and the scale at which the ellipse stays most stable is taken as the characteristic scale. The second, GFT (Gradient Frequency Transform), selects scales without building a scale space: a boundary energy function measures the uniformity and density of the gradients along the border of a circular candidate region around the feature point, and the radius corresponding to the stable stationary point of minimum energy is taken as the characteristic scale. Comparative experiments show that both methods are effective and applicable, outperform existing scale selection approaches, and remain robust under rotation, scaling, and blur.
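A minimal sketch of the second-moment-matrix ellipse that SMM builds on is given below. The differentiation and integration scales and the shape-change criterion are illustrative stand-ins, since the thesis's actual deformation function is not given in the abstract:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def second_moment_ellipse(image, x, y, sigma_d, sigma_i):
    """Eigenvalue square roots of the second moment matrix at (x, y);
    they fix the axis lengths of the local autocorrelation ellipse."""
    img = gaussian_filter(image.astype(float), sigma_d)   # differentiation scale
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    # Smooth the gradient products over the integration scale sigma_i
    jxx = gaussian_filter(ix * ix, sigma_i)[y, x]
    jxy = gaussian_filter(ix * iy, sigma_i)[y, x]
    jyy = gaussian_filter(iy * iy, sigma_i)[y, x]
    evals = np.linalg.eigvalsh(np.array([[jxx, jxy], [jxy, jyy]]))
    return np.sqrt(np.maximum(evals, 1e-12))

def smm_scale(image, x, y, scales):
    """Pick the scale where the ellipse shape changes least between
    neighbouring levels (a stand-in for the thesis's deformation measure)."""
    ratios = [np.divide(*second_moment_ellipse(image, x, y, s, 2 * s)) for s in scales]
    changes = [abs(ratios[i + 1] - ratios[i]) for i in range(len(ratios) - 1)]
    return scales[int(np.argmin(changes)) + 1]
```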
In Chapter 5, an edge scale selection method and an edge segmentation scheme built on it are proposed, addressing the fact that existing edge scale selection methods do not assign meaningful scales to image edges. By searching the extreme path of each edge point in Gaussian scale space, measuring the distance it travels between adjacent scale levels, and analyzing the four forms of extreme-path evolution, a characteristic scale is obtained for every edge point. Interleaved edges are then split into segments of different scales according to their scale histograms, and segments connected to each other through extreme paths in scale space are merged. Experiments show that the selected edge scales are meaningful and that the method effectively separates edge segments of different scale ranges, yielding the salient edges and offering a new approach to image segmentation.
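Assuming characteristic scales have already been assigned to every edge point (by the method above or otherwise), a minimal sketch of splitting an ordered edge into scale-homogeneous segments could look like this; the binning scheme is an assumption:

```python
import numpy as np

def split_edge_by_scale(points, scales, n_bins=4):
    """Split an ordered edge (list of (x, y)) into contiguous segments
    whose points fall into the same characteristic-scale bin."""
    scales = np.asarray(scales, dtype=float)
    edges = np.histogram_bin_edges(scales, bins=n_bins)
    labels = np.clip(np.digitize(scales, edges[1:-1]), 0, n_bins - 1)
    segments, start = [], 0
    for i in range(1, len(points) + 1):
        if i == len(points) or labels[i] != labels[start]:
            segments.append((int(labels[start]), points[start:i]))
            start = i
    return segments   # list of (scale_bin, segment_points)

# toy usage: a synthetic edge whose characteristic scale jumps halfway along
pts = [(x, 0) for x in range(10)]
sc = [1.0] * 5 + [4.0] * 5
for bin_id, seg in split_edge_by_scale(pts, sc, n_bins=2):
    print(bin_id, seg)
```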
In Chapter 6, a method for segmenting overlapping moving objects based on the matching of local features with automatically selected scales is presented, with the goal of a motion segmentation algorithm that is more robust against complex dynamic backgrounds. To obtain better segmentation results, two motion segmentation schemes that take advantage of both the Gaussian mixture model (GMM) and temporal differencing (TD) are also proposed. GFT scale selection assigns a scale to each motion feature point and a local descriptor is built at that scale; inter-frame matching yields the local displacement of each feature, and a histogram of these displacements separates objects moving with different speeds and directions into non-overlapping regions. Experiments show that the proposed algorithm outperforms GMM- and TD-based segmentation and can effectively split multiple overlapping moving objects.
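Assuming matched feature points between two frames are already available (from any descriptor matcher), a minimal sketch of grouping features by their dominant displacement, in the spirit of the displacement histogram described above, might be as follows; the bin size and labeling scheme are assumptions:

```python
import numpy as np

def group_by_displacement(pts_prev, pts_next, bin_size=4.0):
    """Label matched feature points by the bin of their displacement
    histogram, so features moving alike share a label."""
    d = np.asarray(pts_next, float) - np.asarray(pts_prev, float)   # (N, 2) displacements
    bins_x = np.arange(d[:, 0].min(), d[:, 0].max() + 2 * bin_size, bin_size)
    bins_y = np.arange(d[:, 1].min(), d[:, 1].max() + 2 * bin_size, bin_size)
    ix = np.digitize(d[:, 0], bins_x)
    iy = np.digitize(d[:, 1], bins_y)
    # Features falling into the same (ix, iy) displacement bin get one label
    _, labels = np.unique(np.stack([ix, iy], axis=1), axis=0, return_inverse=True)
    return labels

# toy usage: two groups of features moving with different velocities
prev = np.array([[10, 10], [12, 11], [50, 50], [52, 49]], float)
next_ = prev + np.array([[5, 0], [5, 0], [-8, 3], [-8, 3]], float)
print(group_by_displacement(prev, next_))   # -> two labels, one per motion group
```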
In Chapter 7, the work of the thesis is summarized, its conclusions and innovations are stated, and directions for future research are outlined as guidance for researchers interested in this area.