Research on Visual Attention Models and Their Application in Object Perception
Abstract
According to the currently relevant behavior and visual task, the early human visual system uses an attention mechanism to process important information. This processing strategy effectively balances computational resources, reduces time consumption, and copes with different visual tasks in complex scenes. When a computer processes complex scene information, applying a visual attention mechanism allows its limited computing power to be allocated more effectively to the important processing tasks. Computational models of visual attention generally use two kinds of information to guide the shift of attention: bottom-up information based on image saliency and top-down information based on the task. How to use these two kinds of information effectively to guide attention quickly to regions containing objects of interest, and thereby lay the foundation for further object recognition, is of great significance. Drawing on neuroscience, pattern recognition, and image processing theory, this dissertation analyzes the biological visual information processing procedure in depth, studies computational visual attention mechanisms, and applies them to object search and recognition. The main work completed in this dissertation is as follows:
     A method for extracting visually attended regions is studied. The attended regions are extracted by combining saliency-based region selection with the scale-space primal sketch. For an input color image, a data-driven attention model first locates salient points, and the saliency-based region selection method yields a salient region. The color image is then converted into a gray-level image, and the scale-space primal sketch provides the coordinates and characteristic scales of the local extrema. Within the salient region obtained earlier, the extremum with the largest response is found, and the corresponding blob region is determined at its scale. Finally, the two spatial regions are merged to obtain a region that contains the target. The resulting segmentation is relatively coarse and does not give exact object boundaries, but it covers the target effectively and reduces data redundancy.
     A visual attention model based on object accumulation is studied. Blobs reflect the important structures of a target in scale space, and using them to guide the perceptual grouping process allows attention to focus more effectively on task-relevant regions. By introducing multi-scale blobs, the model links high-level semantics (prior knowledge) with low-level features and expresses the prior knowledge in terms of blob features. Given a new scene, the model first computes intermediate data in the pre-attention stage and extracts blob features. The previously built, blob-feature-based prior knowledge then rapidly and effectively guides visual attention to task-relevant regions. Finally, an object-accumulation mechanism merges the blob regions, achieving perceptual grouping and extracting the complete target region. The model thus makes good use of both top-down and bottom-up information. Experiments comparing the new model with a salient-region extraction model and the spectral residual model demonstrate its superiority.
     An object search and recognition model based on the object-accumulation visual attention mechanism is studied. An automatic object-learning method based on the object-accumulation mechanism is proposed: guided by blobs, the accumulation mechanism records the trend of energy change during object accumulation and forms an object representation vector. An object search and recognition method based on the same mechanism is also proposed, in which the object representation vector serves as top-down prior knowledge and is combined with bottom-up low-level information from the image; blob features guide the shift of attention, objects are accumulated iteratively, the complete target region is extracted, and a preliminary recognition result is provided. In experiments, the new model learned and recognized 40 different target objects in 200 images and achieved a recognition rate of 88.5%, demonstrating its effectiveness.
     Finally, a method based on the SIFT operator for evaluating the effectiveness of attended regions is studied. Current computational models of visual attention still have several problems. On the one hand, they cannot make full use of the bottom-up image information and the intermediate data produced during preprocessing, so a gap remains between their computational efficiency and the perceptual efficiency of the biological visual system. On the other hand, the way top-down prior knowledge is introduced still needs improvement. The direct consequence is that the extracted attended regions do not cover the target reasonably and completely. For object recognition, extracting the complete target region while reducing redundant data is crucial. To compare the effectiveness of the attended regions extracted by different visual attention models and to judge their influence on recognition results, this dissertation proposes a novel evaluation method based on the SIFT object recognition algorithm; it yields relatively objective evaluation results and avoids the errors introduced by subjective human judgment.
The primate visual system employs an attention mechanism to limit processing to the information that is currently relevant to behavior or the visual task at hand. This mechanism efficiently balances computing resources, time cost, and the demands of different visual tasks in normal, cluttered, and dynamic environments. Applying a visual attention mechanism in a computational model likewise allows finite computational resources to be assigned to the more important tasks. Two kinds of information can be used to direct attention: bottom-up, image-based saliency cues and top-down, task-dependent guidance cues. How to use these two kinds of cues efficiently, guide attention promptly to task-relevant regions, and thereby support object recognition is of great significance. Based on theories from neuroscience, pattern recognition, and image processing, this dissertation analyzes the biological visual attention process in depth, develops a computational visual attention mechanism, and applies it to object search and recognition. The main work accomplished in this dissertation is summarized as follows.
     Development of a new approach to visual attended-region extraction. A new region-extraction model is proposed that combines saliency-based region selection with the scale-space primal sketch. For an input color image, the extent of the object is estimated by saliency-based region selection, which considers the features that contribute most to the saliency map in a bottom-up visual attention model. The color image is then converted into a gray-level image, and the local maxima at each scale are computed. The blob with the largest response that lies within the salient region obtained in the previous step is selected, and the two spatial regions are combined. The resulting segmentation is coarse in the sense that the localization of object boundaries may not be exact, but it is safe in the sense that the regions can serve as attended regions that greatly reduce data redundancy.
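For illustration only, the following Python sketch approximates this pipeline with generic, off-the-shelf components: a simple center-surround contrast map stands in for the saliency-based region selection, and scikit-image's Laplacian-of-Gaussian blob detector stands in for the scale-space primal sketch. The function names, parameters, and thresholds are assumptions, not the dissertation's actual implementation.

```python
# Illustrative sketch only: approximates the attended-region extraction step
# with generic tools (not the dissertation's actual implementation).
import cv2
import numpy as np
from skimage.feature import blob_log

def salient_region(bgr, thresh_ratio=0.6):
    """Crude stand-in for saliency-based region selection:
    center-surround contrast of the intensity channel."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    surround = cv2.GaussianBlur(gray, (0, 0), sigmaX=16)
    saliency = np.abs(gray - surround)
    if saliency.max() == 0:                      # flat image: fall back to the whole frame
        h, w = gray.shape
        return 0, 0, w - 1, h - 1
    mask = saliency > thresh_ratio * saliency.max()
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()  # bounding box of the salient region

def attended_region(bgr):
    x0, y0, x1, y1 = salient_region(bgr)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY) / 255.0
    # Scale-space blobs as (row, col, sigma) triples from a LoG detector.
    blobs = blob_log(gray, min_sigma=4, max_sigma=32, num_sigma=8, threshold=0.1)
    inside = [b for b in blobs if y0 <= b[0] <= y1 and x0 <= b[1] <= x1]
    if len(inside) == 0:
        return x0, y0, x1, y1
    r, c, sigma = max(inside, key=lambda b: b[2])  # coarsest blob inside the salient region
    radius = sigma * np.sqrt(2)                    # approximate blob support radius
    # Merge the salient bounding box with the blob's support region.
    return (int(min(x0, c - radius)), int(min(y0, r - radius)),
            int(max(x1, c + radius)), int(max(y1, r + radius)))
```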
     Development of a new visual attention model based on an object-accumulation mechanism. As the research on attended-region extraction shows, blobs can be regarded as reflections of important structure in scale space, so blob features can guide perceptual grouping and lead attention to task-relevant regions. By introducing multi-level blobs and connecting blob properties with low-level features, the model builds knowledge representations of prior information from blob features. For any new scene, the model uses this prior knowledge to render the object more salient by enhancing the features that are characteristic of it, and then recursively groups regions together to form objects, guided by blob features extracted from the intermediate data computed in the pre-attention stage. Selective visual attention in the proposed model is thereby directed effectively to task-relevant regions. Comparison of the proposed model with other attention models demonstrated its superiority.
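The grouping step can be pictured with the following minimal sketch, under the assumption that each blob carries a feature vector and a binary support mask; the cosine similarity to the prior and the fixed threshold used here are illustrative placeholders, not the model's actual definitions.

```python
# Illustrative sketch of object accumulation: starting from the blob that best
# matches the prior, neighbouring blobs are merged as long as their features
# stay close to the prior. Data structures and thresholds are assumptions.
import numpy as np
from scipy.ndimage import binary_dilation

def accumulate_object(blobs, prior, sim_thresh=0.8):
    """blobs: list of dicts {'feature': 1-D np.ndarray, 'mask': 2-D bool np.ndarray}.
    prior: feature vector acting as top-down knowledge of the target."""
    def sim(f):  # cosine similarity between a blob feature and the prior
        return float(np.dot(f, prior) / (np.linalg.norm(f) * np.linalg.norm(prior) + 1e-9))

    remaining = sorted(blobs, key=lambda b: sim(b['feature']), reverse=True)
    seed = remaining.pop(0)                  # attention first selects the best-matching blob
    region = seed['mask'].copy()

    changed = True
    while changed:                           # keep accumulating until nothing more qualifies
        changed = False
        grown = binary_dilation(region)      # one-pixel neighbourhood of the current region
        for b in list(remaining):
            adjacent = np.any(np.logical_and(grown, b['mask']))
            if adjacent and sim(b['feature']) >= sim_thresh:
                region |= b['mask']          # perceptual grouping: merge the blob's support
                remaining.remove(b)
                changed = True
    return region                            # mask of the accumulated (grouped) object region
```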
     Development of a model for object search and recognition based on the object-accumulation visual attention mechanism. To describe objects effectively and to form the top-down information, an automatic object-learning approach based on the object-accumulation mechanism is proposed. The approach reuses the data in the current visual attention framework to represent the target object, produces an accumulation strategy, and outputs an object representation vector. Accordingly, an object search and recognition approach based on the object-accumulation mechanism is proposed: the object representation vector serves as top-down information and is combined with bottom-up information from the image. Taking into account blob features extracted from a multi-scale set of low-level feature maps, the model recursively combines regions to form objects, promptly guides attention to the relevant object, extracts the full object region, and provides a preliminary recognition result. On 40 different objects in 200 images, the proposed model achieved a recognition rate of 88.5%, demonstrating its effectiveness.
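The learning and recognition steps might be sketched as follows; the "energy" recorded at each accumulation step and the nearest-vector matching are stand-ins for the dissertation's own definitions, which are not specified in this abstract.

```python
# Illustrative sketch: form an object representation vector from the trend of
# "energy" during accumulation, then recognise by the nearest stored vector.
# The energy definition and the distance measure are placeholders.
import numpy as np

def representation_vector(accumulation_energies, length=16):
    """accumulation_energies: energy value recorded after each accumulation step."""
    e = np.asarray(accumulation_energies, dtype=np.float64)
    # Resample the energy curve to a fixed length and normalise it,
    # so objects accumulated in different numbers of steps are comparable.
    xs = np.linspace(0, len(e) - 1, length)
    v = np.interp(xs, np.arange(len(e)), e)
    return v / (np.linalg.norm(v) + 1e-9)

def recognise(query_vector, learned):
    """learned: dict mapping object name -> stored representation vector."""
    name, vec = min(learned.items(),
                    key=lambda kv: np.linalg.norm(kv[1] - query_vector))
    return name, float(np.linalg.norm(vec - query_vector))

# Usage: during learning, record the energies for each known object and store
# representation_vector(energies) in `learned`; at test time, run the same
# accumulation on the attended region and call recognise().
```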
     Development of a novel method, based on the SIFT algorithm, for evaluating how well attended regions contribute to recognition of the target. Current visual attention models still have several problems. On the one hand, they cannot fully utilize the bottom-up information in the image and the intermediate data produced in the pre-attention stage, and for complex scenes a large gap remains between the computational efficiency of such models and the perceptual efficiency of the biological visual system. On the other hand, the way prior information is introduced is not yet satisfactory, so a well-adapted general model cannot be obtained. The consequence is that the attended regions extracted by computational models do not cover the target comprehensively, whereas better coverage of the target region serves recognition better. Based on the SIFT recognition algorithm, a novel evaluation approach is therefore proposed that yields an objective validity measure instead of the subjective human judgment used in previous work. First, SIFT features are extracted from a reference image and stored in a database as advance object learning. For the attended regions extracted by different visual attention models, the algorithm computes the SIFT features in each region and compares them with the keypoints stored for each object in the database. Second, a formula computes the validity of the attended region from the accuracy of the fit of the SIFT keypoints and the expected number of SIFT keypoints in the region. Comparison of the proposed method with the classic recall-precision criterion demonstrated its superiority.
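A minimal sketch of this evaluation idea, using OpenCV's SIFT implementation and Lowe's ratio test; the validity formula shown (a harmonic mean of recall-like and precision-like keypoint scores) is a plausible stand-in rather than the exact formula defined in the dissertation.

```python
# Illustrative sketch of SIFT-based evaluation of an attended region.
# The validity formula is a stand-in, not the dissertation's exact definition.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def learn_reference(ref_gray):
    """Object learning in advance: SIFT descriptors of a reference image."""
    keypoints, descriptors = sift.detectAndCompute(ref_gray, None)
    return descriptors

def region_validity(scene_gray, region_box, ref_desc):
    """region_box: (x0, y0, x1, y1) attended region proposed by some attention model."""
    x0, y0, x1, y1 = region_box
    kp, desc = sift.detectAndCompute(scene_gray[y0:y1, x0:x1], None)
    if desc is None or len(kp) == 0:
        return 0.0
    # Match region descriptors against the stored reference descriptors (Lowe's ratio test).
    good = 0
    for pair in matcher.knnMatch(desc, ref_desc, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good += 1
    # Stand-in validity: how many reference keypoints the region recovers (recall-like),
    # penalised by region keypoints that do not belong to the object (precision-like).
    recall_like = good / max(len(ref_desc), 1)
    precision_like = good / max(len(kp), 1)
    if recall_like + precision_like == 0:
        return 0.0
    return 2 * recall_like * precision_like / (recall_like + precision_like)
```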