Research on Visual Saliency Models and Their Application in Image Processing
Abstract
With the development of bionics, researchers in computer vision have begun to use findings from visual neuroanatomy and neuropsychology to guide their work, constructing more flexible and more advanced computer vision algorithms by imitating the characteristics of human vision. Visual attention models grew out of this bionics-inspired research: they can rapidly locate the targets that attract human interest, which are called salient targets, and the models themselves are called saliency models.
     At present, a representative visual attention model is the Itti model. It was built to simulate the bottom-up characteristics of human vision, but its authors did not explain why the mathematical algorithms in the model are able to simulate these bottom-up characteristics, which makes the essence of the algorithm hard to understand and hinders further research on saliency models. This thesis therefore analyzes the model in depth, proposes several new visual attention models on the basis of that analysis, and applies the Itti model to change detection in remote sensing imagery. The main contributions are as follows:
     1. In the Itti model, a Gaussian pyramid is used to generate the intensity conspicuity map. In this thesis, an average pyramid or a wavelet low-pass pyramid is shown to work as well, and experiments show that the intensity conspicuity maps produced by the three kinds of pyramid are very similar to one another. The intensity conspicuity map produced by the average pyramid is analyzed in depth from the viewpoints of mathematics and image processing; the analysis shows that the regions highlighted in the map are those that contrast strongly with the background, and the maps produced by the Gaussian and wavelet pyramids share the same property.
     2. Keeping the Gabor pyramid but changing the way its images are combined, four new methods of generating orientation feature maps are proposed; the maps they produce are similar to those of the Itti model. These orientation feature maps are analyzed in depth from the viewpoints of mathematics and image processing, and again the salient regions are found to be regions that contrast strongly with the background. In the course of this analysis, the conditions that a function or algorithm must satisfy in order to generate orientation feature maps are summarized, and from these conditions it is inferred that some existing algorithms can also be used to generate orientation conspicuity maps. One of the conditions is not fully satisfied by the Gabor function; a function that satisfied it completely would produce better orientation feature maps.
     3. Based on the summarized conditions, three new functions for generating orientation feature maps are constructed. One of them is similar to the Gabor function and produces similar orientation feature maps. The other two are simpler than both the Gabor function and the function just mentioned, yet fully satisfy the condition that the Gabor function does not, and therefore produce better orientation feature maps; experiments confirm this conclusion.
     4. Based on the summarized conditions, two new ways of generating saliency maps are proposed: one uses the phase spectrum of the image's discrete cosine transform, the other uses the wavelet transform. Experiments show that the saliency maps produced by both methods highlight the targets of human attention fairly accurately.
     5. A detailed analysis of the color channel of the Itti model shows that the color conspicuity map is generated in much the same way as the intensity conspicuity map; the only difference is the input data, an intensity component in one case and color components in the other. It follows that the average pyramid and the wavelet low-pass pyramid can also be used to generate the color conspicuity map. Since the regions highlighted in both the intensity and the orientation conspicuity maps are regions that contrast strongly with the background, it is further inferred that every method of generating orientation conspicuity maps can also be used to generate color conspicuity maps. In addition, the reason why the Itti model is robust to noise is analyzed, and its remaining shortcomings are pointed out.
     6. In change detection for remote sensing imagery, noise is an important factor affecting accuracy. This thesis applies the visual saliency model to change detection in order to reduce the influence of noise and improve detection accuracy.
     In addition, edge-grouping models, which are also known as shape saliency models and belong to the family of visual attention models, are studied in depth. The study finds that most existing edge-grouping models consider only the Gestalt criteria of closure, proximity, smoothness (good continuation), symmetry, and convexity, and do not bring the Gestalt principle of parallelism into edge grouping. This thesis therefore introduces the parallelism principle into edge grouping and builds a new edge-grouping model, which is used to detect airport targets in remote sensing imagery.
With the development of bionics, researchers in computer vision have developed novel machine algorithms based on findings from visual neuroanatomy and neurophysiology. By simulating the characteristics of human vision, several novel computer vision models have been proposed. Visual attention models, which belong to this family, are built by simulating the bottom-up phase of human vision; they can be used to detect the important objects in a scene that attract the human eye.
     Among visual attention models, a classic and representative one is the Itti model [8]. It processes a scene image to generate a saliency map in which the objects that attract the human eye, called salient objects, are popped out. However, the ability of the saliency map to pop out salient objects had only been explained from a biologically plausible point of view, which makes the real nature of the Itti model hard to understand. In order to reveal this nature, we analyze the model in detail from the viewpoints of image processing and mathematics. Based on the analysis, we explain why the Itti model can pop out salient objects and propose several new ways of generating saliency maps. The theoretical analysis and the new methods are described as follows.
     (1) In the Itti model, a Gaussian pyramid is used to generate the intensity conspicuity map. In our research, we discovered an interesting phenomenon: any low-pass pyramid, including the Gaussian pyramid, the average pyramid, and the wavelet pyramid built from the low-pass part of the wavelet transform, can be used to generate the intensity conspicuity map, and the resulting maps are very similar to one another. Because the ability of the intensity conspicuity map to pop out salient objects had previously been explained only from a biologically plausible point of view, its real nature was hard to understand. In this thesis, the intensity conspicuity map produced by the average pyramid is analyzed in detail from the image-processing point of view. The analysis explains why the regions with high intensity contrast are popped out in the map, and briefly shows why the conclusion drawn for the average pyramid carries over to the intensity conspicuity maps produced by all low-pass pyramids.
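To make the construction in (1) concrete, the following Python sketch builds an Itti-style intensity conspicuity map on top of an average pyramid. It is only an illustrative sketch, not the implementation used in the thesis: the pyramid depth, the center-surround scale pairs, the nearest-neighbour resizing, and the plain summation used in place of Itti's map-normalization operator are simplifying assumptions.

```python
import numpy as np

def average_pyramid(img, levels=9):
    """Each level halves the resolution by 2x2 block averaging."""
    pyr = [img.astype(float)]
    for _ in range(1, levels):
        p = pyr[-1]
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
        pyr.append(p[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def resize(arr, shape):
    """Nearest-neighbour resize to an exact target shape."""
    rows = np.arange(shape[0]) * arr.shape[0] // shape[0]
    cols = np.arange(shape[1]) * arr.shape[1] // shape[1]
    return arr[rows][:, cols]

def intensity_conspicuity(gray):
    """Center-surround differences across pyramid levels, summed at scale 4."""
    pyr = average_pyramid(gray)            # assumes an image of a few hundred pixels per side
    shape = pyr[4].shape                   # common scale sigma = 4, as in the Itti model
    cmap = np.zeros(shape)
    for c in (2, 3, 4):                    # center scales
        for s in (c + 3, c + 4):           # surround scales
            cmap += np.abs(resize(pyr[c], shape) - resize(pyr[s], shape))
    return cmap / (cmap.max() + 1e-12)     # bright where local intensity contrast is high
```

Because every pyramid level is a local average, the center-surround differences respond exactly where a region differs from its smoothed surroundings, which is why regions of high intensity contrast are popped out.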
     (2) The orientation conspicuity map is an important element in forming the saliency map. We discovered four further ways, besides the one used in the Itti model, to generate orientation feature maps; the maps they produce are similar to one another. We analyze these methods from the image-processing point of view and find that, in the orientation conspicuity map as well, the regions with high intensity contrast are popped out.
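As an illustration of the Gabor-based route, a single-scale sketch of the four orientation feature maps might look as follows; the kernel parameters and the use of a single scale are assumptions made for brevity, whereas the Itti model applies the filters on every level of a pyramid.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, size=9, lam=4.0, sigma=2.0, gamma=0.5):
    """Standard 2-D Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()                    # zero mean, so uniform regions give no response

def orientation_feature_maps(gray):
    """Filter the image at the four canonical orientations of the Itti model."""
    thetas = np.deg2rad([0, 45, 90, 135])
    return [np.abs(convolve(gray.astype(float), gabor_kernel(t))) for t in thetas]
```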
     (3) We abstract three requirements that ensure the orientation conspicuity map produced by a Gabor filter can be used for saliency detection, and we add a fourth, modified requirement. A function that satisfies the modified requirement in addition to the first three will be superior to the Gabor filter for generating orientation conspicuity maps. Based on the theoretical analysis of the Gabor-based orientation conspicuity map and the four requirements, we propose three new functions for generating orientation conspicuity maps; two of them yield better orientation conspicuity maps than the Gabor filter does.
     (4) Based on the theoretical analysis of the Gabor-based orientation conspicuity map, we propose two new ways to generate saliency maps and analyze an existing saliency model. One new model is based on the wavelet transform; the other is based on the phase spectrum of color information.
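A minimal sketch of the phase-spectrum idea on a grey-level image is given below; it keeps only the sign (the "phase") of the 2-D DCT coefficients and discards the amplitude. The thesis works with the DCT phase of color information and with a separate wavelet-based model, so this single-channel version, including the function name and the smoothing parameter, is an illustrative simplification rather than the proposed method.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def dct_phase_saliency(gray, blur=3.0):
    """Discard the DCT amplitude, keep only the sign ('phase') information,
    reconstruct, and smooth the squared reconstruction into a saliency map."""
    coeff = dctn(gray.astype(float), norm='ortho')
    recon = idctn(np.sign(coeff), norm='ortho')
    smap = gaussian_filter(recon ** 2, blur)
    return smap / (smap.max() + 1e-12)
```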
     (5) The color conspicuity map is an important component in forming the saliency map. We study how the color feature maps are generated and find that the procedure is similar to the one used for the intensity feature map; therefore, all of the low-pass pyramids used for the intensity feature map can also be applied to the color feature maps. Because the salient regions in both the intensity feature map and the orientation conspicuity map describe the intensity contrast between object and background, the methods for generating the orientation conspicuity map can also be used to generate the color conspicuity map. The Itti model is robust to noise; we analyze the model and find that this robustness comes from the operation that resizes all feature maps at different scales to the same scale (σ = 4). Experiments verify the theoretical analyses of both aspects of the saliency map studied in this thesis.
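For reference, the color channel of the published Itti model [8] feeds red/green and blue/yellow opponency maps into the same pyramid machinery as the intensity channel; a sketch of those opponency maps is shown below, with the clamping and the intensity floor being small details assumed here for numerical safety.

```python
import numpy as np

def color_opponency(rgb):
    """Broadly tuned color channels and the RG / BY opponency maps of the Itti
    model; these maps feed the same pyramid and center-surround machinery as
    the intensity channel."""
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    i = np.maximum((r + g + b) / 3.0, 1e-6)       # normalize by intensity, avoid division by zero
    R = np.clip(r - (g + b) / 2, 0, None) / i
    G = np.clip(g - (r + b) / 2, 0, None) / i
    B = np.clip(b - (r + g) / 2, 0, None) / i
    Y = np.clip((r + g) / 2 - np.abs(r - g) / 2 - b, 0, None) / i
    return R - G, B - Y                           # red/green and blue/yellow opponency maps
```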
     (6) A novel technique based on visual attention and context sensitivity is proposed for noise reduction in unsupervised change detection. The technique consists of two steps. First, the intensity conspicuity algorithm of the Itti model is applied to the difference image obtained by comparing images acquired over the same area at different times, producing a comparison map. Second, Bayes' rule is used to identify the changed pixels in the comparison map and produce a change-detection map, which is then processed with a Markov random field model to remove falsely detected changed pixels. Experimental results confirm that the method can still detect the changed areas accurately even when the noise level in the multitemporal images is very high.
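A hedged sketch of how such a pipeline can be wired together, reusing the intensity_conspicuity function from the earlier sketch, is shown below; the fixed threshold is only a placeholder for the Bayesian decision, and the Markov random field regularization is omitted, so this is not the thesis method itself.

```python
import numpy as np

def change_map(img_t1, img_t2, k=1.0):
    """Difference image -> intensity conspicuity map -> global decision.
    Reuses intensity_conspicuity() from the earlier sketch; the mean + k*std
    threshold is a crude stand-in for the Bayesian decision, and the Markov
    random field regularization step is omitted."""
    diff = np.abs(img_t1.astype(float) - img_t2.astype(float))
    sal = intensity_conspicuity(diff)      # multi-scale averaging attenuates isolated noise
    thr = sal.mean() + k * sal.std()       # placeholder for the Bayes rule of the thesis
    return sal > thr                       # boolean change-detection map
```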
     Furthermore, a novel edge-grouping model is proposed in this thesis. Edge grouping belongs to visual attention. Most existing edge-grouping models detect only boundaries exhibiting closure, good continuation, proximity, convexity, and symmetry; in the proposed model, boundaries with a parallelism structure can also be detected. The model is applied to airport detection in remote sensing imagery, where it achieves attractive accuracy.
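To indicate the kind of cue the parallelism principle contributes, one possible (hypothetical) pairwise score between straight edge segments is sketched below; the grouping cost actually used in the thesis is not specified in the abstract.

```python
import numpy as np

def parallelism(seg_a, seg_b):
    """Pairwise parallelism score between two straight edge segments, each given
    as ((x0, y0), (x1, y1)): 1 for parallel, 0 for perpendicular segments."""
    def unit_direction(seg):
        (x0, y0), (x1, y1) = seg
        v = np.array([x1 - x0, y1 - y0], dtype=float)
        return v / (np.linalg.norm(v) + 1e-12)
    return abs(float(np.dot(unit_direction(seg_a), unit_direction(seg_b))))
```

For example, parallelism(((0, 0), (100, 0)), ((0, 12), (100, 12))) returns 1.0, the kind of response two runway edges would produce.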
References
[1] von Grünau M., Iordanova M., Visual selection: Facilitation due to stimulus saliency, In: Proceedings of the II Workshop on Cybernetic Vision, 1998, pp.15-20.
    [2] Niebur E., Koch C., Computational architectures for attention, In Parasuraman R. (Ed.), The Attentive Brain, 1998, pp.163-186.
    [3] Bacon W. F., Egeth H. E., Overriding Stimulus-driven Attentional Capture, Perception and Psychophysics, 1994, 55(3), pp. 485-496.
    [4] Yantis S., Hillstrom A. P., Stimulus-driven Attentional Capture:Evidence from Equiluminant Visual Objects, Journal of Experimental Psychology: Human Perception and Performance, 1994, 20(1), pp. 95-107.
    [5] Nakayama K., Mackeben M., Sustained and Transient Components of Focal Visual Attention, Vision Research, 1989, 29(11), pp. 1631-1647.
    [6] Braun J., Sagi D., Vision Outside the Focus of Attention, Perception & Psychophysics, 1990, 48(1), pp. 45-58.
    [7] Koch C., Ullman S., Shifts in Selective Visual-attention: Towards the Underlying Neural Circuitry, Human Neurobiology, 1985, 4(4), pp. 219-227.
    [8] Itti L., Koch C., Niebur E., A Model of Saliency-based Visual Attention for Rapid Scene Analysis, IEEE Trans. on Pattern Analysis and Machine Intelligence, 1998, 20(11), pp.1254-1259.
    [9] Shokoufandeh A., Marsic I., Dickinson S. J., View-based object recognition using saliency maps, Image and Vision Computing, 1999, 17(5-6), pp.445-460
    [10] Kadir T., Scale, Saliency and Scene Description, PhD thesis, Oxford, UK: University of Oxford, 2001.
    [11] Kadir T., Brady M., Saliency, scale and image description, International Journal of Computer Vision, 2001, 45(2), pp.83-105
    [12] Gilles S., Robust Description and Matching of Images, PhD thesis, Oxford, UK: University of Oxford, 1998.
    [13] Hou X. D., Zhang L. Q., Saliency detection: A spectral residual approach, In: IEEE Conf Computer Vision and Pattern Recognition, 2007, pp.1-8.
    [14] Li W. Y., Li Z. Q., Fang T., et al., Saliency detection based on color phase spectrum, Journal of Shanghai Jiao Tong University, 2008, 42(10), pp.1613-1617 (in Chinese).
    [15] Li Z. Q., Fang T., Huo H., A saliency model based on wavelet transform and visual attention, Science in China Series F: Information Sciences, in press.
    [16] Wolfe J. M., Guided Search 2.0: A Revised Model of Visual Search, Psychonomic Bulletin and Review, 1994, 1(2), pp. 202-238.
    [17] Olshausen B. A., Anderson C. H., Essen D. C. V., A Neurobiological Model of Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of Information, Journal of Neuroscience, 1993, 13, pp. 4700-4719.
    [18] Milanese R., Gil S., Pun T., Attentive Mechanisms for Dynamic and Static Scene Analysis, Optical Engineering, 1995, 34(8), pp. 2428-2434.
    [19] Baluja S., Pomerleau D. A., Expectation-based Selective Attention for Visual Monitoring and Control of a Robot Vehicle, Robotics and Autonomous Systems, 1997, 22(3), pp. 329-344.
    [20] Niebur E., Koch C., Computational Architectures for Attention, In Parasuraman R. (Ed.), The Attentive Brain, Cambridge, MA: MIT Press, 1998, pp. 163-186.
    [21] Carson C., Belongie S., Greenspan H., Malik J., Blobworld: image segmentation using expectation-maximization and its application to image querying, IEEE Trans on Pattern Analysis and Machine Intelligence, 2002, 24(8), pp.1026-1038.
    [22] Aziz M. Z., Mertsching B., Fast and Robust Generation of Feature Maps for Region-Based Visual Attention, IEEE Transactions on Image Processing, 2008, 17(5), pp. 633-644.
    [23] Wang S., Stahl S. J., Bailey A., Dropps M., Global Detection of Salient Convex Boundaries, International Journal of Computer Vision, 2007, 71(3), pp.337-359.
    [24] Shashua A., Ullman S., Structural saliency: The detection of globally salient structures using a locally connected network, In International Conference on Computer Vision, 1988, pp. 321–327.
    [25] Alter T., Basri R., Extracting salient contours from images: An analysis of the saliency network, International Journal of Computer Vision, 1998, pp. 51–69.
    [26] Elder J., Zucker S., Computing contour closure, In European Conference on Computer Vision, 1996, pp. 399–412.
    [27] Wang S., Wang J., Kubota T., From fragments to salient closed boundaries: An in-depth study, In IEEE Conference on Computer Vision and Pattern Recognition, 2004, pp. II:291–298.
    [28] Wang S., Kubota T., Siskind J., Wang J., Salient closed boundary extraction with ratio contour, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(4), pp.546–561.
    [29] Williams L., Thornber K. K., A comparison of measures for detecting natural shapes in cluttered backgrounds, International Journal of Computer Vision, 2000, 34(2/3), pp.81–96.
    [30] Mahamud S., Williams L. R., Thornber K. K.,Xu K., Segmentation of multiple salient closed contours from real images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(4), pp.433–444.
    [31] Stahl J. S., Wang S., Edge Grouping Combining Boundary and Region Information, IEEE Transactions on Image Processing, 2007, 16(10), pp. 2590-2606.
    [32] Huttenlocher D., Wayner P., Finding convex edge groupings in an image, International Journal of Computer Vision, 1992, 8(1),7–29.
    [33] Jacobs D., Robust and efficient detection of convex groups, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996, 18(1), pp.23–37.
    [34] Estrada F., Jepson A., Perceptual grouping for contour extraction, In International Conference on Pattern Recognition, 2004, 2, pp. 32–35.
    [35] Estrada F., Jepson A., Controlling the search for convex groups, Technical Report CSRG-482, Department of Computer Science, University of Toronto, 2004.
    [36] Mohan R., Nevatia R., Perceptual organization for scene segmentation and description, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(6), pp.616–635.
    [37] Liu T., Geiger D., Yuille A. L., Segmenting by seeking the symmetry axis, In International Conference on Pattern Recognition, 1998, pp. 994–998.
    [38] Ogniewicz R. L., Kübler O., Hierarchic Voronoi skeletons, Pattern Recognition, 1995, 28(3), pp.343–359.
    [39] Prasad V. S. N., Yegnanarayana B., Finding axes of symmetry from potential fields, IEEE Transactions on Image Processing, 2004, 13(12), pp.1559–1566.
    [40] Heijmans H. J. A. M., Tuzikov A. V., Similarity and symmetry measures for convex shapes using Minkowski addition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(9), pp.980–993.
    [41] Shroff H., Ben-Arie J., Finding shape axes using magnetic fields, IEEE Transactions on Image Processing, 1999, 8(10), pp.1388–1394.
    [42] Siddiqi K., Bouix S., Tannenbaum A., Zucker S. W., The Hamilton-Jacobi skeleton, In IEEE International Conference on Computer Vision, 1999, 2, pp. 828–834.
    [43] Stahl J. S., Wang S., Globally Optimal Grouping for Symmetric Closed Boundaries by Combining Boundary and Region Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(3), pp.395–411.
    [44] Itti L., Koch C., Feature Combination Strategies for Saliency-based Visual Attention Systems, Journal of Electronic Imaging, 2001, 10(1), pp.161–169.
    [45] [Online]. Available: http://bcmi.sjtu.edu.cn/houxiaodi/.
    [46] Daubechies I., Ten Lectures on Wavelets, Philadelphia, PA: SIAM, 1992.
    [47] Kingsbury N., Complex Wavelets for Shift Invariant Analysis and Filtering of Signals, Applied and Computational Harmonic Analysis, 2001, 10(3), pp. 234-253.
    [48] Selesnick I. W., Baraniuk R. G., and Kingsbury N. G., The Dual-tree Complex Wavelet Transform, IEEE Signal Processing Magazine, 2005, 22(6), pp. 123-151.
    [49][Online].Available:http://matlabserver.cs.rug.nl/edgedetectionweb/web/edgedetection_params.html
    [50] Mehrotra R., Namuduri K. R., and Ranganathan N., Gabor Filter-based Edge Detection, Pattern Recognition, 1992, 25(12), pp. 1479-1494.
    [51] Brigham E. O., Yuen C. K., The Fast Fourier Transform, IEEE Transactions on Systems, Man and Cybernetics, 1978, 8(2), pp.146-158.
    [52] Ahmed N., Natarajan T., Rao K. R., Discrete Cosine Transform, IEEE Transactions on Computers, 1974, C-23(1), pp.90-93.
    [53] Li C. T., Lou C. D., Edge Detection Based on The Multiresolution Fourier Transform, In IEEE Workshop on Signal Processing Systems, 1999, pp. 686-693.
    [54] Wober M. A., Yang Y. B., Reisch M. L., System and method for image edge detection using discrete cosine transforms, 1998.
    [55] Roberts L. G., Machine Perception of Three Dimensional Solids, in Optical and Electro Optical Information Processing(J. T. Tippett et al. Eds.), 1965, pp.159-197, M.I.T Press, Cambridge, Mass.
    [56] Peli T., Malah D., A Study of Edge Detection Algorithms, Computer Graphics and Image Processing, 1982, 20, pp.1-21.
    [57] Hale J. H. G., Detection of Elementary Features in a Picture by Non-Linear Local Numerical Processing, Proc. Third Int. Joint Conf. on Pattern Recognition, 1976, pp.764-768.
    [58] Rosenfeld A., Kak A. C., Digital Picture Processing, Academic Press, New York, 1976.
    [59] Rosenfeld A., A Nonlinear Edge Detection Technique, Proc. IEEE, 1970, 58, pp.814-816.
    [60] Rosenfeld A., Thurston M., Edge and Curve Detection for Visual Scene Analysis, IEEE Transactions on Computers, 1971, C-20, pp.562-569.
    [61] Rosenfeld A., Thurston M., Lee Y. H., Edge and Curve Detection: Further Experiments, IEEE Transactions on Computers, 1972, C-21, pp.677-715.
    [62] Prewitt J. M. S., Object enhancement and extraction, in Picture Processing and Psychopictorics, Academic Press, New York, 1970.
    [63] Kanopoulos N., Vasanthavada N., Baker R. L., Design of An Image Edge Detection Filter using The Sobel Operator, IEEE Journal of Solid-State Circuits, 1988, 23(2), pp.358-367.
    [64] Pratt W. K., Digital Image Processing, New York, NY: Wiley, 1978.
    [65] Berzins V., Accuracy of Laplacian Edge Detectors, Computer Vision, Graphics, and Image Processing, 1984, 27(2), pp.195-210.
    [66] Levitt J. B., Lund J. S., Contrast dependence of contextual effects in primate visual cortex, Nature, 1997, 387(6628), pp.73-76.
    [67] Nothdurft H. C., Saliency effects across dimensions in visual search. Vision Research, 1993, 33(5-6), pp.839-844.
    [68] Nothdurft H. C., Salience from feature contrast: additivity across dimensions. Vision Research, 2000, 40(10), pp.1183-1201.
    [69] Petkovic T., Krapac J., Shape description with Fourier descriptors, Technical Report, 2002.
    [70] Kunttu I., Lepistö L., Rauhamaa J., Visa A., Multiscale Fourier descriptors for defect image retrieval, Pattern Recognition Letters, 2006, 27(2), pp.123-132.
    [71] Chun S. L., Chia H. L., New forms of shape invariants from elliptic Fourier descriptors, Pattern Recognition, 1987, 20(5), pp.535-545.
    [72] Kim H. K., Kim J. D., Region-based shape descriptor invariant to rotation, scale and translation, Signal Processing: Image Communication, 2000, 16(1), pp.87-93.
    [73] Mokhtarian F., Mackworth A. K., Scale-based description and recognition of planar curves and two-dimensional shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(1), pp.34-43.
    [74] Wertheimer M., Laws of organization in perceptual forms (partial translation). In A Sourcebook of Gestalt Psychology, W.D. Ellis, (Ed.), New York: Harcourt, Brace, 1938, pp. 71-88.
    [75] Rosin P. L., Grouping Curved Lines, Machine Graphics and Vision, 1994.
    [76] Veelaert P., Reestablishing consistency of uncertain geometric relations in digital images, Lecture Notes in Computer Science: Geometry, Morphology and Computational Imaging, 2003, 2616, pp. 268-281.
    [77] Elder J., Krupnik A., Johnston L., Contour grouping with prior models, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(6), pp.661-674.
    [78] Saund E., Finding perceptually closed paths in sketches and drawings, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(4), pp.475-491.
    [79] Amir A., Lindenbaum M., A generic grouping algorithm and its quantitative analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(2), pp.168–185.
    [80] Guy G., Medioni G., Inferring global perceptual contours from local features, International Journal of Computer Vision, 1996, 20(1), pp.113–133.
    [81] Sarkar S., Boyer K., Quantitative measures of change based on feature organization: Eigenvalues and eigenvectors, In IEEE Conference on Computer Vision and Pattern Recognition, 1996, pp. 478-483.
    [82] Broadbent D. E., Perception and Communication, London: Pergamon Press, 1958.
    [83] Broadbent D. E., Broadbent M. H., From detection to identification: Response to multiple targets in rapid serial visual presentation. Perception & Psychophysics, 1987, 42, pp.105-113.
    [84]Deutsch J. A., Deutsch D., Attention: Some theoretical considerations, Psychological Review, 1963, 70, pp. 80-90.
    [85]Duncan J., Selective attention and the organization of visual information, Journal of Experimental Psychology: General, 1984, 113, pp. 501-517.
    [86] Duncan J., Ward R., Shapiro K. L., Direct measurement of attentional dwell time in human vision, Nature, 1994, 369, pp. 313-315.
    [87] Treisman A., Contextual cues in selective listening, Quarterly Journal of Experimental Psychology, 1960, 12, pp. 242-248.
    [88] Pashler H., Processing stages in overlapping tasks : Evidence for a central bottleneck, Journal of Experimental Psychology : Human Perception & Performance, 1984, 10, pp. 358-377.
    [89] Averbach E., Coriell A. S., Short-term memory in vision, Bell System Technical Journal, 1961, 40, pp. 309-328.
    [90] Eriksen C. W., Hoffman J. E., The extent of processing of noise elements during selective encoding from visual displays, Perception & Psychophysics, 1973, 14, pp. 155-160.
    [91] Sperling G., The information available in brief visual presentations, Psychological Monographs : General and Applied, 1960, 74, pp. 1-29.
    [92] Posner M. I., Cohen Y., Components of visual orienting, In H. Bouma & D. G. Bouwhuis (Eds.), Attention and Performance X, Hillside, NJ: Erlbaum, 1984, pp. 55-66.
    [93] Averbach E., Coriell A. S., Short-term memory in vision, Bell System Technical Journal, 1961, 40, pp. 309-328.
    [94] Eriksen C. W., Hoffman J. E., The extent of processing of noise elements during selective encoding from visual displays, Perception & Psychophysics, 1973, 14, pp. 155-160.
    [95] Posner M. I., Orienting of attention, Quarterly Journal of Experimental Psychology, 1980, 32, pp. 3-25.
    [96] Jonides J., Voluntary versus automatic control over the mind's eye, In J. Long & A. Baddeley (Eds.), Attention and Performance IX, Hillsdale, NJ: Lawrence Erlbaum Associates. 1981, pp. 187-203.
    [97] Krose B. J., Julesz B., The control and speed of shifts of attention, Vision Research, 1989, 29, pp. 1607-1619.
    [98] Dagenbach D., Carr T. H., Inhibitory processes in attention, memory, and language, San Diego, CA, US: Academic Press, Inc., 1994.
    [99] Remington R., Pierce L., Moving attention: Evidence for time-invariant shifts of visual selective attention, Perception & Psychophysics, 1984, 35, pp. 393-399.
    [100] Sagi D., Julesz B., Fast noninertial shifts of attention, Spatial Vision, 1985, 1, pp. 141-149.
    [101] Sperling G., Weichselgartner E., Episodic theory of the dynamics of spatial attention, Psychological Review, 1995, 102, pp. 503-532.
    [102] Shulman G. L., Remington R. W., McLean J. P., Moving attention through visual space, Journal of Experimental Psychology: Human Perception and Performance, 1979, 5, pp. 522-526.
    [103] Tsal Y., Movement of attention across the visual field, Journal of Experimental Psychology: Human Perception & Performance, 1983, 9, pp. 523-530.
    [104] Yantis S., On analog movements of visual attention, Perception & Psychophysics, 1988, 43, pp. 203-206.
    [105] Eriksen C. W., Yeh Y., Allocation of attention in the visual field, Journal of Experimental Psychology: Human Perception & Performance, 1985, 11, pp. 583-597.
    [106] Castiello U., Umilta C., Splitting focal attention, Journal of Experimental Psychology: Human Perception & Performance, 1992, 18, pp. 837-848.
    [107] McCormick P. A., Klein R. M., Johnston S., Splitting versus sharing focal attention: Comment on Castiello and Umilta (1992), Journal of Experimental Psychology: Human Perception and Performance, 1998, 24, pp. 350-357.
    [108] Cheal M., Lyon D. R., Central and peripheral precuing of forced-choice discrimination,Quarterly Journal of Experimental Psychology, A, 1991, pp. 859-880.
    [109] Nakayama K., Mackeben M., Sustained and transient components of focal visual attention, Vision Research, 1989, 29, pp. 1631-1647.
    [110] Weichselgartner E., Sperling G., Dynamics of automatic and controlled visual attention, Science, 1987, 238, pp. 778-780.
    [111] Yantis S., Jonides J., Abrupt visual onsets and selective attention: Evidence from visual search, Journal of Experimental Psychology: Human Perception & Performance, 1984, 10, pp. 601-621.
    [112] Remington R. W., Johnston J. C., Yantis S., Involuntary attentional capture by abrupt onsets, Perception & Psychophysics, 1992, 51, pp. 279-290.
    [113]Jonides J., Yantis S., Uniqueness of abrupt visual onset in capturing attention, Perception & Psychophysics, 1988, 43, pp. 346-354.
    [114] Pashler H., Cross-dimensional interaction and texture segregation, Perception & Psychophysics, 1988, 43, pp. 307-318.
    [115] Theeuwes J., Cross-dimensional perceptual selectivity, Perception & Psychophysics, 1991, 50, pp. 184-193.
    [116] Theeuwes J., Perceptual selectivity for color and form, Perception & Psychophysics, 1992, 51, pp. 599-606.
    [117] Bacon W. F., Egeth H. E., Overriding stimulus-driven attentional capture, Perception & Psychophysics, 1994, 55, pp. 485-496.
    [118] Duncan J., Humphreys G. W., Visual search and stimulus similarity, Psychological Review, 1989, 96, pp. 433-458.
    [119] Grossberg S., Mingolla E., Ross W. D., A neural theory of attentive visual search: Interactions of boundary, surface, spatial, and object representations, Psychological Review, 1994, 101(3), pp. 470-489.
    [120] Muller H. J., Humphreys G. W., Donnelly N., SEarch via Recursive Rejection (SERR): Visual search for single and dual form-conjunction targets, Journal of Experimental Psychology: Human Perception & Performance, 1994, 20, pp. 235-258.
    [121] Treisman A., Sato S., Conjunction search revisited, Journal of Experimental Psychology: Human Perception & Performance, 1990, 16, pp. 459-478.
    [122] Luck S. J., Vogel E. K., The capacity of visual working memory for features and conjunctions, Nature, 1997, 390, pp. 279-281.
    [123] Milliken B., Tipper S. P., Attention and inhibition, In H. Pashler (Ed.), Attention, East Sussex: Psychology Press Ltd, 1998, pp. 191-221.
    [124] Watson D. G., Humphreys G. W., Visual marking: Prioritizing selection for new objects by top-down attentional inhibition of old objects, Psychological Review, 1997, 104, pp.90-122.
    [125] Maylor E. A., Hockey R., Inhibitory components of externally controlled covert orienting in visual space, Journal of Experimental Psychology: Human Perception and Performance, 1985, 11, pp. 777-787.
    [126] Schmidt W. C., Inhibition of return is not detected using illusory line motion, Perception & Psychophysics, 1996, 58(6), pp. 883-898.
    [127] Itti L., Koch C., A Saliency-based Search Mechanism for Overt and Covert Shifts of Visual Attention, Vision Research, 2000, 40(10-12), pp. 1489-1506.
    [128] Itti L., Koch C., Computational Modeling of Visual Attention, Nature reviews, Neuroscience, 2001, 2(3), pp. 194-203.
    [129] Niebur E., Koch C., A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons, Journal of Computational Neuroscience, 1994, 1(1-2), pp.141-158.
    [130] Deco G., Schurmann B., A hierarchical neural system with attentional top-down enhancement of the spatial resolution for object recognition, Vision Research, 2000, 40(20), pp. 2845-2859.
    [131] Deco G., Zihl J., A neurodynamical model of visual attention: feedback enhancement of spatial resolution in a hierarchical system, Journal of Computational Neuroscience, 2001, 10(3), pp. 231-253.
    [132] Theeuwes J., Visual selective attention: a theoretical analysis, Acta Psychologica, 1993, 83(2), pp. 93-154.
    [133] Itti L., Automatic Foveation for Video Compression Using a Neurobiological Model of Visual Attention, IEEE Transactions on Image Processing, 2004, 13(10), pp.1304-1318.
    [134] Zhang P., Wang R. S., Detection of salient regions in images based on viewpoint shifting and view-field tracking, Journal of Software, 2004, 15(6), pp. 891-898 (in Chinese).
    [135] Dong L., Izquierdo E., A biologically inspired system for classification of natural images, IEEE Transactions on Circuits and Systems for Video Technology, 2007, 17(5), pp.590-603.
    [136] Siagian C., Itti L., Rapid biologically-inspired scene classification using features shared with visual attention, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(2), pp.300-312.
    [137] Singh A., Digital change detection techniques using remotely sensed data, International Journal of Remote Sensing, 1989, 10(6), pp. 989-1003.
    [138] Fung T., An assessment of TM imagery for land-cover change detection, IEEE Transactions on Geoscience and Remote Sensing, 1990, 28(4), pp. 681-684.
    [139] Muchoney D. M., Haack B. N., Change detection for monitoring forest defoliation, Photogrammetric Engineering and Remote Sensing, 1994, 60(10), pp. 1243-1251.
    [140] Bruzzone L., Fernàndez D., Automatic analysis of the difference image for unsupervised change detection, IEEE Transactions on Geoscience and Remote Sensing, 2000, 38(3), pp. 1171-1182.
    [141] Bruzzone L., Cossu D., An adaptive approach to reducing registration noise effects in unsupervised change detection, IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(11), pp. 2455-2465.
    [142] Ghosh S., Bruzzone L., Patra S., Bovolo F., Ghosh A., A context-sensitive technique for unsupervised change detection based on Hopfield-type neural networks, IEEE Transactions on Geoscience and Remote Sensing, 2007, 45(3), pp. 778-789.
    [143] Fonseca L. M. G., Manjunath B. S., Registration techniques for multisensor sensed imagery, Photogrammetric Engineering and Remote Sensing, 1996, 62(9), pp. 1049-1056.
    [144] Goshtasby A. A., Moigne J. L., Special issue on image registration, Pattern Recognition, 1999, 32(1).
    [145] Moigne J. L., An automated parallel image registration technique based on the correlation of wavelet features, IEEE Transactions on Geoscience and Remote Sensing, 2002, 40(8), pp. 1849-1864.
    [146] Stow D., Reducing misregistration effects for pixel-level analysis of land-cover change. International Journal of Remote Sensing, 1999, 20, pp. 2477-2483.
