Abstract
In the Itti visual selective attention model, the weights used when normalizing the sub-feature saliency maps do not change as the task changes. To address this, and drawing on research into autonomous development in visual selective attention learning, a visual selective attention model with developable weights is proposed as a learning mechanism for image feature extraction. The algorithm combines a three-layer self-organizing neural network with the Itti visual selective attention model to search for the optimum, and obtains the optimal weight update by training the model. This both preserves the completeness of the initially extracted features and relaxes the system's dependence on particular task conditions, improving the model's feature-extraction capability. Region-of-interest feature extraction experiments on images using the proposed model show that the method improves feature-extraction accuracy, reduces computation time, and achieves good dynamic performance.
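The core idea above, replacing the Itti model's fixed, equal fusion weights with weights that adapt to the task, can be sketched as follows. This is a minimal illustration assuming numpy; the function names, the per-map min-max normalization, and the score-driven update rule are illustrative assumptions, not the paper's actual training procedure.

```python
import numpy as np

def fuse_feature_maps(feature_maps, weights):
    """Combine per-feature conspicuity maps into one saliency map.

    The standard Itti model normalizes each map and averages them with
    fixed, equal weights; here the weights are parameters that a
    learning procedure may change between tasks.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # keep weights summing to 1
    fused = np.zeros_like(np.asarray(feature_maps[0], dtype=float))
    for w, fmap in zip(weights, feature_maps):
        m = np.asarray(fmap, dtype=float)
        rng = m.max() - m.min()
        if rng > 0:                                # rescale each map to [0, 1]
            m = (m - m.min()) / rng
        fused += w * m
    return fused

def update_weights(weights, scores, lr=0.1):
    """Illustrative weight update: increase the weight of feature maps
    that scored well on the current task, then renormalize.  This stands
    in for the training-based update described in the abstract."""
    w = np.asarray(weights, dtype=float) + lr * np.asarray(scores, dtype=float)
    w = np.clip(w, 1e-6, None)                     # keep weights positive
    return w / w.sum()
```

A task that favors, say, the orientation channel would assign it a higher score, so repeated calls to `update_weights` shift the fusion toward that channel instead of keeping the Itti model's task-independent average.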
References
[1] ITTI L, KOCH C, NIEBUR E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE transactions on pattern analysis & machine intelligence, 1998, 20(11): 1254-1259.
[2] ITTI L, KOCH C. Computational modelling of visual attention[J]. Nature reviews neuroscience, 2001, 2(3): 194-203.
[3] NYAMSUREN E, TAATGEN N A. The synergy of top-down and bottom-up attention in complex task: going beyond saliency models[C]//Proceedings of the 35th Annual Conference of the Cognitive Science Society. Austin: Cognitive Science Society, 2013: 3181-3186.
[4] JUDD T, EHINGER K, DURAND F, et al. Learning to predict where humans look[C]//Proceedings of the 12th IEEE International Conference on Computer Vision. Kyoto: IEEE, 2009: 2106-2113.
[5] BORJI A. Boosting bottom-up and top-down visual features for saliency estimation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Providence: IEEE, 2012: 438-445.
[6] ZHAO Q, KOCH C. Learning a saliency map using fixated locations in natural scenes[J]. Journal of vision, 2011, 11(3): 74-76.
[7] WANG Fengjiao, TIAN Mei, HUANG Yaping, et al. Classification model of visual attention based on eye movement data[J]. Computer science, 2016, 43(1): 85-88.
[8] ALMÁSSY N, EDELMAN G M, SPORNS O. Behavioral constraints in the development of neuronal properties: a cortical model embedded in a real-world device[J]. Cerebral cortex, 1998, 8(4): 346-361.
[9] BERRIDGE K C. Motivation concepts in behavioral neuroscience[J]. Physiology & behavior, 2004, 81(2): 179-209.
[10] WENG J. Three theorems: brain-like networks logically reason and optimally generalize[C]//Proceedings of the International Joint Conference on Neural Networks. San Jose: IEEE, 2011: 2983-2990.
[11] LUCIW M, WENG J. Where-what network 3: developmental top-down attention for multiple foregrounds and complex backgrounds[C]//Proceedings of the International Joint Conference on Neural Networks. Barcelona: IEEE, 2010: 1-8.
[12] WENG J, LUCIW M. Brain-like emergent spatial processing[J]. IEEE transactions on autonomous mental development, 2012, 4(2): 161-185.