Dynamical Analysis of Multistable Neural Networks
Abstract
The brain is the system with the most complex structure, the most mysterious mechanisms, and the most complete functions. Higher intelligent activities of humans, such as sensation, thinking, learning, and memory, are the result of neural networks formed by large numbers of neurons in the cerebral cortex. Artificial neural networks were proposed to simulate the structure and principles of biological neural networks. Since the 1980s, neural network research has again attracted the attention of many scientists, and a large number of new results have been obtained in both theory and applications.
     The dynamical analysis of neural networks is an important theoretical foundation for their practical application. In general, networks exhibit two stable modes: monostability and multistability. Multistability means that a network possesses multiple stable equilibrium points. It captures the most essential feature of biological neural networks and reveals their internal mechanisms more deeply.
     When the brain receives an external stimulus, neurons in the cerebral cortex are activated and subtle changes occur in their activity, so that the neural network formed by these neurons produces a response to the stimulus. However, little is known about these complex and subtle changes. The background neural network model, proposed in 2003, offered a new way to approach this problem. Since the original background neural network model is a coupled nonlinear dynamical system, however, the division operation in its equations brings great difficulty to theoretical analysis. The theoretical study of this model is still far from complete, and many problems remain to be solved. Therefore, in this dissertation we propose several classes of improved background neural network models and discuss their dynamical behavior.
     The main innovative results of this dissertation are as follows:
     (1) An improved background neural network model with a uniform firing rate is proposed, and its dynamical behavior is studied. An invariant set of the network is found, sufficient conditions for its boundedness are given, and its complete convergence is proved.
     (2) An improved background neural network model with two subnetworks is proposed, and its convergence is analyzed. A mathematical expression for the global attractive set of the network is given; a local stability condition for its equilibrium points is derived via the Jacobian matrix; and its complete convergence is rigorously proved by constructing a new energy function.
     (3) A class of N-dimensional improved background neural network models is proposed, and four basic dynamical properties of the network are discussed in detail: boundedness, invariance, global attractivity, and complete convergence. Conditions under which the network has an invariant set are derived; mathematical expressions for the invariant set and the global attractive set are given; and complete convergence is proved by constructing a new energy function.
     (4) Using an equivalent form of the original background neural network model, an improved background neural network model with infinitely many neurons is proposed. Conditions for the existence of continuous attractors, together with their mathematical expressions, are given for two cases: zero and nonzero background input.
     (5) Multistable computation in improved background neural network models with arbitrary exponents is studied. For the one-dimensional network with an arbitrary exponent, conditions for the existence of finitely many equilibrium points and criteria for their stability are given, and complete convergence is proved; for the two-dimensional network with an arbitrary exponent, a local stability condition for the equilibrium points is derived.
     (6) The dynamical behavior of a class of two-dimensional neural networks is studied. An invariant set of the network is given and its boundedness is proved; by constructing a closed curve and applying the rotation-number theory of vector fields, a condition is derived under which the network has at least one equilibrium point.
     These results should help to advance further research on multistable neural networks.
The brain is the system with the most complex structure, the most mysterious mechanism, and the most complete function. The higher intelligent activities of humans, such as sensation, thinking, learning, and memory, are the result of neural networks formed by large numbers of neurons in the cerebral cortex. Artificial neural networks were proposed to simulate the structure and mechanism of biological neural networks. Since the 1980s, artificial neural networks have attracted the interest of many scientists, and significant progress has been made in both theory and application.
     The dynamical properties of neural networks play a crucial role in their applications. Generally speaking, the stable modes of neural networks fall into two classes: monostability and multistability. A multistable neural network can possess multiple stable equilibria. Multistability captures the essential character of biological neural networks and reveals their internal mechanisms more deeply.
     Neurons in the cerebral cortex are activated when the brain receives an external stimulus; the neural activity evoked by the stimulus then changes subtly, and the network composed of huge numbers of neurons produces a response to that stimulus. However, little is known about these complex and subtle changes. The background neural network model, proposed in 2003, provided a new theoretical approach to this problem. However, since the model is a coupled nonlinear dynamical system, the division operation in its equations makes theoretical analysis very difficult, and many problems concerning the original model remain unresolved. Therefore, in this dissertation, we propose several classes of improved background neural network models and discuss their dynamical behaviors.
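     To fix ideas, a minimal simulation sketch of a division-type firing-rate network follows. The form below, tau * dx_i/dt = -x_i + [h_i + (Wx)_i]_+^2 / (s + ||x||^2), is assumed purely as a representative of the divisive interaction described above, not as the dissertation's exact model; the division by the total network activity is what couples every neuron through the denominator and makes the analysis hard.

```python
import numpy as np

# A minimal sketch, assuming a representative divisive firing-rate model:
#   tau * dx_i/dt = -x_i + [h_i + (W x)_i]_+^2 / (s + ||x||^2)
# This form is an illustrative assumption, not the dissertation's equations.

def simulate(W, h, s=1.0, tau=1.0, x0=None, dt=0.01, steps=5000):
    x = np.zeros(len(h)) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(steps):
        drive = np.maximum(h + W @ x, 0.0) ** 2   # rectified, squared input
        dx = (-x + drive / (s + x @ x)) / tau     # divisive "division" term
        x = x + dt * dx                           # forward Euler step
    return x

W = np.array([[0.0, 0.5], [0.5, 0.0]])
h = np.array([1.0, 1.0])
print(simulate(W, h))  # settles near a stable equilibrium for these values
```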
     The main contributions of the dissertation are as follows:
     (1) In Chapter 2, a class of improved background neural network models with a uniform firing rate is proposed, and the dynamical behavior of the proposed model is studied. Conditions for the boundedness and the invariant set of the network are established, and complete convergence of the network is proved by constructing a new energy function.
     (2) Chapter 3 focuses on a class of improved background neural networks with two subnetworks, whose convergence is investigated. The global attractive set of the network is obtained; using the Jacobian matrix, a local stability condition for the equilibrium points is derived (a numerical version of this test is sketched after this list); and complete convergence of the network is rigorously proved by constructing a new energy function.
     (3) Chapter 4 presents a class of improved n-dimensional background neural networks, for which four basic dynamical properties are discussed in detail: boundedness, invariance, global attractivity, and complete convergence. An invariant set is obtained, the expressions of the invariant set and the global attractive set are given, and complete convergence of the networks is proved by means of a new energy function.
     (4) In Chapter 5, based on an equivalent form of the original background neural network model, a class of improved background neural network models with infinitely many neurons is proposed. In two cases, i.e., zero and nonzero background input, the conditions for continuous attractors are derived and their representations are obtained (a toy continuous-attractor example is sketched after this list).
     (5) In Chapter 6, the multistability of one-dimensional and two-dimensional improved background neural network models with arbitrary exponents is studied. In the one-dimensional case, conditions for the existence and stability of equilibrium points are derived and complete convergence is investigated (a scalar illustration follows this list); in the two-dimensional case, a local stability condition for the equilibrium points is also derived.
     (6) Finally, in Chapter 7, the dynamical behavior of a class of two-dimensional neural networks is discussed. An invariant set is given and the boundedness of the networks is proved. By constructing a closed curve and using the winding number of the vector field along it, we obtain a condition under which at least one equilibrium point exists in the network (a numerical winding-number check is sketched after this list).
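     To make the Jacobian-based test of items (2) and (5) concrete: an equilibrium x* is locally asymptotically stable when every eigenvalue of the Jacobian of the vector field at x* has negative real part. A minimal numerical sketch of this standard criterion follows; the field f below is hypothetical, chosen only for illustration.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of the vector field f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return J

def locally_stable(f, x_star):
    """True if all eigenvalues of Df(x*) have negative real part."""
    return bool(np.all(np.linalg.eigvals(jacobian(f, x_star)).real < 0))

# Hypothetical planar field with an equilibrium at the origin.
f = lambda x: np.array([-x[0] + 0.3 * x[1] ** 2,
                        -2.0 * x[1] + 0.1 * x[0] * x[1]])
print(locally_stable(f, np.zeros(2)))  # True: eigenvalues are -1 and -2
```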
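     For the continuous attractors of item (4): a continuous attractor is a connected continuum of equilibrium points rather than an isolated equilibrium. A textbook two-neuron linear-threshold example, assumed here purely to illustrate the notion (the dissertation's improved background models are more involved), has the whole half-line x1 = x2 >= 0 as its set of equilibria:

```python
import numpy as np

# Assumed toy line attractor: dx/dt = -x + [W x]_+ with W = [[0,1],[1,0]].
# For x >= 0, d(x1 + x2)/dt = 0 and d(x1 - x2)/dt = -2(x1 - x2), so each
# trajectory slides onto the line x1 = x2 >= 0, a continuum of equilibria.

W = np.array([[0.0, 1.0], [1.0, 0.0]])

def flow(x, dt=0.01, steps=20000):
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(W @ x, 0.0))
    return x

for x0 in ([1.0, 0.2], [0.1, 0.9], [2.0, 2.0]):
    print(flow(np.array(x0)))
# Different initial states settle on different points of the line x1 = x2,
# which is the signature of a continuous (line) attractor.
```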
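     For the arbitrary-exponent multistability of item (5), a hedged scalar illustration: assume a representative one-dimensional divisive model dx/dt = f(x) = -x + x^a / (s + x^a), an assumed form for illustration rather than the dissertation's exact equation. For a = 2 the equilibria are x = 0 and x = (1 ± sqrt(1 - 4s))/2, which exist when s <= 1/4, and the sign of f'(x*) classifies their stability:

```python
import numpy as np

# Assumed representative scalar divisive model with exponent a:
#   dx/dt = f(x) = -x + x^a / (s + x^a)
s, a = 0.16, 2
f = lambda x: -x + x**a / (s + x**a)
fp = lambda x, h=1e-6: (f(x + h) - f(x - h)) / (2.0 * h)  # numerical f'

roots = [0.0, (1 - np.sqrt(1 - 4 * s)) / 2, (1 + np.sqrt(1 - 4 * s)) / 2]
for x_star in roots:
    verdict = "stable" if fp(x_star) < 0 else "unstable"
    print(f"x* = {x_star:.2f}  f'(x*) = {fp(x_star):+.2f}  {verdict}")
# x* = 0.00 and x* = 0.80 are stable; x* = 0.20 is an unstable separatrix,
# so this scalar network is multistable (bistable) for s = 0.16.
```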
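     Item (6) rests on index (rotation-number) theory: if a planar vector field does not vanish anywhere on a simple closed curve and its winding number along the curve is nonzero, then at least one equilibrium point lies inside the curve. A numerical sketch with a hypothetical linear field:

```python
import numpy as np

def winding_number(f, radius=2.0, n=4096):
    """Net rotation (in turns) of the planar field f along a circle.
    If f never vanishes on the circle and the result is nonzero,
    index theory guarantees an equilibrium inside the curve."""
    t = np.linspace(0.0, 2.0 * np.pi, n + 1)       # closed loop of samples
    pts = np.stack([radius * np.cos(t), radius * np.sin(t)], axis=1)
    v = np.array([f(p) for p in pts])
    ang = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))  # continuous field angle
    return round((ang[-1] - ang[0]) / (2.0 * np.pi))

# Hypothetical field with a stable focus at the origin: index +1.
f = lambda p: np.array([-p[0] - p[1], p[0] - p[1]])
print(winding_number(f))  # 1, so an equilibrium must lie inside the circle
```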
