
BioSystems 58 (2000) 159–165

Neural networks through the hourglass

Tatyana Turova
Mathematical Center, Lund University, 22100 Lund, Sweden

Abstract

The effect of synaptic plasticity on the dynamics of a large neural network is studied. Our approach is analytic but inspired by data, both simulated and experimental. We explain the formation of small, strongly connected assemblies within a dynamical network following Hebb's rule. We also find the conditions for the synchrony effect in a stochastic network in the absence of a large synchronized input.

Keywords: Attractor; Neural network; Ornstein–Uhlenbeck process

1. Introduction

The concept of using the theory of random processes to describe neural activity has been elaborated since the 1960s. The analysis reported by Griffith (1971) already indicated a remarkable variability in the behaviour of stochastic biological neural networks, but it was not until the 1990s that the existence of phase transitions was proved rigorously (Cottrell, 1992; Karpelevich et al., 1995; Malyshev and Turova, 1997). Given the numerous simulation results on the behaviour of large stochastic networks, it is natural to try to make more links between the computational and analytic approaches. For this purpose, I consider here two quite different examples from the literature: one is a result of numerical simulations by Xing and Gerstein (1996), and the other is reported as physiological data by Charpier et al. (1994). In both examples, the effect of synaptic plasticity on the cooperative behaviour of a large network is studied. In this paper a mathematical model is presented by means of which one can observe qualitatively the same phenomena. I shall discuss which features of this model are essential, and even necessary, for exhibiting the same qualities. Then, based on the known results and on the presented analysis, I will be able to answer some of the questions posed by Charpier et al. (1994) and Xing and Gerstein (1996).

2. The model

Consider a model that fulfils a minimum requirement to be called 'neuronal', i.e. it takes into account the spiking nature of the neuronal activity, the exponential decay of the post-synaptic potentials, and the exponential decay of the deviation of the membrane potential from the rest potential. As an example, I take a model introduced by Kryukov (1978) under the assumption that the electrical activity (the membrane potential) of a single independent neuron can be described by some Markov process. In fact, a model for a single neuron similar to that described in Section 2.1 appears already as a 'realistic neuronal model' in the monograph by Griffith (1971). In the form presented here, the model was formulated and studied by Turova (1996, 1997).

2.1. A model neuron

The neuron is modelled as one point, neglecting the propagation of membrane current along the axons and the dendrites. The activity of each neuron is described by the stochastic point process of the consecutive firing moments (spikes). Assume that any individual neuron is 'active', i.e. the sequence of its spikes forms a renewal process. More exactly, in the absence of interactions, let the inter-spike intervals be independent random variables with a generic variable

$$Y \stackrel{d}{=} \inf\{r > 0 : \eta(r) = y e^{-a r}\}, \qquad (1)$$

where $y > 0$, $a > 0$, and $\eta(t)$ is the Ornstein–Uhlenbeck process defined by $d\eta(t) = -a\,\eta(t)\,dt + dW(t)$, with $\eta(0) = 0$. In this case, the density of $Y$ is

$$p_Y(v) = \frac{(2a)^{3/2}\, e^{2av}\, y}{\sqrt{\pi}\,\left(e^{2av} - 1\right)^{3/2}} \exp\!\left(-\frac{a y^2}{e^{2av} - 1}\right), \qquad v > 0. \qquad (2)$$

By varying the parameter $y$, one can adjust the mean of the inter-spike interval to the data. Roughly speaking, the mean inter-spike interval is an increasing function of $y$.

2.2. A network

The neurons of the network are enumerated by the sites $i \in \Lambda \subset \mathbb{Z}^2$. The evolution of the $i$th neuron is described by a membrane potential $x_i(t) \in \mathbb{R}$ and a threshold function $y_i(t) \in \mathbb{R}_+$, $t \geq 0$, which are continuous stochastic processes except for, at most, a countable number of points of discontinuity. It is assumed that $x_i(t) < y_i(t)$ for all $t \geq 0$, except for the random moments $0 < t_i^1 < t_i^2 < \ldots$, when $x_i(t_i^n) \geq y_i(t_i^n)$, $n \geq 1$. We call these moments $t_i^n$, $n \geq 1$, the moments of 'firing' of the $i$th neuron. At any moment of firing of the $i$th neuron, the membrane potential $x_i(t)$ and the threshold $y_i(t)$ are reset jumpwise to the values $0$ and $y$, respectively, i.e.

$$\lim_{\varepsilon \downarrow 0} x_i(t_i^n + \varepsilon) = 0, \qquad \lim_{\varepsilon \downarrow 0} y_i(t_i^n + \varepsilon) = y,$$

and the process of membrane potential accumulation is repeated until the next firing. Set $t_i^0 = 0$ for all $i \in \Lambda$, and define for $t \geq 0$

$$S_i(t) := t - t_i^n, \quad \text{if } t \in (t_i^n, t_i^{n+1}].$$

Thus $S_i(t)$ denotes the time elapsed since the last firing of the $i$th neuron until the moment $t$. The threshold function $y_i(t)$ is defined by

$$y_i(t) := y e^{-a S_i(t)}, \qquad t \geq 0.$$

The evolution of the membrane potentials of the interacting neurons is given by the following system:

$$x_i(t) := x_i(0) + \eta_i^n(S_i(t)) + I(t), \quad i \in \Lambda, \ \text{if } t \in (t_i^n, t_i^{n+1}], \qquad (3)$$

where $x(0) \in (-\infty, y]^\Lambda$ is the initial state, $\eta_i^n(t)$, $i \in \Lambda$, $n \geq 1$, are independent copies of $\eta(t)$, and the interaction term is

$$I(t) = \sum_{k \geq 1,\ j \in \Lambda \setminus \{i\}:\ t_i^n < t_j^k \leq t} a_{ij}\, e^{-a(t - t_j^k)}, \quad \text{if } t \in (t_i^n, t_i^{n+1}]. \qquad (4)$$
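As an illustration (not part of the original paper), a minimal simulation sketch of the dynamics of Eqs. (3) and (4) follows. The Ornstein–Uhlenbeck increments are integrated by the Euler–Maruyama scheme; the lattice size, the parameters a and y, the time step, and the nearest-neighbour connection constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the network of Eqs. (3)-(4): each neuron accumulates an
# Ornstein-Uhlenbeck membrane potential, fires when it crosses the decaying
# threshold y*exp(-a*S_i(t)), and sends exponentially decaying post-synaptic
# increments a_ij*exp(-a*(t - t_j^k)) to its lattice neighbours.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)

N = 10                     # 10x10 lattice Lambda in Z^2
a, y = 1.0, 2.0            # decay rate and reset threshold of Eq. (1)
dt, T = 1e-3, 20.0         # Euler-Maruyama step and total simulated time
w = 0.5                    # a_ij for nearest neighbours (excitatory if > 0)

x = np.zeros((N, N))       # membrane potentials (OU part of x_i(t))
S = np.zeros((N, N))       # time since last firing, S_i(t)
psp = np.zeros((N, N))     # running sum of decaying post-synaptic potentials
spikes = []                # recorded firing moments (t, i, j)

for step in range(int(T / dt)):
    t = step * dt
    # OU dynamics d(eta) = -a*eta*dt + dW
    x += (-a * x) * dt + np.sqrt(dt) * rng.standard_normal((N, N))
    psp *= np.exp(-a * dt)                 # exponential PSP decay of Eq. (4)
    S += dt
    fired = (x + psp) >= y * np.exp(-a * S)   # threshold crossing
    if fired.any():
        for i, j in zip(*np.nonzero(fired)):
            spikes.append((t, i, j))
            # deliver a_ij to the four nearest neighbours on the lattice
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    psp[ni, nj] += w
        # reset membrane potential and threshold clock jumpwise
        x[fired] = 0.0
        S[fired] = 0.0

print(f"{len(spikes)} spikes in {T} time units")
```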
This form of interaction, suggested by Kryukov et al. (1990), has the advantage of being mathematically tractable while still resembling physiological data: it takes into account the exponential decay of the post-synaptic potentials, and the connection constants $a_{ij}$ can be chosen positive as well as negative, to model excitatory and inhibitory connections, respectively.

In order to study the dynamics of the spike trains generated by the model of Eq. (3), I introduce an embedded process $R(t) = [R_i(t), i \in \Lambda] \in \mathbb{R}_+^\Lambda$ such that each of its components

$$R_i(t) = \text{time until the next firing of the } i\text{th neuron, assuming no interaction takes place meanwhile.}$$

It has been proved by Turova (1996) that

$$\{R_i(t), i \in \Lambda\}_{t \geq 0} \stackrel{d}{=} \{X_i(t), i \in \Lambda\}_{t \geq 0}, \qquad (5)$$

where the process $X(t)$ is known as an 'hourglass model'. The name is suggested by the straightforward analogy with a sandglass and a spinglass, and is used in the sense of a 'timeglass'. In fact, 'timglas' means sand-glass in Swedish. The hourglass model has a very clear description, which is now outlined briefly, referring to Turova (1996, 1997) for the details. As long as all the components of $X(t)$ are strictly positive, they decrease from the initial state $X(0)$ linearly in time with rate one, until the first time, $t_z$, that one of the components reaches zero for some $z \in \Lambda$: $X_z(t_z) = 0$. Then $X_z$ is reset to $X_z(t_z^+) = Y_z^1$, where $Y_z^1$ is an independent copy of the variable $Y$ already defined. At the same moment, every trajectory $X_j(t)$ with $j$ in the interacting neighbourhood of $z$ receives, instantaneously, a random increment $u_{zj}(t)$. After the moment $t_z$, the foregoing dynamics are repeated.

Observe that the positive, i.e. excitatory, connections $a_{zj}$ result in a negative sign of $u_{zj}(t)$, which shortens the time until the next firing and might even cause a simultaneous firing of the $j$th neuron, in which case $X_j$ is also reset to $X_j(t_z^+) = Y_j^1$. On the contrary, the inhibitory connections, resulting in a positive sign of $u_{zj}$, instantaneously increase $X_j(t)$ and therefore delay the next firing. The distributions of $u_{ij}(t)$ are derived from the model. In particular, if $a_{ij} = -\bar{a} < 0$, then given that the post-synaptic $j$th neuron is in a state $X_j(t) = u > 0$, one can derive the following formula for the density of $u_{ij}(t)$:

$$\frac{(2a)^{3/2}\, e^{2av}\, \bar{a}\, e^{-a u}}{\sqrt{\pi}\,\left(e^{2av} - 1\right)^{3/2}} \exp\!\left(-\frac{a\, \bar{a}^2 e^{-2au}}{e^{2av} - 1}\right), \qquad v > 0. \qquad (6)$$

Notice that the hourglass model with time-independent interactions had been introduced and studied independently, as a model for neuronal activity, by Cottrell (1992). With the results of Turova (1996), it became possible to choose the parameters of this model consistently with those of the standard model of interacting membrane potentials. The equality of Eq. (5) implies that the spike trains of the model of Eq. (3) and the process $X(t)$ are equal in distribution. But the process $X(t)$ is much easier to treat analytically, since it has piecewise linear trajectories. I shall emphasize that the definition of the hourglass process does not require any conditions on the original model. Therefore, it appears to be a quite useful probabilistic tool.
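The piecewise-linear trajectories make the hourglass process easy to simulate event by event, with no discretization of membrane potentials. The sketch below is an illustration only: it uses exponential resets as a stand-in for the exact density (2), and a fixed positive increment as a stand-in for the derived law of $u_{ij}(t)$; the chain length and parameter values are likewise assumptions.

```python
import numpy as np

# Event-driven sketch of the hourglass model: all components of X(t) decrease
# linearly at rate one; when X_z hits zero, neuron z fires, X_z is reset to a
# fresh copy of Y, and each neighbour j receives an instantaneous increment
# u_zj (negative for excitation, positive for inhibition). The reset law and
# the increment law here are simplifying assumptions, not the exact
# densities (2) and (6) of the paper.

rng = np.random.default_rng(1)

N = 20                      # one-dimensional chain of N neurons
mean_isi = 1.0              # mean of the stand-in reset variable Y
u = 0.3                     # inhibitory increment u_zj > 0 (delays firing)
X = rng.exponential(mean_isi, N)   # initial state X(0)

t, spikes = 0.0, []
for _ in range(5000):                       # simulate 5000 firing events
    z = int(np.argmin(X))                   # next component to reach zero
    t += X[z]                               # advance time along linear decay
    X -= X[z]                               # all components decrease at rate 1
    spikes.append((t, z))
    X[z] = rng.exponential(mean_isi)        # reset to an independent copy of Y
    for j in (z - 1, z + 1):                # nearest-neighbour inhibition
        if 0 <= j < N:
            X[j] += u                       # positive u_zj delays neuron j

rates = np.bincount([z for _, z in spikes], minlength=N) / t
print("per-neuron firing rates:", np.round(rates, 2))
```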

3. Temporal coding of the dynamics

It has been observed in many simulations (see, for example, Xing and Gerstein, 1996; Cottrell et al., 1997), as well as proved analytically for some classes of hourglass models (Karpelevich et al., 1995; Cottrell and Turova, 2000), that a network with strong enough inhibition splits, with time, into two subsets. These are a subset of active, i.e. infinitely often firing, neurons and a subset of inactive neurons. Furthermore, the subset of inactive neurons remains unchanged once it reaches a certain state, so that "special manipulations are required to activate these silent neurons", as noticed by Xing and Gerstein (1996). The subsets of inactive neurons can be used for the description of the limiting states of the dynamics.

Recall the definition of the attractors given by Malyshev and Turova (1997) for the transient hourglass models. Let $(\Omega, \Sigma, P)$ be the underlying probability space of the process $X(t)$. For any $A \subseteq \Lambda$ let $\Omega_A \subseteq \Omega$ be the set of all trajectories $\omega \in \Omega$ such that

$$X_i(t, \omega) \to \infty \text{ as } t \to \infty, \quad \text{if and only if } i \in A. \qquad (7)$$

Here $X_i(t, \omega)$ denotes a particular realization of the random trajectory $X_i(t)$. It is clear that $\Omega = \bigcup_A \Omega_A$, where $A$ runs over all subsets of $\Lambda$.

Definition 1. We call an 'attractor' any non-empty set $A$ such that $P[\Omega_A] > 0$.

The attractors defined here are the only stable patterns of the silent neurons for the transient hourglass models. Let us rewrite this definition. For any $T > 0$, $n \in \mathbb{Z}_+$ and $i \in \Lambda$ set

$$p_i(n, T) := \frac{1}{T}\, \#\{(n-1)T < t \leq nT : \text{the } i\text{th neuron fires at time } t\}.$$

Definition 2. We call an 'attractor' any non-empty set $A$ such that $\lim_{n \to \infty} p_i(n, 1) = 0$ if $i \in A$, while for any $j \notin A$, there exists $\lim_{T \to \infty} p_j(1, T) > 0$.

According to the results of Turova (1996), any finite model of Eqs. (3) and (4) is ergodic for any fixed connection constants. Therefore, in this case there are no attractors in the sense of the previous definition. Instead, we introduce a 'meta-stable state'. Choose a constant $T_c > 0$ large enough when compared with the other time characteristics of the network, and modify Definition 2 as follows.

Definition 3. We call a 'meta-stable state' any non-empty set $A$ for which, with positive probability, there exists an infinite sequence $\{n_l\}_{l \geq 1}$ such that $p_i(n_l, T_c) = 0$, $l \geq 1$, if $i \in A$.

Here is the first conclusion one can draw on the basis of the hourglass model. If the interactions of the system are such that the increments $u_{ij}(t)$ of the corresponding hourglass model are time- and space-homogeneous, then in the presence of strong enough inhibition the system moves into one of its attractors and stays there 'forever'. Recall that for a one-dimensional model with nearest-neighbour inhibitory interactions, all the possible attractors were classified by Karpelevich et al. (1995). These attractors are random; we observe them on a microscopic scale. However, on a larger scale, we obtain a deterministic macro image, due to the law of large numbers. The structure of equilibrium measures on attractors has been studied by Malyshev and Turova (1997).
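In simulations, Definitions 2 and 3 suggest a direct empirical test: estimate $p_i(n, T)$ from a recorded spike train and flag the neurons whose rate vanishes over a long window $T_c$. A minimal sketch follows; the spike-train format, the toy data, and the choice of $T_c$ are assumptions for the demonstration only.

```python
import numpy as np

# Empirical rate p_i(n, T) of Definitions 2-3: the number of firings of
# neuron i in the window ((n-1)T, nT], divided by T. Neurons with
# p_i(n, T_c) == 0 over a long window T_c are candidates for a meta-stable
# (silent) set A. Data format and T_c are illustrative assumptions.

def p(spike_trains, i, n, T):
    """Empirical firing rate of neuron i in the window ((n-1)*T, n*T]."""
    times = np.asarray(spike_trains.get(i, []))
    return np.count_nonzero(((n - 1) * T < times) & (times <= n * T)) / T

# toy spike trains: neuron 0 keeps firing, neuron 1 falls silent after t = 3
spike_trains = {0: [0.5, 1.2, 2.8, 4.1, 5.9, 7.3, 9.0],
                1: [0.9, 2.1, 2.9]}

T_c = 5.0                                 # 'large' window, chosen for the demo
silent = [i for i in spike_trains if p(spike_trains, i, 2, T_c) == 0.0]
print("candidate meta-stable set A:", silent)   # -> [1]
```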

4. Random graphs and Hebb’s rule