The attractors defined here are the only stable patterns of the silent neurons for the transient hourglass models. Let us rewrite this definition. For any $T > 0$, $n \in \mathbf{Z}_+$ and $i \in L$ set

$$p_i(n, T) := \frac{1}{T}\,\#\{(n-1)T \le t \le nT : \text{the } i\text{th neuron fires at time } t\}.$$

Definition 2. We call an 'attractor' any non-empty set $A$ such that $\lim_{n\to\infty} p_i(n, 1) = 0$ if $i \notin A$, while for any $j \in A$ there exists $\lim_{T\to\infty} p_j(1, T) > 0$.
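As a concrete illustration of Definition 2, the empirical rates $p_i(n, T)$ can be estimated directly from recorded spike times. The sketch below uses invented toy spike trains (not data from this paper): one neuron keeps firing while the other falls silent, mimicking the attractor pattern.

```python
def firing_rate(spike_times, n, T):
    """Empirical rate p_i(n, T): the number of spikes of one neuron
    falling in the window ((n-1)T, nT], divided by T."""
    return sum(1 for t in spike_times if (n - 1) * T < t <= n * T) / T

# Toy spike trains (hypothetical data): neuron 'a' fires steadily,
# neuron 'b' falls silent early -- so A = {'a'} behaves like an attractor.
spikes = {
    "a": [0.5 * k for k in range(1, 41)],  # regular spikes up to t = 20
    "b": [0.7, 1.1, 1.6],                  # silent after t = 1.6
}

# p_i(n, 1) over successive unit windows, as in Definition 2
rates_a = [firing_rate(spikes["a"], n, 1.0) for n in range(1, 21)]
rates_b = [firing_rate(spikes["b"], n, 1.0) for n in range(1, 21)]
```

Here `rates_b` reaches 0 while `rates_a` stays bounded away from 0, which is exactly the dichotomy that Definition 2 formalizes.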
According to the results of Turova (1996), any finite model (Eqs. (3) and (4)) is ergodic for any fixed connection constants. Therefore, in this case there are no attractors in the sense of the previous definition. Instead, we introduce a 'meta-stable state'. Choose a constant $T_c > 0$ large enough compared with the other time characteristics of the network, and modify Definition 2 as follows.
Definition 3. We call a 'meta-stable state' any non-empty set $A$ for which, with a positive probability, there exists an infinite sequence $\{n_l\}_{l \ge 1}$ such that $p_i(n_l, T_c) = 0$ for all $l \ge 1$ if $i \notin A$.

Here is the first conclusion one can draw on the basis of the hourglass model. If the interactions of the system are such that the interactions $u_{ij}(t)$ of the corresponding hourglass model are time- and space-homogeneous, then in the presence of strong enough inhibition the system moves into one of its attractors and stays there 'forever'. Recall that for a one-dimensional model with nearest-neighbour inhibitory interactions, all the possible attractors were classified by Karpelevich et al. (1995). These attractors are random; we observe them on a microscopic scale. However, on a larger scale we obtain a deterministic macro image due to the law of large numbers. The structure of equilibrium measures on attractors has been studied by Malyshev and Turova (1997).
4. Random graphs and Hebb’s rule
The discussion in this section is inspired by the results of Xing and Gerstein (1996), who observed, in particular, that in the presence of strong enough inhibition a homogeneous network after training based on the Hebb rule becomes composed of stable groups of neurons. The neurons within a group are strongly connected, while the connections between the groups are weak. The striking feature is that, compared with the size of the whole network, the size of any group is small.

A natural question arises: why do we observe only 'small' groups after training based on the Hebb rule? To answer this question, I shall analyze the behaviour of our network under similar conditions. To eliminate the boundary effect, let $L$ be a two-dimensional torus. We assume that the $i$th neuron sends excitatory impulses to the neurons numerated by the sites in $D_i^E = \{ j \in L : 0 < |i-j| \le d \}$, and it sends inhibitory impulses to $D_i^I = \{ j \in L : d < |i-j| \le D \}$, where the cardinalities $|D_i^E| = D^E$ and $|D_i^I| = D^I$ are independent of $i$. To be able to demonstrate the attractors, I shall consider a network such that the corresponding hourglass process has interactions that become state-independent after training based on the following rule attributed to Hebb (see also Xing and Gerstein, 1996).
1. Synaptic strength is increased when pre- and post-synaptic neurons fire in near synchrony.
2. The total outward synaptic strength of a neuron remains constant.
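As a side illustration of the geometry above, here is a minimal sketch of the neighbourhoods $D_i^E$ and $D_i^I$; the torus size and the radii $d$, $D$ are arbitrary illustrative choices, not values from the paper. Translation invariance of the torus makes the cardinalities the same for every site $i$, as the construction requires.

```python
def torus_dist(i, j, n):
    """Euclidean distance between sites i and j on an n x n torus
    (coordinates wrap around)."""
    dx = min(abs(i[0] - j[0]), n - abs(i[0] - j[0]))
    dy = min(abs(i[1] - j[1]), n - abs(i[1] - j[1]))
    return (dx * dx + dy * dy) ** 0.5

def neighbourhoods(i, n, d, D):
    """Excitatory and inhibitory neighbourhoods of site i:
    D_i^E = {j : 0 < |i-j| <= d},  D_i^I = {j : d < |i-j| <= D}."""
    sites = [(x, y) for x in range(n) for y in range(n)]
    exc = [j for j in sites if 0 < torus_dist(i, j, n) <= d]
    inh = [j for j in sites if d < torus_dist(i, j, n) <= D]
    return exc, inh

# By translation invariance, |D_i^E| and |D_i^I| do not depend on i.
exc0, inh0 = neighbourhoods((0, 0), 10, 1.5, 3.0)
exc1, inh1 = neighbourhoods((4, 7), 10, 1.5, 3.0)
```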
In order to implement this rule in the present model, I fix positive constants $B^I$ and $B^E$ arbitrarily, and choose an interaction term $I(t)$ in Eq. (3) so that the interactions of the corresponding hourglass process equal $u_{ij}(t)\,q_{ij}(t)$, where $u_{ij}(t)$, $t > 0$, are independent copies of the independent random variables $u_{ij}$, $i \in L$, $j \in D_i^I \cup D_i^E$, such that

$$u_{ij} < 0 \ \text{with}\ \mathbf{E}u_{ij} = -B^I, \quad \text{if } j \in D_i^I,$$
$$u_{ij} > 0 \ \text{with}\ \mathbf{E}u_{ij} = B^E, \quad \text{if } j \in D_i^E,$$

and $0 \le q_{ij}(t) \le D^E$ are the following random functions with piecewise-constant right-continuous trajectories. Set $q_{ij}(0) = 1$ for all $i \in L$, $j \in D_i^I \cup D_i^E$, and choose $0 < \varepsilon < \mathbf{E}Y/D^E$. Then $t + \varepsilon$ is a discontinuity point of $q_{ij}(\cdot)$, $j \in D_i^E$, if the $i$th neuron fires at time $t$ and the $j$th neuron fires within the time interval $(t, t+\varepsilon]$. More precisely, let $F_i(t) \subseteq D_i^E$ be the set of the neurons which fire within the time interval $(t, t+\varepsilon]$. Then set, for all $j \in F_i(t)$,

$$q_{ij}(t+\varepsilon) = \begin{cases} q_{ij}(t) + \dfrac{1}{1+\varepsilon}\left(\dfrac{D^E}{|F_i(t)|} - q_{ij}(t)\right), & \text{if } |F_i(t)| < D^E, \\[6pt] 1, & \text{if } |F_i(t)| = D^E, \end{cases} \qquad (8)$$

and, for $j \in D_i^E \setminus F_i(t)$,

$$q_{ij}(t+\varepsilon) = q_{ij}(t) - \frac{1}{\left(D^E - |F_i(t)|\right)(1+\varepsilon)}\left(D^E - \sum_{j' \in F_i(t)} q_{ij'}(t)\right).$$
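A minimal sketch of one training step of Eq. (8), with invented numerical values ($D^E = 6$, $\varepsilon = 0.1$, firing set $\{0,1\}$ are illustrative choices). The final sum checks the second Hebb property: the total outward strength $\sum_j q_{ij}$ stays equal to $D^E$.

```python
def hebb_update(q_i, F, eps):
    """One training step of Eq. (8) for the outgoing weights of neuron i.
    q_i maps each j in D_i^E to q_ij(t); F is the set of neurons in
    D_i^E that fired within (t, t + eps]."""
    DE = len(q_i)                      # |D_i^E|
    if len(F) == DE:                   # every neighbour fired: reset to 1
        return {j: 1.0 for j in q_i}
    new = dict(q_i)
    if F:
        deficit = DE - sum(q_i[j] for j in F)
        for j in F:                    # strengthen synchronous connections
            new[j] = q_i[j] + (DE / len(F) - q_i[j]) / (1 + eps)
        for j in q_i:                  # weaken the rest, conserving the sum
            if j not in F:
                new[j] = q_i[j] - deficit / ((DE - len(F)) * (1 + eps))
    return new

q = {j: 1.0 for j in range(6)}         # q_ij(0) = 1, D^E = 6
for _ in range(2):                     # neurons 0 and 1 fire in near synchrony
    q = hebb_update(q, {0, 1}, eps=0.1)
total = sum(q.values())                # should still equal D^E = 6
```

Repeated synchronous firing of $\{0, 1\}$ drives $q_{i0}$ and $q_{i1}$ toward $D^E/2$ while the remaining weights shrink toward 0 — the accumulation of 'strong' connections discussed next.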
Cottrell and Turova (2000) have proved, for the model with a specific architecture of the interacting neighbourhood, that for any value $w^E$ of the excitatory connections there exists a critical value $w^I_{cr}(w^E)$ of the inhibitory connections that separates the ergodic and transient cases. Furthermore, it is proved by the same authors, and simulated for other types of interacting neighbourhoods by Cottrell et al. (1997), that $w^I_{cr}(w^E)$ is a non-increasing function. It is natural then to conjecture here also that, when $B^E$ is fixed and $q_{ij}(t) \equiv 1$, there is a constant $B^I_{cr}$, independent of the size of the network, such that the system is transient if $B^I > B^I_{cr}$ and ergodic when $B^I < B^I_{cr}$.

I shall use random graphs to illustrate the dynamics of accumulating the 'strong' connections in the network due to the Hebb rule. The random graph $G(t)$ consists of the set of vertices $L$ and the set $E(t)$ of the directed edges $(i, j) \in L \times L$. Set $E(0) = \emptyset$. Then, for any $t > 0$, there is a directed edge at time $t$ from $i$ to $j$ if $q_{ij}(t) \ge [1/(1+\varepsilon)]D^E$. Thus, the neurons at the nodes of a connected component of $G(t)$ fire in near synchrony.

The dynamics of the graph $G(t)$ basically has two phases:
1. accumulation of the connected components, when the excitatory connections play the major role; and
2. formation of the stable groups of connected neurons along the connected components, when the inhibitory connections come into play.

Consider these phases more closely. Notice that the probability of the appearance of an edge in this graph is at most the probability that, in the neighbourhood of the $i$th firing neuron, there will be a neuron that fires in near synchrony. A simple analysis shows that this probability is proportional to $\varepsilon \max_x p_Y(x)$. Then it is not difficult to obtain the following bound for the expected length of any connected component $\mathcal{L}$ of the graph $G(t)$ at time $t$:

$$\mathbf{E}|\mathcal{L}| \le C \varepsilon D^E \max_x p_Y(x)$$

for some positive constant $C$ independent of
$L$. But as soon as there is at least one edge in the graph $G(t)$, those neurons that are in the $D^I$-neighbourhoods of the connected component receive, roughly speaking, twice as many inhibitory impulses as they would from a neuron at a free node of the graph. Thus, when the size of the connected subgraph becomes of the order of $B^I_{cr}/B^I$, this group will be surrounded by the silent neurons, and the dynamics of this group will be independent of the rest of the active neurons. This is in perfect agreement with the fact observed by Xing and Gerstein (1996) that the size of the connected group decreases when the inhibition increases.
I conclude that, after the training (Eq. (8)), any limiting structure of the network will be composed of small groups of connected neurons, unless the coefficients $q_{ij}(t)$ are kept bounded from above by a small constant independent of $D^E$. But the latter would probably reduce the meaning of the Hebb rule.
5. Role of weak inhibitory connections