posed for time-homogeneous diffusion processes and mentioned in the previous subsection, in a previous paper (cf. Di Nardo et al., 1998b) it has been proved that the FPT density of a Gauss–Markov process can be obtained by solving Eq. 5, a simple, non-singular Volterra integral equation of the second kind, with S(t) ∈ C¹[t₀, +∞) and
$$
\psi[S(t),t \mid y,\tau] = \left\{ \frac{S'(t)-m'(t)}{2}
- \frac{S(t)-m(t)}{2}\,
\frac{h_1'(t)\,h_2(\tau)-h_2'(t)\,h_1(\tau)}{h_1(t)\,h_2(\tau)-h_2(t)\,h_1(\tau)}
- \frac{y-m(\tau)}{2}\,
\frac{h_2'(t)\,h_1(t)-h_2(t)\,h_1'(t)}{h_1(t)\,h_2(\tau)-h_2(t)\,h_1(\tau)} \right\}
f[S(t),t \mid y,\tau].
$$
By making use of this result, in Di Nardo et al. (1998b) an efficient numerical procedure based on a repeated Simpson's rule has been proposed to evaluate FPT densities of Gauss–Markov processes. In the following, we present a special non-stationary Gauss–Markov neuronal model and make use of such a numerical procedure to analyze the corresponding firing pdf's.
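As an illustration of how such a procedure can be organized, here is a minimal sketch (not the code of Di Nardo et al., 1998b): it assumes that Eq. 5 has the usual form g(t) = −2ψ[S(t), t | x₀, t₀] + 2∫_{t₀}^{t} ψ[S(t), t | S(u), u] g(u) du and replaces the repeated Simpson's rule by a plain rectangular rule; all function and parameter names are illustrative.

```python
import numpy as np

def fpt_density(S, dS, m, dm, h1, dh1, h2, dh2, x0, t0, T, n):
    """Approximate the FPT density g(t) of a Gauss-Markov process through S(t).

    Rough sketch only: it assumes the Volterra equation (Eq. 5) has the form
        g(t) = -2 psi[S(t),t | x0,t0] + 2 * int_{t0}^{t} psi[S(t),t | S(u),u] g(u) du
    and replaces the repeated Simpson's rule of Di Nardo et al. (1998b) by a
    plain rectangular rule on the equally spaced grid t_1, ..., t_n = T.
    """
    def psi(t, y, tau):
        # transition pdf f[S(t), t | y, tau]: Gaussian with the conditional
        # mean and variance of a Gauss-Markov process with c(s,t) = h1(s) h2(t)
        M = m(t) + h2(t) / h2(tau) * (y - m(tau))
        D = h1(t) * h2(tau) - h2(t) * h1(tau)
        V = h2(t) * D / h2(tau)
        f = np.exp(-(S(t) - M) ** 2 / (2.0 * V)) / np.sqrt(2.0 * np.pi * V)
        # kernel psi[S(t), t | y, tau] (the expression displayed above)
        return ((dS(t) - dm(t)) / 2.0
                - (S(t) - m(t)) / 2.0 * (dh1(t) * h2(tau) - dh2(t) * h1(tau)) / D
                - (y - m(tau)) / 2.0 * (dh2(t) * h1(t) - h2(t) * dh1(t)) / D) * f

    dt = (T - t0) / n
    t = t0 + dt * np.arange(1, n + 1)
    g = np.empty(n)
    for k in range(n):
        # quadrature over the nodes strictly below t_k avoids the diagonal term
        conv = sum(psi(t[k], S(t[j]), t[j]) * g[j] for j in range(k))
        g[k] = -2.0 * psi(t[k], x0, t0) + 2.0 * dt * conv
    return t, g
```

Finer grids or higher-order composite rules (such as the repeated Simpson's rule actually used in the paper) improve the accuracy of the discretized convolution.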
3. A special Gauss–Markov neuronal model
In this section, we consider the neuronal model {X(t), t ∈ I}, with I = [0, +∞), characterized by the mean

$$
m(t) = \rho + \lambda\,\varphi(t), \qquad (12)
$$

where

$$
\varphi(t) =
\begin{cases}
\dfrac{\alpha\vartheta}{\vartheta-\alpha}\left(e^{-t/\vartheta}-e^{-t/\alpha}\right), & \alpha \neq \vartheta,\\[2ex]
t\,e^{-t/\vartheta}, & \alpha = \vartheta,
\end{cases}
$$

and by the covariance

$$
c(s,t) = \frac{\sigma^{2}\vartheta}{2}\, e^{-(t-s)/\vartheta}, \qquad s \le t, \qquad (13)
$$

with λ > 0, ρ ∈ ℝ, ϑ > 0, α > 0 and σ > 0. As is easily seen, the above conditions characterizing Gauss–Markov processes are satisfied by Eqs. 12 and 13. Indeed, in this case we have h₁(t) = (σ²ϑ/2) e^{t/ϑ} and h₂(t) = e^{−t/ϑ}. Hence, here we are defining a Gauss–Markov neuronal model.
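For concreteness, the ingredients of this model can be coded as follows (again a sketch with illustrative names; the numerical values merely echo those of Fig. 3 below).

```python
import numpy as np

# Mean, covariance factors and their derivatives for the model of Eqs. 12-13.
rho, lam, theta, alpha, sigma2 = -70.0, 14.0, 5.0, 1.0, 2.0   # illustrative values

def phi(t):
    if np.isclose(alpha, theta):
        return t * np.exp(-t / theta)
    return alpha * theta / (theta - alpha) * (np.exp(-t / theta) - np.exp(-t / alpha))

def m(t):    return rho + lam * phi(t)                             # Eq. 12
def dm(t):   return lam * (np.exp(-t / alpha) - phi(t) / theta)    # phi' = -phi/theta + exp(-t/alpha)
def h1(t):   return 0.5 * sigma2 * theta * np.exp(t / theta)
def dh1(t):  return h1(t) / theta
def h2(t):   return np.exp(-t / theta)
def dh2(t):  return -h2(t) / theta

# consistency check: h1(s) * h2(t) reproduces the covariance of Eq. 13 for s <= t
s, t = 1.0, 3.0
assert np.isclose(h1(s) * h2(t), 0.5 * sigma2 * theta * np.exp(-(t - s) / theta))
```

Passing these functions, together with a constant threshold S(t) ≡ S (so that S′(t) ≡ 0), to the fpt_density sketch above yields a rough approximation of the firing pdf; the results discussed below were instead obtained with the repeated Simpson's rule procedure.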
Recalling Eq. 11, the coefficients A₁(x, t) and A₂(x, t) for the underlying process are, respectively, given by

$$
A_1(x,t) = -\frac{1}{\vartheta}\,(x-\rho) + \lambda\, e^{-t/\alpha}, \qquad A_2(x,t) = \sigma^{2}. \qquad (14)
$$

Hence, the infinitesimal moments of the OU neuronal model turn out to be a special case of the model expressed by Eq. 14. Indeed, for λ = 0 Eq. 14 yields A₁(x, t) = −(x − ρ)/ϑ; moreover, when λ > 0 and α → 0 the drift A₁(x, t) goes to −(x − ρ)/ϑ. Let us consider the deterministic model suggested by Eqs. 1 and 14 in the absence of randomness, i.e. with σ² = 0, described by

$$
\frac{dx(t)}{dt} = -\frac{1}{\vartheta}\,\bigl[x(t)-\rho\bigr] + \lambda\, e^{-t/\alpha}, \qquad x(t_0) = x_0 .
$$

It is not hard to see that

$$
x(t) = x_0\, e^{-(t-t_0)/\vartheta} + \rho\,\bigl[1-e^{-(t-t_0)/\vartheta}\bigr] + \lambda\,\bigl[\varphi(t)-\varphi(t_0)\, e^{-(t-t_0)/\vartheta}\bigr]. \qquad (15)
$$
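Indeed, multiplying the above equation by the integrating factor e^{t/ϑ} and integrating over [t₀, t] gives

$$
x(t) = x_0\, e^{-(t-t_0)/\vartheta}
+ \frac{\rho}{\vartheta}\int_{t_0}^{t} e^{-(t-u)/\vartheta}\,du
+ \lambda\int_{t_0}^{t} e^{-(t-u)/\vartheta}\, e^{-u/\alpha}\,du ,
$$

and Eq. 15 follows since the first integral equals ϑ[1 − e^{−(t−t₀)/ϑ}] and, recalling that φ(t) = ∫₀^t e^{−(t−u)/ϑ} e^{−u/α} du, the second equals φ(t) − φ(t₀) e^{−(t−t₀)/ϑ}.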
Recalling Eqs. 12 and 13, we note that, similarly to the OU model, again the conditional mean M(t | t₀), given by the first of Eq. 10, identifies with Eq. 15. Furthermore, if t₀ = 0 and x₀ = ρ, from Eq. 15 one has x(t) = ρ + λφ(t), which coincides with the mean m(t) given by Eq. 12. We note that the relations in Eq. 14 can be interpreted a posteriori in the following way. The neuron's membrane potential is not only subject to the usual spontaneous exponential decay and to endogenous random components, but it also experiences an external input whose magnitude, however, exponentially damps with the time-constant α. Hence, the effect of such an input depends on the two parameters α and λ. In other words, such parameters mimic the effect of an external input to the neuron whose initial strength λ exponentially damps with the time-constant α.
Fig. 3. Firing pdf's for the neuronal model of Section 3, for t₀ = 0, x₀ = ρ = −70, σ² = 2, S = −60, ϑ = 5, λ = 14 and α = 1, 2, 3, 4. Decreasing modes refer to increasing values of α.
When x₀ ≤ ρ, which is the interesting case in neurobiology, from Eq. 15 it follows that x(t) is initially increasing and, after reaching a maximum, decreases monotonically towards the resting potential ρ as t → ∞. Note the significant diversity of behavior of x(t) for λ > 0 and for λ = 0, the latter representing the deterministic version of the OU model. Indeed, if λ = 0, x(t) monotonically tends to the resting potential ρ for all x₀ ≠ ρ.
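As a quick numerical illustration of this behavior, the following sketch evaluates Eq. 15 for t₀ = 0 and x₀ = ρ with one of the parameter sets of Fig. 3 (illustrative code, not part of the original computations).

```python
import numpy as np

# Evaluate Eq. 15 with t0 = 0 and x0 = rho, so that x(t) = rho + lam * phi(t),
# and contrast lam > 0 with lam = 0 (the deterministic version of the OU model).
rho, theta, alpha = -70.0, 5.0, 1.0

def phi(t):
    return alpha * theta / (theta - alpha) * (np.exp(-t / theta) - np.exp(-t / alpha))

t = np.linspace(0.0, 40.0, 401)
for lam in (14.0, 0.0):
    x = rho + lam * phi(t)
    print(f"lam = {lam:4.1f}:  max x(t) = {x.max():.2f} at t = {t[np.argmax(x)]:.2f},"
          f"  x(40) = {x[-1]:.2f}")

# With lam = 14 the trajectory rises from -70, peaks near -60.6 at t ~ 2, and then
# decays back towards the resting potential; with lam = 0 (and x0 = rho) it stays at rho.
```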
4. Numerical results