                                                                                Stochastic Processes and their Applications 87 (2000) 199–234
www.elsevier.com/locate/spa
Extremes and upcrossing intensities for P-differentiable stationary processes
J.M.P. Albin^{a,b,∗}
^a Department of Mathematics, Chalmers University of Technology and University of Göteborg, S-412 96 Göteborg, Sweden
^b Center for Stochastic Processes, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
Received 25 November 1998; received in revised form 22 November 1999
Abstract

Given a stationary differentiable in probability process {ξ(t)}_{t∈R} we express the asymptotic behaviour of the tail P{sup_{t∈[0,1]} ξ(t) > u} for large u through a certain functional of the conditional law of (ξ′(1) | ξ(1) > u). Under technical conditions this functional becomes the upcrossing intensity μ(u) of the level u by ξ(t). However, by not making explicit use of μ(u) we avoid the often hard-to-verify technical conditions required in the calculus of crossings and to relate upcrossings to extremes. We provide a useful criterion for verifying a standard condition of tightness type used in the literature on extremes. This criterion is of independent value. Although we do not use crossings theory, our approach has some impact on that subject. Thus we complement existing results due to, e.g. Leadbetter (Ann. Math. Statist. 37 (1966) 260–267) and Marcus (Ann. Probab. 5 (1977) 52–71) by providing a new and useful set of technical conditions which ensure the validity of Rice's formula μ(u) = ∫₀^∞ z f_{ξ(1),ξ′(1)}(u, z) dz. As examples of application we study extremes of R^n-valued Gaussian processes with strongly dependent component processes, and of totally skewed moving averages of α-stable motions. Further we prove Belyaev's multi-dimensional version of Rice's formula for outcrossings through smooth surfaces of R^n-valued α-stable processes. © 2000 Elsevier Science B.V. All rights reserved.

MSC: primary 60G70; secondary 60E07; 60F10; 60G10; 60G15

Keywords: Extrema; Local extrema; Sojourn; Upcrossing; Rice's formula; Belyaev's formula; Stationary process; Gaussian process; χ²-process; α-stable process
0. Introduction
Given a differentiable stationary stochastic process {ξ(t)}_{t∈R} there is, under technical conditions, a direct relation between the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} for large u and that of the expected number of upcrossings μ(u) of the level u by {ξ(t)}_{t∈[0,1]} (e.g., Leadbetter and Rootzén, 1982; Leadbetter et al., 1983, Chapter 13;
∗ Research supported by NFR grant F-AA/MA 9207-310, and by M.R. Leadbetter.
E-mail address: [email protected] (J.M.P. Albin)
0304-4149/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0304-4149(99)00110-6
Albin, 1990, Theorem 7; Albin, 1992). Under additional technical conditions one further has the explicit expression

    Rice's formula:  μ(u) = ∫₀^∞ z f_{ξ(1),ξ′(1)}(u, z) dz    (0.1)

(e.g., Leadbetter, 1966; Marcus, 1977; Leadbetter et al., 1983, Chapter 7).
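As a numerical illustration (ours, not the paper's), Rice's formula can be checked by counting level upcrossings of a simulated stationary Gaussian process. The sum-of-cosines construction below, with standard normal frequencies, has covariance r(t) = e^{−t²/2}, so −r″(0) = 1 and (0.1) evaluates to μ(u) = e^{−u²/2}/(2π):

```python
import math, random

random.seed(1)

def make_process(n_waves=150):
    """Sum-of-cosines model of a stationary Gaussian process with
    covariance r(t) = exp(-t^2/2) (frequencies drawn from N(0,1))."""
    lam = [random.gauss(0.0, 1.0) for _ in range(n_waves)]
    phi = [random.uniform(0.0, 2 * math.pi) for _ in range(n_waves)]
    amp = math.sqrt(2.0 / n_waves)
    return lambda t: amp * sum(math.cos(l * t + p) for l, p in zip(lam, phi))

def upcrossing_rate(u, t_max=1500.0, dt=0.05):
    """Number of u-upcrossings per unit time, counted on a grid."""
    xi = make_process()
    prev, count = xi(0.0), 0
    for k in range(1, int(t_max / dt) + 1):
        cur = xi(k * dt)
        if prev <= u < cur:
            count += 1
        prev = cur
    return count / t_max

u = 1.0
rice = math.exp(-u * u / 2) / (2 * math.pi)  # mu(u) from (0.1)
emp = upcrossing_rate(u)
print("Rice:", rice, "empirical:", emp)
```

The empirical rate over a long window agrees with μ(u) up to Monte Carlo and grid-discretization error.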
However, although very reasonable, the above-mentioned two sets of "technical conditions" are quite forbidding, and have only been verified for Gaussian processes and processes closely related to them. Hence, although conceptually satisfying, the upcrossing approach to extremes has neither led far outside "Gaussian territory", nor generated many results on extremes that cannot be proved more easily by other methods. See, e.g. Adler and Samorodnitsky (1997) and Albin (1992) for outlines of the technical problems associated with the formula (0.1), and with relating the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} to that of μ(u), respectively.
In Section 3 we show that, under certain conditions on the functions q and w,

    P{ sup_{t∈[0,1]} ξ(t) > u } ∼ P{ξ(1) > u} + (1/x) P{ ξ′(1) > (1/x)(ξ(1) − u)/q(u) | ξ(1) > u } × P{ξ(1) > u}/q(u)    (0.2)

as u ↑ û ≡ sup{x ∈ R: P{ξ(1) > x} > 0} and x ↓ 0 (in that order). Note that, assuming absolute continuity, the right-hand side of (0.2) is equal to

    P{ξ(1) > u} + ∫₀^∞ ∫₀^z f_{ξ(1),ξ′(1)}(u + xqy, z) dy dz.    (0.3)
If we send x ↓ 0 before u ↑ û, (0.1) shows that, under continuity assumptions, (0.3) behaves like P{ξ(1) > u} + μ(u). However, technical conditions that justify this change of order between limits, and thus the link between extremes and upcrossings, can seldom be verified outside Gaussian contexts. In fact, since P{sup_{t∈[0,1]} ξ(t) > u} ≁ P{ξ(1) > u} + μ(u) for processes that cluster (e.g., Albin, 1992, Section 2), the "technical conditions" are not only technical! So while (0.2) is a reasonably general result, the link to μ(u) can only be proved in more special cases.
Non-upcrossing-based approaches to local extremes rely on the following conditions:
(1) weak convergence (as u ↑ û) of the finite-dimensional distributions of the process {(w(u)^{−1}[ξ(1 − q(u)t) − u] | ξ(1) > u)}_{t∈R};
(2) excursions above high levels do not last too long (local independence); and
(3) a certain type of tightness for the convergence in (1).
It is rare that (w^{−1}[ξ(1 − qt) − u] | ξ(1) > u) can be handled sharply enough to verify (1). Somewhat less seldom one can carry out the estimates needed to prove
Assumption 1. There exist random variables ζ′ and ζ′′ with

    ζ′′ ≤ 0 a.s.  and  lim sup_{u↑û} E{ | q(u)² ζ′′ / w(u) |^ρ | ξ(1) > u } < ∞  for some ρ > 1,    (0.4)

and such that

    lim_{u↑û} P{ | (ξ(1 − q(u)t) − u)/(w(u)t) − (ξ(1) − u)/(w(u)t) + q(u)t ζ′/(w(u)t) − q(u)²t² ζ′′/(2w(u)t) | > ε | ξ(1) > u } = 0    (0.5)

for each choice of ε > 0 and t ≠ 0.
Further there exists a random variable ζ̂′ such that, for each choice of ε > 0,

    P{ | (ξ(1 − q(u)t) − u)/(w(u)t) − (ξ(1) − u)/(w(u)t) + q(u)t ζ̂′/(w(u)t) | > ε | ξ(1) > u } ≤ C|t|^ρ    (0.6)

for u ≥ ũ and t ≠ 0, for some constants C = C(ε) > 0, ũ = ũ(ε) ∈ R and ρ = ρ(ε) > 1.
Of course, one usually takes ζ′ = ζ̂′ = ξ′(1) in Assumption 1, where ξ′(t) is a stochastic derivative of ξ(t). The variable ζ′′ is not always needed in (0.5) [so that (0.5) holds with ζ′′ = 0], and ζ′′ is not always chosen to be ξ′′(1) when needed. In Section 2 we prove that (0.6) implies the tightness requirement (3), and in Section 3 that (0.5) can replace the requirement about weak convergence (1).
Condition (2) cannot be derived from Assumption 1. But by requiring that (0.5) holds for t-values t = t(u) → ∞ as u ↑ û, it is possible to weaken the meaning of "too long" in (2) and thus allow "longer" local dependence. This is crucial for the application to R^n-valued Gaussian processes in Section 5. [The exact statement of condition (2) and the mentioned weakened version of it are given in Assumption 2 of Section 2 and the requirement (3.2) of Theorem 2, respectively.]
In Section 4 we show that a version of (0.6) can be used to evaluate the upcrossing intensity μ(u) through the relation (e.g., Leadbetter et al., 1983, Section 7.2)

    μ(u) = lim_{s↓0} s^{−1} P{ ξ(1 − s) < u < ξ(1) }.    (0.7)
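A sketch of how the limit (0.7) can be probed numerically (our illustration, with an assumed Gaussian example): for a stationary Gaussian process with covariance r(t) = e^{−t²/2}, the pair (ξ(1 − s), ξ(1)) is bivariate normal with correlation r(s), so the two-point probability in (0.7) can be estimated by Monte Carlo and compared with the Rice value μ(u) = e^{−u²/2}/(2π):

```python
import math, random

random.seed(7)

def crossing_prob_over_s(u, s, n=400000):
    """Monte Carlo estimate of P{xi(1-s) < u < xi(1)}/s when
    (xi(1-s), xi(1)) is bivariate normal with correlation r(s) = exp(-s^2/2)."""
    rho = math.exp(-s * s / 2)
    c = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = random.gauss(0.0, 1.0)
        if z1 < u < rho * z1 + c * z2:   # xi(1-s) < u < xi(1)
            hits += 1
    return hits / (n * s)

u = 1.0
mu = math.exp(-u * u / 2) / (2 * math.pi)  # Rice value, -r''(0) = 1
est = crossing_prob_over_s(u, s=0.05)
print("mu(u):", mu, "estimate:", est)
```

For small s the estimate should approach μ(u), the finite-s bias being of smaller order.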
A process {ξ(t)}_{t∈R} with infinitely divisible finite-dimensional distributions has the representation ξ(t) = ∫_{x∈S} f_t(x) dM(x) in law, for some function f_{(·)}(·): R × S → R and independently scattered infinitely divisible random measure M on a suitable space S. Putting

    ξ′(1) ≡ ∫_S [∂f_t(x)/∂t]_{t=1} dM(x)
when f_{(·)} is smooth, we thus get

    (ξ(1 − qt) − u)/(wt) − (ξ(1) − u)/(wt) + qt ξ′(1)/(wt)
      = (q/w) ∫_S [ (f_{1−qt}(x) − f_1(x))/(qt) + (∂f_t(x)/∂t)_{t=1} ] dM(x).

When ζ′ = ζ̂′ = ξ′(1) and ζ′′ = 0 the probabilities in (0.5) and (0.6) are bounded by

    P{ ∫_S ( f_1 ± (q/w) [ (f_{1−qt} − f_1)/(qt) + (∂f_t/∂t)_{t=1} ] ) dM > u + ε } / P{ ∫_S f_1 dM > u }
for each choice of ε > 0. Thus the events under consideration in (0.5) and (0.6) have been reduced to tail events for univariate infinitely divisible random variables, which can be crucially more tractable to work with than the "traditional conditions" (1) and (2). We show how this works on α-stable processes in Section 6.
Let {X(t)}_{t∈R} be a separable n×1-matrix-valued centred stationary and two times mean-square differentiable Gaussian process with covariance function R(t) = E{X(s)X(s+t)^T}. In Section 5 we find the asymptotic behaviour of P{sup_{t∈[0,1]} X(t)^T X(t) > u} as u → ∞ under the additional assumption that

    lim inf_{t→0} t^{−4} inf_{x∈R^n: ‖x‖=1} x^T [I − R(t)R(t)^T] x > 0.

This is an important generalization of results in the literature, which are valid only when (X′(1) | X(1)) has a non-degenerate normal distribution in R^n, i.e.,

    lim inf_{t→0} t^{−2} inf_{x∈R^n: ‖x‖=1} x^T [I − R(t)R(t)^T] x > 0.
Let {M(t)}_{t∈R} denote an α-stable Lévy motion with α > 1 that is totally skewed to the left. Consider the moving average ξ(t) = ∫_{−∞}^{∞} f(t − x) dM(x), t ∈ R, where f is a non-negative sufficiently smooth function in L^α(R). In Section 6 we determine the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} as u → ∞. This result cannot be proven by traditional approaches to extremes since the asymptotic behaviour of conditional totally skewed α-stable distributions is not known (cf. (1)).
In Section 7 we prove a version of Rice's formula named after Belyaev (1968) for the outcrossing intensity through a smooth surface of an R^n-valued stationary and P-differentiable α-stable process {X(t)}_{t∈R} with independent component processes.
1. Preliminaries

All stochastic variables and processes that appear in this paper are assumed to be defined on a common complete probability space (Ω, F, P).
In the sequel {ξ(t)}_{t∈R} denotes an R-valued strictly stationary stochastic process, and we assume that a separable and measurable version of ξ(t) has been chosen. Such a version exists, e.g. assuming P-continuity almost everywhere (Doob, 1953, Theorem II.2.6), and it is to that version our results apply.
Let G(x) ≡ P{ξ(1) ≤ x} and û ≡ sup{x ∈ R: G(x) < 1}. We shall assume that G belongs to a domain of attraction of extremes. Consequently there exist functions w: (−∞, û) → (0, ∞) and F: (−x̂, ∞) → (−∞, 1) such that

    lim_{u↑û} [1 − G(u + x w(u))] / [1 − G(u)] = 1 − F(x)  for x ∈ (−x̂, ∞), for some x̂ ∈ (0, ∞].    (1.1)

If G is Type II-attracted we can assume that x̂ = 1, F(x) = 1 − (1 + x)^{−α} for some α > 0, and w(u) = u. Otherwise G is Type I- or Type III-attracted, and then we can take x̂ = ∞ and F(x) = 1 − e^{−x}, and assume that w(u) = o(u) with

    w(u + x w(u))/w(u) → 1  locally uniformly for x ∈ R as u ↑ û.    (1.2)
See, e.g., Resnick (1987, Chapter 1) to learn more about the domains of attraction.
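Two standard instances of (1.1), checked by direct computation (our examples, not from the paper): the exponential law is Type I-attracted with w(u) = 1, and the Pareto law is Type II-attracted with w(u) = u; in both cases the ratio in (1.1) is independent of u:

```python
import math

# Type I (Gumbel-attracted) example: exponential tail 1 - G(x) = exp(-x),
# with w(u) = 1 and 1 - F(x) = exp(-x).
def ratio_type_I(u, x):
    return math.exp(-(u + x)) / math.exp(-u)

# Type II (Frechet-attracted) example: Pareto tail 1 - G(x) = x**(-alpha)
# for x >= 1, with w(u) = u and 1 - F(x) = (1 + x)**(-alpha).
def ratio_type_II(u, x, alpha=2.0):
    return (u + x * u) ** (-alpha) / u ** (-alpha)

print(ratio_type_I(50.0, 1.3))    # equals exp(-1.3) for every u
print(ratio_type_II(1e6, 0.5))    # equals 1.5**-2 for every u
```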
In general, the tail of G is by far the most important factor affecting the tail behaviour of the distribution of sup_{t∈[0,1]} ξ(t). Virtually all marginal distributions G that occur in the study of stationary processes belong to a domain of attraction.
In most assumptions and theorems we assume that a function q: (−∞, û) → (0, ∞) with Q ≡ lim sup_{u↑û} q(u) < ∞ has been specified. The first step when applying these results is to choose a suitable q. Inferences then depend on which assumptions hold for this choice of q. In particular, we will use the requirement

    R(ε) ≡ lim sup_{u↑û} q(u)/q(u − εw(u))  satisfies  lim sup_{ε↓0} R(ε) < ∞.    (1.3)

Sometimes we also need a second function D: (−∞, û) → (0, ∞) such that

    D(u) ≤ 1/q(u)  and  lim sup_{u↑û} D(u) = Q^{−1} (= +∞ when Q = 0).    (1.4)
In order to link the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} to that of (ξ′(1)|ξ(1) > u), we make intermediate use of the tail behaviour of the sojourn time

    L(u) ≡ L(1, u)  where  L(t, u) ≡ ∫₀^t I_{(u,û)}(ξ(s)) ds  for t ≥ 0,

and its first moment E{L(u)} = P{ξ(1) > u} = 1 − G(u). Our study of sojourns, in turn, crucially relies on the sequence of identities

    ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy
      = (1/E{L(u)}) ∫₀^1 P{ L(s, u)/q ≤ x, ξ(s) > u } ds
      = ∫₀^1 P{ ∫₀^{s/q} I_{(u,û)}(ξ(1 − qt)) dt ≤ x | ξ(1) > u } ds
      = qx + ∫_{qx}^1 P{ ∫₀^{s/q} I_{(u,û)}(ξ(1 − qt)) dt ≤ x | ξ(1) > u } ds.    (1.5)

In (1.5) the first equality is rather deep, and has long been used by Berman (1982, 1992). See, e.g. Albin (1998, Eq. (3.1)) for a proof. The second equality follows easily from stationarity, while the third is trivial.
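The first-moment identity E{L(u)} = P{ξ(1) > u} used above is just Fubini's theorem applied to the sojourn integral; a quick Monte Carlo sanity check (ours, for an assumed sum-of-cosines Gaussian model with standard normal marginals):

```python
import math, random

random.seed(3)

def sojourn(u, n_waves=50, dt=0.02):
    """Sojourn time L(u) spent above u on [0,1] by one path of a
    sum-of-cosines Gaussian model (covariance r(t) = exp(-t^2/2))."""
    lam = [random.gauss(0.0, 1.0) for _ in range(n_waves)]
    phi = [random.uniform(0.0, 2 * math.pi) for _ in range(n_waves)]
    amp = math.sqrt(2.0 / n_waves)
    total = 0.0
    for k in range(int(1 / dt)):
        xi = amp * sum(math.cos(l * k * dt + p) for l, p in zip(lam, phi))
        if xi > u:
            total += dt
    return total

u = 1.0
n_paths = 1500
mean_L = sum(sojourn(u) for _ in range(n_paths)) / n_paths
tail = 0.5 * math.erfc(u / math.sqrt(2))  # P{xi(1) > u} = 1 - G(u)
print("E{L(u)} estimate:", mean_L, "1 - G(u):", tail)
```

The path average of L(u) matches the marginal tail probability up to sampling and discretization error.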
Convention. Given functions h₁ and h₂ we write h₁(u) ≼ h₂(u) when lim sup_{u↑û}(h₁(u) − h₂(u)) ≤ 0, and h₁(u) ≽ h₂(u) when lim inf_{u↑û}(h₁(u) − h₂(u)) ≥ 0.
2. First bounds on extremes. Tightness

In Propositions 1 and 2 below we use (a strong version of) (2) and (3), respectively, to derive upper and lower bounds for the tail P{sup_{t∈[0,1]} ξ(t) > u}. When combined these bounds show that

    P{ sup_{t∈[0,1]} ξ(t) > u }  asymptotically behaves like E{L(u)}/q(u) as u ↑ û,
except for a multiplicative factor bounded away from 0 and ∞. In Theorem 1 we
further prove that (0.6) implies condition (3) in the shape of Assumption 3 below.
Our lower bound relies on the following requirement:

Assumption 2. We have

    lim_{d→∞} lim sup_{u↑û} ∫_{d∧(1/q(u))}^{1/q(u)} P{ ξ(1 − q(u)t) > u | ξ(1) > u } dt = 0.

Assumption 2 requires that if ξ(1) > u, then ξ(t) has not spent too much time above the level u prior to t = 1. This means a certain degree of "local independence". Clearly, Assumption 2 holds trivially when Q > 0.
As will be shown in Section 3, Assumption 2 can be relaxed if (0.5) is assumed to hold also for t-values t = t(u) → ∞ as u ↑ û. This gives an opportunity to allow stronger local dependence than is usual in local theories of extremes.
Typically, Assumption 2 should be verified employing the estimate

    P{ ξ(1 − q(u)t) > u | ξ(1) > u } ≤ P{ ξ(1 − q(u)t) + ξ(1) > 2u } / P{ξ(1) > u}.    (2.1)

This is the way Assumption 2 is verified for the α-stable process in Section 6 [by invoking such a computation in Albin (1999)], and the Gaussian process in Example 1 at the end of Section 3 can also be dealt with very easily using this approach. [There is an array of complicated estimates of probabilities like P{ξ(1 − q(u)t) > u | ξ(1) > u} in the Gaussian literature, all of which are bettered or equalled by (2.1).] It seems that it was Weber (1989) who first observed the "trick" (2.1) in the context of local extremes. This trick can be crucial in non-Gaussian contexts.
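The trick (2.1) rests on the inclusion {X > u, Y > u} ⊆ {X + Y > 2u}. A small numerical check (ours) for a standard bivariate normal pair, where the right-hand side of (2.1) has a closed form since X + Y is N(0, 2 + 2ρ):

```python
import math, random

random.seed(11)

def normal_tail(z):
    """P{N(0,1) > z} via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def cond_prob(u, rho, n=200000):
    """Monte Carlo estimate of P{X > u | Y > u} for a standard
    bivariate normal pair with correlation rho."""
    c = math.sqrt(1.0 - rho * rho)
    joint = ysel = 0
    for _ in range(n):
        y = random.gauss(0.0, 1.0)
        x = rho * y + c * random.gauss(0.0, 1.0)
        if y > u:
            ysel += 1
            if x > u:
                joint += 1
    return joint / ysel

u, rho = 1.5, 0.6
lhs = cond_prob(u, rho)
# (2.1): {X > u, Y > u} is contained in {X + Y > 2u}, and X + Y is
# N(0, 2 + 2*rho), so the bound has a closed form.
rhs = normal_tail(2 * u / math.sqrt(2 + 2 * rho)) / normal_tail(u)
print("P{X>u | Y>u}:", lhs, "bound (2.1):", rhs)
```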
Proposition 1. If Assumption 2 holds, then we have

    lim inf_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } > 0.
Proof. Clearly we have

    lim inf_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u }
      ≥ lim inf_{u↑û} (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy
      = (1/x) ( 1 − lim sup_{u↑û} ∫_x^∞ [P{L(u)/q > y}/E{L(u)/q}] dy )    (2.2)
for each x > 0. Given an ε ∈ (0, 1), (1.5) and Assumption 2 further show that

    lim sup_{u↑û} ∫_x^∞ [P{L(u)/q > y}/E{L(u)/q}] dy
      ≤ lim sup_{u↑û} ∫₀^1 P{ ∫₀^{d∧(s/q)} I_{(u,û)}(ξ(1 − qt)) dt > (1 − ε)x | ξ(1) > u } ds
       + lim sup_{u↑û} ∫₀^1 P{ ∫_{d∧(s/q)}^{s/q} I_{(u,û)}(ξ(1 − qt)) dt > εx | ξ(1) > u } ds
      ≤ 0 + (1/(εx)) lim sup_{u↑û} ∫_{s=0}^{1} ∫_{t=d∧(s/q)}^{s/q} P{ ξ(1 − qt) > u | ξ(1) > u } dt ds
      ≤ 0 + (1/(εx)) lim sup_{u↑û} ∫_{t=d∧(1/q)}^{1/q} P{ ξ(1 − qt) > u | ξ(1) > u } dt
      ≤ ε²/x  for x ≥ d/(1 − ε) and d ≥ d₀, for some d₀ ≥ 1.

Choosing x = d₀/(1 − ε) and inserting in (2.2) we therefore obtain

    lim inf_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } ≥ [(1 − ε)/d₀] ( 1 − ε²(1 − ε)/d₀ ) > 0.    (2.3)
Of course, an upper bound on extremes requires an assumption of tightness type:

Assumption 3. We have

    lim_{a↓0} lim sup_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u, max_{k∈Z: 0≤aq(u)k≤1} ξ(akq(u)) ≤ u } = 0.

Assumption 3 was first used by Leadbetter and Rootzén (1982), and it plays a central role in the theory for both local and global extremes (e.g., Leadbetter and Rootzén, 1982; Leadbetter et al., 1983, Chapter 13; Albin, 1990). Assumption 3 is usually very difficult to verify.
Proposition 2. If Assumption 3 holds, then we have

    lim sup_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } < ∞.
Proof. Clearly we have

    lim sup_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u }
      ≤ lim sup_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u, max_{0≤aqk≤1} ξ(akq) ≤ u }
       + lim sup_{u↑û} [q(u)/E{L(u)}] P{ max_{0≤aqk≤1} ξ(akq) > u }.
Here the first term on the right-hand side is finite (for a > 0 sufficiently small) by Assumption 3, while the second term is bounded by

    lim sup_{u↑û} [q(u)/E{L(u)}] (1 + [1/(aq)]) (1 − G(u)) ≤ Q + a^{−1} < ∞.
To prove that (0.6) implies Assumption 3 we recall that if {ξ(t)}_{t∈[a,b]} is separable and P-continuous, then every dense subset of [a, b] is a separant for ξ(t).

Theorem 1. Suppose that (1.1) and (1.3) hold. If in addition (0.6) holds and ξ(t) is P-continuous, then Assumption 3 holds.
Proof. Letting θₙ ≡ Σ_{k=1}^n 2^{−k} and Eₙ ≡ ∪_{k=0}^{2^n} { ξ(aqk2^{−n}) > u + θₙεaw } we have

    P{ ξ(0) ≤ u, sup_{t∈[0,aq]} ξ(t) > u + εaw, ξ(aq) ≤ u }
      = P{ E₀ᶜ, ∪_{n=1}^∞ ∪_{k=0}^{2^n} { ξ(aqk2^{−n}) > u + εaw } }
      ≤ P{ E₀ᶜ, ∪_{n=1}^∞ Eₙ }
      ≤ P{ ∪_{n=1}^∞ (Eₙ \ Eₙ₋₁) }
      ≤ Σ_{n=1}^∞ Σ_{k=0}^{2^{n−1}−1} P{ ξ(aq(2k)2^{−n}) ≤ u + θₙ₋₁εaw, ξ(aq(2k+1)2^{−n}) > u + θₙεaw, ξ(aq(2k+2)2^{−n}) ≤ u + θₙ₋₁εaw }
      ≤ Σ_{n=1}^∞ 2^{n−1} P{ (ξ(1 − aq2^{−n}) − u)/w − (ξ(1) − u)/w + (aq2^{−n}/w)ζ̂′ < −2^{−n}εa, ξ(1) > u }
       + Σ_{n=1}^∞ 2^{n−1} P{ (ξ(1 + aq2^{−n}) − u)/w − (ξ(1) − u)/w − (aq2^{−n}/w)ζ̂′ < −2^{−n}εa, ξ(1) > u }
       + Σ_{n=1}^∞ 2^{n−1} P{ (aq2^{−n}/w)ζ̂′ > 0, −(aq2^{−n}/w)ζ̂′ > 0 }
      ≤ 2 Σ_{n=1}^∞ 2^{n−1} C(2^{−n}a)^ρ P{ξ(1) > u}
      = o(a) P{ξ(1) > u}  as a ↓ 0.    (2.4)
Consequently it follows that

    lim sup_{u↑û} [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u + εaw, max_{0≤aqk≤1} ξ(akq) ≤ u }
      ≤ lim sup_{u↑û} [q(u)/E{L(u)}] P{ sup_{0≤t≤([1/(aq)]+1)aq} ξ(t) > u + εaw, max_{0≤k≤[1/(aq)]+1} ξ(akq) ≤ u }
      ≤ lim sup_{u↑û} [q(u)/E{L(u)}] ([1/(aq)] + 1) P{ ξ(0) ≤ u, sup_{t∈[0,aq]} ξ(t) > u + εaw, ξ(aq) ≤ u }
      ≤ lim sup_{u↑û} (a^{−1} + q) o(a)
      → 0  as a ↓ 0.    (2.5)

Hence it is sufficient to prove that

    Λ(ε) ≡ lim sup_{a↓0} lim sup_{u↑û} [q(u)/E{L(u)}] P{ u < sup_{t∈[0,1]} ξ(t) ≤ u + εaw } → 0  as ε ↓ 0.
Put ũ ≡ u − εaw, w̃ ≡ w(ũ) and q̃ ≡ q(ũ) for a, ε > 0. Then we have [cf. (1.2)]

    u = ũ + εaw ≥ ũ + ½εaw̃  and  u + εaw ≤ ũ + 2εaw̃

for u sufficiently large. Combining (2.5) with (1.1) and (1.3) we therefore obtain

    Λ(ε) ≤ lim sup_{a↓0} lim sup_{u↑û} [R(εa)(1 − F(−εa)) q̃/E{L(ũ)}] P{ sup_{t∈[0,1]} ξ(t) > ũ + ½εaw̃, max_{0≤aq̃k≤1} ξ(akq̃) ≤ ũ }
      + lim sup_{a↓0} lim sup_{u↑û} [R(εa)(1 − F(−εa)) q̃/E{L(ũ)}] P{ ũ < max_{0≤aq̃k≤1} ξ(akq̃) ≤ ũ + 2εaw̃ }
      ≤ 0 + lim sup_{a↓0} lim sup_{u↑û} [R(εa)(1 − F(−εa))/(a(1 − G(u)))] P{ u − εaw < ξ(0) ≤ u + εaw }
      ≤ lim sup_{a↓0} R(εa) · lim sup_{a↓0} [F(2εa)/a]
      → 0  as ε ↓ 0.
3. Extremes of P-differentiable processes

We are now prepared to prove (0.2). As mentioned in the introduction, we want the option to employ a stronger version of (0.5) [(3.1) below] in order to be able to relax Assumption 2 [to (3.2)]:
Theorem 2. Assume that there exist random variables ζ′ and ζ′′ such that

    ∫₀^{D(u)} P{ | (ξ(1 − q(u)t) − u)/(w(u)t) − (ξ(1) − u)/(w(u)t) + q(u)tζ′/(w(u)t) − q(u)²t²ζ′′/(2w(u)t) | > ε | ξ(1) > u } dt → 0    (3.1)

as u ↑ û for each choice of ε > 0 [where D(u) satisfies (1.4)], and that

    lim sup_{u↑û} ∫_{D(u)}^{1/q(u)} P{ ξ(1 − q(u)t) > u | ξ(1) > u } dt = 0.    (3.2)

If in addition (0.4), (1.1), and Assumption 3 hold, then we have

    lim sup_{x↓0} lim sup_{u↑û} [ q(u) + (1/x) P{ q(u)ζ′/w(u) > (1/x)(ξ(1) − u)/w(u) | ξ(1) > u } ] < ∞.

As x ↓ 0 we further have

    lim sup_{u↑û} | [q(u)/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } − q(u) − (1/x) P{ q(u)ζ′/w(u) > (1/x)(ξ(1) − u)/w(u) | ξ(1) > u } | → 0.
Proof. Choosing constants δ, λ > 0, (1.5) and Markov's inequality show that

    (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy − q
      = (1/x) ∫_{qx}^1 P{ ∫₀^{s/q} I_{(0,∞)}( (ξ(1 − qt) − u)/w ) dt ≤ x | ξ(1) > u } ds
      ≤ (1/x) ∫_{qx}^1 P{ ∫₀^{D∧(s/q)} I_{(λ,∞)}( (ξ(1) − u)/w − qtζ′/w + q²t²ζ′′/(2w) ) dt ≤ (1 + δ)x | ξ(1) > u } ds
       + (1/x) ∫₀^1 P{ ∫₀^D I_{(ε,∞)}( | (ξ(1 − qt) − u)/(wt) − (ξ(1) − u)/(wt) + qtζ′/(wt) − q²t²ζ′′/(2wt) | ) dt > δx | ξ(1) > u } ds
      ≤ (1/x) P{ (ξ(1) − u)/w − q(1 + δ)xζ′/w + q²(1 + δ)²x²ζ′′/(2w) ≤ λ | ξ(1) > u }
       + (1/x) |{ s ∈ [qx, 1]: (1 + δ)x > D∧(s/q) }|
       + (1/x) P{ (ξ(1) − u)/w ≤ λ | ξ(1) > u }
       + (1/(δx²)) ∫₀^D P{ | (ξ(1 − qt) − u)/(wt) − (ξ(1) − u)/(wt) + qtζ′/(wt) − q²t²ζ′′/(2wt) | > ε | ξ(1) > u } dt    (3.3)
for 0 < x < Q^{−1}. By (0.4) and (1.1), the first term on the right-hand side is

    ≼ (1/x) P{ qζ′/w > (1/((1 + δ)²x)) (ξ(1) − u)/w | ξ(1) > u }
     + F( (1 + δ)x^{(3ρ−1)/(2ρ)}/ε ) / x
     + (1 + δ)^{2ρ} x^{(ρ−1)/2} E{ | q²ζ′′/(2w) |^ρ | ξ(1) > u }
     + F( (1 + δ)λ ) / x    for δ ∈ (0, 1),

where the ζ′′-contribution was split off at level εx^{(3ρ−1)/(2ρ)} and estimated by Chebyshev's inequality using (0.4).
Further the second term in (3.3) is ≼ δQ for D ≥ (1 + δ)x, the third is ∼ F(λ)/x by (1.1), and the fourth is ≼ 0 by (3.1). Taking x̃ = (1 + δ)²x/(1 − λ), (3.3) thus gives

    (1/x̃) ∫₀^{x̃} [P{L(u)/q > y}/E{L(u)/q}] dy − q − (1/x̃) P{ qζ′/w > (1/x̃)(ξ(1) − u)/w | ξ(1) > u }
      ≤ [(1 − λ)/(1 + δ)²] [ (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy − q − (1/x) P{ qζ′/w > (1/((1 + δ)²x))(ξ(1) − u)/w | ξ(1) > u } ]
       + (1/x̃) ∫_x^{x̃} [P{L(u)/q > y}/E{L(u)/q}] dy
      ≼ F( (1 + δ)x^{(3ρ−1)/(2ρ)}/ε )/x + (1 + δ)^{2ρ} x^{(ρ−1)/2} lim sup_{u↑û} E{ | q²ζ′′/(2w) |^ρ | ξ(1) > u }
       + F( (1 + δ)λ )/x + δQ + F(λ)/x
       + [ 1 − (1 − λ)/(1 + δ)² ] lim sup_{u↑û} [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u }
      → 0  as δ ↓ 0, λ ↓ 0, x ↓ 0 and ε ↓ 0 (in that order),

where we used Proposition 2 in the last step. It follows that

    lim sup_{x↓0} lim sup_{u↑û} [ (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy − q − (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ] ≤ 0.    (3.4)
To proceed we note that (3.2) combines with a version of argument (2.3) to show that, given a λ > 0, there exists a u₀ = u₀(λ) < û such that

    ∫₀^1 [ ∫_{D∧(s/q)}^{s/q} P{ ξ(1 − qt) > u | ξ(1) > u } dt ] ds ≤ λ²  for u ∈ [u₀, û).
Through a by now familiar way of reasoning we therefore obtain

    (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy − q
      ≥ (1/x) ∫_{qx}^1 P{ ∫₀^{D∧(s/q)} I_{(0,∞)}( (ξ(1 − qt) − u)/w ) dt ≤ (1 − δ)x | ξ(1) > u } ds
       − (1/x) ∫₀^1 [ ∫_{D∧(s/q)}^{s/q} P{ ξ(1 − qt) > u | ξ(1) > u } dt ] ds
      ≥ [(1 − qx)/x] P{ ∫₀^D I_{(−λ,∞)}( (ξ(1) − u)/w − qtζ′/w + q²t²ζ′′/(2w) ) dt ≤ (1 − 2δ)x | ξ(1) > u }
       − (1/(δx²)) ∫₀^D P{ | (ξ(1 − qt) − u)/(wt) − (ξ(1) − u)/(wt) + qtζ′/(wt) − q²t²ζ′′/(2wt) | > ε | ξ(1) > u } dt − λ²/(δx²)
      ≥ [(1 − qx)/x] P{ (ξ(1) − u)/w − q(1 − 2δ)xζ′/w ≤ −λ | ξ(1) > u }
       − (1/(δx²)) ∫₀^D P{ | (ξ(1 − qt) − u)/(wt) − (ξ(1) − u)/(wt) + qtζ′/(wt) − q²t²ζ′′/(2wt) | > ε | ξ(1) > u } dt − λ²/(δx²)
      ≽ [(1 − qx)/x] P{ qζ′/w > (1/((1 − 2δ)²x)) (ξ(1) − u)/w | ξ(1) > u } − F(½λ(1 − 2δ))/x − λ²/(δx²)

for 0 < x < Q^{−1} and 0 < δ < ½. Consequently we have
    lim inf_{u↑û} [ [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } − q − (1/((1 − 2δ)²x)) P{ qζ′/w > (1/((1 − 2δ)²x)) (ξ(1) − u)/w | ξ(1) > u } ]
      ≥ lim inf_{u↑û} [ (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy − q − [(1 − qx)/x] P{ qζ′/w > (1/((1 − 2δ)²x)) (ξ(1) − u)/w | ξ(1) > u } ]
       − lim sup_{u↑û} [ 1/((1 − qx)(1 − 2δ)²) − 1 ] [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u }
      ≥ − F(½λ(1 − 2δ)) / ((1 − Qx)(1 − 2δ)²x) − λ² / ((1 − Qx)(1 − 2δ)²δx²)
       − [ (Qx(1 − 2δ)² + 4δ(1 − δ)) / ((1 − Qx)(1 − 2δ)²) ] lim sup_{u↑û} [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u }
      → 0  as λ ↓ 0, δ ↓ 0 and x ↓ 0 (in that order),
where we used Proposition 2 in the last step. Hence we conclude that

    lim inf_{x↓0} lim inf_{u↑û} [ [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } − q − (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ] ≥ 0,    (3.5)
which in particular (again using Proposition 2) implies that

    lim sup_{x↓0} lim sup_{u↑û} [ q + (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ]
      ≤ lim sup_{x↓0} lim sup_{u↑û} [ q + (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } − [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } ]
       + lim sup_{u↑û} [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u }
      < ∞.    (3.6)
Using (1.5), (3.4) and (3.6) we readily deduce that

    lim sup_{x↓0} lim sup_{u↑û} [q/E{L(u)}] P{ L(u)/q ≤ x, max_{0<aqk≤1} ξ(akq) > u }
      ≤ lim sup_{x↓0} lim sup_{u↑û} [q/E{L(u)}] Σ_{k=1}^{[1/(aq)]} P{ L(akq, u)/q ≤ x, ξ(akq) > u }
      ≤ lim sup_{x↓0} lim sup_{u↑û} [1/(a E{L(u)})] Σ_{k=1}^{[1/(aq)]} ∫_{a(k−1)q}^{akq} P{ L(t, u)/q ≤ x, ξ(t) > u } dt
      ≤ lim sup_{x↓0} lim sup_{u↑û} [1/(a E{L(u)})] ∫₀^1 P{ L(t, u)/q ≤ x, ξ(t) > u } dt
      ≤ lim sup_{x↓0} lim sup_{u↑û} (x/a) [ (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy − q − (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ]
       + lim sup_{x↓0} lim sup_{u↑û} (x/a) [ q + (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ]
      ≤ 0.    (3.7)
Further a calculation similar to (3.3)–(3.4) combines with (3.6) to reveal that

    lim sup_{u↑û} P{ L(u)/q ≤ x | ξ(1) > u }
      = lim sup_{u↑û} P{ ∫₀^{1/q} I_{(0,∞)}( (ξ(1 − qt) − u)/w ) dt ≤ x | ξ(1) > u }
      ≤ lim sup_{u↑û} P{ ∫₀^{d∧(1/q)} I_{(λ,∞)}( (ξ(1) − u)/w − qtζ′/w ) dt ≤ (1 + δ)x | ξ(1) > u }
       + lim sup_{u↑û} P{ ∫₀^d I_{(ε,∞)}( | (ξ(1 − qt) − u)/(wt) − (ξ(1) − u)/(wt) + qtζ′/(wt) | ) dt > δx | ξ(1) > u }
      ≤ lim sup_{u↑û} P{ (ξ(1) − u)/w − q(1 + δ)xζ′/w ≤ λ | ξ(1) > u } + F(λ)
       + lim sup_{u↑û} (1/(δx)) ∫₀^d P{ | (ξ(1 − qt) − u)/(wt) − (ξ(1) − u)/(wt) + qtζ′/(wt) | > ε | ξ(1) > u } dt
      ≤ lim sup_{u↑û} P{ qζ′/w > (1/((1 + δ)²x)) (ξ(1) − u)/w | ξ(1) > u } + F((1 + δ)λ) + F(λ)
      → 0  as λ ↓ 0, δ ↓ 0 and x ↓ 0 (in that order).    (3.8)
In view of the obvious fact that

    P{ sup_{t∈[0,1]} ξ(t) > u }
      ≤ (1/x) ∫₀^x P{ {L(u)/q > y} ∪ { max_{0≤aqk≤1} ξ(akq) > u } ∪ { sup_{t∈[0,1]} ξ(t) > u, max_{0≤aqk≤1} ξ(akq) ≤ u } } dy
      ≤ (1/x) ∫₀^x P{L(u)/q > y} dy + P{ L(u)/q ≤ x, max_{0≤aqk≤1} ξ(akq) > u } + P{ sup_{t∈[0,1]} ξ(t) > u, max_{0≤aqk≤1} ξ(akq) ≤ u },
Assumption 3 combines with (3.4) and (3.7)–(3.8) to show that

    lim sup_{x↓0} lim sup_{u↑û} [ [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } − q − (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ]
      ≤ lim sup_{x↓0} lim sup_{u↑û} [ (1/x) ∫₀^x [P{L(u)/q > y}/E{L(u)/q}] dy − q − (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ]
       + lim sup_{a↓0} lim sup_{x↓0} lim sup_{u↑û} [q/E{L(u)}] P{ L(u)/q ≤ x, max_{0<aqk≤1} ξ(akq) > u }
       + lim sup_{x↓0} lim sup_{u↑û} q P{ L(u)/q ≤ x | ξ(1) > u }
       + lim sup_{a↓0} lim sup_{u↑û} [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u, max_{0≤aqk≤1} ξ(akq) ≤ u }
      ≤ 0.    (3.9)
The theorem now follows from inequalities (3.5) – (3.6) and (3.9).
In Theorem 2 the absence of Assumption 2 disallows use of Proposition 1. However,
if we do make Assumption 2, then a stronger limit theorem can be proved:
Corollary 1. Suppose that (1.1) and (1.3) hold. If in addition Assumptions 1 and 2 hold, and ξ(t) is P-continuous, then the conclusion of Theorem 2 holds with

    lim inf_{x↓0} lim inf_{u↑û} [ q(u) + (1/x) P{ q(u)ζ′/w(u) > (1/x)(ξ(1) − u)/w(u) | ξ(1) > u } ] > 0.
Proof. By Theorem 1, (0.6) implies Assumption 3. Moreover (0.5) implies that (3.1) holds for some function D(u) → Q^{−1} (sufficiently slowly) as u ↑ û. Since Assumption 2 implies that (3.2) holds for any function D(u) → Q^{−1}, it follows that the hypothesis of Theorem 2 holds. Further (3.9) and Proposition 1 show that

    lim inf_{x↓0} lim inf_{u↑û} [ q + (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } ]
      ≥ lim inf_{x↓0} lim inf_{u↑û} [ q + (1/x) P{ qζ′/w > (1/x)(ξ(1) − u)/w | ξ(1) > u } − [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u } ]
       + lim inf_{u↑û} [q/E{L(u)}] P{ sup_{t∈[0,1]} ξ(t) > u }
      > 0.
Example 1. Let {ξ(t)}_{t∈R} be a stationary zero-mean and unit-variance Gaussian process, so that (1.1) holds with w(u) = (1∨u)^{−1} and F(x) = 1 − e^{−x}. Assume that the covariance function r(t) ≡ E{ξ(s)ξ(s+t)} < 1 for t ∈ (0, 1] and satisfies

    r(t) = 1 + ½ r″(0) t² + t² o(|ln|t||^{−1})  as t → 0.    (3.10)

Put q(u) = (1∨u)^{−1}, so that (1.3) holds. We have C₁t² ≤ 1 − r(t) ≤ C₂t² for |t| ≤ 1, for some constants C₁, C₂ > 0. Since 1 + r(t) ≤ 2, it follows readily that

    P{ ξ(1 − qt) > u | ξ(1) > u } ≤ 2 P{ ξ(1 − qt) ≥ ξ(1) > u | ξ(1) > u }
      ≤ 2 P{ ξ(1 − qt) − r(qt)ξ(1) ≥ (1 − r(qt))u }
      ≤ 2 P{ N(0,1) ≥ √(C₁/2) qtu }  for |qt| ≤ 1,
so that Assumption 2 holds. Clearly ξ(1 + t) − r(t)ξ(1) − tξ′(1) is independent of ξ(1), and has variance 1 − r(t)² − t²r″(0) + 2t r′(t) = t² o(|ln|t||^{−1}) by (3.10). Taking ζ′ = ζ̂′ = ξ′(1) and ζ′′ = 0 in Assumption 1, (0.4) holds trivially, while the fact that (0.5) and (0.6) hold follows from the estimates

    P{ | ξ(1 − qt) − ξ(1) + qtξ′(1) | / (wt) > ε | ξ(1) > u }
      ≤ P{ | ξ(1 − qt) − r(qt)ξ(1) + qtξ′(1) | / (wt) > ε/2 } / P{ξ(1) > u}
       + P{ | 1 − r(qt) | |ξ(1)| / (wt) > ε/2 } / P{ξ(1) > u}
      ≤ P{ |N(0,1)| > ε / (2 o(|ln|qt||^{−1/2})) } / P{ξ(1) > u} + P{ |N(0,1)| > ε / (2C₂qt) } / P{ξ(1) > u}.
Let X be a unit-mean exponential random variable that is independent of ξ′(1). Since ξ(1) and ξ′(1) are independent, and (u(ξ(1) − u) | ξ(1) > u) →_d X by (1.1), we have

    lim_{u→∞} (1/x) P{ ξ′(1) > (1/x)(ξ(1) − u)/q(u) | ξ(1) > u } = (1/x) P{ ξ′(1) > X/x }
      = (1/x) ∫₀^∞ e^{−y} P{ ξ′(1) > y/x } dy
      = ∫₀^∞ e^{−yx} P{ ξ′(1) > y } dy
      → E{ [ξ′(1)]⁺ } = √(−r″(0)/(2π))  as x ↓ 0.
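The limit E{[ξ′(1)]⁺} = √(−r″(0)/(2π)) is the usual half-normal mean E max(N(0, σ²), 0) = σ/√(2π) with σ² = −r″(0); a quick Monte Carlo confirmation (our check, not from the paper):

```python
import math, random

random.seed(5)

sigma2 = 1.0   # plays the role of -r''(0); xi'(1) is N(0, sigma2)
n = 200000
mean_pos_part = sum(max(random.gauss(0.0, math.sqrt(sigma2)), 0.0)
                    for _ in range(n)) / n
exact = math.sqrt(sigma2 / (2 * math.pi))  # E{[xi'(1)]^+}
print("Monte Carlo:", mean_pos_part, "sqrt(-r''(0)/(2 pi)):", exact)
```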
Adding things up we therefore recover the well-known fact that

    P{ sup_{t∈[0,1]} ξ(t) > u } ∼ [√(−r″(0))/√(2π)] · P{N(0,1) > u}/(1/u) ∼ [√(−r″(0))/(2π)] e^{−u²/2}  as u → ∞.
4. Rice’s formula for P-dierentiable stationary processes
In Theorem 3 below we derive two expressions for the upcrossing intensity of the
process {(t)}t∈R in terms of the joint distribution of (1) and its stochastic derivative
′ (1). Our result is a useful complement and alternative to existing results in the
literature.
Theorem 3. Let {ξ(t)}_{t∈R} be a stationary process with a.s. continuous sample paths. Choose an open set U ⊆ R and assume that

    ξ(1) has a density function that is essentially bounded on U.

(i) Assume that there exist a random variable ξ′(1) and a constant ρ > 1 such that

    lim_{s↓0} ess sup_{y∈U} E{ | (ξ(1 − s) − ξ(1) + sξ′(1))/s |^ρ | ξ(1) = y } f_{ξ(1)}(y) = 0,    (4.1)

and such that

    lim_{s↓0} E| (ξ(1 − s) − ξ(1) + sξ′(1))/s | = 0.    (4.2)

Then the upcrossing intensity μ(u) of a level u ∈ U by ξ(t) is given by

    μ(u) = lim_{s↓0} s^{−1} P{ ξ(1) − sξ′(1) < u < ξ(1) }.    (4.3)

(ii) Assume that there exist a random variable ξ′(1) and a constant ρ > 1 such that

    lim_{s↓0} ess sup_{y≥u} E{ | (ξ(1 − s) − ξ(1) + sξ′(1))/s |^ρ | ξ(1) = y } f_{ξ(1)}(y) = 0  for u ∈ U.    (4.4)

Then the upcrossing intensity μ(u) for levels u ∈ U is given by (4.3).

(iii) Assume that (4.3) holds and that E{[ξ′(1)]⁺} < ∞. Then we have

    lim_{θ↓0} ∫₀^∞ ess inf_{u<y<u+θ} P{ ξ′(1) > x | ξ(1) = y } f_{ξ(1)}(y) dx ≤ μ(u)
      ≤ lim_{θ↓0} ∫₀^∞ ess sup_{u<y<u+θ} P{ ξ′(1) > x | ξ(1) = y } f_{ξ(1)}(y) dx  for u ∈ U.

(iv) Assume that (4.3) holds, that there exists a constant ρ > 1 such that

    ess sup_{y∈U} E{ [ξ′(1)⁺]^ρ | ξ(1) = y } f_{ξ(1)}(y) < ∞,    (4.5)

and that E{[ξ′(1)]⁺} < ∞. Then μ(u) for levels u ∈ U is finite and satisfies

    ∫₀^∞ lim_{θ↓0} ess inf_{u<y<u+θ} P{ ξ′(1) > x | ξ(1) = y } f_{ξ(1)}(y) dx ≤ μ(u)
      ≤ ∫₀^∞ lim_{θ↓0} ess sup_{u<y<u+θ} P{ ξ′(1) > x | ξ(1) = y } f_{ξ(1)}(y) dx.    (4.6)

(v) Assume that (4.3) holds and that there exists a constant ρ > 1 such that

    ess sup_{y≥u} E{ [ξ′(1)⁺]^ρ | ξ(1) = y } f_{ξ(1)}(y) < ∞  for u ∈ U.    (4.7)

Then μ(u) for levels u ∈ U is finite and satisfies (4.6).

(vi) Assume that (4.6) holds and that (ξ(1), ξ′(1)) has a density such that

    ∫_x^∞ f_{ξ(1),ξ′(1)}(u, y) dy is a continuous function of u ∈ U for almost all x > 0.

Then μ(u) for levels u ∈ U is given by Rice's formula (0.1).

(vii) Assume that (4.6) holds, that

    ξ(1) has a density that is continuous and bounded away from zero on U,

and that (ξ(1), ξ′(1)) has a density f_{ξ(1),ξ′(1)}(u, y) that is a continuous function of u ∈ U for each y > 0. Then μ(u) for levels u ∈ U is given by Rice's formula (0.1).
Proof. (i) Using (0.7) together with (4.1) and (4.2) we obtain

    μ(u) = lim inf_{s↓0} s^{−1} P{ ξ(1 − s) < u < ξ(1) }
      ≤ lim inf_{s↓0} (1/s) P{ ξ′(1) > (1 − δ)(ξ(1) − u)/s, ξ(1) > u }
       + lim sup_{s↓0} s^{−1} P{ u < ξ(1) ≤ u + εs }
       + lim sup_{s↓0} (1/s) P{ ξ′(1) ≤ (1 − δ)(ξ(1) − u)/s, ξ(1 − s) < u, u + εs < ξ(1) ≤ u + θ }
       + lim sup_{s↓0} (1/s) P{ ξ′(1) ≤ (1 − δ)(ξ(1) − u)/s, ξ(1 − s) < u, ξ(1) > u + θ }
      ≤ [1/(1 − δ)] lim inf_{s↓0} (1/s) P{ ξ′(1) > (ξ(1) − u)/s, ξ(1) > u }
       + ε lim sup_{s↓0} ess sup_{u<y<u+εs} f_{ξ(1)}(y)
       + lim sup_{s↓0} ∫_ε^{θ/s} P{ (ξ(1 − s) − ξ(1) + sξ′(1))/s < −δx | ξ(1) = u + xs } f_{ξ(1)}(u + xs) dx
       + lim sup_{s↓0} (1/s) P{ (ξ(1 − s) − ξ(1) + sξ′(1))/s < −δθ/s }
      ≤ [1/(1 − δ)] lim inf_{s↓0} (1/s) P{ ξ′(1) > (ξ(1) − u)/s, ξ(1) > u }
       + ε lim sup_{s↓0} ess sup_{u<y<u+εs} f_{ξ(1)}(y)
       + lim sup_{s↓0} ess sup_{u<y<u+θ} E{ | (ξ(1 − s) − ξ(1) + sξ′(1))/s |^ρ | ξ(1) = y } f_{ξ(1)}(y) ∫_ε^∞ dx/(δx)^ρ
       + lim sup_{s↓0} (1/(δθ)) E| (ξ(1 − s) − ξ(1) + sξ′(1))/s |
      → lim inf_{s↓0} (1/s) P{ ξ′(1) > (ξ(1) − u)/s, ξ(1) > u }  as ε ↓ 0 and δ ↓ 0, for θ > 0 small.    (4.8)
The formula (4.3) follows using this estimate together with its converse

    μ(u) ≥ lim sup_{s↓0} (1/s) P{ ξ′(1) > (1 + δ)(ξ(1) − u)/s, ξ(1) > u }
      − lim sup_{s↓0} s^{−1} P{ u < ξ(1) ≤ u + εs }
      − lim sup_{s↓0} (1/s) P{ ξ′(1) > (1 + δ)(ξ(1) − u)/s, ξ(1 − s) > u, u + εs < ξ(1) ≤ u + θ }
      − lim sup_{s↓0} (1/s) P{ ξ′(1) > (1 + δ)(ξ(1) − u)/s, ξ(1 − s) > u, ξ(1) > u + θ }
      ≥ [1/(1 + δ)] lim sup_{s↓0} (1/s) P{ ξ′(1) > (ξ(1) − u)/s, ξ(1) > u }
      − ε lim sup_{s↓0} ess sup_{u<y<u+εs} f_{ξ(1)}(y)
      − lim sup_{s↓0} ∫_ε^{θ/s} P{ (ξ(1 − s) − ξ(1) + sξ′(1))/s > δx | ξ(1) = u + xs } f_{ξ(1)}(u + xs) dx
      − lim sup_{s↓0} (1/s) P{ (ξ(1 − s) − ξ(1) + sξ′(1))/s > δθ/s }
      → lim sup_{s↓0} (1/s) P{ ξ′(1) > (ξ(1) − u)/s, ξ(1) > u }  as ε ↓ 0 and δ ↓ 0, for θ > 0 small.    (4.9)
(ii) When (4.4) holds we do not need (4.1) and (4.2) in (4.8), because the estimates that used (4.1) and (4.2) can be replaced with the estimate

    lim sup_{s↓0} (1/s) P{ ξ′(1) ≤ (1 − δ)(ξ(1) − u)/s, ξ(1 − s) < u, ξ(1) > u + εs }
      ≤ lim sup_{s↓0} ∫_ε^∞ P{ (ξ(1 − s) − ξ(1) + sξ′(1))/s < −δx | ξ(1) = u + xs } f_{ξ(1)}(u + xs) dx
      ≤ lim sup_{s↓0} ess sup_{y≥u} E{ | (ξ(1 − s) − ξ(1) + sξ′(1))/s |^ρ | ξ(1) = y } f_{ξ(1)}(y) ∫_ε^∞ dx/(δx)^ρ = 0.

In (4.9) we make similar use of the fact that [under (4.4)]

    lim sup_{s↓0} (1/s) P{ ξ′(1) > (1 + δ)(ξ(1) − u)/s, ξ(1 − s) > u, ξ(1) > u + εs }
      ≤ lim sup_{s↓0} ∫_ε^∞ P{ (ξ(1 − s) − ξ(1) + sξ′(1))/s > δx | ξ(1) = u + xs } f_{ξ(1)}(u + xs) dx = 0.
(iii) Since E{ζ′(1)⁺} < ∞ there exists a sequence {s_n}_{n=1}^∞ such that
\[
P\big\{\zeta'(1) > [s_n\sqrt{|\ln(s_n)|}\,]^{-1}\big\} < s_n\big/\sqrt{|\ln(s_n)|}\quad\text{for } n\ge 1,
\qquad s_n\downarrow 0 \ \text{as } n\to\infty.
\]
[This is an immediate consequence of the elementary fact that we must have liminf_{x→∞} x ln(x) P{ζ′(1) > x} = 0.] Using (4.3) and (4.5) we therefore obtain
\[
\mu(u) = \liminf_{s\downarrow 0}\frac1s P\Big\{\zeta'(1)>\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
\le \limsup_{n\to\infty}\int_0^{[s_n\sqrt{|\ln(s_n)|}]^{-1}} P\{\zeta'(1)>x\,|\,\xi(1)=u+xs_n\}\, f_{\xi(1)}(u+xs_n)\,dx
+ \limsup_{n\to\infty} s_n^{-1} P\big\{\zeta'(1)>[s_n\sqrt{|\ln(s_n)|}\,]^{-1}\big\}
\]
\[
\le \int_0^{\infty}\mathop{\mathrm{ess\,sup}}_{u<y<u+\lambda} P\{\zeta'(1)>x\,|\,\xi(1)=y\}\, f_{\xi(1)}(y)\,dx
+ \limsup_{n\to\infty} 1\big/\sqrt{|\ln(s_n)|}
\quad\text{for each } \lambda>0. \tag{4.10}
\]
On the other hand (4.3) alone yields
\[
\mu(u) \ge \lim_{\Lambda\to\infty}\limsup_{s\downarrow 0}\int_0^{\Lambda} P\{\zeta'(1)>x\,|\,\xi(1)=u+xs\}\, f_{\xi(1)}(u+xs)\,dx
\ge \lim_{\Lambda\to\infty}\int_0^{\Lambda}\mathop{\mathrm{ess\,inf}}_{u<y<u+\lambda} P\{\zeta'(1)>x\,|\,\xi(1)=y\}\, f_{\xi(1)}(y)\,dx
\quad\text{for each } \lambda>0.
\]
(iv) When (4.5) holds the estimate (4.10) can be sharpened to
\[
\mu(u) \le \limsup_{n\to\infty}\int_0^{\Lambda}P\{\zeta'(1)>x\,|\,\xi(1)=u+xs_n\}\,f_{\xi(1)}(u+xs_n)\,dx
+ \limsup_{n\to\infty}\int_{\Lambda}^{[s_n\sqrt{|\ln(s_n)|}]^{-1}}\frac{E\{[\zeta'(1)^+]^{\varrho}\,|\,\xi(1)=u+xs_n\}}{x^{\varrho}}\,f_{\xi(1)}(u+xs_n)\,dx
+ \limsup_{n\to\infty}s_n^{-1}P\big\{\zeta'(1)>[s_n\sqrt{|\ln(s_n)|}\,]^{-1}\big\}
\]
\[
\le \int_0^{\Lambda}\lim_{\lambda\downarrow0}\,\mathop{\mathrm{ess\,sup}}_{u<y<u+\lambda}P\{\zeta'(1)>x\,|\,\xi(1)=y\}\,f_{\xi(1)}(y)\,dx
+ \limsup_{n\to\infty}\,\mathop{\mathrm{ess\,sup}}_{y\in U}E\{[\zeta'(1)^+]^{\varrho}\,|\,\xi(1)=y\}\,f_{\xi(1)}(y)\int_{\Lambda}^{[s_n\sqrt{|\ln(s_n)|}]^{-1}}\frac{dx}{x^{\varrho}}
+ \limsup_{n\to\infty}1\big/\sqrt{|\ln(s_n)|}
\]
\[
\to \int_0^{\infty}\lim_{\lambda\downarrow0}\,\mathop{\mathrm{ess\,sup}}_{u<y<u+\lambda}P\{\zeta'(1)>x\,|\,\xi(1)=y\}\,f_{\xi(1)}(y)\,dx
< \infty
\quad\text{as } \Lambda\to\infty. \tag{4.11}
\]
(v) When (4.7) holds, one can replace the estimates of s_n^{−1}P{ζ′(1) > [s_n√|ln(s_n)|]^{−1}} in (4.11) [that use E{ζ′(1)⁺} < ∞] with the estimates
\[
\limsup_{s\downarrow0}\int_{[s\sqrt{|\ln(s)|}]^{-1}}^{\infty}P\{\zeta'(1)>x\,|\,\xi(1)=u+xs\}\,f_{\xi(1)}(u+xs)\,dx
\le \limsup_{s\downarrow0}\,\mathop{\mathrm{ess\,sup}}_{y>u}E\{[\zeta'(1)^+]^{\varrho}\,|\,\xi(1)=y\}\,f_{\xi(1)}(y)\int_{[s\sqrt{|\ln(s)|}]^{-1}}^{\infty}\frac{dx}{x^{\varrho}} = 0.
\]
(vi) and (vii) Statement (vi) follows from (4.6). The theorem of Scheffé (1947) shows that the hypothesis of (vi) holds when that of (vii) does.

Example 2. Let {ξ(t)}_{t∈R} be a mean-square differentiable standardized and stationary Gaussian process with covariance function r (cf. Example 1). Then we have
\[
E\Big\{\Big|\frac{\xi(1-s)-\xi(1)+s\xi'(1)}{s}\Big|^{\varrho}\,\Big|\,\xi(1)=y\Big\}\, f_{\xi(1)}(y)
\le 2^{\varrho}\, E\Big\{\Big|\frac{\xi(1-s)-r(s)\xi(1)+s\xi'(1)}{s}\Big|^{\varrho}\Big\}\, f_{\xi(1)}(y)
+ 2^{\varrho}\Big(\frac{1-r(s)}{s}\Big)^{\varrho}|y|^{\varrho} f_{\xi(1)}(y),
\]
so that (4.4) holds. Since (4.7) holds trivially, Theorem 3 [(ii), (v) and (vii)] shows that Rice's formula holds when ξ′(1) is non-degenerate.
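For the process of Example 2, Rice's formula specializes to μ(u) = (√λ₂/(2π)) e^{−u²/2} with λ₂ = −r″(0). The sketch below checks this specialization by Monte Carlo, simulating the process through a spectral (random cosine) expansion; all numerical choices (frequencies, weights, level, grid) are illustrative and not taken from the paper.

```python
import math
import random

# Monte Carlo sanity check of Rice's formula for a standardized stationary
# Gaussian process, simulated by a spectral (random cosine) expansion:
#   X(t) = sum_k sqrt(v_k) * (A_k cos(w_k t) + B_k sin(w_k t)),
# with A_k, B_k i.i.d. N(0,1), so r(t) = sum_k v_k cos(w_k t), r(0) = 1,
# and lambda_2 = -r''(0) = sum_k v_k w_k^2.  Rice's formula then reads
#   mu(u) = sqrt(lambda_2) / (2 pi) * exp(-u^2 / 2).
# Frequencies, weights, the level u and the grid are illustrative choices.

random.seed(7)
freqs = [1.0, 2.0, 3.0]
weights = [1.0 / 3] * 3                   # v_k, summing to 1
lam2 = sum(v * w * w for v, w in zip(weights, freqs))
u = 1.0
rice = math.sqrt(lam2) / (2 * math.pi) * math.exp(-u * u / 2)

grid = [i / 250 for i in range(251)]      # grid on [0, 1]
paths, total = 2000, 0
for _ in range(paths):
    ab = [(math.sqrt(v) * random.gauss(0, 1), math.sqrt(v) * random.gauss(0, 1))
          for v in weights]
    x = [sum(a * math.cos(w * t) + b * math.sin(w * t)
             for (a, b), w in zip(ab, freqs)) for t in grid]
    total += sum(1 for x0, x1 in zip(x, x[1:]) if x0 <= u < x1)  # upcrossings

empirical = total / paths
print(rice, empirical)
```

With these choices μ(1) ≈ 0.21, and the simulated mean upcrossing count over [0,1] agrees up to Monte Carlo and grid-discretization error.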
5. Extremes of Gaussian processes in R^n with strongly dependent components

Let {X(t)}_{t∈R} be a separable n×1-matrix-valued centred stationary and two times mean-square differentiable Gaussian process with covariance function
\[
\mathbb R\ni t\mapsto R(t) = E\{X(s)X(s+t)^{\mathrm T}\}
= I + r't + \tfrac12 r''t^2 + \tfrac16 r'''t^3 + \tfrac1{24} r^{(\mathrm{iv})}t^4 + o(t^4)\ \in\ \mathbb R^{n\times n} \tag{5.1}
\]
as t→0 (where I is the identity in R^{n×n}). It is no loss to assume that R(0) is diagonal, since this can be achieved by a rotation that does not affect the statement of Theorem 4 below. Further, writing P for the projection on the subspace of R^{n×1} spanned by eigenvectors of R(0) with maximal eigenvalue, the probability that a local extremum of X(t) is not generated by PX(t) is asymptotically negligible (e.g., Albin 1992, Section 5). Hence it is sufficient to study PX(t), and assuming (without loss by a scaling argument) unit maximal eigenvalue, we arrive at R(0) = I.

To ensure that {X(t)}_{t∈[0,1]} does not have periodic components we require that
\[
E\{X(t)X(t)^{\mathrm T}\,|\,X(0)\} = I - R(t)R(t)^{\mathrm T} \quad\text{is non-singular.}
\]
Writing S_n ≡ {z ∈ R^{n×1}: z^Tz = 1}, this requirement becomes
\[
\inf_{x\in S_n} x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x > 0 \quad\text{for each choice of } t\in[-1,1]. \tag{5.2}
\]
The behaviour of local extremes of the χ²-process ξ(t) ≡ X(t)^TX(t) will then depend on the behaviour of I − R(t)R(t)^T as t→0.

Sharpe (1978) and Lindgren (1980) studied extremes of χ²-processes with independent component processes. Lindgren (1989) and Albin (1992, Section 5) gave extensions to the general "non-degenerate" case when (5.1) and (5.2) hold and
\[
\liminf_{t\to0}\ t^{-2}\inf_{x\in S_n} x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x > 0. \tag{5.3}
\]
Using (5.1) in an easy calculation, (5.3) is seen to be equivalent with the requirement that the covariance matrix r′r′ − r″ of (X′(1) | X(1)) is non-singular.

We shall treat χ²-processes which do not have to be "non-degenerate" in the sense (5.3). More specifically, we study local extremes of the process {ξ(t)}_{t∈[0,1]}, satisfying (5.1) and (5.2), under the very weak additional assumption that
\[
\liminf_{t\to0}\ t^{-4}\inf_{x\in S_n} x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x > 0. \tag{5.4}
\]
Now let σ_n(·) denote the (n − 1)-dimensional Hausdorff measure over R^{n×1}.
Theorem 4. Consider a centred and stationary Gaussian process {X(t)}_{t∈R} in R^{n×1}, n ≥ 2, satisfying (5.1), (5.2) and (5.4). Then we have
\[
\lim_{u\to\infty}\ \frac{1}{\sqrt u\, P\{X(1)^{\mathrm T}X(1)>u\}}\,
P\Big\{\sup_{t\in[0,1]} X(t)^{\mathrm T}X(t) > u\Big\}
= \int_{z\in S_n} \frac{1}{\sqrt{2\pi}}\,\sqrt{z^{\mathrm T}[r'r'-r'']z}\ \frac{d\sigma_n(z)}{\sigma_n(S_n)}. \tag{5.5}
\]
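The sphere integral on the right-hand side of (5.5) rarely has a closed form, but it is just an average over the unit sphere and can be approximated by Monte Carlo; a minimal sketch, where the matrix M is an illustrative stand-in for r′r′ − r″ and is not taken from the paper:

```python
import math
import random

# Monte Carlo evaluation of the right-hand side of (5.5): the average of
#   sqrt(z^T M z / (2 pi)),   with M standing in for r'r' - r'',
# over the unit sphere S_n.  Uniform sphere points are obtained by
# normalizing standard Gaussian vectors.  The matrices M below are
# illustrative choices, not taken from the paper.

random.seed(1)

def sphere_average(M, samples=200_000):
    n = len(M)
    acc = 0.0
    for _ in range(samples):
        g = [random.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in g))
        z = [v / norm for v in g]
        q = sum(M[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
        acc += math.sqrt(q / (2 * math.pi))
    return acc / samples

# for M = I the integrand is the constant 1/sqrt(2*pi) ~ 0.3989
print(sphere_average([[1.0, 0.0], [0.0, 1.0]]))
# a degenerate (rank-one) M still gives a finite nonzero value: here the
# average of |z_1|/sqrt(2*pi) over the circle, i.e. about 0.254
print(sphere_average([[1.0, 0.0], [0.0, 0.0]]))
```

The second call illustrates the point of Theorem 4: the limit in (5.5) can be nonzero even when r′r′ − r″ is singular, i.e. when (5.3) fails.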
Proof (By application of Theorem 2). The density function g and distribution function G of the random variable ξ(t) ≡ X(t)^TX(t) satisfy
\[
g(x) = 2^{-n/2}\,\Gamma\Big(\frac n2\Big)^{-1} x^{(n-2)/2}\,\mathrm e^{-x/2}\quad\text{for } x>0,
\qquad
1-G(u) \sim 2^{-(n-2)/2}\,\Gamma\Big(\frac n2\Big)^{-1} u^{(n-2)/2}\,\mathrm e^{-u/2}\quad\text{as } u\to\infty. \tag{5.6}
\]
Consequently (1.1) holds with w(u) = 2 and F(x) = 1 − e^{−x}.
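The density and tail relation in (5.6) can be sanity-checked numerically. The sketch below uses n = 4, a convenient choice of ours (not the paper's), where the chi-square tail has an elementary closed form:

```python
import math

# Numerical check of (5.6) for n = 4 degrees of freedom, where the
# chi-square tail has the closed form 1 - G(u) = (u/2 + 1) * exp(-u/2).
# Here g(x) = 2^{-n/2} Gamma(n/2)^{-1} x^{(n-2)/2} e^{-x/2} = (x/4) e^{-x/2},
# and the claimed tail asymptotic is
#   1 - G(u) ~ 2^{-(n-2)/2} Gamma(n/2)^{-1} u^{(n-2)/2} e^{-u/2} = (u/2) e^{-u/2}.

n = 4

def g(x):
    return 2 ** (-n / 2) / math.gamma(n / 2) * x ** ((n - 2) / 2) * math.exp(-x / 2)

def tail_exact(u):               # closed form, valid for n = 4 only
    return (u / 2 + 1) * math.exp(-u / 2)

def tail_asymptotic(u):
    return 2 ** (-(n - 2) / 2) / math.gamma(n / 2) * u ** ((n - 2) / 2) * math.exp(-u / 2)

for u in (10.0, 50.0, 200.0):
    print(u, tail_exact(u) / tail_asymptotic(u))    # -> 1.2, 1.04, 1.01

# the asymptotic equals 2*g(u), i.e. w(u) = 2 and F(x) = 1 - e^{-x} in (1.1)
print(tail_asymptotic(100.0) / (2 * g(100.0)))      # -> 1.0
```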
To verify Assumption 1 we take ζ′ = ζ̃′ = 2X(1)^TX′(1) and ζ″ = 4X(1)^T(r″ − r′r′)X(1). Since r′ is skew-symmetric, so that X(1)^Tr′X(1) = 0, we then obtain
\[
\xi(1-t)-\xi(1)+t\zeta'
= \|X(1-t)-R(t)X(1)\|^2
+ 2X(1)^{\mathrm T}R(t)^{\mathrm T}\big[X(1-t)-R(t)X(1)+tX'(1)+tr'X(1)\big]
\]
\[
\quad
+ 2X(1)^{\mathrm T}[I-R(t)^{\mathrm T}]\big[tX'(1)+tr'X(1)\big]
- X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1). \tag{5.7}
\]
Here the fact that E{X(1−t)X′(1)^T} = R′(t) = −R′(−t)^T implies that
\[
X(1-t)-R(t)X(1)\quad\text{and}\quad X'(1)+r'X(1)\quad\text{are independent of } X(1). \tag{5.8}
\]
In the notation Var{X} ≡ E{XX^T}, (5.1) implies that
\[
\mathrm{Var}\{X(1-t)-R(t)X(1)\} = (r'r'-r'')t^2 + o(t^2),
\qquad
\mathrm{Var}\{[I-R(t)^{\mathrm T}][tX'(1)+tr'X(1)]\} = (r')^{\mathrm T}(r'r'-r'')r'\,t^4 + o(t^4). \tag{5.9}
\]
Using (5.1) together with the elementary fact that r‴ is skew-symmetric we get
\[
\mathrm{Var}\{R(t)^{\mathrm T}[X(1-t)-R(t)X(1)+tX'(1)+tr'X(1)]\}
\le 2\,\mathrm{Var}\{R(t)^{\mathrm T}[X(1-t)-X(1)+tX'(1)]\}
+ 2\,\mathrm{Var}\{R(t)^{\mathrm T}[(I-R(t))X(1)+tr'X(1)]\}
= \tfrac14\big(r^{(\mathrm{iv})}+r''r''\big)t^4 + o(t^4). \tag{5.10}
\]
Noting the easily established fact that r″r′ = r′r″ we similarly obtain
\[
X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1) = X(1)^{\mathrm T}\big[(r'r'-r'')t^2 + O(t^4)\big]X(1). \tag{5.11}
\]
Putting q(u) ≡ (1 ∨ u)^{−1/2} we have (1.3). Using (5.7)–(5.11) we further see that
\[
P\Big\{\Big|\frac{\xi(1-qt)-u}{wt}-\frac{\xi(1)-u}{wt}+\frac{qt\zeta'}{wt}\Big|>\varepsilon\ \Big|\ \xi(1)>u\Big\}
\]
\[
\le \Big[\sum_{i=1}^n P\Big\{|(X(1-qt)-R(qt)X(1))_i| > \sqrt{\varepsilon t/(2n)}\Big\}
+ P\Big\{\|X(1)\| > \sqrt{u/(4t)}\Big\}
\]
\[
\quad
+ \sum_{i=1}^n P\Big\{|(R(qt)^{\mathrm T}[X(1-qt)-R(qt)X(1)+qtX'(1)+qtr'X(1)])_i| > \sqrt{\varepsilon t^3/(4nu)}\Big\}
+ \sum_{i=1}^n P\Big\{|([I-R(qt)^{\mathrm T}][qtX'(1)+qtr'X(1)])_i| > \sqrt{\varepsilon t^3/(4nu)}\Big\}
\]
\[
\quad
+ P\Big\{|X(1)^{\mathrm T}[I-R(qt)R(qt)^{\mathrm T}]X(1)| > \frac{\varepsilon t}{4}\Big\}\Big]\Big/P\{\|X(1)\|^2>u\}
\le K_1\exp\{-K_2(u/t)\} \le K_3|t|^2 \quad\text{for } u\ge u_0 \text{ and } |t|\le t_0, \tag{5.12}
\]
for some constants K₁, K₂, K₃, u₀, t₀ > 0. Hence (0.6) holds with C = K₃ ∨ t₀^{−2}. By application of Theorem 1 it thus follows that Assumption 3 holds.
Choosing D(u) ≡ (1 ∨ u)^{5/16} and using (5.7)–(5.11) as in (5.12) we obtain
\[
P\Big\{\Big|\frac{\xi(1-qt)-u}{wt}-\frac{\xi(1)-u}{wt}+\frac{qt\zeta'}{wt}-\frac{q^2t^2\zeta''}{2wt}\Big|>\varepsilon\ \Big|\ \xi(1)>u\Big\}
\]
\[
\le \Big[\sum_{i=1}^n P\Big\{|(X(1-qt)-R(qt)X(1))_i| > \sqrt{\varepsilon t/(2n)}\Big\}
+ P\big\{\|X(1)\| > \sqrt{2u}\big\}
\]
\[
\quad
+ \sum_{i=1}^n P\Big\{|(R(qt)^{\mathrm T}[X(1-qt)-R(qt)X(1)+qtX'(1)+qtr'X(1)])_i| > \frac{\varepsilon t}{4\sqrt{2nu}}\Big\}
+ \sum_{i=1}^n P\Big\{|([I-R(qt)^{\mathrm T}][qtX'(1)+qtr'X(1)])_i| > \frac{\varepsilon t}{4\sqrt{2nu}}\Big\}
\]
\[
\quad
+ P\Big\{\big|X(1)^{\mathrm T}\big(I-R(qt)^{\mathrm T}R(qt)+[r''-r'r']q^2t^2\big)X(1)\big| > \frac{\varepsilon t}{4}\Big\}\Big]\Big/P\{\|X(1)\|^2>u\}
\]
\[
\le K_4\exp\{-K_5 u^{3/8}\} \quad\text{for } |t|\le D(u) \text{ and } u \text{ sufficiently large},
\]
for some constants K₄, K₅ > 0. It follows immediately that (3.1) holds.
Suppose that r′r′ − r″ = 0, so that the right-hand side of (5.5) is zero. Then (5.9), (5.10) and (5.12) show that (0.6) holds for some q(u) = o(u^{−1/2}) that is non-increasing [so that (1.3) holds]. Now Theorem 1 and Proposition 2 show that also the left-hand side of (5.5) is zero. We can thus without loss assume that r′r′ − r″ ≠ 0.
In order to see how (5.5) follows from Theorem 2, we note that
\[
f_{\xi(1),\xi'(1)}(y,z)
= \int_{x\in S_n}\frac{1}{\sqrt{8\pi y\, x^{\mathrm T}[r'r'-r'']x}}\,
\exp\Big\{-\frac{z^2}{8y\, x^{\mathrm T}[r'r'-r'']x}\Big\}\,\frac{d\sigma_n(x)}{\sigma_n(S_n)}\ g(y).
\]
[See the proof of Theorem 6 in Section 7 for an explanation of this formula.] Sending u→∞, and using (5.6), we therefore readily obtain
\[
\int_0^{\infty}\!\!\int_0^z f_{\xi(1),\xi'(1)}(u+xqy,\,z)\,dy\,dz
\sim \sqrt{\frac{2u}{\pi}}\ g(u)\int_{z\in S_n}\sqrt{z^{\mathrm T}[r'r'-r'']z}\ \frac{d\sigma_n(z)}{\sigma_n(S_n)}
\sim \frac{1-G(u)}{\sqrt{2\pi}\,q(u)}\int_{z\in S_n}\sqrt{z^{\mathrm T}[r'r'-r'']z}\ \frac{d\sigma_n(z)}{\sigma_n(S_n)}
\]
for x > 0. In view of (0.3), (5.5) thus follows from the conclusion of Theorem 2.
In order to verify the hypothesis of Theorem 2 it remains to prove (3.2). To that end we note that [cf. (5.7)]
\[
\xi(1-t)-\xi(1) = \|X(1-t)-R(t)X(1)\|^2
+ 2X(1)^{\mathrm T}R(t)^{\mathrm T}[X(1-t)-R(t)X(1)]
- X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1). \tag{5.13}
\]
Conditional on the value of X(1), we here have [recall (5.8)]
\[
\mathrm{Var}\{X(1)^{\mathrm T}R(t)^{\mathrm T}[X(1-t)-R(t)X(1)]\}
= X(1)^{\mathrm T}R(t)^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]R(t)X(1)
\]
\[
= X(1)^{\mathrm T}\big([I-R(t)^{\mathrm T}R(t)] - [I-R(t)^{\mathrm T}R(t)][I-R(t)^{\mathrm T}R(t)]^{\mathrm T}\big)X(1)
\le X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1). \tag{5.14}
\]
Since x^T[I − R(t)^TR(t)]x = x^T[I − R(−t)R(−t)^T]x ≥ K₆t⁴ for x ∈ S_n and |t| ≤ 1, for some constant K₆ > 0 by (5.2) and (5.4), we have X(1)^T[I − R(qt)^TR(qt)]X(1) ≥ K₆u^{1/4} for ‖X(1)‖² > u and t ∈ [D(u), 1/q(u)]. Using (5.13) and (5.14) this gives
\[
P\{\xi(1-q(u)t)>u\,|\,\xi(1)>u\}
\le 2P\{\xi(1-q(u)t)\ge\xi(1)>u\,|\,\xi(1)>u\}
\]
\[
\le 2P\big\{\|X(1-qt)-R(qt)X(1)\|^2 > X(1)^{\mathrm T}[I-R(qt)^{\mathrm T}R(qt)]X(1)\ \big|\ \|X(1)\|^2>u\big\}
+ 2P\Big\{N(0,1) > \sqrt{X(1)^{\mathrm T}[I-R(qt)^{\mathrm T}R(qt)]X(1)}\ \Big|\ \|X(1)\|^2>u\Big\}
\]
\[
\le 2\sum_{i=1}^n P\Big\{|(X(1-qt)-R(qt)X(1))_i| > \sqrt{K_6u^{1/4}/(2n)}\Big\}
+ 2P\big\{N(0,1) > \sqrt{K_6}\,u^{1/8}\big\}.
\]
It follows that (3.2) holds.
6. Extremes of totally skewed moving averages of α-stable motion

Given an α ∈ (1, 2] we write Z ∈ S_α(σ, β) when Z is a strictly α-stable random variable with characteristic function
\[
E\{\exp[\mathrm i\theta Z]\}
= \exp\Big\{-\sigma^{\alpha}|\theta|^{\alpha}\Big[1-\mathrm i\beta\tan\Big(\frac{\pi\alpha}{2}\Big)\,\mathrm{sign}(\theta)\Big]\Big\}
\quad\text{for } \theta\in\mathbb R. \tag{6.1}
\]
Here the scale σ = σ_Z > 0 and the skewness β = β_Z ∈ [−1, 1] are parameters. Further we let {M(t)}_{t∈R} denote an α-stable motion that is totally skewed to the left, that is, M(t) denotes a Lévy process that satisfies M(t) ∈ S_α(|t|^{1/α}, −1).

For a function g: R → R we write g^{⟨α⟩} ≡ sign(g)|g|^α. Further we define
\[
\langle g\rangle \equiv \int_{-\infty}^{\infty} g(x)\,dx \quad\text{for } g\in L^1(\mathbb R)
\qquad\text{and}\qquad
\|g\|_{\alpha} \equiv \langle|g|^{\alpha}\rangle^{1/\alpha} \quad\text{for } g\in L^{\alpha}(\mathbb R).
\]
It is well known that (e.g., Samorodnitsky and Taqqu, 1994, Proposition 3.4.1)
\[
\int_{\mathbb R} g\,dM \in S_{\alpha}\big(\|g\|_{\alpha},\ -\langle g^{\langle\alpha\rangle}\rangle\big/\|g\|_{\alpha}^{\alpha}\big)
\quad\text{for } g\in L^{\alpha}(\mathbb R). \tag{6.2}
\]
Given a non-negative function f ∈ L^α(R) that satisfies (6.4)–(6.8) below, we shall study the behaviour of local extremes for the moving average process
\[
\xi(t) = \text{separable version of } \int_{-\infty}^{\infty} f(t-x)\,dM(x) \quad\text{for } t\in\mathbb R. \tag{6.3}
\]
Note that by (6.2) we have ξ(t) ∈ S_α(‖f‖_α, −1), so that {ξ(t)}_{t∈R} is a stationary α-stable process that is totally skewed to the left.
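The map g ↦ (‖g‖_α, −⟨g^{⟨α⟩}⟩/‖g‖_α^α) in (6.2) is easy to evaluate numerically. The sketch below approximates the integrals ⟨·⟩ by midpoint Riemann sums; α = 1.5 and the kernels are illustrative choices of ours (the paper only requires f ≥ 0), and the check confirms that a non-negative kernel yields skewness −1:

```python
import math

# Numerical evaluation of the stable parameters in (6.2) for \int g dM:
#   scale    ||g||_alpha = (<|g|^alpha>)^(1/alpha),
#   skewness -<g^{<alpha>}> / ||g||_alpha^alpha,
# approximating <.> by midpoint Riemann sums.  alpha = 1.5 and the
# kernels below are illustrative choices, not taken from the paper.

alpha = 1.5

def stable_params(g, lo=-10.0, hi=10.0, m=100_000):
    dx = (hi - lo) / m
    a = s = 0.0
    for i in range(m):
        v = g(lo + (i + 0.5) * dx)
        a += abs(v) ** alpha * dx                     # <|g|^alpha>
        s += math.copysign(abs(v) ** alpha, v) * dx   # <g^{<alpha>}>
    return a ** (1 / alpha), -s / a

# a non-negative kernel f gives a totally left-skewed integral: beta = -1
scale, beta = stable_params(lambda x: math.exp(-x * x))
print(scale, beta)          # beta -> -1.0

# a signed (odd) kernel gives beta = 0 by symmetry
scale2, beta2 = stable_params(lambda x: x * math.exp(-x * x))
print(scale2, beta2)        # beta2 -> 0.0
```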
MSC: Primary 60G70; secondary 60E07; 60F10; 60G10; 60G15
Keywords: Extrema; Local extrema; Sojourn; Upcrossing; Rice's formula; Belyaev's formula; Stationary process; Gaussian process; χ²-process; α-stable process
0. Introduction

Given a differentiable stationary stochastic process {ξ(t)}_{t∈R} there is, under technical conditions, a direct relation between the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} for large u and that of the expected number of upcrossings μ(u) of the level u by {ξ(t)}_{t∈[0,1]} (e.g., Leadbetter and Rootzén, 1982; Leadbetter et al., 1983, Chapter 13; Albin, 1990, Theorem 7; Albin, 1992). Under additional technical conditions one further has the explicit expression
\[
\text{Rice's formula:}\qquad \mu(u) = \int_0^{\infty} z\, f_{\xi(1),\xi'(1)}(u,z)\,dz \tag{0.1}
\]
(e.g., Leadbetter, 1966; Marcus, 1977; Leadbetter et al., 1983, Chapter 7).

∗ Research supported by NFR grant F-AA/MA 9207-310, and by M.R. Leadbetter.
E-mail address: [email protected] (J.M.P. Albin)
0304-4149/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0304-4149(99)00110-6
However, although very reasonable, the above mentioned two sets of "technical conditions" are quite forbidding, and have only been verified for Gaussian processes and processes closely related to them. Hence, although conceptually satisfying, the upcrossing approach to extremes has neither led far outside "Gaussian territory", nor generated many results on extremes that cannot be proved more easily by other methods. See, e.g. Adler and Samorodnitsky (1997) and Albin (1992) for outlines of the technical problems associated with the formula (0.1), and with relating the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} to that of μ(u), respectively.
In Section 3 we show that, under certain conditions on the functions q and w,
\[
P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\} \sim P\{\xi(1)>u\}
+ \frac1x\,P\Big\{\xi'(1)>\frac1x\,\frac{\xi(1)-u}{q(u)}\ \Big|\ \xi(1)>u\Big\}\times\frac{P\{\xi(1)>u\}}{q(u)} \tag{0.2}
\]
as u ↑ û ≡ sup{x ∈ R: P{ξ(1) > x} > 0} and x ↓ 0 (in that order). Note that, assuming absolute continuity, the right-hand side of (0.2) is equal to
\[
P\{\xi(1)>u\} + \int_0^{\infty}\!\!\int_0^z f_{\xi(1),\xi'(1)}(u+xqy,\,z)\,dy\,dz. \tag{0.3}
\]
If we send x ↓ 0 before u ↑ û, (0.1) shows that, under continuity assumptions, (0.3) behaves like P{ξ(1) > u} + μ(u). However, technical conditions that justify this change of order between limits, and thus the link between extremes and upcrossings, can seldom be verified outside Gaussian contexts. In fact, since P{sup_{t∈[0,1]} ξ(t) > u} ≁ P{ξ(1) > u} + μ(u) for processes that cluster (e.g., Albin, 1992, Section 2), the "technical conditions" are not only technical! So while (0.2) is a reasonably general result, the link to μ(u) can only be proved in more special cases.
Non-upcrossing-based approaches to local extremes rely on the following conditions:
(1) weak convergence (as u ↑ û) of the finite dimensional distributions of the process {(w(u)^{−1}[ξ(1 − q(u)t) − u] | ξ(1) > u)}_{t∈R};
(2) excursions above high levels do not last too long (local independence); and
(3) a certain type of tightness for the convergence in (1).
It is rare that (w^{−1}[ξ(1 − qt) − u] | ξ(1) > u) can be handled sharply enough to verify (1). Somewhat less seldom one can carry out the estimates needed to prove

Assumption 1. There exist random variables ζ′ and ζ″ with
\[
\zeta''\le 0\ \text{a.s.}\quad\text{and}\quad
\limsup_{u\uparrow\hat u} E\Big\{\Big|\frac{q(u)^2\zeta''}{w(u)}\Big|^{\varrho}\ \Big|\ \xi(1)>u\Big\}<\infty
\ \text{for some } \varrho>1, \tag{0.4}
\]
and such that
\[
\lim_{u\uparrow\hat u} P\Big\{\Big|\frac{\xi(1-q(u)t)-u}{w(u)t}-\frac{\xi(1)-u}{w(u)t}+\frac{q(u)t\zeta'}{w(u)t}-\frac{q(u)^2t^2\zeta''}{2w(u)t}\Big|>\varepsilon\ \Big|\ \xi(1)>u\Big\}=0 \tag{0.5}
\]
for each choice of ε > 0 and t ≠ 0.
Further there exists a random variable ζ̃′ such that, for each choice of ε > 0,
\[
P\Big\{\Big|\frac{\xi(1-q(u)t)-u}{w(u)t}-\frac{\xi(1)-u}{w(u)t}+\frac{q(u)t\tilde\zeta'}{w(u)t}\Big|>\varepsilon\ \Big|\ \xi(1)>u\Big\}\le C|t|^{\varrho} \tag{0.6}
\]
for u ≥ ũ and t ≠ 0, for some constants C = C(ε) > 0, ũ = ũ(ε) ∈ R and ϱ = ϱ(ε) > 1.

Of course, one usually takes ζ′ = ζ̃′ = ξ′(1) in Assumption 1, where ξ′(t) is a stochastic derivative of ξ(t). The variable ζ″ is not always needed in (0.5) [so that (0.5) holds with ζ″ = 0]; and ζ″ is not always chosen to be ξ″(1) when needed. In Section 2 we prove that (0.6) implies the tightness-requirement (3), and in Section 3 that (0.5) can replace the requirement about weak convergence (1).

Condition (2) cannot be derived from Assumption 1. But by requiring that (0.5) holds for t-values t = t(u) → ∞ as u ↑ û, it is possible to weaken the meaning of "too long" in (2) and thus allow "longer" local dependence. This is crucial for the application to R^n-valued Gaussian processes in Section 5. [The exact statement of condition (2) and the mentioned weakened version of it are given in Assumption 2 of Section 2 and the requirement (3.2) of Theorem 2, respectively.]
In Section 4 we show that a version of (0.6) can be used to evaluate the upcrossing intensity μ(u) through the relation (e.g., Leadbetter et al., 1983, Section 7.2)
\[
\mu(u) = \lim_{s\downarrow 0} s^{-1} P\{\xi(1-s)<u<\xi(1)\}. \tag{0.7}
\]
A process {ξ(t)}_{t∈R} with infinitely divisible finite dimensional distributions has representation ξ(t) = ∫_{x∈S} f_t(x) dM(x) in law, for some function f_{(·)}(·): R × S → R and independently scattered infinitely divisible random measure M on a suitable space S. Putting
\[
\zeta'(1) \equiv \int_S \frac{\partial}{\partial t}f_t(x)\Big|_{t=1}\,dM(x)
\]
when f_{(·)} is smooth, we thus get
\[
\frac{\xi(1-qt)-u}{wt}-\frac{\xi(1)-u}{wt}+\frac{qt\,\zeta'(1)}{wt}
= \frac{q}{w}\int_S\Big[\frac{f_{1-qt}(x)-f_1(x)}{qt}+\frac{\partial}{\partial t}f_t(x)\Big|_{t=1}\Big]\,dM(x).
\]
When ζ′ = ζ̃′ = ζ′(1) and ζ″ = 0 the probabilities in (0.5) and (0.6) are bounded by
\[
P\Big\{\int_S\Big(f_1\pm\frac{q}{\varepsilon w}\Big[\frac{f_{1-qt}-f_1}{qt}+\frac{\partial}{\partial t}f_t\Big|_{t=1}\Big]\Big)\,dM>u\Big\}
\Big/P\Big\{\int_S f_1\,dM>u\Big\}
\]
for each choice of ε > 0. Thus the events under consideration in (0.5) and (0.6) have been reduced to tail events for univariate infinitely divisible random variables, which can be crucially more tractable to work with than the "traditional conditions" (1) and (2). We show how this works on α-stable processes in Section 6.
Let {X(t)}_{t∈R} be a separable n×1-matrix-valued centred stationary and two times mean-square differentiable Gaussian process with covariance function R(t) = E{X(s)X(s+t)^T}. In Section 5 we find the asymptotic behaviour of P{sup_{t∈[0,1]} X(t)^TX(t) > u} as u → ∞ under the additional assumption that
\[
\liminf_{t\to0}\ t^{-4}\inf_{\{x\in\mathbb R^n:\,\|x\|=1\}} x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x>0.
\]
This is an important generalization of results in the literature which are valid only when (X′(1) | X(1)) has a non-degenerate normal distribution in R^n, i.e.,
\[
\liminf_{t\to0}\ t^{-2}\inf_{\{x\in\mathbb R^n:\,\|x\|=1\}} x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x>0.
\]
Let {M(t)}_{t∈R} denote an α-stable Lévy motion with α > 1 that is totally skewed to the left. Consider the moving average ξ(t) = ∫_{−∞}^{∞} f(t − x) dM(x), t ∈ R, where f is a non-negative sufficiently smooth function in L^α(R). In Section 6 we determine the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} as u → ∞. This result cannot be proven by traditional approaches to extremes since the asymptotic behaviour of conditional totally skewed α-stable distributions is not known (cf. (1)).

In Section 7 we prove a version of Rice's formula named after Belyaev (1968) for the outcrossing intensity through a smooth surface of an R^n-valued stationary and P-differentiable α-stable process {X(t)}_{t∈R} with independent component processes.
1. Preliminaries

All stochastic variables and processes that appear in this paper are assumed to be defined on a common complete probability space (Ω, F, P).

In the sequel {ξ(t)}_{t∈R} denotes an R-valued strictly stationary stochastic process, and we assume that a separable and measurable version of ξ(t) has been chosen. Such a version exists, e.g. assuming P-continuity almost everywhere (Doob, 1953, Theorem II.2.6), and it is to that version our results apply.

Let G(x) ≡ P{ξ(1) ≤ x} and û ≡ sup{x ∈ R: G(x) < 1}. We shall assume that G belongs to a domain of attraction of extremes. Consequently there exist functions w: (−∞, û) → (0, ∞) and F: (−x̂, ∞) → (−∞, 1) such that
\[
\lim_{u\uparrow\hat u}\frac{1-G(u+xw(u))}{1-G(u)} = 1-F(x)
\quad\text{for } x\in(-\hat x,\infty),\ \text{for some } \hat x\in(0,\infty]. \tag{1.1}
\]
If G is Type II-attracted we can assume that x̂ = 1, F(x) = 1 − (1 + x)^{−γ} for some γ > 0, and w(u) = u. Otherwise G is Type I- or Type III-attracted, and then we can take x̂ = ∞ and F(x) = 1 − e^{−x}, and assume that w(u) = o(u) with
\[
w(u+xw(u))/w(u)\to1 \quad\text{locally uniformly for } x\in\mathbb R \text{ as } u\uparrow\hat u. \tag{1.2}
\]
See, e.g., Resnick (1987, Chapter 1) to learn more about the domains of attraction.
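As a concrete instance of (1.1): a standard Gaussian marginal is Type I-attracted with w(u) = 1/u and F(x) = 1 − e^{−x}, which can be checked numerically (the values of u and x below are illustrative):

```python
import math

# For a standard Gaussian marginal G = Phi, (1.1) holds in the Type I
# form with w(u) = 1/u and F(x) = 1 - exp(-x):
#   [1 - G(u + x*w(u))] / [1 - G(u)]  ->  exp(-x)  as u -> infinity.
# The levels u below are illustrative.

def tail(z):                     # 1 - Phi(z), the exact normal tail
    return 0.5 * math.erfc(z / math.sqrt(2))

x = 1.0
for u in (5.0, 10.0, 20.0):
    print(u, tail(u + x / u) / tail(u), math.exp(-x))   # ratio approaches e^{-1}
```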
In general, the tail of G is by far the most important factor affecting the tail behaviour of the distribution of sup_{t∈[0,1]} ξ(t). Virtually all marginal distributions G that occur in the study of stationary processes belong to a domain of attraction.

In most assumptions and theorems we assume that a function q: (−∞, û) → (0, ∞) with Q ≡ limsup_{u↑û} q(u) < ∞ has been specified. The first step when applying these results is to choose a suitable q. Inferences then depend on which assumptions hold for this choice of q. In particular, we will use the requirement
\[
\bar\rho(\gamma)\equiv\limsup_{u\uparrow\hat u} q(u)/q(u-\gamma w(u))
\quad\text{satisfies}\quad
\limsup_{\gamma\downarrow0}\bar\rho(\gamma)<\infty. \tag{1.3}
\]
Sometimes we also need a second function D: (−∞, û) → (0, ∞) such that
\[
D(u)\le 1/q(u)\quad\text{and}\quad\limsup_{u\uparrow\hat u} D(u)=Q^{-1}\ (=+\infty \text{ when } Q=0). \tag{1.4}
\]
In order to link the asymptotic behaviour of P{sup_{t∈[0,1]} ξ(t) > u} to that of (ξ′(1) | ξ(1) > u), we make intermediate use of the tail behaviour of the sojourn time
\[
L(u)\equiv L(1,u)\quad\text{where}\quad L(t,u)\equiv\int_0^t I_{(u,\hat u)}(\xi(s))\,ds\quad\text{for } t\ge0,
\]
and its first moment E{L(u)} = P{ξ(1) > u} = 1 − G(u). Our study of sojourns, in turn, crucially relies on the sequence of identities
\[
\frac{1}{E\{L(u)/q\}}\int_0^x P\{L(u)/q>y\}\,dy
= \frac{1}{E\{L(u)\}}\int_0^1 P\{L(s,u)/q\le x,\ \xi(s)>u\}\,ds
\]
\[
= \int_0^1 P\Big\{\int_0^{s/q}I_{(u,\hat u)}(\xi(1-qt))\,dt\le x\ \Big|\ \xi(1)>u\Big\}\,ds
= qx + \int_{qx}^1 P\Big\{\int_0^{s/q}I_{(u,\hat u)}(\xi(1-qt))\,dt\le x\ \Big|\ \xi(1)>u\Big\}\,ds. \tag{1.5}
\]
In (1.5) the first equality is rather deep, and has long been used by Berman (1982, 1992). See, e.g. Albin (1998, Eq. (3.1)) for a proof. The second equality follows easily from stationarity, while the third is trivial.

Convention. Given functions h₁ and h₂ we write h₁(u) ≼ h₂(u) when limsup_{u↑û}(h₁(u) − h₂(u)) ≤ 0 and h₁(u) ≽ h₂(u) when liminf_{u↑û}(h₁(u) − h₂(u)) ≥ 0.
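The normalization E{L(u)} = P{ξ(1) > u} used around (1.5) follows from Fubini's theorem and stationarity. A small Monte Carlo sanity check, with a stationary Gaussian AR(1) sequence standing in for ξ on a grid (all parameters are illustrative choices, not from the paper):

```python
import math
import random

# Sanity check of E{L(u)} = P{xi(1) > u}: for a stationary process the
# expected sojourn time of [0,1] above the level u equals the marginal
# tail.  A stationary Gaussian AR(1) sequence (standard normal marginal)
# stands in for xi on a grid; phi, u, grid size and path count are
# illustrative choices.

random.seed(11)
phi, u, m, paths = 0.9, 1.0, 200, 3000
innov = math.sqrt(1 - phi * phi)

mean_sojourn = 0.0
for _ in range(paths):
    x = random.gauss(0, 1)                 # start in the stationary law
    above = 0
    for _ in range(m):
        above += (x > u)
        x = phi * x + innov * random.gauss(0, 1)
    mean_sojourn += above / m              # Riemann sum for L(u)
mean_sojourn /= paths

marginal_tail = 0.5 * math.erfc(u / math.sqrt(2))
print(mean_sojourn, marginal_tail)         # the two should agree
```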
2. First bounds on extremes. Tightness

In Propositions 1 and 2 below we use (a strong version of) (2) and (3), respectively, to derive upper and lower bounds for the tail P{sup_{t∈[0,1]} ξ(t) > u}. When combined these bounds show that
\[
P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}\quad\text{asymptotically behaves like}\quad E\{L(u)\}/q(u)\quad\text{as } u\uparrow\hat u,
\]
except for a multiplicative factor bounded away from 0 and ∞. In Theorem 1 we further prove that (0.6) implies condition (3) in the shape of Assumption 3 below. Our lower bound relies on the following requirement:

Assumption 2. We have
\[
\lim_{d\to\infty}\limsup_{u\uparrow\hat u}\int_{d\wedge(1/q(u))}^{1/q(u)} P\{\xi(1-q(u)t)>u\,|\,\xi(1)>u\}\,dt=0.
\]

Assumption 2 requires that if ξ(1) > u, then ξ(t) has not spent too much time above the level u prior to t = 1. This means a certain degree of "local independence". Clearly, Assumption 2 holds trivially when Q > 0.

As will be shown in Section 3, Assumption 2 can be relaxed if (0.5) is assumed to hold also for t-values t = t(u) → ∞ as u ↑ û. This gives an opportunity to allow stronger local dependence than is usual in local theories of extremes.

Typically, Assumption 2 should be verified employing the estimate
\[
P\{\xi(1-q(u)t)>u\,|\,\xi(1)>u\}\le P\{\xi(1-q(u)t)+\xi(1)>2u\}\big/P\{\xi(1)>u\}. \tag{2.1}
\]
This is the way Assumption 2 is verified for the α-stable process in Section 6 [by invoking such a computation in Albin (1999)], and the Gaussian process in Example 1 at the end of Section 3 can also be dealt with very easily using this approach. [There is an array of complicated estimates of probabilities like P{ξ(1 − q(u)t) > u | ξ(1) > u} in the Gaussian literature, all of which are bettered or equalled by (2.1).] It seems that it was Weber (1989) who first observed the "trick" (2.1) in the context of local extremes. This trick can be crucial in non-Gaussian contexts.
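The trick (2.1) is easy to see in action for a bivariate Gaussian pair standing in for (ξ(1 − q(u)t), ξ(1)); a hedged sketch, with illustrative u and correlation, comparing a Monte Carlo estimate of the left-hand side with the closed-form right-hand side:

```python
import math
import random

# Illustration of the bound (2.1) with a bivariate Gaussian pair (X, Y)
# of correlation r standing in for (xi(1 - q(u)t), xi(1)):
#   P{X > u | Y > u} <= P{X + Y > 2u} / P{Y > u}.
# The right-hand side is in closed form (X + Y is N(0, 2 + 2r)); the
# left-hand side is estimated by Monte Carlo.  u and r are illustrative.

random.seed(3)

def tail(z):                        # P{N(0,1) > z}
    return 0.5 * math.erfc(z / math.sqrt(2))

u, r = 1.5, 0.6
rhs = tail(2 * u / math.sqrt(2 + 2 * r)) / tail(u)

hits = above = 0
for _ in range(1_000_000):
    y = random.gauss(0, 1)
    if y > u:
        hits += 1
        x = r * y + math.sqrt(1 - r * r) * random.gauss(0, 1)
        above += (x > u)
lhs = above / hits
print(lhs, rhs)                     # the estimate never exceeds the bound
```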
Proposition 1. If Assumption 2 holds, then we have
\[
\liminf_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}>0.
\]

Proof. Clearly we have
\[
\liminf_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}
\ge \liminf_{u\uparrow\hat u}\ \frac1x\int_0^x\frac{P\{L(u)/q>y\}}{E\{L(u)/q\}}\,dy
= \frac1x\Big(1-\limsup_{u\uparrow\hat u}\int_x^{\infty}\frac{P\{L(u)/q>y\}}{E\{L(u)/q\}}\,dy\Big) \tag{2.2}
\]
for each x > 0. Given an ε ∈ (0, 1), (1.5) and Assumption 2 further show that
\[
\limsup_{u\uparrow\hat u}\int_x^{\infty}\frac{P\{L(u)/q>y\}}{E\{L(u)/q\}}\,dy
\le \limsup_{u\uparrow\hat u}\int_0^1 P\Big\{\int_0^{d\wedge(s/q)}I_{(u,\hat u)}(\xi(1-qt))\,dt>(1-\varepsilon)x\ \Big|\ \xi(1)>u\Big\}\,ds
\]
\[
+ \limsup_{u\uparrow\hat u}\int_0^1 P\Big\{\int_{d\wedge(s/q)}^{s/q}I_{(u,\hat u)}(\xi(1-qt))\,dt>\varepsilon x\ \Big|\ \xi(1)>u\Big\}\,ds
\]
\[
\le 0 + \frac{1}{\varepsilon x}\limsup_{u\uparrow\hat u}\int_{s=0}^{1}\int_{t=d\wedge(s/q)}^{s/q}P\{\xi(1-qt)>u\,|\,\xi(1)>u\}\,dt\,ds
\le 0 + \frac{1}{\varepsilon x}\limsup_{u\uparrow\hat u}\int_{t=d\wedge(1/q)}^{1/q}P\{\xi(1-qt)>u\,|\,\xi(1)>u\}\,dt
\le \frac{\varepsilon^2}{x}
\]
for x > d/(1 − ε) and d ≥ d₀, for some d₀ ≥ 1 [here the first term vanishes because the integrand of the first indicator integral is at most d < (1 − ε)x, and Assumption 2 makes the last limsup at most ε³ for d ≥ d₀]. Choosing x = d₀/(1 − ε) and inserting in (2.2) we therefore obtain
\[
\liminf_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}
\ge \frac{1-\varepsilon}{d_0}\Big(1-\frac{\varepsilon^2(1-\varepsilon)}{d_0}\Big)>0. \tag{2.3}
\]
Of course, an upper bound on extremes requires an assumption of tightness type:

Assumption 3. We have
\[
\lim_{a\downarrow0}\limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,
P\Big\{\sup_{t\in[0,1]}\xi(t)>u,\ \max_{\{k\in\mathbb Z:\,0\le aq(u)k\le1\}}\xi(akq(u))\le u\Big\}=0.
\]

Assumption 3 was first used by Leadbetter and Rootzén (1982), and it plays a central role in the theory for both local and global extremes (e.g., Leadbetter and Rootzén, 1982; Leadbetter et al., 1983, Chapter 13; Albin, 1990). Assumption 3 is usually very difficult to verify.

Proposition 2. If Assumption 3 holds, then we have
\[
\limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}<\infty.
\]

Proof. Clearly we have
\[
\limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}
\le \limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u,\ \max_{0\le aqk\le1}\xi(akq)\le u\Big\}
+ \limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\max_{0\le aqk\le1}\xi(akq)>u\Big\}.
\]
Here the first term on the right-hand side is finite (for a > 0 sufficiently small) by Assumption 3, while the second term is bounded by
\[
\limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\big(1+[1/(aq)]\big)\big(1-G(u)\big)\le Q+a^{-1}<\infty.
\]
To prove that (0.6) implies Assumption 3 we recall that if {ξ(t)}_{t∈[a,b]} is separable and P-continuous, then every dense subset of [a, b] is a separant for ξ(t).

Theorem 1. Suppose that (1.1) and (1.3) hold. If in addition (0.6) holds and ξ(t) is P-continuous, then Assumption 3 holds.

Proof. Letting δ_n ≡ Σ_{k=1}^n 2^{−k} and E_n ≡ ∪_{k=0}^{2^n} {ξ(aqk2^{−n}) > u + δ_nεaw} we have
\[
P\Big\{\xi(0)\le u,\ \sup_{t\in[0,aq]}\xi(t)>u+\varepsilon aw,\ \xi(aq)\le u\Big\}
= P\Big\{E_0^c,\ \bigcup_{n=1}^{\infty}\bigcup_{k=0}^{2^n}\{\xi(aqk2^{-n})>u+\varepsilon aw\}\Big\}
\le P\Big\{E_0^c,\ \bigcup_{n=1}^{\infty}E_n\Big\}
\le P\Big\{\bigcup_{n=1}^{\infty}(E_n\setminus E_{n-1})\Big\}
\]
\[
\le \sum_{n=1}^{\infty}\sum_{k=0}^{2^{n-1}-1}
P\Big\{\xi(aq(2k)2^{-n})\le u+\delta_{n-1}\varepsilon aw,\ \xi(aq(2k+1)2^{-n})>u+\delta_n\varepsilon aw,\ \xi(aq(2k+2)2^{-n})\le u+\delta_{n-1}\varepsilon aw\Big\}
\]
\[
\le \sum_{n=1}^{\infty}2^{n-1}P\Big\{\frac{\xi(1-aq2^{-n})-u}{w}-\frac{\xi(1)-u}{w}+\frac{aq2^{-n}\tilde\zeta'}{w}<-2^{-n-1}\varepsilon a,\ \xi(1)>u\Big\}
+ \sum_{n=1}^{\infty}2^{n-1}P\Big\{\frac{\xi(1+aq2^{-n})-u}{w}-\frac{\xi(1)-u}{w}-\frac{aq2^{-n}\tilde\zeta'}{w}<-2^{-n-1}\varepsilon a,\ \xi(1)>u\Big\}
\]
\[
+ \sum_{n=1}^{\infty}2^{n-1}P\Big\{\frac{aq2^{-n}\tilde\zeta'}{w}>2^{-n-1}\varepsilon a,\ -\frac{aq2^{-n}\tilde\zeta'}{w}>2^{-n-1}\varepsilon a\Big\}
\le 2\sum_{n=1}^{\infty}2^{n-1}C(2^{-n}a)^{\varrho}\,P\{\xi(1)>u\}
= o(a)\,P\{\xi(1)>u\}\quad\text{as } a\downarrow0, \tag{2.4}
\]
where the third sum vanishes because its events are empty, and (0.6) was applied with t = a2^{−n}. Consequently it follows that
\[
\limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u+\varepsilon aw,\ \max_{0\le aqk\le1}\xi(akq)\le u\Big\}
\le \limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{0\le t\le([1/(aq)]+1)aq}\xi(t)>u+\varepsilon aw,\ \max_{0\le k\le[1/(aq)]+1}\xi(akq)\le u\Big\}
\]
\[
\le \limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\big([1/(aq)]+1\big)\,P\Big\{\xi(0)\le u,\ \sup_{t\in[0,aq]}\xi(t)>u+\varepsilon aw,\ \xi(aq)\le u\Big\}
\le \limsup_{u\uparrow\hat u}\,(a^{-1}+q)\,o(a)\to0\quad\text{as } a\downarrow0. \tag{2.5}
\]
Hence it is sufficient to prove that
\[
\Delta(\varepsilon)\equiv\limsup_{a\downarrow0}\limsup_{u\uparrow\hat u}\ \frac{q(u)}{E\{L(u)\}}\,
P\Big\{u<\sup_{t\in[0,1]}\xi(t)\le u+\varepsilon aw\Big\}\to0\quad\text{as } \varepsilon\downarrow0.
\]
Put ũ ≡ u − εaw, w̃ ≡ w(ũ) and q̃ ≡ q(ũ) for a, ε > 0. Then we have [cf. (1.2)]
\[
u=\tilde u+\varepsilon aw\ge\tilde u+\tfrac12\varepsilon a\tilde w
\qquad\text{and}\qquad
u+\varepsilon aw\le\tilde u+2\varepsilon a\tilde w
\]
for u sufficiently large. Combining (2.5) with (1.1) and (1.3) we therefore obtain
\[
\Delta(\varepsilon)\le\limsup_{a\downarrow0}\limsup_{u\uparrow\hat u}\ \bar\rho(\varepsilon a)\big(1-F(-\varepsilon a)\big)\,
\frac{\tilde q}{E\{L(\tilde u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>\tilde u+\tfrac12\varepsilon a\tilde w,\ \max_{0\le a\tilde qk\le1}\xi(ak\tilde q)\le\tilde u\Big\}
\]
\[
+ \limsup_{a\downarrow0}\limsup_{u\uparrow\hat u}\ \bar\rho(\varepsilon a)\big(1-F(-\varepsilon a)\big)\,
\frac{\tilde q}{E\{L(\tilde u)\}}\,P\Big\{\tilde u<\max_{0\le a\tilde qk\le1}\xi(ak\tilde q)\le\tilde u+2\varepsilon a\tilde w\Big\}
\]
\[
\le 0 + \limsup_{a\downarrow0}\limsup_{u\uparrow\hat u}\ \frac{\bar\rho(\varepsilon a)\big(1-F(-\varepsilon a)\big)}{a\,(1-G(\tilde u))}\,
P\{\tilde u<\xi(0)\le\tilde u+2\varepsilon a\tilde w\}
\le \Big(\limsup_{a\downarrow0}\frac{F(2\varepsilon a)}{a}\Big)\Big(\limsup_{a\downarrow0}\bar\rho(\varepsilon a)\Big)
\to0\quad\text{as } \varepsilon\downarrow0.
\]
3. Extremes of P-differentiable processes

We are now prepared to prove (0.2). As mentioned in the introduction, we want the option to employ a stronger version of (0.5) [(3.1) below] in order to be able to relax Assumption 2 [to (3.2)]:

Theorem 2. Assume that there exist random variables ζ′ and ζ″ such that
\[
\int_0^{D(u)}P\Big\{\Big|\frac{\xi(1-q(u)t)-u}{w(u)t}-\frac{\xi(1)-u}{w(u)t}+\frac{q(u)t\zeta'}{w(u)t}-\frac{q(u)^2t^2\zeta''}{2w(u)t}\Big|>\varepsilon\ \Big|\ \xi(1)>u\Big\}\,dt\to0 \tag{3.1}
\]
as u ↑ û for each choice of ε > 0 [where D(u) satisfies (1.4)], and that
\[
\limsup_{u\uparrow\hat u}\int_{D(u)}^{1/q(u)}P\{\xi(1-q(u)t)>u\,|\,\xi(1)>u\}\,dt=0. \tag{3.2}
\]
If in addition (0.4), (1.1), and Assumption 3 hold, then we have
\[
\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\Big(q(u)+\frac1x\,P\Big\{\frac{q(u)\zeta'}{w(u)}>\frac1x\,\frac{\xi(1)-u}{w(u)}\ \Big|\ \xi(1)>u\Big\}\Big)<\infty.
\]
As x ↓ 0 we further have
\[
\limsup_{u\uparrow\hat u}\Big|\frac{q(u)}{E\{L(u)\}}\,P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}-q(u)
-\frac1x\,P\Big\{\frac{q(u)\zeta'}{w(u)}>\frac1x\,\frac{\xi(1)-u}{w(u)}\ \Big|\ \xi(1)>u\Big\}\Big|\to0.
\]
Proof. Choosing constants ; ¿0, (1:5) and Markov’s inequality show that
Z
1 x P{L(u)=q¿y}
dy − q
x 0
E{L(u)=q}
(Z
)
Z
s=q
(1 − qt) − u
1 1
dt6x (1)¿u ds
P
I(0; ∞)
=
x qx
w
0
1
6
x
Z
1
P
(Z
D∧(s=q)
I(; ∞)
0
qx
q2 t 2 ′′
(1) − u qt ′
−
+
w
w
2w
dt
)
6(1 + )x| (1)¿u ds
+
1
x
Z
0
1
P
D
q2 t 2 ′′
(1 − qt) − u
(1) − u qt′
−
+
−
w
w
2
w
w
0
¿x| (1)¿u ds
Z
I(; ∞)
dt
209
J.M.P. Albin / Stochastic Processes and their Applications 87 (2000) 199–234
(1) − u q (1 + )x ′
q2 (1 + )2 x2 ′′
−
+
6 (1)¿u
w
w
2w
)
(
Z
1
(1) − u
1
ds + P
6 (1)¿u
+
x {s∈[qx;1]:(1+)x¿D∧(s=q)}
x
w
1
6 P
x
1 1
+
x x
Z
D
P
0
q2 t 2 ′′
(1 − qt) − u
(1) − u qt ′
−
+
−
w
w
2w
w
¿| (1)¿u dt
(3.3)
for 0¡x¡Q −1 . By (0:4) and (1.1), the rst term on the right-hand side is
′
q
1 − (1) − u
1
¿
6 P
(1)¿u
x
w
(1 + )2 x
w
1
1 − (1) − u
(1) − u x(3%−1)=(2%)
1
−
6
(1)¿u
+ P
x
(1 + )2 x
w
(1 + )x
(1 + )2 x
w
2
q (1 + )2 x2 ′′
1
6 − x(3%−1)=(2%) (1)¿u
+ P
x
2w
1
1
(1) − u
(1) − u
1
6
−
+ P
(1)¿u
x
(1 + )x
w
(1 + )x (1 + )2 x
w
′
F((1 + )x(3%−1)=(2%) =)
1
1 − (1) − u
q
(1)¿u
+
4 P
¿
2
x
w
(1 + ) x
w
x
)
(
q2 ′′ %
1 (1 + )2% x2%
(1)¿u
E
+
x x(3%−1)=2
2w
+
F((1 + ))
x
for ∈ (0; 1):
Further the second term in (3.3) is 4 Q for D¿(1 + )x, the third ∼ F()=x by
(1.1), and the fourth 4 0 by (3.1). Taking x̃ = (1 + )2 x=(1 − ), (3.3) thus give
1
x̃
x˜
′
q
1
1 (1) − u
P{L(u)=q¿y}
dy − q − P
¿
(1)¿u
E{L(u)=q}
x̃
w
x̃
w
0
Z x
P{L(u)=q¿y}
1
1−
dy − q
6
2
(1 + )
x 0
E{L(u)=q}
′
q
1 − (1) − u
1
(1)¿u
¿
− P
x
w
(1 + )2 x
w
Z
+
1− 1
(1 + )2 x
Z
x
x˜
P{L(u)=q¿y}
dy
E{L(u)=q}
210
J.M.P. Albin / Stochastic Processes and their Applications 87 (2000) 199–234
1 − F((1 + )x(3%−1)=(2%) =)
(1 + )2
x
4
1−
(1 + )2% x(%−1)=2 lim sup E
+
(1 + )2
u↑û
(
)
q2 ′′ %
2 w (1)¿u
1−
1 − F()
1 − F((1 + ))
+
Q+
2
2
(1 + )
x
(1 + )
(1 + )2 x
)
(
q
1 − (1 + )2 x − (1 − ) x
lim sup
P sup (t)¿u
+
(1 + )2
(1 − ) x
E{L(u)}
t∈[0;1]
u↑û
+
→0
as ↓ 0; ↓ 0; x ↓ 0 and ↓ 0 (in that order);
where we used Proposition 2 in the last step. It follows that
\[
\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}
\bigg[\frac1x\int_0^x\frac{P\{L(u)/q>y\}}{E\{L(u)/q\}}\,dy-q
-\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\le0.
\tag{3.4}
\]
To proceed we note that (3.2) combines with a version of argument (2.3) to show
that, given an $\varepsilon>0$, there exists an $u_0=u_0(\varepsilon)<\hat u$ such that
\[
\int_0^1\bigg[\int_{D\wedge(s/q)}^{s/q}P\{\xi(1-qt)>u\,|\,\xi(1)>u\}\,dt\bigg]ds\le\varepsilon^2
\qquad\text{for }u\in[u_0,\hat u).
\]
Through a by now familiar way of reasoning we therefore obtain
Z
1 x P{L(u)=q¿y}
dy − q
x 0
E{L(u)=q}
)
(Z
Z
D∧(s=q)
(1 − qt) − u
1 1
dt6(1 − )x (1)¿u ds
P
I(0; ∞)
¿
x qx
w
0
1
−
x
¿
Z
P
0
1 − qx
P
x
1 1
−
x x
¡
1
Z
s=q
D
Z
0
1 − qx
P
x
)
I(u; û) ((1 − qt)) dt¿x (1)¿u ds
D∧(s=q)
(Z
I(0; ∞)
0
1
"Z
(1 − qt) − u
w
dt6(1 − )x (1)¿u
#
s=q
D∧(s=q)
P{(1 − qt)¿u|(1)¿u} dt ds
D
q2 t 2 ′′
(1) − u qt ′
−
+
w
w
2w
0
6(1 − 2)x| (1)¿u
Z
I(−; ∞)
dt
J.M.P. Albin / Stochastic Processes and their Applications 87 (2000) 199–234
D
211
(1 − qt) − u (1) − u qt ′
q2 t 2 ′′
dt
I(; ∞)
−
+
−
w
w
w
2w
0
1 1 2
¿ x(1)¿u −
x x
1 − qx
(1) − u q (1 − 2)x ′
¿
P
−
6 − (1)¿u
x
w
w
Z
(1 − qt) − u (1) − u qt ′
q2 t 2 ′′
1 1 D
P
−
+
−
−
x x 0
w
w
w
2w
¿| (1)¿u dt − 2
x
′
1
(1) − u
q
1 − qx
P
¿
¡
(1)¿u
x
w
(1 − 2)2 x
w
(1) − u
1
1
− P
+
x
(1 − 2)x
w
(1 − 2)x
(1) − u
1
¿
(1)¿u − x2
(1 − 2)2 x
w
′
F( 21 (1 − 2))
1
(1) − u
q
1 − qx
(1)¿u
−
P
¿
− 2
¡
2
x
w
(1 − 2) x
w
x
x
1
− P
x
Z
for $0<x<Q^{-1}$ and $0<\varepsilon<\tfrac12$. Consequently we have
\begin{align*}
&\liminf_{u\uparrow\hat u}\bigg[\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}-q
-\frac1{(1-2\varepsilon)^2x}P\Big\{\frac{q\,\zeta'}{w}\ge\frac1{(1-2\varepsilon)^2x}\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\quad\ge\liminf_{u\uparrow\hat u}\frac1{(1-qx)(1-2\varepsilon)^2}
\bigg[\frac1x\int_0^x\frac{P\{L(u)/q>y\}}{E\{L(u)/q\}}\,dy-q
-\frac{1-qx}{x}P\Big\{\frac{q\,\zeta'}{w}\ge\frac1{(1-2\varepsilon)^2x}\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\qquad-\Big[\frac1{(1-qx)(1-2\varepsilon)^2}-1\Big]\limsup_{u\uparrow\hat u}\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}\\
&\quad\ge-\frac{F(\tfrac12\varepsilon(1-2\varepsilon))}{(1-Qx)(1-2\varepsilon)^2x}
-\frac{\varepsilon^2}{(1-Qx)(1-2\varepsilon)^2x^2}
-\frac{Qx(1-2\varepsilon)^2+4\varepsilon(1-\varepsilon)}{(1-Qx)(1-2\varepsilon)^2}
\limsup_{u\uparrow\hat u}\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}\\
&\quad\to0\qquad\text{as }\gamma\downarrow0,\ \varepsilon\downarrow0\ \text{and }x\downarrow0\ \text{(in that order)},
\end{align*}
where we used Proposition 2 in the last step. Hence we conclude that
\[
\liminf_{x\downarrow0}\liminf_{u\uparrow\hat u}
\bigg[\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}-q
-\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\ge0,
\tag{3.5}
\]
which in particular (again using Proposition 2) implies that
\begin{align*}
&\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\bigg[q+\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\quad\le\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\bigg[q+\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}
-\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}\bigg]\\
&\qquad+\limsup_{u\uparrow\hat u}\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}
<\infty.
\tag{3.6}
\end{align*}
Using (1.5), (3.4) and (3.6) we readily deduce that
\begin{align*}
&\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\frac{q}{E\{L(u)\}}P\Big\{L(u)/q\le x,\ \max_{0<aqk\le1}\xi(akq)>u\Big\}\\
&\quad\le\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\frac{q}{E\{L(u)\}}\sum_{k=1}^{[1/(aq)]}P\{L(akq,u)/q\le x,\ \xi(akq)>u\}\\
&\quad\le\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\frac1{a\,E\{L(u)\}}\sum_{k=1}^{[1/(aq)]}\int_{a(k-1)q}^{akq}P\{L(t,u)/q\le x,\ \xi(t)>u\}\,dt\\
&\quad\le\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\frac1{a\,E\{L(u)\}}\int_0^1 P\{L(t,u)/q\le x,\ \xi(t)>u\}\,dt\\
&\quad\le\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\frac{x}{a}
\bigg[\frac1x\int_0^x\frac{P\{L(u)/q>y\}}{E\{L(u)/q\}}\,dy-q
-\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\qquad+\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}\frac{x}{a}
\bigg[q+\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\quad\le0.
\tag{3.7}
\end{align*}
Further a calculation similar to (3.3)–(3.4) combines with (3.6) to reveal that
\begin{align*}
&\limsup_{u\uparrow\hat u}P\{L(u)/q\le x\,|\,\xi(1)>u\}\\
&\quad=\limsup_{u\uparrow\hat u}P\Big\{\int_0^{1/q}I_{(0,\infty)}\Big(\frac{\xi(1-qt)-u}{w}\Big)\,dt\le x\;\Big|\;\xi(1)>u\Big\}\\
&\quad\le\limsup_{u\uparrow\hat u}P\Big\{\int_0^{d\wedge(1/q)}I_{(\varepsilon,\infty)}\Big(\frac{\xi(1)-u}{w}-\frac{qt\,\zeta'}{w}\Big)\,dt\le(1+\gamma)x\;\Big|\;\xi(1)>u\Big\}\\
&\qquad+\limsup_{u\uparrow\hat u}P\Big\{\int_0^d I_{(\varepsilon,\infty)}\Big(\Big|\frac{\xi(1-qt)-u}{w}-\frac{\xi(1)-u}{w}+\frac{qt\,\zeta'}{w}\Big|\Big)\,dt>\gamma x\;\Big|\;\xi(1)>u\Big\}\\
&\quad\le\limsup_{u\uparrow\hat u}P\Big\{\frac{\xi(1)-u}{w}-\frac{q(1+\gamma)x\,\zeta'}{w}\le\varepsilon\;\Big|\;\xi(1)>u\Big\}+F(\varepsilon)\\
&\qquad+\limsup_{u\uparrow\hat u}\frac1{\gamma x}\int_0^d P\Big\{\Big|\frac{\xi(1-qt)-u}{w}-\frac{\xi(1)-u}{w}+\frac{qt\,\zeta'}{w}\Big|\ge\varepsilon\;\Big|\;\xi(1)>u\Big\}\,dt\\
&\quad\le\limsup_{u\uparrow\hat u}P\Big\{\frac{q\,\zeta'}{w}\ge\frac1{(1+\gamma)^2x}\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}+F(\varepsilon(1+\gamma))+F(\varepsilon)\\
&\quad\to0\qquad\text{as }\varepsilon\downarrow0,\ \gamma\downarrow0\ \text{and }x\downarrow0\ \text{(in that order)}.
\tag{3.8}
\end{align*}
In view of the obvious fact that
\begin{align*}
P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}
&\le\frac1x\int_0^x P\bigg(\{L(u)/q>y\}\cup\Big\{\max_{0<aqk\le1}\xi(akq)>u\Big\}
\cup\Big\{\sup_{t\in[0,1]}\xi(t)>u,\ \max_{0<aqk\le1}\xi(akq)\le u\Big\}\bigg)\,dy\\
&\le\frac1x\int_0^x P\{L(u)/q>y\}\,dy
+P\Big\{L(u)/q\le x,\ \max_{0<aqk\le1}\xi(akq)>u\Big\}
+P\Big\{\sup_{t\in[0,1]}\xi(t)>u,\ \max_{0<aqk\le1}\xi(akq)\le u\Big\},
\end{align*}
Assumption 3 combines with (3.4) and (3.7)–(3.8) to show that
\begin{align*}
&\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}
\bigg[\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}-q
-\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\quad\le\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}
\bigg[\frac1x\int_0^x\frac{P\{L(u)/q>y\}}{E\{L(u)/q\}}\,dy-q
-\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\qquad+\limsup_{a\downarrow0}\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}
\frac{q}{E\{L(u)\}}P\Big\{L(u)/q\le x,\ \max_{0<aqk\le1}\xi(akq)>u\Big\}\\
&\qquad+\limsup_{x\downarrow0}\limsup_{u\uparrow\hat u}q\,P\{L(u)/q\le x\,|\,\xi(1)>u\}\\
&\qquad+\limsup_{a\downarrow0}\limsup_{u\uparrow\hat u}
\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u,\ \max_{0<aqk\le1}\xi(akq)\le u\Big\}\\
&\quad\le0.
\tag{3.9}
\end{align*}
The theorem now follows from inequalities (3.5) – (3.6) and (3.9).
In Theorem 2 the absence of Assumption 2 disallows use of Proposition 1. However,
if we do make Assumption 2, then a stronger limit theorem can be proved:
Corollary 1. Suppose that (1.1) and (1.3) hold. If in addition Assumptions 1 and 2
hold, and $\xi(t)$ is P-continuous, then the conclusion of Theorem 2 holds with
\[
\liminf_{x\downarrow0}\liminf_{u\uparrow\hat u}
\bigg[q(u)+\frac1x P\Big\{\frac{q(u)\,\zeta'}{w(u)}\ge\frac1x\,\frac{\xi(1)-u}{w(u)}\;\Big|\;\xi(1)>u\Big\}\bigg]>0.
\]
Proof. By Theorem 1, (0.6) implies Assumption 3. Moreover (0.5) implies that (3.1)
holds for some function $D(u)\to Q^{-1}$ (sufficiently slowly) as $u\uparrow\hat u$. Since Assumption
2 implies that (3.2) holds for any function $D(u)\to Q^{-1}$, it follows that the hypothesis
of Theorem 2 holds. Further (3.9) and Proposition 1 show that
\begin{align*}
&\liminf_{x\downarrow0}\liminf_{u\uparrow\hat u}
\bigg[q+\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}\bigg]\\
&\quad\ge\liminf_{x\downarrow0}\liminf_{u\uparrow\hat u}
\bigg[q+\frac1x P\Big\{\frac{q\,\zeta'}{w}\ge\frac1x\,\frac{\xi(1)-u}{w}\;\Big|\;\xi(1)>u\Big\}
-\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}\bigg]\\
&\qquad+\liminf_{u\uparrow\hat u}\frac{q}{E\{L(u)\}}P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}>0.
\end{align*}
Example 1. Let $\{\xi(t)\}_{t\in\mathbb R}$ be a stationary zero-mean and unit-variance Gaussian process, so that (1.1) holds with $w(u)=(1\vee u)^{-1}$ and $F(x)=1-e^{-x}$. Assume that the
covariance function $r(t)\equiv E\{\xi(s)\xi(s+t)\}<1$ for $t\in(0,1]$ and satisfies
\[
r(t)=1+\tfrac12 r''(0)\,t^2+t^2\,o(|{\ln}|t||^{-1})\qquad\text{as }t\to0.
\tag{3.10}
\]
Put $q(u)=(1\vee u)^{-1}$, so that (1.3) holds. We have $C_1t^2\le1-r(t)\le C_2t^2$ for $|t|\le1$,
for some constants $C_1,C_2>0$. Since $1+r(t)\le2$, it follows readily that
\begin{align*}
P\{\xi(1-qt)>u\,|\,\xi(1)>u\}
&\le2\,P\{\xi(1-qt)\ge\xi(1)>u\,|\,\xi(1)>u\}\\
&\le2\,P\{\xi(1-qt)-r(qt)\xi(1)\ge(1-r(qt))u\}\\
&\le2\,P\{N(0,1)\ge\sqrt{C_1/2}\,qtu\}\qquad\text{for }|qt|\le1,
\end{align*}
so that Assumption 2 holds. Clearly $\xi(1+t)-r(t)\xi(1)-t\,\xi'(1)$ is independent of
$\xi(1)$, and has variance $1-r(t)^2-t^2r''(0)+2t\,r'(t)=t^2\,o(|{\ln}|t||^{-1})$ by (3.10). Taking
$\zeta'=\bar\zeta'=\xi'(1)$ and $\zeta''=0$ in Assumption 1, (0.4) holds trivially while the fact that
(0.5) and (0.6) hold follows from the estimates
\begin{align*}
&P\bigg\{\bigg|\frac{\xi(1-qt)-\xi(1)+qt\,\xi'(1)}{wt}\bigg|\ge\varepsilon\;\bigg|\;\xi(1)>u\bigg\}\\
&\quad\le P\bigg\{\bigg|\frac{\xi(1-qt)-r(qt)\xi(1)+qt\,\xi'(1)}{wt}\bigg|\ge\frac\varepsilon2\bigg\}\frac1{P\{\xi(1)>u\}}
+P\bigg\{\frac{[1-r(qt)]\xi(1)}{wt}\ge\frac\varepsilon2\bigg\}\frac1{P\{\xi(1)>u\}}\\
&\quad\le P\bigg\{|N(0,1)|\ge\frac{\varepsilon}{2\,o(|{\ln}|qt||^{-1/2})}\bigg\}\frac1{P\{\xi(1)>u\}}
+P\bigg\{|N(0,1)|\ge\frac{\varepsilon}{2\,C_2qt}\bigg\}\frac1{P\{\xi(1)>u\}}.
\end{align*}
Let $X$ be a unit-mean exponential random variable that is independent of $\xi'(1)$. Since
$\xi(1)$ and $\xi'(1)$ are independent, and $(u(\xi(1)-u)\,|\,\xi(1)>u)\to_d X$ by (1.1), we have
\begin{align*}
\lim_{u\to\infty}\frac1x P\bigg\{\xi'(1)\ge\frac{\xi(1)-u}{x\,q(u)}\;\bigg|\;\xi(1)>u\bigg\}
&=\frac1x P\{\xi'(1)\ge X/x\}\\
&=\frac1x\int_0^\infty e^{-y}P\{\xi'(1)\ge y/x\}\,dy\\
&=\int_0^\infty e^{-yx}P\{\xi'(1)\ge y\}\,dy\\
&\to E\{[\xi'(1)]^+\}=\sqrt{-r''(0)/(2\pi)}\qquad\text{as }x\downarrow0.
\end{align*}
Adding things up we therefore recover the well-known fact that
\[
P\Big\{\sup_{t\in[0,1]}\xi(t)>u\Big\}
\sim\sqrt{\frac{-r''(0)}{2\pi}}\,\frac{P\{N(0,1)>u\}}{1/u}
\sim\frac{\sqrt{-r''(0)}}{2\pi}\,e^{-u^2/2}\qquad\text{as }u\to\infty.
\]
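The constant appearing in the last limit, $E\{[\xi'(1)]^+\}$ for a centred Gaussian derivative with variance $-r''(0)$, equals $\sqrt{-r''(0)}/\sqrt{2\pi}$, and is easy to confirm numerically. A minimal Monte Carlo sketch (the value chosen for $\sqrt{-r''(0)}$ is an arbitrary illustration, not from the paper):

```python
import numpy as np

# E[Z^+] for Z ~ N(0, sigma^2) equals sigma / sqrt(2*pi); this is the
# constant E{[xi'(1)]^+} computed at the end of Example 1.
sigma = 1.3                        # stands in for sqrt(-r''(0)); hypothetical value
rng = np.random.default_rng(1)
z = sigma * rng.standard_normal(1_000_000)
estimate = np.maximum(z, 0.0).mean()
exact = sigma / np.sqrt(2.0 * np.pi)
print(estimate, exact)             # the two agree closely
```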
4. Rice's formula for P-differentiable stationary processes
In Theorem 3 below we derive two expressions for the upcrossing intensity $\mu(u)$ of the
process $\{\xi(t)\}_{t\in\mathbb R}$ in terms of the joint distribution of $\xi(1)$ and its stochastic derivative
$\xi'(1)$. Our result is a useful complement and alternative to existing results in the
literature.
Theorem 3. Let $\{\xi(t)\}_{t\in\mathbb R}$ be a stationary process with a.s. continuous sample paths.
Choose an open set $U\subseteq\mathbb R$ and assume that
\[
\xi(1)\ \text{has a density function }f_{\xi(1)}\ \text{that is essentially bounded on }U.
\]
(i) Assume that there exist a random variable $\xi'(1)$ and a constant $\varrho>1$ such that
\[
\lim_{s\downarrow0}\operatorname*{ess\,sup}_{y\in U}
E\bigg\{\bigg|\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}\bigg|^{\varrho}\;\bigg|\;\xi(1)=y\bigg\}f_{\xi(1)}(y)=0,
\tag{4.1}
\]
and such that
\[
\lim_{s\downarrow0}E\bigg\{\bigg|\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}\bigg|\bigg\}=0.
\tag{4.2}
\]
Then the upcrossing intensity $\mu(u)$ of a level $u\in U$ by $\xi(t)$ is given by
\[
\mu(u)=\lim_{s\downarrow0}s^{-1}P\{\xi(1)-s\,\xi'(1)<u<\xi(1)\}.
\tag{4.3}
\]
(ii) Assume that there exist a random variable $\xi'(1)$ and a constant $\varrho>1$ such that
\[
\lim_{s\downarrow0}\operatorname*{ess\,sup}_{y\ge u}
E\bigg\{\bigg|\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}\bigg|^{\varrho}\;\bigg|\;\xi(1)=y\bigg\}f_{\xi(1)}(y)=0
\qquad\text{for }u\in U.
\tag{4.4}
\]
Then the upcrossing intensity $\mu(u)$ for levels $u\in U$ is given by (4.3).
(iii) Assume that (4.3) holds and that $E\{[\xi'(1)]^+\}<\infty$. Then we have
\[
\lim_{\delta\downarrow0}\int_0^\infty\operatorname*{ess\,inf}_{u<y<u+\delta}P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx
\;\le\;\mu(u)\;\le\;
\lim_{\delta\downarrow0}\int_0^\infty\operatorname*{ess\,sup}_{u<y<u+\delta}P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx
\qquad\text{for }u\in U.
\]
(iv) Assume that (4.3) holds, that there exists a constant $\varrho>1$ such that
\[
\operatorname*{ess\,sup}_{y\in U}E\{([\xi'(1)]^+)^{\varrho}\,|\,\xi(1)=y\}f_{\xi(1)}(y)<\infty,
\tag{4.5}
\]
and that $E\{[\xi'(1)]^+\}<\infty$. Then $\mu(u)$ for levels $u\in U$ is finite and satisfies
\[
\int_0^\infty\lim_{\delta\downarrow0}\operatorname*{ess\,inf}_{u<y<u+\delta}P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx
\;\le\;\mu(u)\;\le\;
\int_0^\infty\lim_{\delta\downarrow0}\operatorname*{ess\,sup}_{u<y<u+\delta}P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx.
\tag{4.6}
\]
(v) Assume that (4.3) holds and that there exists a constant $\varrho>1$ such that
\[
\operatorname*{ess\,sup}_{y\ge u}E\{([\xi'(1)]^+)^{\varrho}\,|\,\xi(1)=y\}f_{\xi(1)}(y)<\infty
\qquad\text{for }u\in U.
\tag{4.7}
\]
Then $\mu(u)$ for levels $u\in U$ is finite and satisfies (4.6).
(vi) Assume that (4.6) holds and that $(\xi(1),\xi'(1))$ has a density such that
\[
\int_x^\infty f_{\xi(1),\xi'(1)}(u,y)\,dy\ \text{is a continuous function of }u\in U\ \text{for almost all }x>0.
\]
Then $\mu(u)$ for levels $u\in U$ is given by Rice's formula (0.1).
(vii) Assume that (4.6) holds, that
\[
\xi(1)\ \text{has a density that is continuous and bounded away from zero on }U,
\]
and that $(\xi(1),\xi'(1))$ has a density $f_{\xi(1),\xi'(1)}(u,y)$ that is a continuous function of $u\in U$
for each $y>0$. Then $\mu(u)$ for levels $u\in U$ is given by Rice's formula (0.1).
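To see what the limit (4.3) says concretely, one can evaluate the right-hand side by simulation for a process where the joint law of $(\xi(1),\xi'(1))$ is explicit. A minimal sketch for the cosine process $\xi(t)=A\cos\lambda t+B\sin\lambda t$ with $A,B$ i.i.d. $N(0,1)$, for which $\xi(1)\sim N(0,1)$ and $\xi'(1)\sim N(0,\lambda^2)$ are independent and Rice's formula gives $\mu(u)=(\lambda/2\pi)e^{-u^2/2}$; the parameter values are illustrative only:

```python
import numpy as np

# (4.3): mu(u) = lim_{s -> 0} s^{-1} P{ xi(1) - s*xi'(1) < u < xi(1) }.
# For this Gaussian example, mu(u) = (lam / (2*pi)) * exp(-u^2 / 2).
lam, u, s = 2.0, 1.0, 0.01           # illustrative parameters
rng = np.random.default_rng(2)
n = 2_000_000
xi = rng.standard_normal(n)          # xi(1)  ~ N(0, 1)
xip = lam * rng.standard_normal(n)   # xi'(1) ~ N(0, lam^2), independent of xi(1)
p = np.mean((xi > u) & (xi - s * xip < u))
estimate = p / s
rice = lam / (2.0 * np.pi) * np.exp(-u * u / 2.0)
print(estimate, rice)                # close for small s
```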
Proof. (i) Using (0.7) together with (4.1) and (4.2) we obtain
\begin{align*}
\mu(u)&=\liminf_{s\downarrow0}s^{-1}P\{\xi(1-s)<u<\xi(1)\}\\
&\le\liminf_{s\downarrow0}\frac1s P\Big\{\xi'(1)>(1-\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
+\limsup_{s\downarrow0}s^{-1}P\{u<\xi(1)\le u+\varepsilon s\}\\
&\qquad+\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)\le(1-\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1-s)<u,\ u+\varepsilon s<\xi(1)\le u+\delta\Big\}\\
&\qquad+\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)\le(1-\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1-s)<u,\ \xi(1)>u+\delta\Big\}\\
&\le\frac1{1-\varepsilon}\liminf_{s\downarrow0}\frac1s P\Big\{\xi'(1)>\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
+\varepsilon\limsup_{s\downarrow0}\operatorname*{ess\,sup}_{u<y<u+\varepsilon s}f_{\xi(1)}(y)\\
&\qquad+\limsup_{s\downarrow0}\int_{\varepsilon}^{\delta/s}
P\Big\{\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}<-\varepsilon x\;\Big|\;\xi(1)=u+xs\Big\}f_{\xi(1)}(u+xs)\,dx
+\limsup_{s\downarrow0}\frac1s P\Big\{\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}<-\frac\delta s\Big\}\\
&\le\frac1{1-\varepsilon}\liminf_{s\downarrow0}\frac1s P\Big\{\xi'(1)>\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
+\varepsilon\limsup_{s\downarrow0}\operatorname*{ess\,sup}_{u<y<u+\varepsilon s}f_{\xi(1)}(y)\\
&\qquad+\limsup_{s\downarrow0}\operatorname*{ess\,sup}_{u<y<u+\delta}
E\bigg\{\bigg|\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}\bigg|^{\varrho}\;\bigg|\;\xi(1)=y\bigg\}f_{\xi(1)}(y)
\int_{\varepsilon}^{\delta/s}\frac{dx}{(\varepsilon x)^{\varrho}}
+\limsup_{s\downarrow0}\frac1\delta E\bigg\{\bigg|\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}\bigg|\bigg\}\\
&\to\liminf_{s\downarrow0}\frac1s P\Big\{\xi'(1)>\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
\qquad\text{as }\varepsilon\downarrow0\ \text{and }\delta\downarrow0,\ \text{for }\delta>0\ \text{small}.
\tag{4.8}
\end{align*}
The formula (4.3) follows using this estimate together with its converse
\begin{align*}
\mu(u)&\ge\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)>(1+\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
-\limsup_{s\downarrow0}s^{-1}P\{u<\xi(1)\le u+\varepsilon s\}\\
&\qquad-\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)>(1+\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1-s)>u,\ u+\varepsilon s<\xi(1)\le u+\delta\Big\}\\
&\qquad-\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)>(1+\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1-s)>u,\ \xi(1)>u+\delta\Big\}\\
&\ge\frac1{1+\varepsilon}\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)>\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
-\varepsilon\limsup_{s\downarrow0}\operatorname*{ess\,sup}_{u<y<u+\varepsilon s}f_{\xi(1)}(y)\\
&\qquad-\limsup_{s\downarrow0}\int_{\varepsilon}^{\delta/s}
P\Big\{\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}>\varepsilon x\;\Big|\;\xi(1)=u+xs\Big\}f_{\xi(1)}(u+xs)\,dx
-\limsup_{s\downarrow0}\frac1s P\Big\{\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}>\frac\delta s\Big\}\\
&\to\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)>\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}
\qquad\text{as }\varepsilon\downarrow0\ \text{and }\delta\downarrow0,\ \text{for }\delta>0\ \text{small}.
\tag{4.9}
\end{align*}
(ii) When (4.4) holds we do not need (4.1) and (4.2) in (4.8), because the estimates
that used (4.1) and (4.2) can be replaced with the estimate
\begin{align*}
&\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)\le(1-\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1-s)<u,\ \xi(1)>u+\varepsilon s\Big\}\\
&\quad\le\limsup_{s\downarrow0}\int_{\varepsilon}^{\infty}
P\Big\{\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}<-\varepsilon x\;\Big|\;\xi(1)=u+xs\Big\}f_{\xi(1)}(u+xs)\,dx\\
&\quad\le\limsup_{s\downarrow0}\operatorname*{ess\,sup}_{y\ge u}
E\bigg\{\bigg|\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}\bigg|^{\varrho}\;\bigg|\;\xi(1)=y\bigg\}f_{\xi(1)}(y)
\int_{\varepsilon}^{\infty}\frac{dx}{(\varepsilon x)^{\varrho}}=0.
\end{align*}
In (4.9) we make similar use of the fact that [under (4.4)]
\begin{align*}
&\limsup_{s\downarrow0}\frac1s P\Big\{\xi'(1)>(1+\varepsilon)\frac{\xi(1)-u}{s},\ \xi(1-s)>u,\ \xi(1)>u+\varepsilon s\Big\}\\
&\quad\le\limsup_{s\downarrow0}\int_{\varepsilon}^{\infty}
P\Big\{\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}>\varepsilon x\;\Big|\;\xi(1)=u+xs\Big\}f_{\xi(1)}(u+xs)\,dx=0.
\end{align*}
(iii) Since $E\{[\xi'(1)]^+\}<\infty$ there exists a sequence $\{s_n\}_{n=1}^{\infty}$ such that
\[
P\{\xi'(1)>[s_n\sqrt{|\ln(s_n)|}\,]^{-1}\}<s_n/\sqrt{|\ln(s_n)|}\quad\text{for }n\ge1,
\qquad\text{and}\quad s_n\downarrow0\ \text{as }n\to\infty.
\]
[This is an immediate consequence of the elementary fact that we must have
$\liminf_{x\to\infty}x\ln(x)\,P\{\xi'(1)>x\}=0$.] Using (4.3) and (4.5) we therefore obtain
\begin{align*}
\mu(u)&=\liminf_{s\downarrow0}\frac1s P\Big\{\xi'(1)>\frac{\xi(1)-u}{s},\ \xi(1)>u\Big\}\\
&\le\limsup_{n\to\infty}\int_0^{[s_n\sqrt{|\ln(s_n)|}\,]^{-1}}
P\{\xi'(1)>x\,|\,\xi(1)=u+xs_n\}f_{\xi(1)}(u+xs_n)\,dx
+\limsup_{n\to\infty}s_n^{-1}P\{\xi'(1)>[s_n\sqrt{|\ln(s_n)|}\,]^{-1}\}\\
&\le\int_0^{\infty}\operatorname*{ess\,sup}_{u<y<u+\delta}P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx
+\limsup_{n\to\infty}1/\sqrt{|\ln(s_n)|}
\qquad\text{for each }\delta>0.
\tag{4.10}
\end{align*}
On the other hand (4.3) alone yields
\begin{align*}
\mu(u)&\ge\lim_{\lambda\to\infty}\limsup_{s\downarrow0}\int_0^{\lambda}
P\{\xi'(1)>x\,|\,\xi(1)=u+xs\}f_{\xi(1)}(u+xs)\,dx\\
&\ge\lim_{\lambda\to\infty}\int_0^{\lambda}\operatorname*{ess\,inf}_{u<y<u+\delta}
P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx
\qquad\text{for each }\delta>0.
\end{align*}
(iv) When (4.5) holds the estimate (4.10) can be sharpened to
\begin{align*}
\mu(u)&\le\limsup_{n\to\infty}\int_0^{\lambda}
P\{\xi'(1)>x\,|\,\xi(1)=u+xs_n\}f_{\xi(1)}(u+xs_n)\,dx\\
&\qquad+\limsup_{n\to\infty}\int_{\lambda}^{[s_n\sqrt{|\ln(s_n)|}\,]^{-1}}
\frac{E\{([\xi'(1)]^+)^{\varrho}\,|\,\xi(1)=u+xs_n\}}{x^{\varrho}}f_{\xi(1)}(u+xs_n)\,dx
+\limsup_{n\to\infty}s_n^{-1}P\{\xi'(1)>[s_n\sqrt{|\ln(s_n)|}\,]^{-1}\}\\
&\le\int_0^{\lambda}\lim_{\delta\downarrow0}\operatorname*{ess\,sup}_{u<y<u+\delta}
P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx
+\limsup_{n\to\infty}\operatorname*{ess\,sup}_{y\in U}
E\{([\xi'(1)]^+)^{\varrho}\,|\,\xi(1)=y\}f_{\xi(1)}(y)
\int_{\lambda}^{[s_n\sqrt{|\ln(s_n)|}\,]^{-1}}\frac{dx}{x^{\varrho}}
+\limsup_{n\to\infty}1/\sqrt{|\ln(s_n)|}\\
&\to\int_0^{\infty}\lim_{\delta\downarrow0}\operatorname*{ess\,sup}_{u<y<u+\delta}
P\{\xi'(1)>x\,|\,\xi(1)=y\}f_{\xi(1)}(y)\,dx<\infty
\qquad\text{as }\lambda\to\infty.
\tag{4.11}
\end{align*}
(v) When (4.7) holds, one can replace the estimates of $s_n^{-1}P\{\xi'(1)>[s_n\sqrt{|\ln(s_n)|}\,]^{-1}\}$
in (4.11) [that use $E\{[\xi'(1)]^+\}<\infty$] with the estimates
\begin{align*}
&\limsup_{s\downarrow0}\int_{[s\sqrt{|\ln(s)|}\,]^{-1}}^{\infty}
P\{\xi'(1)>x\,|\,\xi(1)=u+xs\}f_{\xi(1)}(u+xs)\,dx\\
&\quad\le\limsup_{s\downarrow0}\operatorname*{ess\,sup}_{y\ge u}
E\{([\xi'(1)]^+)^{\varrho}\,|\,\xi(1)=y\}f_{\xi(1)}(y)
\int_{[s\sqrt{|\ln(s)|}\,]^{-1}}^{\infty}\frac{dx}{x^{\varrho}}=0.
\end{align*}
(vi) and (vii) Statement (vi) follows from (4.6). The theorem of Scheffé (1947) shows
that the hypothesis of (vi) holds when that of (vii) does.
Example 2. Let $\{\xi(t)\}_{t\in\mathbb R}$ be a mean-square differentiable standardized and stationary
Gaussian process with covariance function $r$ (cf. Example 1). Then we have
\[
E\bigg\{\bigg|\frac{\xi(1-s)-\xi(1)+s\,\xi'(1)}{s}\bigg|^{\varrho}\;\bigg|\;\xi(1)=y\bigg\}f_{\xi(1)}(y)
\le2^{\varrho}\bigg[E\bigg\{\bigg|\frac{\xi(1-s)-r(s)\xi(1)+s\,\xi'(1)}{s}\bigg|^{\varrho}\bigg\}
+\bigg|\frac{1-r(s)}{s}\bigg|^{\varrho}|y|^{\varrho}\bigg]f_{\xi(1)}(y),
\]
so that (4.4) holds. Since (4.7) holds trivially, Theorem 3 [(ii), (v) and (vii)] shows
that Rice's formula holds when $\xi'(1)$ is non-degenerate.
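For a concrete check of this conclusion, Rice's formula for a standardized stationary Gaussian process gives the upcrossing intensity $\mu(u)=(\sqrt{-r''(0)}/2\pi)\,e^{-u^2/2}$, which can be compared with upcrossings counted on simulated paths. A minimal sketch using the exactly simulable cosine process $\xi(t)=A\cos\lambda t+B\sin\lambda t$, for which $r(t)=\cos\lambda t$ and hence $\sqrt{-r''(0)}=\lambda$ (all parameter values are illustrative):

```python
import numpy as np

# Count grid upcrossings of level u over [0, 1] and compare with Rice's
# formula mu(u) = (lam / (2*pi)) * exp(-u^2 / 2) for r(t) = cos(lam*t).
lam, u = 6.0, 1.0
paths, steps = 20_000, 256
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, steps + 1)
A = rng.standard_normal(paths)
B = rng.standard_normal(paths)
X = A[:, None] * np.cos(lam * t) + B[:, None] * np.sin(lam * t)
upcrossings = ((X[:, :-1] <= u) & (X[:, 1:] > u)).sum(axis=1)
empirical = upcrossings.mean()
rice = lam / (2.0 * np.pi) * np.exp(-u * u / 2.0)
print(empirical, rice)   # mean upcrossings per unit time vs Rice's formula
```

A fine grid slightly undercounts crossings, so the empirical mean sits marginally below the Rice value.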
5. Extremes of Gaussian processes in $\mathbb R^n$ with strongly dependent components
Let $\{X(t)\}_{t\in\mathbb R}$ be a separable $n\times1$-matrix-valued centred stationary and two times
mean-square differentiable Gaussian process with covariance function
\[
\mathbb R\ni t\mapsto R(t)=E\{X(s)X(s+t)^{\mathrm T}\}
=I+r't+\tfrac12 r''t^2+\tfrac16 r'''t^3+\tfrac1{24}r^{\mathrm{(iv)}}t^4+o(t^4)\in\mathbb R^{n\times n}
\tag{5.1}
\]
as $t\to0$ (where $I$ is the identity in $\mathbb R^{n\times n}$). It is no loss to assume that $R(0)$ is diagonal,
since this can be achieved by a rotation that does not affect the statement of Theorem 4
below. Further, writing $P$ for the projection on the subspace of $\mathbb R^{n\times1}$ spanned by eigenvectors of $R(0)$ with maximal eigenvalue, the probability that a local extremum of
$X(t)$ is not generated by $PX(t)$ is asymptotically negligible (e.g., Albin 1992, Section
5). Hence it is sufficient to study $PX(t)$, and assuming (without loss, by a scaling
argument) unit maximal eigenvalue, we arrive at $R(0)=I$.
To ensure that $\{X(t)\}_{t\in[0,1]}$ does not have periodic components we require that
\[
E\{X(t)X(t)^{\mathrm T}\,|\,X(0)\}=I-R(t)R(t)^{\mathrm T}\quad\text{is non-singular}.
\]
Writing $S_n\equiv\{z\in\mathbb R^{n\times1}:z^{\mathrm T}z=1\}$, this requirement becomes
\[
\inf_{x\in S_n}x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x>0
\qquad\text{for each choice of }t\in[-1,1].
\tag{5.2}
\]
The behaviour of local extremes of the $\chi^2$-process $\xi(t)\equiv X(t)^{\mathrm T}X(t)$ will then depend
on the behaviour of $I-R(t)R(t)^{\mathrm T}$ as $t\to0$.
Sharpe (1978) and Lindgren (1980) studied extremes of $\chi^2$-processes with independent component processes. Lindgren (1989) and Albin (1992, Section 5) gave extensions to the general "non-degenerate" case when (5.1) and (5.2) hold and
\[
\liminf_{t\to0}\,t^{-2}\inf_{x\in S_n}x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x>0.
\tag{5.3}
\]
Using (5.1) in an easy calculation, (5.3) is seen to be equivalent with the requirement
that the covariance matrix $r'r'-r''$ of $(X'(1)\,|\,X(1))$ is non-singular.
We shall treat $\chi^2$-processes which do not have to be "non-degenerate" in the sense of
(5.3). More specifically, we study local extremes of the process $\{\xi(t)\}_{t\in[0,1]}$, satisfying
(5.1) and (5.2), under the very weak additional assumption that
\[
\liminf_{t\to0}\,t^{-4}\inf_{x\in S_n}x^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]x>0.
\tag{5.4}
\]
Now let $\sigma_n(\cdot)$ denote the $(n-1)$-dimensional Hausdorff measure over $\mathbb R^{n\times1}$.
Theorem 4. Consider a centred and stationary Gaussian process $\{X(t)\}_{t\in\mathbb R}$ in $\mathbb R^{n\times1}$,
$n\ge2$, satisfying (5.1), (5.2) and (5.4). Then we have
\[
\lim_{u\to\infty}\frac1{\sqrt u\,P\{X(1)^{\mathrm T}X(1)>u\}}
P\Big\{\sup_{t\in[0,1]}X(t)^{\mathrm T}X(t)>u\Big\}
=\int_{z\in S_n}\frac1{\sqrt{2\pi}}\sqrt{z^{\mathrm T}[r'r'-r'']z}\ \frac{d\sigma_n(z)}{\sigma_n(S_n)}.
\tag{5.5}
\]
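The right-hand side of (5.5) is a spherical average of $\sqrt{z^{\mathrm T}[r'r'-r'']z}$ divided by $\sqrt{2\pi}$, and can be evaluated by Monte Carlo when no closed form is available. A minimal sketch, where the matrix `M` plays the role of $r'r'-r''$ and the test matrices are hypothetical examples:

```python
import numpy as np

def sphere_average_constant(M, trials=200_000, seed=0):
    """Monte Carlo estimate of the right-hand side of (5.5): the average of
    sqrt(z^T M z) over the unit sphere S_n, divided by sqrt(2*pi). Uniform
    points on the sphere are obtained by normalizing Gaussian vectors."""
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((trials, n))
    z = g / np.linalg.norm(g, axis=1, keepdims=True)
    vals = np.sqrt(np.einsum("ij,jk,ik->i", z, M, z))
    return vals.mean() / np.sqrt(2.0 * np.pi)

# Sanity check: for M = I every z on the sphere gives z^T M z = 1,
# so the constant is exactly 1/sqrt(2*pi).
c = sphere_average_constant(np.eye(3))
print(c, 1.0 / np.sqrt(2.0 * np.pi))
```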
Proof (by application of Theorem 2). The density function $g$ and distribution function
$G$ of the random variable $\xi(t)\equiv X(t)^{\mathrm T}X(t)$ satisfy
\[
g(x)=2^{-n/2}\Gamma\Big(\frac n2\Big)^{-1}x^{(n-2)/2}e^{-x/2}\quad\text{for }x>0,
\qquad
1-G(u)\sim2^{-(n-2)/2}\Gamma\Big(\frac n2\Big)^{-1}u^{(n-2)/2}e^{-u/2}\quad\text{as }u\to\infty.
\tag{5.6}
\]
Consequently (1.1) holds with $w(u)=2$ and $F(x)=1-e^{-x}$.
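The normalization $w(u)=2$ in (1.1) says that $P\{\xi(1)>u+2x\,|\,\xi(1)>u\}\to e^{-x}$. For even $n$ the $\chi^2_n$ survival function has the closed Erlang form $1-G(u)=e^{-u/2}\sum_{k<n/2}(u/2)^k/k!$, which makes this convergence easy to check numerically; a small sketch:

```python
import math

def chi2_sf_even(u, n):
    """Survival function of a chi-square variable with an even number n of
    degrees of freedom (an Erlang(n/2, 1/2) tail):
    exp(-u/2) * sum_{k < n/2} (u/2)^k / k!."""
    return math.exp(-u / 2.0) * sum((u / 2.0) ** k / math.factorial(k)
                                    for k in range(n // 2))

# P{xi > u + 2x | xi > u} -> exp(-x) = 1 - F(x) as u -> infinity, with w(u) = 2.
n, x = 4, 1.0
for u in (10.0, 50.0, 200.0):
    ratio = chi2_sf_even(u + 2.0 * x, n) / chi2_sf_even(u, n)
    print(u, ratio, math.exp(-x))
```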
To verify Assumption 1 we take $\zeta'=\bar\zeta'=2X(1)^{\mathrm T}X'(1)$ and $\zeta''=4X(1)^{\mathrm T}(r''-r'r')X(1)$. Since $r'$ is skew-symmetric, so that $X(1)^{\mathrm T}r'X(1)=0$, we then obtain
\begin{align*}
\xi(1-t)-\xi(1)+t\,\zeta'
&=\|X(1-t)-R(t)X(1)\|^2\\
&\qquad+2X(1)^{\mathrm T}R(t)^{\mathrm T}[X(1-t)-R(t)X(1)+tX'(1)+tr'X(1)]\\
&\qquad+2X(1)^{\mathrm T}[I-R(t)^{\mathrm T}][tX'(1)+tr'X(1)]\\
&\qquad-2X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1).
\tag{5.7}
\end{align*}
Here the fact that $E\{X(1-t)X'(1)^{\mathrm T}\}=R'(t)=-R'(-t)^{\mathrm T}$ implies that
\[
X(1-t)-R(t)X(1)\quad\text{and}\quad X'(1)+r'X(1)
\qquad\text{are independent of }X(1).
\tag{5.8}
\]
In the notation $\operatorname{Var}\{X\}\equiv E\{XX^{\mathrm T}\}$, (5.1) implies that
\[
\operatorname{Var}\{X(1-t)-R(t)X(1)\}=(r'r'-r'')t^2+o(t^2),
\qquad
\operatorname{Var}\{[I-R(t)^{\mathrm T}][tX'(1)+tr'X(1)]\}=(r')^{\mathrm T}(r'r'-r'')r'\,t^4+o(t^4).
\tag{5.9}
\]
Using (5.1) together with the elementary fact that $r^{\mathrm{(iii)}}$ is skew-symmetric we get
\begin{align*}
\operatorname{Var}\{R(t)^{\mathrm T}[X(1-t)-R(t)X(1)+tX'(1)+tr'X(1)]\}
&\le2\operatorname{Var}\{R(t)^{\mathrm T}[X(1-t)-X(1)+tX'(1)]\}
+2\operatorname{Var}\{R(t)^{\mathrm T}[(I-R(t))X(1)+tr'X(1)]\}\\
&=\tfrac14(r^{\mathrm{(iv)}}+r''r'')t^4+o(t^4).
\tag{5.10}
\end{align*}
Noting the easily established fact that $r''r'=r'r''$ we similarly obtain
\[
X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1)=X(1)^{\mathrm T}[(r'r'-r'')t^2+O(t^4)]X(1).
\tag{5.11}
\]
Putting $q(u)\equiv(1\vee u)^{-1/2}$ we have (1.3). Using (5.7)–(5.11) we further see that
\begin{align*}
&P\bigg\{\bigg|\frac{\xi(1-qt)-u}{wt}-\frac{\xi(1)-u}{wt}+\frac{qt\,\zeta'}{wt}\bigg|\ge\varepsilon\;\bigg|\;\xi(1)>u\bigg\}\\
&\quad\le\bigg[\sum_{i=1}^n P\bigg\{|(X(1-qt)-R(qt)X(1))_i|>\sqrt{\frac{\varepsilon t}{2n}}\bigg\}
+P\bigg\{\|X(1)\|>\sqrt{\frac{\varepsilon u}{4t}}\bigg\}\\
&\qquad\quad+\sum_{i=1}^n P\bigg\{|(R(qt)^{\mathrm T}[X(1-qt)-R(qt)X(1)+qtX'(1)+qtr'X(1)])_i|>\sqrt{\frac{\varepsilon t^3}{4nu}}\bigg\}\\
&\qquad\quad+\sum_{i=1}^n P\bigg\{|([I-R(qt)^{\mathrm T}][qtX'(1)+qtr'X(1)])_i|>\sqrt{\frac{\varepsilon t^3}{4nu}}\bigg\}
+P\bigg\{|X(1)^{\mathrm T}[I-R(qt)R(qt)^{\mathrm T}]X(1)|>\frac{\varepsilon t}4\bigg\}\bigg]\bigg/P\{\|X(1)\|^2>u\}\\
&\quad\le K_1\exp\{-K_2(u/t)\}
\le K_3|t|^2\qquad\text{for }u\ge u_0\ \text{and }|t|\le t_0,
\tag{5.12}
\end{align*}
for some constants $K_1,K_2,K_3,u_0,t_0>0$. Hence (0.6) holds with $C=K_3\wedge t_0^{-2}$. By
application of Theorem 1 it thus follows that Assumption 3 holds.
Choosing $D(u)\equiv(1\vee u)^{5/16}$ and using (5.7)–(5.11) as in (5.12) we obtain
\begin{align*}
&P\bigg\{\bigg|\frac{\xi(1-qt)-u}{wt}-\frac{\xi(1)-u}{wt}+\frac{qt\,\zeta'}{wt}-\frac{q^2t^2\,\zeta''}{2wt}\bigg|\ge\varepsilon\;\bigg|\;\xi(1)>u\bigg\}\\
&\quad\le\bigg[\sum_{i=1}^n P\bigg\{|(X(1-qt)-R(qt)X(1))_i|>\sqrt{\frac{\varepsilon t}{2n}}\bigg\}
+P\{\|X(1)\|>\sqrt{2u}\}\\
&\qquad\quad+\sum_{i=1}^n P\bigg\{|(R(qt)^{\mathrm T}[X(1-qt)-R(qt)X(1)+qtX'(1)+qtr'X(1)])_i|>\frac{\varepsilon t}{4\sqrt{2nu}}\bigg\}\\
&\qquad\quad+\sum_{i=1}^n P\bigg\{|([I-R(qt)^{\mathrm T}][qtX'(1)+qtr'X(1)])_i|>\frac{\varepsilon t}{4\sqrt{2nu}}\bigg\}\\
&\qquad\quad+P\bigg\{|X(1)^{\mathrm T}(I-R(qt)^{\mathrm T}R(qt)+[r''-r'r']q^2t^2)X(1)|>\frac{\varepsilon t}4\bigg\}\bigg]\bigg/P\{\|X(1)\|^2>u\}\\
&\quad\le K_4\exp\{-K_5u^{3/8}\}\qquad\text{for }|t|\le D(u)\ \text{and }u\ \text{sufficiently large},
\end{align*}
for some constants $K_4,K_5>0$. It follows immediately that (3.1) holds.
Suppose that $r'r'-r''=0$, so that the right-hand side of (5.5) is zero. Then (5.9),
(5.10) and (5.12) show that (0.6) holds for some $q(u)=o(u^{-1/2})$ that is non-increasing
[so that (1.3) holds]. Now Theorem 1 and Proposition 2 show that also the left-hand
side of (5.5) is zero. We can thus without loss assume that $r'r'-r''\ne0$.
In order to see how (5.5) follows from Theorem 2, we note that
\[
f_{\xi(1),\zeta'(1)}(y,z)=\int_{x\in S_n}
\frac1{\sqrt{8\pi y\,x^{\mathrm T}[r'r'-r'']x}}
\exp\bigg(-\frac{z^2}{8y\,x^{\mathrm T}[r'r'-r'']x}\bigg)
\frac{d\sigma_n(x)}{\sigma_n(S_n)}\;g(y).
\]
[See the proof of Theorem 6 in Section 7 for an explanation of this formula.] Sending
$u\to\infty$, and using (5.6), we therefore readily obtain
\begin{align*}
\int_0^{\infty}\!\!\int_0^{z}f_{\xi(1),\zeta'(1)}(u+xqy,z)\,dy\,dz
&\sim\frac{2\sqrt u}{\sqrt{2\pi}}\,g(u)\int_{z\in S_n}\sqrt{z^{\mathrm T}[r'r'-r'']z}\ \frac{d\sigma_n(z)}{\sigma_n(S_n)}\\
&\sim\int_{z\in S_n}\frac1{\sqrt{2\pi}}\sqrt{z^{\mathrm T}[r'r'-r'']z}\ \frac{d\sigma_n(z)}{\sigma_n(S_n)}\,\frac{1-G(u)}{q(u)}
\end{align*}
for $x>0$. In view of (0.3), (5.5) thus follows from the conclusion of Theorem 2.
In order to verify the hypothesis of Theorem 2 it remains to prove (3.2). To that
end we note that [cf. (5.7)]
\[
\xi(1-t)-\xi(1)=\|X(1-t)-R(t)X(1)\|^2
+2X(1)^{\mathrm T}R(t)^{\mathrm T}[X(1-t)-R(t)X(1)]
-2X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1).
\tag{5.13}
\]
Conditional on the value of $X(1)$, we here have [recall (5.8)]
\begin{align*}
\operatorname{Var}\{X(1)^{\mathrm T}R(t)^{\mathrm T}[X(1-t)-R(t)X(1)]\}
&=X(1)^{\mathrm T}R(t)^{\mathrm T}[I-R(t)R(t)^{\mathrm T}]R(t)X(1)\\
&=X(1)^{\mathrm T}\big([I-R(t)^{\mathrm T}R(t)]-[I-R(t)^{\mathrm T}R(t)][I-R(t)^{\mathrm T}R(t)]^{\mathrm T}\big)X(1)\\
&\le X(1)^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]X(1).
\tag{5.14}
\end{align*}
Since $x^{\mathrm T}[I-R(t)^{\mathrm T}R(t)]x=x^{\mathrm T}[I-R(-t)R(-t)^{\mathrm T}]x\ge K_6t^4$ for $x\in S_n$ and $|t|\le1$, for
some constant $K_6>0$ by (5.2) and (5.4), we have $X(1)^{\mathrm T}[I-R(qt)^{\mathrm T}R(qt)]X(1)\ge K_6u^{1/4}$
for $\|X(1)\|^2>u$ and $t\in[D(u),1/q(u)]$.
Using (5.13) and (5.14) this gives
\begin{align*}
P\{\xi(1-q(u)t)>u\,|\,\xi(1)>u\}
&\le2P\{\xi(1-q(u)t)\ge\xi(1)>u\,|\,\xi(1)>u\}\\
&\le2P\{\|X(1-qt)-R(qt)X(1)\|^2>X(1)^{\mathrm T}[I-R(qt)^{\mathrm T}R(qt)]X(1)\;\big|\;\|X(1)\|^2>u\}\\
&\qquad+2P\Big\{N(0,1)>\sqrt{X(1)^{\mathrm T}[I-R(qt)^{\mathrm T}R(qt)]X(1)}\;\Big|\;\|X(1)\|^2>u\Big\}\\
&\le2\sum_{i=1}^n P\bigg\{|(X(1-qt)-R(qt)X(1))_i|>\sqrt{\frac{K_6u^{1/4}}{2n}}\bigg\}
+2P\{N(0,1)>\sqrt{K_6u^{1/4}}\}.
\end{align*}
It follows that (3.2) holds.
6. Extremes of totally skewed moving averages of $\alpha$-stable motion
Given an $\alpha\in(1,2]$ we write $Z\in S_\alpha(\sigma,\beta)$ when $Z$ is a strictly $\alpha$-stable random variable
with characteristic function
\[
E\{\exp[i\theta Z]\}
=\exp\Big\{-\sigma^{\alpha}|\theta|^{\alpha}\Big[1-i\beta\tan\Big(\frac{\pi\alpha}2\Big)\operatorname{sign}(\theta)\Big]\Big\}
\qquad\text{for }\theta\in\mathbb R.
\tag{6.1}
\]
Here the scale $\sigma=\sigma_Z>0$ and the skewness $\beta=\beta_Z\in[-1,1]$ are parameters. Further
we let $\{M(t)\}_{t\in\mathbb R}$ denote an $\alpha$-stable motion that is totally skewed to the left, that is,
$M(t)$ denotes a Lévy process that satisfies $M(t)\in S_\alpha(|t|^{1/\alpha},-1)$.
For a function $g:\mathbb R\to\mathbb R$ we write $g^{\langle\alpha\rangle}\equiv\operatorname{sign}(g)\,|g|^{\alpha}$. Further we define
\[
\langle g\rangle\equiv\int_{-\infty}^{\infty}g(x)\,dx\quad\text{for }g\in L^1(\mathbb R)
\qquad\text{and}\qquad
\|g\|_{\alpha}\equiv\langle|g|^{\alpha}\rangle^{1/\alpha}\quad\text{for }g\in L^{\alpha}(\mathbb R).
\]
It is well known that (e.g., Samorodnitsky and Taqqu, 1994, Proposition 3.4.1)
\[
\int_{\mathbb R}g\,dM\in S_{\alpha}\big(\|g\|_{\alpha},\,-\langle g^{\langle\alpha\rangle}\rangle\big/\|g\|_{\alpha}^{\alpha}\big)
\qquad\text{for }g\in L^{\alpha}(\mathbb R).
\tag{6.2}
\]
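Random variables with the characteristic function (6.1) can be simulated with the Chambers–Mallows–Stuck method, which in turn lets one approximate stochastic integrals such as (6.2) by Riemann sums of i.i.d. stable increments. A minimal sketch (the kernel and step sizes below are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

def rvs_stable(alpha, beta, size, rng):
    """Chambers-Mallows-Stuck sampler for S_alpha(1, beta) with the
    characteristic function (6.1) (strictly stable case, alpha != 1)."""
    V = rng.uniform(-np.pi / 2.0, np.pi / 2.0, size)
    W = rng.exponential(1.0, size)
    B = np.arctan(beta * np.tan(np.pi * alpha / 2.0)) / alpha
    S = (1.0 + beta ** 2 * np.tan(np.pi * alpha / 2.0) ** 2) ** (1.0 / (2.0 * alpha))
    return (S * np.sin(alpha * (V + B)) / np.cos(V) ** (1.0 / alpha)
            * (np.cos(V - alpha * (V + B)) / W) ** ((1.0 - alpha) / alpha))

def skewed_ma_path(alpha, kernel, delta, n, rng):
    """Riemann-sum approximation of xi(t_i) = int f(t_i - x) dM(x) on a grid
    of mesh delta: the increments of the totally left-skewed motion M are
    drawn as delta**(1/alpha) * S_alpha(1, -1) variables (cf. (6.2)-(6.3))."""
    dM = delta ** (1.0 / alpha) * rvs_stable(alpha, -1.0, n + kernel.size - 1, rng)
    return np.convolve(dM, kernel, mode="valid")

rng = np.random.default_rng(3)
# Sanity check: for alpha = 2 the sampler reduces to a N(0, 2) variable.
z = rvs_stable(2.0, -1.0, 200_000, rng)
# Hypothetical decaying kernel; any non-negative f in L^alpha would do.
kernel = np.exp(-np.linspace(0.0, 5.0, 50))
path = skewed_ma_path(1.5, kernel, 0.1, 1000, rng)
```

Totally skewed stable laws with $\alpha>1$ have a light right tail and a heavy left tail, which is exactly the feature exploited in this section.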
Given a non-negative function $f\in L^{\alpha}(\mathbb R)$ that satisfies (6.4)–(6.8) below, we shall
study the behaviour of local extremes for the moving average process
\[
\xi(t)=\text{separable version of}\ \int_{-\infty}^{\infty}f(t-x)\,dM(x)
\qquad\text{for }t\in\mathbb R.
\tag{6.3}
\]
Note that by (6.2) we have $\xi(t)\in S_{\alpha}(\|f\|_{\alpha},-1)$, so that $\{\xi(t)\}_{t\in\mathbb R}$ is a stationary
$\alpha$-stable process that is totally skewed