The proof of Theorem 1 is given in Section 4 and follows the ideas mentioned above: it deals with the double exceedance clusters in D and outside D, showing that outside D a double exceedance cluster occurs with much smaller probability than within D, which gives the presented result. For the domain D with the subintervals we apply Lemma 1. The lower bound again needs Lemma 1, but also the results of Lemma 8. The proof of the second statement of Theorem 1 is much simpler.
Similar ideas are applied in the proof of Theorem 2, based on different intervals. We have to consider the three cases α < 1, α = 1 and α > 1 separately, since the path behavior of the conditioned Gaussian process plays a role. This is similar to, but technically more complicated than, Theorem D.3 in [8], where different relations between the smoothness of the trajectories and the smoothness of the variance at its maximum point lead to quite different types of considerations.
We note that the limiting conditioned processes are fractional Brownian motions with trend, where the Brownian motions have positively correlated increments if α > 1, independent increments if α = 1, and negatively correlated increments if α < 1. The major contribution to the asymptotic probability comes in all three cases from events where X(s) > u, X(t) > u with s, t separated by not more than ε + o(1), where o(1) → 0 as u → ∞. Again we apply subintervals and the Bonferroni inequality, with the double sum method for the lower bounds, where the subintervals are adapted to the three different cases of α. In all four cases considered in Theorems 1 and 2, one has to choose the lengths of the two-dimensional small intervals in Lemmas 1 and 2 carefully, to keep the double sum of smaller order than the sum of probabilities in the Bonferroni inequality. The cases of Theorem 1 and Theorem 2(iii) are similar because the smoothness of the variance exceeds the smoothness of the trajectories. Therefore, we choose the same two-dimensional 'subintervals' and prove these cases in the same way.
The second part of Theorem 2 is proved as the second statement of Theorem 1, and the proof is not repeated.
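The sign of the increment correlation of the fractional Brownian motions mentioned above follows directly from the fBm covariance with Hurst parameter H = α/2; a minimal numerical check of the three cases (the helper functions are ours, not from the text):

```python
import numpy as np

def fbm_cov(s, t, H):
    # Covariance of fractional Brownian motion B_H:
    # E[B_H(s) B_H(t)] = (|s|^{2H} + |t|^{2H} - |t - s|^{2H}) / 2
    return 0.5 * (abs(s) ** (2 * H) + abs(t) ** (2 * H) - abs(t - s) ** (2 * H))

def increment_cov(H):
    # Covariance of the unit increments B_H(1) - B_H(0) and B_H(2) - B_H(1)
    # (recall B_H(0) = 0), i.e. cov(B_H(1), B_H(2)) - var(B_H(1)).
    return fbm_cov(1, 2, H) - fbm_cov(1, 1, H)

assert increment_cov(1.5 / 2) > 0            # alpha = 1.5 > 1: positively correlated
assert abs(increment_cov(1.0 / 2)) < 1e-12   # alpha = 1: independent increments
assert increment_cov(0.5 / 2) < 0            # alpha = 0.5 < 1: negatively correlated
```

Here H = α/2 links the three cases of α to the increment structure of B_H.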
4 Lemmas
We write aΛ = {ax : x ∈ Λ} and (a_1, a_2) + Λ = {(a_1, a_2) + x : x ∈ Λ} for any real numbers a, a_1, a_2 and any set Λ ⊂ R^2. Let A be a set in R, and A^δ := {t : inf_{s∈A} |t − s| ≤ δ} its δ-extension, with δ > 0. We denote the covariance matrix of two centered random vectors U, V by cov(U, V) = E(UV^⊤) and cov(U) = E(UU^⊤). In the following, we let τ be a point in [0, T] which may depend on u and lies in the neighborhood of t_m, where r is either twice continuously differentiable (in case of Condition C2) or continuously differentiable (in case of Condition C3, with t_m = ε). Lemmas 1 and 2 deal with the events of interest on small intervals assuming Condition C2 and C3, respectively. Here the limiting conditioned process enters with the Pickands type conditions. For S ∈ 𝒮 and Λ ⊂ R^2, denote

p(u; S, Λ) := P( ⋃_{(s,t) ∈ (0,τ) + u^{−2/α}Λ} { X(s) > u, X(t) > u, T_s S } ).
Lemma 1. Let X(t) be a Gaussian process with mean zero and covariance function r(t) satisfying assumptions C1 and C2. Let Λ be a closed subset of R^2.
(i) Then for any τ = τ(u) with |τ − t_m| = O(u^{−1}√(log u)) as u → ∞, and any S ∈ 𝒮,

p(u; S, Λ) ∼ h_α( (1 + r(t_m))^{−2/α} Λ ) Ψ_2(u, r(τ))    (4)

as u → ∞, where

h_α(Λ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{x+y} P( ⋃_{(s,t)∈Λ} { √2 B_α(s) − |s|^α > x, √2 B̃_α(t) − |t|^α > y } ) dx dy,

with B_α, B̃_α independent copies of the fractional Brownian motion with Hurst parameter α/2.
In particular, for Λ_1 and Λ_2, closed subsets of R,

p(u; S, Λ_1 × Λ_2) ∼ H_{α,0}( (1 + r(t_m))^{−2/α} Λ_1 ) H_{α,0}( (1 + r(t_m))^{−2/α} Λ_2 ) Ψ_2(u, r(τ))    (5)

as u → ∞.
(ii) Further, for any S ∈ 𝒮_1 there exist C > 0, δ > 0 such that

p(u; S, Λ) ≤ C e^{−δu^2} Ψ_2(u, r(τ)).    (6)
Remark 3: Note that if |τ − t_m| = o(u^{−1}), then Ψ_2(u, r(τ)) ∼ Ψ_2(u, r(t_m)) as u → ∞.
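The rescaling of Λ by (1 + r(t_m))^{−2/α} in (4) rests on the self-similarity B_α(ct) =_d c^{α/2} B_α(t) of fractional Brownian motion with Hurst parameter α/2, which converts the drift |t|^α/(1 + r(t_m)) of the limiting process into the standard drift |t|^α. The underlying covariance identity can be checked numerically (a sketch with arbitrary illustrative constants, not part of the proof):

```python
import numpy as np

H = 0.6                 # Hurst parameter; H = alpha/2 with alpha = 1.2
alpha = 2 * H
c = 1.7 ** (2 / alpha)  # scaling factor of the form (1 + r)^{2/alpha}, here with 1 + r = 1.7

def fbm_cov_matrix(grid, H):
    # Covariance matrix of fractional Brownian motion on the given grid.
    s, t = np.meshgrid(grid, grid)
    return 0.5 * (np.abs(s) ** (2 * H) + np.abs(t) ** (2 * H) - np.abs(s - t) ** (2 * H))

grid = np.linspace(0.1, 2.0, 20)
# B_alpha(c t) and c^{alpha/2} B_alpha(t) have the same covariance (self-similarity):
assert np.allclose(fbm_cov_matrix(c * grid, H), c ** alpha * fbm_cov_matrix(grid, H))
```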
Lemma 2. Let X(t) be a Gaussian process with mean zero and covariance function r(t) satisfying assumptions C1 and C3 with α ≤ 1. Let Λ be a closed subset of R^2.
(i) Let τ = τ(u) be such that |τ − ε| = O(u^{−2} log u) as u → ∞. Then for any S ∈ 𝒮 and α < 1,

p(u; S, Λ) ∼ h_α( (1 + r(ε))^{−2/α} Λ ) Ψ_2(u, r(τ))    (7)

as u → ∞. If α = 1, (7) holds with h_α replaced by

h̃_1(Λ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{x+y} P( ⋃_{(s,t)∈Λ} { √2 B_1(s) − |s| − r′(ε)s > x, √2 B̃_1(t) − |t| + r′(ε)t > y } ) dx dy.

(ii) Statement (ii) of Lemma 1 holds also in this case.
Proof of Lemmas 1 and 2: The proofs of both lemmas can partially be carried out together, with the same steps, where it does not matter whether t_m is an inner point or the boundary point ε. Some deviations are induced by this difference of t_m, hence by the different smoothness conditions around t_m. Therefore, we give both proofs simultaneously, as far as possible, and separate some parts for the cases t_m > ε and t_m = ε, using Condition C2 or C3, respectively. This we indicate by paragraphs
denoted by ’Part for Lemma 1’ and ’Part for Lemma 2’. If both cases can be dealt with together, we denote the paragraph as ’Common part’.
Statement (i): Common part: Let S ∈ 𝒮, which means that there are closed sets A ⊂ 𝒜 and B ⊂ ℬ. Obviously, r(t) > −1 in a neighborhood of t_m. We have for any u > 0, denoting for short

K = (0, τ) + u^{−2/α}Λ  and  U(K, S) = ⋃_{(s,t)∈K} { X(s) > u, X(t) > u, T_s S },

p(u; S, Λ) = u^{−2} ∫∫ P( U(K, S) | X(0) = u − x/u, X(τ) = u − y/u ) f_{X(0),X(τ)}(u − x/u, u − y/u) dx dy.    (8)

Consider first the conditional probability in (8). Denote by P_{x,y} the family of conditional probabilities given X(0) = u − x/u, X(τ) = u − y/u. Let κ > 0 be small such that the κ-extensions of A and B are still subsets of 𝒜 and ℬ, respectively, A^κ ⊂ 𝒜, B^κ ⊂ ℬ; then the corresponding event S^κ ∈ 𝒮, and for all sufficiently large u and all (s, t) ∈ K, S^κ ⊂ T_s S. Note that S^κ is independent of s, if (s, t) ∈ K. Hence

U(K, S) ⊇ S^κ ∩ ⋃_{(s,t)∈K} { X(s) > u, X(t) > u } = S^κ ∩ U(K, Ω).

Now we prove that P_{x,y}( S^κ ∩ U(K, Ω) ) ∼ P_{x,y}( U(K, Ω) ) as u → ∞. For the conditional mean of X(v), using the inequality (r(s) − r(t))^2 ≤ 2(1 − r(t − s)) and the conditions of the two lemmas, we have by simple algebra

M_{x,y}(v, u) := E( X(v) | X(0) = u − x/u, X(τ) = u − y/u )
= ( (u − x/u)(r(v) − r(τ − v)r(τ)) + (u − y/u)(r(τ − v) − r(v)r(τ)) ) / (1 − r^2(τ))
= u (r(v) + r(τ − v)) / (1 + r(τ)) + u^{−1} ( g_1(v, τ)x + g_2(v, τ)y )
= u m(v) (1 + O(u^{−α}(log u)^{α/2})) + O(u^{−1}) ( g_1(v, t_m)x + g_2(v, t_m)y ),

where m(v) := (r(v) + r(t_m − v)) / (1 + r(t_m)), and g_1 and g_2 are continuous bounded functions. The conditional variance can be estimated as follows:

V_{x,y}(v) := var( X(v) | X(0), X(τ) ) = det cov( X(0), X(τ), X(v) ) / (1 − r^2(τ)) ≤ 1.    (9)

We have, by the construction of S^κ, inf_{v∈A^κ} m(v) > 1 and sup_{v∈B^κ} m(v) < 1. Similarly as (9), we get that

V_{x,y}(v, v′) := var( X(v) − X(v′) | X(0), X(τ) ) ≤ var( X(v) − X(v′) ) ≤ C |v − v′|^α
. Hence there exists an a.s. continuous zero mean Gaussian process Y(v) with variance V(v) and variance of increments V(v, v′). Using Fernique's inequality and (9), for any positive

δ_1 < min( min_{v∈A^κ} m(v) − 1, 1 − max_{v∈B^κ} m(v) ),

we derive for all sufficiently large u,

P_{x,y}( U(K, Ω) \ S^κ ) ≤ P_{x,y}( Ω \ S^κ ) ≤ min( P( inf_{v∈A^κ} ( Y(v) + M_{x,y}(v, u) ) < u ), P( sup_{v∈B^κ} ( Y(v) + M_{x,y}(v, u) ) > u ) ) ≤ C exp( −δ_1^2 u^2 / 2 ),

which gives the desired result

P_{x,y}( U(K, S) ) ≥ P_{x,y}( S^κ ∩ U(K, Ω) ) ≥ P_{x,y}( U(K, Ω) ) − C exp( −δ_1^2 u^2 / 2 ).    (10)
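The explicit expression for M_{x,y}(v, u) and the bound V_{x,y}(v) ≤ 1 above are instances of the standard Gaussian conditioning formulas for the triple (X(0), X(τ), X(v)); a numerical sanity check against the generic formula Σ_{vc}Σ_{cc}^{−1} (the covariance r(t) = exp(−t^2) and all numbers are illustrative choices of ours, not from the paper):

```python
import numpy as np

r = lambda t: np.exp(-t ** 2)    # an illustrative stationary covariance function
tau, v, u, x, y = 1.0, 0.3, 10.0, 0.7, -0.4
a, b = u - x / u, u - y / u      # conditioning values X(0) = a, X(tau) = b

# Explicit formulas from the text:
M = ((a * (r(v) - r(tau - v) * r(tau)) + b * (r(tau - v) - r(v) * r(tau)))
     / (1 - r(tau) ** 2))
Sigma3 = np.array([[1.0, r(tau), r(v)],
                   [r(tau), 1.0, r(tau - v)],
                   [r(v), r(tau - v), 1.0]])  # cov(X(0), X(tau), X(v))
V = np.linalg.det(Sigma3) / (1 - r(tau) ** 2)

# Generic Gaussian conditioning: mean = Sigma_vc Sigma_cc^{-1} (a, b)^T,
# variance = 1 - Sigma_vc Sigma_cc^{-1} Sigma_vc^T.
Scc = np.array([[1.0, r(tau)], [r(tau), 1.0]])
Svc = np.array([r(v), r(tau - v)])
w = np.linalg.solve(Scc, Svc)
assert np.isclose(M, w @ np.array([a, b]))
assert np.isclose(V, 1 - w @ Svc)
assert V <= 1.0     # the bound (9)
```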
Notice that also

P_{x,y}( U(K, S) ) ≤ P_{x,y}( U(K, Ω) ).    (11)
Now we study the integrand in (8), replacing P_{x,y}( U(K, S) ) by P_{x,y}( U(K, Ω) ). To this end we consider the limit behavior of the conditional distributions of the vector process (ξ_u(t), η_u(t)), where

ξ_u(t) = u( X(u^{−2/α}t) − u ) + x,  η_u(t) = u( X(τ + u^{−2/α}t) − u ) + y,

given (ξ_u(0), η_u(0)) = (0, 0), that is, X(0) = u − x/u, X(τ) = u − y/u. These Gaussian processes describe the cluster behavior, the clusters being separated by at least ε. We need to know the mean and the covariance structure of ξ_u(s) and η_u(s) with the limiting expressions for the corresponding limiting processes ξ(s) and η(s). We have

E( (ξ_u(t), η_u(t))^⊤ | ξ_u(0), η_u(0) ) = E( (ξ_u(t), η_u(t))^⊤ ) + R_t R^{−1} ( (ξ_u(0), η_u(0))^⊤ − E( (ξ_u(0), η_u(0))^⊤ ) ),    (12)

where

R_t := E( ( (ξ_u(t), η_u(t))^⊤ − E(ξ_u(t), η_u(t))^⊤ ) ( (ξ_u(0), η_u(0)) − E(ξ_u(0), η_u(0)) ) )

and R := cov( (ξ_u(0), η_u(0))^⊤ ). Further,

E ξ_u(0) = E ξ_u(t) = x − u^2,  E η_u(0) = E η_u(t) = y − u^2,    (13)
var ξ_u(0) = var η_u(0) = u^2,  cov( ξ_u(0), η_u(0) ) = u^2 r(τ),
cov( ξ_u(0), ξ_u(t) ) = cov( η_u(0), η_u(t) ) = u^2 r(u^{−2/α}t),
cov( ξ_u(0), η_u(t) ) = u^2 r(τ + u^{−2/α}t),  cov( ξ_u(t), η_u(0) ) = u^2 r(τ − u^{−2/α}t).    (14)
We write

r(u^{−2/α}t) = 1 − u^{−2}|t|^α + o(u^{−2}),  r(τ ± u^{−2/α}t) = r(τ) ± u^{−2/α} t r′(τ + θ_± u^{−2/α}t),

where |θ_±| ≤ 1. Obviously, if α < 1, it follows for both lemmas that

r(τ ± u^{−2/α}t) = r(τ) + o(u^{−2}).    (15)
Part for Lemma 1: For this lemma the last relation (15) also holds for α ∈ [1, 2), by using |τ − t_m| = O(u^{−1}√(log u)). Indeed, we get

|r′(τ + θ_± u^{−2/α}t) − r′(t_m)| = O(u^{−1}√(log u))

and again r(τ ± u^{−2/α}t) = r(τ) + o(u^{−2}). This implies that, with the notation r = r(τ) and r′ = r′(τ),

R_t = u^2 ( 1 − u^{−2}|t|^α + o(u^{−2})    r + o(u^{−2})
            r + o(u^{−2})                  1 − u^{−2}|t|^α + o(u^{−2}) ) = R − |t|^α I + o(1),
where I denotes the identity matrix. Note that

R = u^2 ( 1  r
          r  1 )  and  R^{−1} = (1 / (u^2(1 − r^2))) ( 1  −r
                                                       −r  1 ).

Multiplying the matrices gives

R_t R^{−1} = I + (u^{−2}|t|^α / (1 − r^2)) ( −1  r
                                             r  −1 ) + o(u^{−2}),

as u → ∞. From (12) and (13) we immediately get that

E( (ξ_u(t), η_u(t))^⊤ | ξ_u(0), η_u(0) ) = −(|t|^α / (1 + r)) (1, 1)^⊤ (1 + o(1))    (16)

as u → ∞.
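The matrix algebra leading to (16) can be checked numerically: taking R_t = R − |t|^α I exactly (its leading behavior), the conditional mean E(ξ_u(t), η_u(t))^⊤ − R_t R^{−1} E(ξ_u(0), η_u(0))^⊤ is close to −(|t|^α/(1 + r))(1, 1)^⊤ already for moderate u (all constants below are illustrative choices of ours):

```python
import numpy as np

u, rho, t, alpha = 100.0, 0.5, 1.3, 1.5   # rho plays the role of r = r(tau)
x, y = 0.7, -0.4
EW = np.array([x - u ** 2, y - u ** 2])   # (13): common mean of (xi_u, eta_u) at 0 and t
R = u ** 2 * np.array([[1.0, rho], [rho, 1.0]])
Rt = R - abs(t) ** alpha * np.eye(2)      # leading behavior of the cross-covariance matrix

# Conditional mean given xi_u(0) = eta_u(0) = 0, by (12):
cond_mean = EW + Rt @ np.linalg.solve(R, -EW)
assert np.allclose(cond_mean, -abs(t) ** alpha / (1 + rho) * np.ones(2), atol=1e-3)
```

The residual is of order u^{−2}, in line with the o(1) term in (16).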
Part for Lemma 2: Let α = 1. We have

R_t = R + ( −|t|  −r′t
            r′t   −|t| ) + o(1).

Multiplying by R^{−1}, we get

R_t R^{−1} = I + (u^{−2} / (1 − r^2)) ( −|t|  −r′t
                                        r′t   −|t| ) ( 1  −r
                                                       −r  1 ) + o(u^{−2})
= I + (u^{−2} / (1 − r^2)) ( −|t| + r r′t   r|t| − r′t
                             r|t| + r′t    −|t| − r r′t ) + o(u^{−2}).

For α < 1, we have R_t R^{−1} = I − |t|^α R^{−1} + o(u^{−2}) as u → ∞. By (12) and (13), for α ≤ 1,

E( (ξ_u(t), η_u(t))^⊤ | ξ_u(0), η_u(0) ) = −(|t|^α / (1 + r)) (1, 1)^⊤ + (t r′ / (1 + r)) (−1, 1)^⊤ 1(α = 1) + o(1)    (17)

as u → ∞.
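For α = 1 the r′-term in (17) enters ξ and η with opposite signs. With x = y = 0 and the exact expansion R_t = R + (−|t|, −r′t; r′t, −|t|), the resulting identity holds exactly, as a quick numerical check shows (illustrative numbers of ours):

```python
import numpy as np

u, rho, rp, t = 200.0, 0.5, -0.3, 0.8     # rho = r, rp = r'
EW = np.array([-u ** 2, -u ** 2])         # means of xi_u, eta_u when x = y = 0
R = u ** 2 * np.array([[1.0, rho], [rho, 1.0]])
Rt = R + np.array([[-abs(t), -rp * t],
                   [rp * t, -abs(t)]])    # alpha = 1 expansion of the cross-covariance

cond_mean = EW + Rt @ np.linalg.solve(R, -EW)   # conditional mean by (12)
expected = np.array([-(abs(t) + rp * t), -(abs(t) - rp * t)]) / (1 + rho)
assert np.allclose(cond_mean, expected)
```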
Common part: Since the conditional expectation is linear, the o(1) terms in (16), (17) have the structure (|x| + |y|) o_u(1), with o_u(1) → 0 as u → ∞ uniformly in x, y ∈ R. Now we compute the conditional covariance matrix of the vector (ξ_u(t) − ξ_u(s), η_u(t_1) − η_u(s_1))^⊤ given (ξ_u(0), η_u(0)). We have

cov( (ξ_u(t) − ξ_u(s), η_u(t_1) − η_u(s_1))^⊤ | ξ_u(0), η_u(0) )
= cov( (ξ_u(t) − ξ_u(s), η_u(t_1) − η_u(s_1))^⊤ ) − C cov( (ξ_u(0), η_u(0))^⊤ )^{−1} C^⊤,    (18)

where

C = cov( (ξ_u(t) − ξ_u(s), η_u(t_1) − η_u(s_1))^⊤, (ξ_u(0), η_u(0))^⊤ )

is the matrix of covariances of the two random vectors. Then, as u → ∞,

var( ξ_u(t) − ξ_u(s) ) = var( η_u(t) − η_u(s) ) = 2u^2 ( 1 − r(u^{−2/α}(t − s)) ) ∼ 2|t − s|^α.    (19)
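Relation (19) only uses the expansion r(t) = 1 − |t|^α + o(|t|^α) at zero; with the illustrative choice r(t) = exp(−|t|^α) (ours, not from the paper), the rescaled increment variance is already very close to 2|t − s|^α for large u:

```python
import numpy as np

alpha = 1.2
r = lambda t: np.exp(-np.abs(t) ** alpha)  # satisfies r(t) = 1 - |t|^alpha + o(|t|^alpha)
u, t, s = 1000.0, 1.5, 0.8

var_incr = 2 * u ** 2 * (1 - r(u ** (-2 / alpha) * (t - s)))
assert np.isclose(var_incr, 2 * abs(t - s) ** alpha, rtol=1e-4)
```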
Part for Lemma 1: Using the Taylor expansion, we get by C2, as u → ∞,

cov( ξ_u(t) − ξ_u(s), η_u(t_1) − η_u(s_1) )
= u^2 ( r(τ + u^{−2/α}(t_1 − t)) + r(τ + u^{−2/α}(s_1 − s)) − r(τ + u^{−2/α}(t_1 − s)) − r(τ + u^{−2/α}(s_1 − t)) )
= u^2 ( u^{−2/α} r′(τ)(t_1 − t + s_1 − s − t_1 + s − s_1 + t) + O(u^{−4/α}) ) = O(u^{2−4/α}) = o(1).    (20)
Part for Lemma 2: In this case the second derivative is not used. Since α ≤ 1, the statement holds in the same way by C3:

cov( ξ_u(t) − ξ_u(s), η_u(t_1) − η_u(s_1) ) = u^2 ( u^{−2/α} r′(τ)(t_1 − t + s_1 − s − t_1 + s − s_1 + t) + o(u^{−2}) ) = o(1).    (21)
Common part: Further, we have for both lemmas,

cov( ξ_u(t) − ξ_u(s), ξ_u(0) ) = cov( η_u(t) − η_u(s), η_u(0) ) = u^2 ( r(u^{−2/α}t) − r(u^{−2/α}s) ) = O(1),
cov( ξ_u(t) − ξ_u(s), η_u(0) ) = u^2 ( r(τ − u^{−2/α}t) − r(τ − u^{−2/α}s) ) = O(u^{2−2/α}),
cov( η_u(t_1) − η_u(s_1), ξ_u(0) ) = O(u^{2−2/α}),

so each element of the matrix C cov( (ξ_u(0), η_u(0))^⊤ )^{−1} C^⊤ is bounded by

O(u^{4−4/α}) / u^2 = O(u^{2−4/α}) = o(1)    (22)
as u → ∞. This implies together that (18) can be written as

cov( (ξ_u(t) − ξ_u(s), η_u(t_1) − η_u(s_1))^⊤ | ξ_u(0), η_u(0) ) = ( 2|t − s|^α   0
                                                                     0   2|t_1 − s_1|^α ) (1 + o(1))

as u → ∞. Since the conditional variance is bounded by the unconditional one, we get that

var( ξ_u(t) − ξ_u(s) | ξ_u(0), η_u(0) ) ≤ C|t − s|^α,    (23)
var( η_u(t) − η_u(s) | ξ_u(0), η_u(0) ) ≤ C|t − s|^α,    (24)

for all t, s ∈ [0, ∞). Thus we proved that for any T > 0, the distribution of the Gaussian vector process (ξ_u(t), η_u(t)) conditioned on ξ_u(0) = η_u(0) = 0 converges weakly in C[−T, T] to the distribution of the Gaussian vector process (ξ(t), η(t)), t ∈ [−T, T]. This implies that

lim_{u→∞} P_{x,y}( U(K, Ω) ) = P( ⋃_{(s,t)∈Λ} { ξ(s) > x, η(t) > y } ).
Furthermore, we have for ξ and η the following representations.

Part for Lemma 1: The limit processes are

ξ(t) = √2 B_α(t) − |t|^α / (1 + r(t_m))  and  η(t) = √2 B̃_α(t) − |t|^α / (1 + r(t_m)).
Part for Lemma 2: The limit processes are

ξ(t) = √2 B_α(t) − ( |t|^α + r′(ε) t 1(α = 1) ) / (1 + r(ε))  and  η(t) = √2 B̃_α(t) − ( |t|^α − r′(ε) t 1(α = 1) ) / (1 + r(ε)).
Common part: Domination: We want to apply the dominated convergence theorem to the integral in (8) divided by Ψ_2(u, r), hence to

(1 + r)^{−2} ∫∫ P_{x,y}( U(K, Ω) ) f_u dx dy,  where  f_u = exp( (x + y) / (1 + r) − (x^2 − 2r x y + y^2) / (2u^2(1 − r^2)) ).    (25)

We construct an integrable dominating function, with separate representations in the four quadrants, as follows. Use (11) and bound the probability P_{x,y}( U(K, Ω) ). Let T > 0 be such that Λ ⊂ [−T, T] × [−T, T].
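The function f_u in (25) is precisely the ratio of the bivariate Gaussian density of (X(0), X(τ)) at the point (u − x/u, u − y/u) to its value at (u, u), which is how the weight e^{(x+y)/(1+r)} arises; a quick numerical check of this identity (illustrative numbers of ours):

```python
import numpy as np

rho, u, x, y = 0.4, 5.0, 1.3, -0.6

def phi2(a, b, rho):
    # Standard bivariate normal density with correlation rho.
    q = (a * a - 2 * rho * a * b + b * b) / (2 * (1 - rho ** 2))
    return np.exp(-q) / (2 * np.pi * np.sqrt(1 - rho ** 2))

f_u = np.exp((x + y) / (1 + rho)
             - (x * x - 2 * rho * x * y + y * y) / (2 * u ** 2 * (1 - rho ** 2)))
assert np.isclose(f_u, phi2(u - x / u, u - y / u, rho) / phi2(u, u, rho))
```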
1. For the quadrant x