Asymptotics of the scaling factor µ
Note that condition (4.4.94) can be met because of condition (4.1.27).

Proof of Lemma 4.4.6: Combining Lemma 4.4.4, Lemma 4.4.5 and formula (4.4.94), we see that there exist constants $M_i$, tending to zero as $i \to \infty$, such that for all $x \in K^{\Omega_{N_i}}$ and $\omega \in \mathcal{C}_{\xi_i}([0,\infty), K)$,
\[
\Big| g^*\big(x^{k_i}\big) - E^\omega\big[g^*\big(X^{N_i}_0(\tau_i)\big)\big] - \sigma_{k_i}\, E^\omega\big[g\big(X^{N_i}_0(\tau_i)\big)\big] \Big| \le M_i. \tag{4.4.96}
\]
We may conclude from (4.4.96) that
\[
E^\omega\big[g\big(X^{N_i}_0(\tau_i)\big)\big] \le \sigma_{k_i}^{-1}\big( M_i + \|g^*\|_\infty \big). \tag{4.4.97}
\]
Since $\sigma_{k_i} \to \infty$ as $i \to \infty$ and since $g$ is continuous on $K$ and nonzero on $K^\circ$, there exist constants $\tilde{M}_i$, tending to zero as $i \to \infty$, such that for all $x \in K^{\Omega_{N_i}}$ and $\omega \in \mathcal{C}_{\xi_i}([0,\infty), K)$,
\[
E^\omega\big[g^*\big(X^{N_i}_0(\tau_i)\big)\big] \le \tilde{M}_i. \tag{4.4.98}
\]
When we insert this into (4.4.96), then after a suitable redefinition of our constants $M_i$ we arrive at (4.4.95).

We now translate the statement in Lemma 4.4.6 about the conditional law of $X^{N_i}$ given
\[
\big( X^{N_i}_{\xi_i}(t) \big)_{t \ge 0} = \big( \omega(t) \big)_{t \ge 0} \tag{4.4.99}
\]
into a statement about the unconditional law.
Lemma 4.4.7 For $i \in \mathbb{N}$ let $X^{N_i}$ be a solution of (4.1.5) with initial condition (4.4.22) and let $(\lambda_i)_{i \in \mathbb{N}}$ be constants satisfying (4.4.94). For $i \in \mathbb{N}$ and $\xi \in \Omega_{N_i}$ let $Y^i_\xi$ be given by
\[
Y^i_\xi := \int_0^\infty \Big( \sigma_{k_i}\, g\big(X^{N_i}_\xi(t)\big) - g^*\big(x^{k_i}\big) \Big)\, \lambda_i^{-1} e^{-t/\lambda_i}\, dt. \tag{4.4.100}
\]
Then there exist constants $M_i$, tending to zero as $i \to \infty$, such that for all $x \in K^{\Omega_{N_i}}$,
\[
\big| E\big[ Y^i_0\, Y^i_{\xi_i} \big] \big| \le M_i. \tag{4.4.101}
\]
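The weight $\lambda_i^{-1} e^{-t/\lambda_i}\,dt$ in (4.4.100) is the law of an exponential time with mean $\lambda_i$: it has total mass one (consistent with the almost-sure bound $Y^i_\xi \ge -\|g^*\|_\infty$ used in the proof below) and first moment $\lambda_i$. A quick numerical check of these two facts; the helper function is purely illustrative and not part of the text:

```python
import math

def exp_weight_moments(lam, n_steps=200_000, t_max_mult=40.0):
    """Midpoint-rule integration of the weight t -> exp(-t/lam)/lam on [0, t_max].

    Returns (total mass, first moment); for this exponential density they
    should come out close to 1 and lam respectively.
    """
    t_max = t_max_mult * lam
    dt = t_max / n_steps
    mass = 0.0
    moment = 0.0
    for j in range(n_steps):
        t = (j + 0.5) * dt              # midpoint of the j-th subinterval
        w = math.exp(-t / lam) / lam    # the exponential weight
        mass += w * dt
        moment += t * w * dt
    return mass, moment

mass, moment = exp_weight_moments(0.37)
print(mass, moment)   # ≈ 1.0 and ≈ 0.37
```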
Proof of Lemma 4.4.7: Note that almost surely $Y^i_\xi \ge -\|g^*\|_\infty$, and hence
\[
E\big|Y^i_\xi\big| = \int_{-\|g^*\|_\infty}^{\infty} |y|\, P\big[Y^i_\xi \in dy\big]
= -\int_{-\|g^*\|_\infty}^{0} y\, P\big[Y^i_\xi \in dy\big] + \int_{0}^{\infty} y\, P\big[Y^i_\xi \in dy\big]
\le 2\|g^*\|_\infty + E\big[Y^i_\xi\big] \qquad \big( i \in \mathbb{N},\ \xi \in \Omega_{N_i} \big). \tag{4.4.102}
\]
Lemma 4.4.6 implies that for a suitable version of the conditional expectation
\[
\big| E\big[ Y^i_0 \,\big|\, Y^i_{\xi_i} = y \big] \big| \le M_i \qquad \big( y \ge -\|g^*\|_\infty \big), \tag{4.4.103}
\]
and by symmetry the same is true for the conditional expectation of $Y^i_{\xi_i}$ given $Y^i_0$. It follows that
\[
\big| E\big[ Y^i_0\, Y^i_{\xi_i} \big] \big|
= \Big| \int_{-\|g^*\|_\infty}^{\infty} E\big[ Y^i_0\, Y^i_{\xi_i} \,\big|\, Y^i_{\xi_i} = y \big]\, P\big[ Y^i_{\xi_i} \in dy \big] \Big|
= \Big| \int_{-\|g^*\|_\infty}^{\infty} y\, E\big[ Y^i_0 \,\big|\, Y^i_{\xi_i} = y \big]\, P\big[ Y^i_{\xi_i} \in dy \big] \Big|
\le \int_{-\|g^*\|_\infty}^{\infty} |y|\, \big| E\big[ Y^i_0 \,\big|\, Y^i_{\xi_i} = y \big] \big|\, P\big[ Y^i_{\xi_i} \in dy \big]
\le M_i \big( 2\|g^*\|_\infty + E\big[ Y^i_{\xi_i} \big] \big)
\le M_i \big( 2\|g^*\|_\infty + M_i \big), \tag{4.4.104}
\]
where in the last step we used that
\[
E\big[ Y^i_{\xi_i} \big] = \int_{-\|g^*\|_\infty}^{\infty} E\big[ Y^i_{\xi_i} \,\big|\, Y^i_0 = y \big]\, P\big[ Y^i_0 \in dy \big]
\le \int_{-\|g^*\|_\infty}^{\infty} \big| E\big[ Y^i_{\xi_i} \,\big|\, Y^i_0 = y \big] \big|\, P\big[ Y^i_0 \in dy \big] \le M_i. \tag{4.4.105}
\]
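The chain (4.4.102) is an instance of the elementary fact that a random variable $Y \ge -a$ satisfies $E|Y| = E[Y] + 2E[Y^-] \le E[Y] + 2a$. A minimal sketch with a synthetic uniform $Y$ (a hypothetical example distribution, chosen only because its moments can be computed exactly):

```python
from fractions import Fraction

def abs_moment_bound(a, b):
    """For Y uniform on [-a, b] (so Y >= -a almost surely, a, b > 0), return
    (E|Y|, E[Y] + 2a) as exact rationals; the first never exceeds the second."""
    a, b = Fraction(a), Fraction(b)
    e_y = (b - a) / 2                        # mean of the uniform law
    e_abs = (a * a + b * b) / (2 * (a + b))  # E|Y| by direct integration
    return e_abs, e_y + 2 * a

lhs, rhs = abs_moment_bound(1, 3)
print(lhs, rhs)   # 5/4 and 3: the bound E|Y| <= E[Y] + 2a holds
```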
Lemma 4.4.8 For $i \in \mathbb{N}$ let $X^{N_i}$ be a solution of (4.1.5) with initial condition (4.4.22), and let $(\lambda_i)_{i \in \mathbb{N}}$ be constants satisfying (4.4.94). Then there exist constants $M_i$, tending to zero as $i \to \infty$, such that for all $x \in K^{\Omega_{N_i}}$,
\[
E\bigg[ \bigg( \int_0^\infty \Big( g^*\big( X^{N_i,k_i}(t) \big) - \sigma_{k_i}\, \frac{1}{N_i^{k_i}} \sum_{\xi:\, \|\xi\| \le k_i} g\big( X^{N_i}_\xi(t) \big) \Big)\, \lambda_i^{-1} e^{-t/\lambda_i}\, dt \bigg)^{\!2}\, \bigg] \le M_i. \tag{4.4.106}
\]
Proof of Lemma 4.4.8: Defining random variables $Y^i_\xi$ as in Lemma 4.4.7 and using symmetry, we see that
\[
E\bigg[ \bigg( \int_0^\infty \Big( g^*\big( x^{k_i} \big) - \sigma_{k_i}\, \frac{1}{N_i^{k_i}} \sum_{\xi:\, \|\xi\| \le k_i} g\big( X^{N_i}_\xi(t) \big) \Big)\, \lambda_i^{-1} e^{-t/\lambda_i}\, dt \bigg)^{\!2}\, \bigg]
= \frac{1}{N_i^{2k_i}} \sum_{\xi:\, \|\xi\| \le k_i}\ \sum_{\eta:\, \|\eta\| \le k_i} E\big[ Y^i_\xi\, Y^i_\eta \big]
= \frac{N_i^{k_i} \big( N_i^{k_i} - N_i^{k_i - 1} \big)}{N_i^{2k_i}}\, E\big[ Y^i_0\, Y^i_{\xi_i} \big]
+ \frac{1}{N_i^{2k_i}} \sum_{\xi:\, \|\xi\| \le k_i}\ \sum_{\substack{\eta:\, \|\eta\| \le k_i \\ \|\xi - \eta\| \le k_i - 1}} E\big[ Y^i_\xi\, Y^i_\eta \big]
\le \big| E\big[ Y^i_0\, Y^i_{\xi_i} \big] \big| + \frac{1}{N_i} \big( \|g^*\|_\infty + \sigma_{k_i} \|g\|_\infty \big)^2, \tag{4.4.107}
\]
where $\sigma_{k_i} \sim c^{-k_i}/\big( c(1-c) \big)$ and hence $\sigma_{k_i}^2 / N_i \to 0$ by (4.4.94). But
\[
E\bigg[ \bigg( \int_0^\infty \Big( g^*\big( x^{k_i} \big) - g^*\big( X^{N_i,k_i}(t) \big) \Big)\, \lambda_i^{-1} e^{-t/\lambda_i}\, dt \bigg)^{\!2}\, \bigg]
\le E\bigg[ \int_0^\infty \Big( g^*\big( x^{k_i} \big) - g^*\big( X^{N_i,k_i}(t) \big) \Big)^2\, \lambda_i^{-1} e^{-t/\lambda_i}\, dt \bigg]
\le \mathrm{Lip}(g^*)^2\, E\bigg[ \int_0^\infty \big( x^{k_i} - X^{N_i,k_i}(t) \big)^2\, \lambda_i^{-1} e^{-t/\lambda_i}\, dt \bigg]
\le \mathrm{Lip}(g^*)^2\, \frac{M \lambda_i}{N_i^{k_i}} \tag{4.4.108}
\]
by Corollary 4.4.2. Here $\lambda_i / N_i^{k_i} \to 0$ as $i \to \infty$ by (4.4.94), and combining (4.4.107) and (4.4.108) and applying Lemma 4.4.7 we arrive at (4.4.106).
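The splitting in (4.4.107) rests on a counting fact for the hierarchical block: each of the $N_i^{k_i}$ sites of the block has exactly $N_i^{k_i} - N_i^{k_i-1}$ partners at hierarchical distance exactly $k_i$, so of the $N_i^{2k_i}$ ordered pairs $(\xi,\eta)$, $N_i^{k_i}\big(N_i^{k_i} - N_i^{k_i-1}\big)$ are at distance $k_i$ and the remaining $N_i^{k_i} \cdot N_i^{k_i-1}$ satisfy $\|\xi - \eta\| \le k_i - 1$. A brute-force check, representing the block of radius $k$ by digit strings of length $k$ (an illustrative encoding, not notation from the text):

```python
from itertools import product

def hier_dist(xi, eta):
    """Hierarchical distance between sites encoded as length-k digit tuples:
    the largest (1-based) position at which the digits differ, or 0 if equal."""
    for j in range(len(xi) - 1, -1, -1):
        if xi[j] != eta[j]:
            return j + 1
    return 0

def pair_counts(N, k):
    """Count ordered pairs in the radius-k block at distance exactly k vs < k."""
    sites = list(product(range(N), repeat=k))   # the N**k sites of the block
    far = sum(1 for xi in sites for eta in sites if hier_dist(xi, eta) == k)
    return far, len(sites) ** 2 - far

N, k = 3, 2
far, near = pair_counts(N, k)
print(far, near)   # N**k * (N**k - N**(k-1)) = 54 and N**k * N**(k-1) = 27
```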
Lemma 4.4.9 For $i \in \mathbb{N}$ let $X^{N_i}$ be a solution of (4.1.5) with initial condition (4.1.6). Then for $i \in \mathbb{N}$ there exist positive constants $\gamma_i, M_i$ satisfying $\gamma_i \ll c^{k_i}$ and $M_i \ll 1$ as $i \to \infty$, such that for all $t \ge 0$,
\[
E\bigg[ \bigg( \int_0^\infty \Big( g^*\big( \hat{X}^i(t+s) \big) - \hat{G}^i(t+s) \Big)\, \gamma_i^{-1} e^{-s/\gamma_i}\, ds \bigg)^{\!2}\, \bigg] \le M_i. \tag{4.4.109}
\]
Proof of Lemma 4.4.9: Let us write
\[
R^i(t) := g^*\big( \hat{X}^i(t) \big) - \hat{G}^i(t)
= g^*\big( X^{N_i,k_i}(\beta_i t) \big) - \sigma_{k_i}\, \frac{1}{N_i^{k_i}} \sum_{\xi:\, \|\xi\| \le k_i} g\big( X^{N_i}_\xi(\beta_i t) \big). \tag{4.4.110}
\]
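Bounds of the form (4.4.109) use the same Jensen step that appeared in (4.4.108): since $\gamma^{-1} e^{-s/\gamma}\, ds$ is a probability measure on $[0,\infty)$, every bounded measurable $f$ satisfies $\big( \int_0^\infty f(s)\, \gamma^{-1} e^{-s/\gamma}\, ds \big)^2 \le \int_0^\infty f(s)^2\, \gamma^{-1} e^{-s/\gamma}\, ds$. A numerical spot-check with an arbitrary bounded test function (hypothetical, purely illustrative):

```python
import math

def exp_avg(f, gamma, n=100_000, s_max_mult=40.0):
    """Average f against the density s -> exp(-s/gamma)/gamma (midpoint rule)."""
    s_max = s_max_mult * gamma
    ds = s_max / n
    return sum(f((j + 0.5) * ds) * math.exp(-(j + 0.5) * ds / gamma) / gamma * ds
               for j in range(n))

f = lambda s: math.sin(3.0 * s) + 0.5      # arbitrary bounded test function
gamma = 0.8
lhs = exp_avg(f, gamma) ** 2               # square of the exponential average
rhs = exp_avg(lambda s: f(s) ** 2, gamma)  # exponential average of the square
print(lhs <= rhs)                          # Jensen's inequality: True
```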