where $R$ is a constant such that $|x - y| \le R$ for all $x, y \in K$, and in the last step we use that $N \ge 2$ and $c < 1$.
Proof of formula (4.1.29): Since the terms with $n \ge 1$ in (4.2.2) tend to zero uniformly as $i \to \infty$, it suffices to show that
\[
\lim_{i\to\infty} E\int_0^T \Big|\, \sigma_{k_i} c_{k_i} \big[ X^{N_i,k_i+1}(\beta_i t) - \hat X^i(t) \big] - c^* \big[ \theta - \hat X^i(t) \big] \,\Big|^2\, dt = 0. \tag{4.3.6}
\]
Since $\sigma_k c_k \to c^*$ as $k \to \infty$ (recall (4.1.15) and (4.1.23)), it thus suffices to show that
\[
\lim_{i\to\infty} E\int_0^T \big| X^{N_i,k_i+1}(\beta_i t) - \theta \big|^2\, dt = 0. \tag{4.3.7}
\]
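Indeed, the integrand of (4.3.6) can be rewritten (a one-line rearrangement, added here for clarity) as
\[
\sigma_{k_i} c_{k_i} \big[ X^{N_i,k_i+1}(\beta_i t) - \hat X^i(t) \big] - c^* \big[ \theta - \hat X^i(t) \big]
= \big( \sigma_{k_i} c_{k_i} - c^* \big) \big[ X^{N_i,k_i+1}(\beta_i t) - \hat X^i(t) \big] + c^* \big[ X^{N_i,k_i+1}(\beta_i t) - \theta \big],
\]
and since both processes take values in the compact set $K$, the first term on the right-hand side tends to zero uniformly as $\sigma_{k_i} c_{k_i} \to c^*$.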
We use Lemma 4.3.1 to estimate (recall that $X^{N_i,k_i+1}(0) = \theta$):
\[
E\int_0^T \big| X^{N_i,k_i+1}(\beta_i t) - \theta \big|^2\, dt
\;\le\; T \sup_{0 \le s \le \beta_i T} E\big| X^{N_i,k_i+1}(s) - \theta \big|^2
\;\le\; \frac{M \beta_i T^2}{N_i^{k_i+1}}. \tag{4.3.8}
\]
Since $T$ is fixed, the right-hand side tends to zero provided that
\[
\lim_{i\to\infty} \frac{\beta_i}{N_i^{k_i+1}} = 0. \tag{4.3.9}
\]
Inserting $\beta_i = \sigma_{k_i} N_i^{k_i}$ and $\sigma_{k_i} \sim c^{-k_i}\, c/(1-c)$, we find that this condition amounts to
\[
\lim_{i\to\infty} c^{k_i} N_i = \infty. \tag{4.3.10}
\]
But the latter holds for any $c \in (0, 1)$ because of condition (4.1.27).
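For completeness, the computation behind (4.3.10) is the one-liner
\[
\frac{\beta_i}{N_i^{k_i+1}} = \frac{\sigma_{k_i}}{N_i} \sim \frac{c}{1-c} \cdot \frac{1}{c^{k_i} N_i},
\]
which tends to zero precisely when $c^{k_i} N_i \to \infty$.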
4.4 Convergence of the diffusion rate
4.4.1 Strategy of the proof
In this section the essential ideas behind Theorem 4.1.3 come into play. In particular, we will need to explain how the universal large space-time diffusion function $g^*$ arises and why the scaling of time with the factor $\sigma_{k_i} N_i^{k_i}$ is the correct one. Before we embark on the calculations that will give us the convergence in (4.1.30), we outline the heuristics of the proof.

STEP 1: We fix a $t \ge 0$ and look at the process
\[
\Big( X^{N_i}_\xi(\beta_i t + s) \Big)_{\|\xi\| \le k_i - 1,\; s \in [0, T_i]}, \tag{4.4.1}
\]
with
\[
\sigma_{k_i-1} N_i^{k_i-1} \ll T_i \ll N_i^{k_i}. \tag{4.4.2}
\]
Thus, we consider the evolution of a $(k_i-1)$-block on a time scale that is long with respect to $\sigma_{k_i-1} N_i^{k_i-1}$ (the presumed time scale of the $(k_i-1)$-block average), but short with respect to $N_i^{k_i}$. Note that condition (4.4.2) can be met because of condition (4.1.27).
The assumption that $T_i \ll N_i^{k_i}$ allows us to simplify the stochastic differential equations in (4.1.8). First, we can neglect the terms in the summation with $k \ge k_i + 1$, because they are of order $N_i^{-k_i}$ and will not be felt on times $T_i \ll N_i^{k_i}$. Second, according to Lemma 4.3.1, the block average $X^{N_i,k_i}$ can be considered as essentially fixed over times $T_i \ll N_i^{k_i}$, and hence we expect that the time evolution of the system in (4.4.1) can be approximated by the equations
\[
\begin{aligned}
dX^{N_i}_\xi(\beta_i t + s) ={} & c^{N_i}_{k_i-1} \big[ X^{N_i,k_i}_\xi(\beta_i t) - X^{N_i}_\xi(\beta_i t + s) \big]\, ds \\
& + \sum_{k=1}^{k_i-1} c^{N_i}_{k-1} \big[ X^{N_i,k}_\xi(\beta_i t + s) - X^{N_i}_\xi(\beta_i t + s) \big]\, ds \\
& + \sqrt{2\, g\big( X^{N_i}_\xi(\beta_i t + s) \big)}\, dB^i_\xi(\beta_i t + s)
\qquad (s \ge 0,\ \|\xi\| \le k_i - 1). 
\end{aligned} \tag{4.4.3}
\]
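To make the structure of (4.4.3) concrete, the following is a minimal Euler-Maruyama sketch (ours, purely for illustration), assuming $d = 1$, the Fisher-Wright diffusion function $g(x) = x(1-x)$ on $K = [0,1]$, migration coefficients $c^N_k = (c/N)^k$, and toy values of $N_i$ and $k_i$; none of these choices is prescribed by the text. The $(k_i-1)$-block is stored as an array with one axis per hierarchical coordinate, so that $k$-block averages are means over the first $k$ axes.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

N, K = 3, 2           # toy branching factor N_i and block depth K = k_i - 1
c = 0.5               # interaction parameter c in (0, 1)
theta_hat = 0.3       # frozen k_i-block average, cf. (4.4.4)
dt, steps = 1e-3, 20000

def g(x):             # assumed (Fisher-Wright) diffusion function
    return x * (1.0 - x)

# Sites of the (k_i - 1)-block: shape (N,) * K, one axis per coordinate.
X = np.full((N,) * K, theta_hat)

def block_average(X, k):
    # k-block average around each site: mean over the first k coordinates.
    return X.mean(axis=tuple(range(k)), keepdims=True)

for _ in range(steps):
    drift = (c / N) ** K * (theta_hat - X)       # frozen k_i-block term
    for k in range(1, K + 1):                    # inner block interactions
        drift += (c / N) ** (k - 1) * (block_average(X, k) - X)
    noise = np.sqrt(np.clip(2.0 * g(X), 0.0, None) * dt)
    X = X + drift * dt + noise * rng.standard_normal(X.shape)
    X = np.clip(X, 0.0, 1.0)                     # crude fix: keep X in [0, 1]

# In 'local equilibrium' the block variance should be comparable with
# the prediction (4.4.11): Var ~ d * mu_i * E[g(X_0)] (here d = 1).
print("mean:", X.mean(), "var:", X.var(), "E[g]:", g(X).mean())
\end{verbatim}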
Next comes the essential point in the argument. We expect that the condition $\sigma_{k_i-1} N_i^{k_i-1} \ll T_i$ is sufficient to guarantee that solutions of (4.4.3) reach equilibrium on the time scale $T_i$, conditional on the $k_i$-block average $X^{N_i,k_i}_\xi(\beta_i t)$. The system in (4.1.5) as a whole does not have a true equilibrium distribution; instead, it was shown in Swart [41] that the distribution of the system tends to a mixture of trivial extremal measures as $t \to \infty$. However, as was recognized by Dawson and Greven in [10], the system in (4.1.5) goes through a series of `local equilibria' as time tends to infinity, where $k$-blocks of ever larger size reach a temporary and approximate `local' equilibrium at times of the appropriate order of magnitude. It is from the properties of these local equilibria that our result will follow.
STEP 2: Let us condition the system on
\[
X^{N_i,k_i}(\beta_i t) = \hat\theta, \tag{4.4.4}
\]
and assume that the system in (4.4.1), conditioned on (4.4.4), is in equilibrium. For $\|\xi\| \le k_i - 1$ and $\|\eta\| \le k_i$, we define the covariance function
\[
C_s(\xi - \eta) := \mathrm{Cov}\big( X^{N_i}_\xi(\beta_i t + s),\, X^{N_i}_\eta(\beta_i t + s) \big), \tag{4.4.5}
\]
where the covariance of two $K$-valued random variables $X$ and $Y$ is defined as
\[
\mathrm{Cov}(X, Y) := E[X \cdot Y] - E[X] \cdot E[Y], \tag{4.4.6}
\]
with $x \cdot y := \sum_\alpha x_\alpha y_\alpha$ the usual inner product on $\mathbb{R}^d$. A covariance calculation as in Swart [41] gives that for $\|\xi\| \le k_i$
\[
\frac{\partial}{\partial s} C_s(\xi) = \sum_\eta a^{N_i}_{k_i-1}(\eta - \xi)\, \big[ C_s(\eta) - C_s(\xi) \big]
+ 2d\, \delta_{0,\xi}\, E\big[ g\big( X^{N_i}_0(\beta_i t + s) \big) \big]
- 2\, c^{N_i}_{k_i}\, C_s(\xi), \tag{4.4.7}
\]
where $a^N_k$ is the $k$-block interaction kernel
\[
a^N_k(\xi) := \sum_{l=\|\xi\|}^{k} \frac{1}{N^l}\, c^N_{l-1}. \tag{4.4.8}
\]
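Note that, given the form (4.4.8), $a^N_k(\xi)$ depends on $\xi$ only through the hierarchical norm $\|\xi\|$, so the kernel is constant on hierarchical shells: two sites at distance $\|\xi\|$ interact only through the blocks of size $l \ge \|\xi\|$ containing them both, and for $\|\xi\| = k$ the sum reduces to the single term $N^{-k} c^N_{k-1}$.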
Using our assumption about local equilibrium, we set $\frac{\partial}{\partial s} C_s(\xi) = 0$ in (4.4.7), and we assume that $E[g(X^{N_i}_\xi(\beta_i t + s))]$ does not depend on $s$. Now we can solve $C_s(\xi)$ in terms of $E[g(X^{N_i}_\xi(\beta_i t + s))]$ and a random walk on
\[
\Omega_i := \big\{ \xi \in \Omega_{N_i} : \|\xi\| \le k_i - 1 \big\} \tag{4.4.9}
\]
that jumps from site $\xi$ to site $\eta$ with rate $a^{N_i}_{k_i-1}(\eta - \xi)$ and that is killed in each site with rate $c^{N_i}_{k_i}$. Indeed, denoting by $P^i_t(\eta - \xi)$ the probability that this random walk moves from site $\xi$ to site $\eta$ in time $t$, we have the representation
\[
C_s(\xi) = d\, E\big[ g\big( X^{N_i}_0(\beta_i t + s) \big) \big] \int_0^\infty P^i_t(\xi)\, dt. \tag{4.4.10}
\]
Note that with probability one the random walk is eventually killed, so that the integral on the right-hand side is finite. Picking $\xi = 0$, we get
\[
\mathrm{Var}\big( X^{N_i}_0(\beta_i t + s) \big) = d\, \mu_i\, E\big[ g\big( X^{N_i}_0(\beta_i t + s) \big) \big] \tag{4.4.11}
\]
with
\[
\mu_i := \int_0^\infty P^i_t(0)\, dt \tag{4.4.12}
\]
the expected time the random walk started at $0$ spends at the origin.
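As a quick sanity check of (4.4.11) (our illustration, not part of the argument): in the degenerate case where the random walk cannot jump, it simply sits at the origin until it is killed at rate $c := c^{N_i}_{k_i}$, so
\[
P^i_t(0) = e^{-ct}, \qquad \mu_i = \int_0^\infty e^{-ct}\, dt = \frac{1}{c},
\]
and (4.4.11) reduces to $\mathrm{Var} = d\, E[g]/c$. This is the familiar stationary variance of an Ornstein-Uhlenbeck-type diffusion $dX = c[\hat\theta - X]\, ds + \sqrt{2g}\, dB$ in $d$ coordinates with the diffusion rate frozen at $E[g]$: per coordinate, $0 = \frac{\partial}{\partial s}\mathrm{Var} = -2c\,\mathrm{Var} + 2\,E[g]$ gives $\mathrm{Var} = E[g]/c$.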
STEP 3: It turns out that we can also express the expectation of any harmonic function of $X^{N_i}_\xi(\beta_i t + s)$ in terms of the above random walk. Indeed, we have the representation (see Swart [41]; Lemma 3.1.6 in this dissertation)
\[
E\big[ f\big( X^{N_i}_0(\beta_i t + s) \big) \big] = E\Big[ f\Big( \hat\theta + \sum_\xi P^i_s(\xi)\, \big[ X^{N_i}_\xi(\beta_i t) - \hat\theta \big] \Big) \Big]. \tag{4.4.13}
\]
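As a simple instance (our illustration): for the linear, hence harmonic, coordinate functions $f(x) = x_\alpha$, the representation (4.4.13) reads
\[
E\big[ X^{N_i}_0(\beta_i t + s) \big] = \hat\theta + \sum_\xi P^i_s(\xi)\, \big[ E\, X^{N_i}_\xi(\beta_i t) - \hat\theta \big],
\]
so that, as $s$ grows and the killed random walk dies out ($\sum_\xi P^i_s(\xi) \to 0$), the conditional mean relaxes to the frozen block average $\hat\theta$.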