Theorem 1.2.3. Suppose (1.4), (1.13)–(1.14) and that $\frac{\kappa_2}{2}\,G(0) < 1$. Then,
$$P\bigl[\,|\overline{\eta}^{\,a}_\infty|\,|\overline{\eta}^{\,b}_\infty|\,\bigr] \;=\; 1 + \frac{\kappa_2\,G(a-b)}{2-\kappa_2\,G(0)}, \qquad a, b \in \mathbb{Z}^d.$$
The proof of Theorem 1.2.3 will be presented in Section 3.2. We refer the reader to [11] for similar formulae for discrete-time models.
1.3 SDE description of the process
We now give an alternative description of the process in terms of a stochastic differential equation (SDE), which will be used in the proof of Lemma 2.1.1 below. We introduce random measures on $[0,\infty)\times[0,\infty)^{\mathbb{Z}^d}$ by
$$N^z(ds\,d\xi)=\sum_{i\ge 1}\mathbf{1}\bigl\{(T^{z,i},K^{z,i})\in ds\,d\xi\bigr\},\qquad N^z_t(ds\,d\xi)=\mathbf{1}\{s\le t\}\,N^z(ds\,d\xi).\tag{1.21}$$
Then, $N^z$, $z\in\mathbb{Z}^d$, are independent Poisson random measures on $[0,\infty)\times[0,\infty)^{\mathbb{Z}^d}$ with the intensity $ds\times P(K\in d\xi)$. The precise definition of the process $(\eta_t)_{t\ge 0}$ is then given by the following stochastic differential equation:
$$\eta_{t,x}=\eta_{0,x}+\sum_{z\in\mathbb{Z}^d}\int N^z_t(ds\,d\xi)\,\bigl(\xi_{x-z}-\delta_{x,z}\bigr)\,\eta_{s-,z}.\tag{1.22}$$
By (1.4), it is standard to see that (1.22) defines a unique process $\eta_t=(\eta_{t,x})_{x\in\mathbb{Z}^d}$, $t\ge 0$, and that $(\eta_t)_{t\ge 0}$ is Markovian.
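To make the mechanism of (1.21)–(1.22) concrete, here is a minimal simulation sketch in Python. It is not part of the paper's argument: the restriction to $d=1$, the finite box with truncation at its boundary, and the two-outcome toy kernel sample_K are all illustrative assumptions.

import random

L = 50          # number of sites kept in the finite truncation of Z (illustrative)
T_MAX = 2.0     # time horizon

def sample_K():
    # One draw of the random vector K = (K_x), returned as {offset: value}.
    # This two-outcome kernel is a toy choice, not the kernel of the paper.
    if random.random() < 0.5:
        return {-1: 1.0, 0: 0.0, 1: 1.0}   # mass at the site is passed to its two neighbours
    return {0: 2.0}                         # mass at the site is doubled

def simulate(eta0):
    # Run (1.22): every site z carries a rate-1 Poisson clock (the points T^{z,i});
    # when the clock at z rings, draw xi ~ K and replace eta_x by
    # eta_x + (xi_{x-z} - delta_{x,z}) * eta_z for every x.
    eta = list(eta0)
    t = 0.0
    while True:
        t += random.expovariate(L)      # superposition of L independent rate-1 clocks
        if t > T_MAX:
            return eta
        z = random.randrange(L)         # the clock that rang is uniform over the sites
        eta_z = eta[z]
        if eta_z == 0.0:
            continue                    # the update is trivial when eta_z = 0
        xi = sample_K()
        eta[z] = 0.0                    # the term -delta_{x,z} * eta_z removes the mass at z
        for offset, value in xi.items():
            x = z + offset
            if 0 <= x < L:              # mass leaving the box is discarded (truncation error)
                eta[x] += value * eta_z

eta0 = [0.0] * L
eta0[L // 2] = 1.0                      # one unit of mass at the centre of the box
print(simulate(eta0))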
Proof of (1.9): Since $|\overline{\eta}_t|$ is obviously non-negative, we will prove the martingale property. By (1.22), we have
$$|\eta_t|=|\eta_0|+\sum_{z\in\mathbb{Z}^d}\int N^z_t(ds\,d\xi)\,(|\xi|-1)\,\eta_{s-,z},$$
and hence
$$|\overline{\eta}_t|=|\eta_0|-\kappa_1\int_0^t|\overline{\eta}_s|\,ds+\sum_{z\in\mathbb{Z}^d}\int e^{-\kappa_1 s}\,N^z_t(ds\,d\xi)\,(|\xi|-1)\,\eta_{s-,z}.\tag{1}$$
We have on the other hand that
$$\kappa_1\int_0^t|\overline{\eta}_s|\,ds=\sum_{z\in\mathbb{Z}^d}\int_0^t ds\,e^{-\kappa_1 s}\int P(K\in d\xi)\,(|\xi|-1)\,\eta_{s,z}.$$
Plugging this into (1), we see that the right-hand side of (1) is the sum of $|\eta_0|$ and a compensated Poisson integral, and hence a martingale.
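Written out, plugging the second display into (1) expresses $|\overline{\eta}_t|$ as a single integral against the compensated measures $N^z(ds\,d\xi)-ds\,P(K\in d\xi)$:
$$|\overline{\eta}_t|=|\eta_0|+\sum_{z\in\mathbb{Z}^d}\int\mathbf{1}\{s\le t\}\,e^{-\kappa_1 s}\,(|\xi|-1)\,\eta_{s-,z}\,\bigl(N^z(ds\,d\xi)-ds\,P(K\in d\xi)\bigr),$$
and such compensated Poisson integrals are martingales in $t$ (given suitable integrability, which is where (1.4) enters).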
2 Lemmas
2.1 Markov chain representations for the point functions
We assume (1.4) throughout, but not (1.13)–(1.14) for the moment. To prove the Feynman-Kac formula for the two-point function, we introduce some notation.
For $x,\widetilde{x},y,\widetilde{y}\in\mathbb{Z}^d$,
$$\Gamma_{x,\widetilde{x},y,\widetilde{y}}\overset{\mathrm{def}}{=}P\bigl[(K_{x-y}-\delta_{x,y})\delta_{\widetilde{x},\widetilde{y}}+(K_{\widetilde{x}-\widetilde{y}}-\delta_{\widetilde{x},\widetilde{y}})\delta_{x,y}\bigr]+P\bigl[(K_{x-y}-\delta_{x,y})(K_{\widetilde{x}-y}-\delta_{\widetilde{x},y})\bigr]\delta_{y,\widetilde{y}},\tag{2.1}$$
$$V(x)\overset{\mathrm{def}}{=}\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,0,y,\widetilde{y}}=2\kappa_1+\sum_{y\in\mathbb{Z}^d}P\bigl[(K_y-\delta_{y,0})(K_{x+y}-\delta_{x+y,0})\bigr].\tag{2.2}$$
Note that
$$V(x-\widetilde{x})=\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,\widetilde{x},y,\widetilde{y}}.\tag{2.3}$$
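As a quick numerical illustration of (2.1)–(2.3), the following sketch evaluates both sides of (2.3) for a toy two-outcome kernel in $d=1$. The kernel, the box radius and the test points are illustrative choices only (in particular the kernel is not required to satisfy (1.13)–(1.14), which are not assumed here); the check uses $\kappa_1=\sum_x P[K_x]-1$, consistent with the computation in the proof of (1.9).

import itertools

# Toy kernel K in d = 1: two equally likely outcomes with support in {-1, 0, 1}.
outcomes = [{-1: 1.0, 0: 0.0, 1: 1.0}, {0: 2.0}]
probs = [0.5, 0.5]
R = 1                     # range of the toy kernel

def m1(a):                # P[K_a]
    return sum(p * xi.get(a, 0.0) for p, xi in zip(probs, outcomes))

def m2(a, b):             # P[K_a K_b]
    return sum(p * xi.get(a, 0.0) * xi.get(b, 0.0) for p, xi in zip(probs, outcomes))

def delta(a, b):
    return 1.0 if a == b else 0.0

def Gamma(x, xt, y, yt):  # definition (2.1), expanded in first and second moments of K
    first = (m1(x - y) - delta(x, y)) * delta(xt, yt) + (m1(xt - yt) - delta(xt, yt)) * delta(x, y)
    second = (m2(x - y, xt - y) - m1(x - y) * delta(xt, y) - m1(xt - y) * delta(x, y)
              + delta(x, y) * delta(xt, y)) * delta(y, yt)
    return first + second

kappa1 = sum(m1(a) for a in range(-R, R + 1)) - 1.0       # kappa_1 = P[|K|] - 1

def c(x):                 # the sum appearing on the right-hand side of (2.2)
    return sum(m2(y, x + y) - m1(y) * delta(x + y, 0) - m1(x + y) * delta(y, 0)
               + delta(y, 0) * delta(x + y, 0) for y in range(-3 * R - 3, 3 * R + 4))

def V_from_Gamma(x, xt):  # the sum over (y, yt) in (2.3); the box below contains all nonzero terms
    lo, hi = min(x, xt) - 2 * R - 2, max(x, xt) + 2 * R + 2
    return sum(Gamma(x, xt, y, yt) for y, yt in itertools.product(range(lo, hi + 1), repeat=2))

for x, xt in [(0, 0), (1, 0), (2, -1)]:
    print(x, xt, V_from_Gamma(x, xt), 2 * kappa1 + c(x - xt))   # the last two columns agree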
Remark: The matrix $\Gamma$ introduced above also appears in [5, page 442, Theorem 3.1], since it is a fundamental tool for dealing with the two-point function of the linear system. However, the way we use the matrix differs from the existing literature.
We now prove the Feynman-Kac formula for the two-point function, which is the basis of the proof of Theorem 1.2.1:
Lemma 2.1.1. Let $(X,\widetilde{X})=\bigl((X_t,\widetilde{X}_t)_{t\ge 0},P^{X,\widetilde{X}}_{x,\widetilde{x}}\bigr)$ be the continuous-time Markov chain on $\mathbb{Z}^d\times\mathbb{Z}^d$ starting from $(x,\widetilde{x})$, with the generator
$$L^{X,\widetilde{X}}f(x,\widetilde{x})=\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,\widetilde{x},y,\widetilde{y}}\bigl(f(y,\widetilde{y})-f(x,\widetilde{x})\bigr),$$
where $\Gamma_{x,\widetilde{x},y,\widetilde{y}}$ is defined by (2.1). Then, for $(t,x,\widetilde{x})\in[0,\infty)\times\mathbb{Z}^d\times\mathbb{Z}^d$,
$$P[\eta_{t,x}\,\eta_{t,\widetilde{x}}]=P^{X,\widetilde{X}}_{x,\widetilde{x}}\Bigl[\exp\Bigl(\int_0^t V(X_s-\widetilde{X}_s)\,ds\Bigr)\,\eta_{0,X_t}\,\eta_{0,\widetilde{X}_t}\Bigr],\tag{2.4}$$
where $V$ is defined by (2.2).
Proof: We first show that $u(t,x,\widetilde{x})\overset{\mathrm{def}}{=}P[\eta_{t,x}\eta_{t,\widetilde{x}}]$ solves the integral equation
$$u(t,x,\widetilde{x})-u(0,x,\widetilde{x})=\int_0^t\bigl(L^{X,\widetilde{X}}+V(x-\widetilde{x})\bigr)u(s,x,\widetilde{x})\,ds.\tag{1}$$
By (1.22), we have
$$\eta_{t,x}\eta_{t,\widetilde{x}}-\eta_{0,x}\eta_{0,\widetilde{x}}=\sum_{y\in\mathbb{Z}^d}\int N^y_t(ds\,d\xi)\,F_{x,\widetilde{x},y}(s-,\xi,\eta),$$
where
$$F_{x,\widetilde{x},y}(s,\xi,\eta)=(\xi_{x-y}-\delta_{x,y})\,\eta_{s,\widetilde{x}}\,\eta_{s,y}+(\xi_{\widetilde{x}-y}-\delta_{\widetilde{x},y})\,\eta_{s,x}\,\eta_{s,y}+(\xi_{x-y}-\delta_{x,y})(\xi_{\widetilde{x}-y}-\delta_{\widetilde{x},y})\,\eta^2_{s,y}.$$
Therefore,
$$u(t,x,\widetilde{x})-u(0,x,\widetilde{x})=\sum_{y\in\mathbb{Z}^d}\int_0^t ds\int P[F_{x,\widetilde{x},y}(s,\xi,\eta)]\,P(K\in d\xi)=\int_0^t\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,\widetilde{x},y,\widetilde{y}}\,u(s,y,\widetilde{y})\,ds$$
$$\overset{(2.3)}{=}\int_0^t\Bigl(\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,\widetilde{x},y,\widetilde{y}}\bigl(u(s,y,\widetilde{y})-u(s,x,\widetilde{x})\bigr)+V(x-\widetilde{x})\,u(s,x,\widetilde{x})\Bigr)ds=\int_0^t\bigl(L^{X,\widetilde{X}}+V(x-\widetilde{x})\bigr)u(s,x,\widetilde{x})\,ds.$$
We next show that
$$\sup_{t\in[0,T]}\sup_{x,\widetilde{x}\in\mathbb{Z}^d}|u(t,x,\widetilde{x})|<\infty\quad\text{for any }T\in(0,\infty).\tag{2}$$
We have by (1.4) and (1.22) that, for any $p\in\mathbb{N}^*$, there exists $C_1\in(0,\infty)$ such that
$$P[\eta^p_{t,x}]\le C_1\Bigl(\eta^p_{0,x}+\sum_{y:\,|x-y|\le r_K}\int_0^t P[\eta^p_{s,y}]\,ds\Bigr),\qquad t\ge 0.$$
By iteration, we see that there exists $C_2\in(0,\infty)$ such that
$$P[\eta^p_{t,x}]\le e^{C_2 t}\sum_{y\in\mathbb{Z}^d}e^{-|x-y|}\bigl(1+\eta^p_{0,y}\bigr),\qquad t\ge 0,$$
which, via the Schwarz inequality, implies (2).
The solution to (1) subject to (2) is unique, for each given $\eta_0$. This can be seen by using Gronwall's inequality with respect to the norm $\|u\|=\sum_{x,\widetilde{x}\in\mathbb{Z}^d}e^{-|x|-|\widetilde{x}|}\,|u(x,\widetilde{x})|$. Moreover, the RHS of (2.4) is a solution to (1) subject to the bound (2). This can be seen by adapting the argument in [8, page 5, Theorem 1.1]. Therefore, we get (2.4).
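In the first step of the computation of $u(t,x,\widetilde{x})-u(0,x,\widetilde{x})$ above, the integral over the mark $\xi$ is evaluated using the independence of $\xi$ and $\eta_{s-}$ (which is why $F$ appears at $s-$ in the stochastic integral) together with the symmetry $u(s,y,\widetilde{y})=u(s,\widetilde{y},y)$; explicitly,
$$\int P(K\in d\xi)\,P[F_{x,\widetilde{x},y}(s,\xi,\eta)]=P[K_{x-y}-\delta_{x,y}]\,u(s,y,\widetilde{x})+P[K_{\widetilde{x}-y}-\delta_{\widetilde{x},y}]\,u(s,x,y)+P\bigl[(K_{x-y}-\delta_{x,y})(K_{\widetilde{x}-y}-\delta_{\widetilde{x},y})\bigr]\,u(s,y,y),$$
and summing this over $y\in\mathbb{Z}^d$ and comparing with (2.1) gives $\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,\widetilde{x},y,\widetilde{y}}\,u(s,y,\widetilde{y})$.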
Remark: The following Feynman-Kac formula for the one-point function can be obtained in the same way as Lemma 2.1.1:
$$P[\eta_{t,x}]=e^{\kappa_1 t}\,P^X_x[\eta_{0,X_t}],\qquad(t,x)\in[0,\infty)\times\mathbb{Z}^d,\tag{2.5}$$
where $\kappa_1$ is defined by (1.7) and $\bigl((X_t)_{t\ge 0},P^X_x\bigr)$ is the continuous-time random walk on $\mathbb{Z}^d$ starting from $x$, with the generator
$$L^X f(x)=\sum_{y\in\mathbb{Z}^d}P[K_{x-y}]\bigl(f(y)-f(x)\bigr).$$
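The following Monte Carlo sketch illustrates (2.5) in $d=1$, for the same toy two-outcome kernel as in the sketch of Section 1.3; the finite box, the time horizon and the sample sizes are again illustrative assumptions, so the two printed estimates agree only up to statistical and truncation error.

import math
import random

L, T, RUNS = 31, 1.0, 20000
CENTER = L // 2

def sample_K():
    # the same toy kernel as before (an illustrative choice, not the paper's kernel)
    if random.random() < 0.5:
        return {-1: 1.0, 0: 0.0, 1: 1.0}
    return {0: 2.0}

m1 = {-1: 0.5, 0: 1.0, 1: 0.5}          # first moments P[K_a] of the toy kernel
kappa1 = sum(m1.values()) - 1.0         # kappa_1 = P[|K|] - 1

def eta_at_center():
    # One run of (1.22) started from eta_0 = delta_CENTER; returns eta_{T,CENTER}.
    eta = [0.0] * L
    eta[CENTER] = 1.0
    t = 0.0
    while True:
        t += random.expovariate(L)
        if t > T:
            return eta[CENTER]
        z = random.randrange(L)
        eta_z = eta[z]
        if eta_z == 0.0:
            continue
        xi = sample_K()
        eta[z] = 0.0
        for offset, value in xi.items():
            x = z + offset
            if 0 <= x < L:
                eta[x] += value * eta_z

def walk_indicator():
    # One run of the walk X in (2.5) started from CENTER: jump rate P[K_{x-y}] from x to y,
    # i.e. to x-1 at rate m1[1] and to x+1 at rate m1[-1]. Returns eta_{0,X_T} = 1{X_T = CENTER}.
    x, t = CENTER, 0.0
    total_rate = m1[-1] + m1[1]
    while True:
        t += random.expovariate(total_rate)
        if t > T:
            return 1.0 if x == CENTER else 0.0
        x += -1 if random.random() < m1[1] / total_rate else 1

lhs = sum(eta_at_center() for _ in range(RUNS)) / RUNS
rhs = math.exp(kappa1 * T) * sum(walk_indicator() for _ in range(RUNS)) / RUNS
print(lhs, rhs)   # the two estimates of (2.5) should be close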
Lemma 2.1.2. We have
$$\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,\widetilde{x},y,\widetilde{y}}=\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{y,\widetilde{y},x,\widetilde{x}}\tag{2.6}$$
if and only if (1.14) holds. In addition, (1.14) implies that
$$V(x)=2\kappa_1+\kappa_2\,\delta_{x,0}.\tag{2.7}$$
Proof: We let $c(x)=\sum_{y\in\mathbb{Z}^d}P[(K_y-\delta_{y,0})(K_{x+y}-\delta_{x+y,0})]$. Then, $c(0)=\kappa_2$ and
$$\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{x,\widetilde{x},y,\widetilde{y}}=2\kappa_1+c(x-\widetilde{x})\quad\text{(cf. (2.2)--(2.3))},\qquad\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{y,\widetilde{y},x,\widetilde{x}}=2\kappa_1+\delta_{x,\widetilde{x}}\sum_{y\in\mathbb{Z}^d}c(y).$$
These imply the desired equivalence and (2.7).
We assume (1.14) from here on. Then, by (2.6), $(X,\widetilde{X})$ is stationary with respect to the counting measure on $\mathbb{Z}^d\times\mathbb{Z}^d$. We denote the dual process of $(X,\widetilde{X})$ by $(Y,\widetilde{Y})=\bigl((Y_t,\widetilde{Y}_t)_{t\ge 0},P^{Y,\widetilde{Y}}_{x,\widetilde{x}}\bigr)$, that is, the continuous-time Markov chain on $\mathbb{Z}^d\times\mathbb{Z}^d$ starting from $(x,\widetilde{x})$, with the generator
$$L^{Y,\widetilde{Y}}f(x,\widetilde{x})=\sum_{y,\widetilde{y}\in\mathbb{Z}^d}\Gamma_{y,\widetilde{y},x,\widetilde{x}}\bigl(f(y,\widetilde{y})-f(x,\widetilde{x})\bigr).\tag{2.8}$$
Thanks to (2.6), $L^{X,\widetilde{X}}$ and $L^{Y,\widetilde{Y}}$ are dual operators on $\ell^2(\mathbb{Z}^d\times\mathbb{Z}^d)$.
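Spelled out, the duality means that for, say, finitely supported $f,g:\mathbb{Z}^d\times\mathbb{Z}^d\to\mathbb{R}$,
$$\sum_{x,\widetilde{x}\in\mathbb{Z}^d}\bigl(L^{X,\widetilde{X}}f\bigr)(x,\widetilde{x})\,g(x,\widetilde{x})=\sum_{x,\widetilde{x}\in\mathbb{Z}^d}f(x,\widetilde{x})\,\bigl(L^{Y,\widetilde{Y}}g\bigr)(x,\widetilde{x}),$$
which follows by interchanging the roles of $(x,\widetilde{x})$ and $(y,\widetilde{y})$ in the off-diagonal terms and by applying (2.6) to the diagonal ones.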
Remark: If we additionally suppose that $P[K^p_x]=P[K^p_{-x}]$ for $p=1,2$ and $x\in\mathbb{Z}^d$, then $\Gamma_{x,\widetilde{x},y,\widetilde{y}}=\Gamma_{y,\widetilde{y},x,\widetilde{x}}$ for all $x,\widetilde{x},y,\widetilde{y}\in\mathbb{Z}^d$. Thus, $(X,\widetilde{X})$ and $(Y,\widetilde{Y})$ are the same in this case.
The relative motion $Y_t-\widetilde{Y}_t$ of the components of $(Y,\widetilde{Y})$ is nicely identified by:
Lemma 2.1.3. $\bigl((Y_t-\widetilde{Y}_t)_{t\ge 0},P^{Y,\widetilde{Y}}_{x,\widetilde{x}}\bigr)$ and $\bigl((S_{2t})_{t\ge 0},P^S_{x-\widetilde{x}}\bigr)$ (cf. (1.12)) have the same law.
Proof: Since $(Y,\widetilde{Y})$ is shift invariant, in the sense that $\Gamma_{x+v,\widetilde{x}+v,y+v,\widetilde{y}+v}=\Gamma_{x,\widetilde{x},y,\widetilde{y}}$ for all $v\in\mathbb{Z}^d$, $\bigl((Y_t-\widetilde{Y}_t)_{t\ge 0},P^{Y,\widetilde{Y}}_{x,\widetilde{x}}\bigr)$ is a Markov chain. Moreover, its jump rate is computed as follows. For $x\ne y$,
$$\sum_{z\in\mathbb{Z}^d}\Gamma_{y+z,z,x,0}=P[K_{x-y}]+P[K_{y-x}]+\delta_{x,0}\sum_{z\in\mathbb{Z}^d}P\bigl[(K_{y+z}-\delta_{y+z,0})(K_z-\delta_{z,0})\bigr]\overset{(1.14)}{=}P[K_{x-y}]+P[K_{y-x}],$$
which coincides with the jump rate of $(S_{2t})_{t\ge 0}$, cf. (1.12). This proves the lemma.
Lemma 2.1.1 is used in the proof of Theorem 1.2.1 not directly, but via the following lemma. It is in the proof of this lemma that the duality of $(X,\widetilde{X})$ and $(Y,\widetilde{Y})$ plays its role.
Lemma 2.1.4. For a bounded $g:\mathbb{Z}^d\times\mathbb{Z}^d\to\mathbb{R}$,
$$\sum_{x,\widetilde{x}\in\mathbb{Z}^d}P[\overline{\eta}_{t,x}\,\overline{\eta}_{t,\widetilde{x}}]\,g(x,\widetilde{x})=\sum_{x,\widetilde{x}\in\mathbb{Z}^d}\eta_{0,x}\,\eta_{0,\widetilde{x}}\,P^{Y,\widetilde{Y}}_{x,\widetilde{x}}\Bigl[\exp\Bigl(\kappa_2\int_0^t\delta(Y_s-\widetilde{Y}_s)\,ds\Bigr)\,g(Y_t,\widetilde{Y}_t)\Bigr].\tag{2.9}$$
In particular, for a bounded $f:\mathbb{Z}^d\to\mathbb{R}$,
$$\sum_{x,\widetilde{x}\in\mathbb{Z}^d}P[\overline{\eta}_{t,x}\,\overline{\eta}_{t,\widetilde{x}}]\,f(x-\widetilde{x})=\sum_{x,\widetilde{x}\in\mathbb{Z}^d}\eta_{0,x}\,\eta_{0,\widetilde{x}}\,P^S_{x-\widetilde{x}}\Bigl[\exp\Bigl(\frac{\kappa_2}{2}\int_0^{2t}\delta(S_u)\,du\Bigr)\,f(S_{2t})\Bigr].\tag{2.10}$$
Proof: It follows from Lemma 2.1.1 and (2.7) that
$$\text{LHS of (2.9)}=\sum_{x,\widetilde{x}\in\mathbb{Z}^d}P^{X,\widetilde{X}}_{x,\widetilde{x}}\Bigl[\exp\Bigl(\kappa_2\int_0^t\delta(X_s-\widetilde{X}_s)\,ds\Bigr)\,\eta_{0,X_t}\,\eta_{0,\widetilde{X}_t}\Bigr]\,g(x,\widetilde{x}).\tag{1}$$
We now observe that the operators
$$f(x,\widetilde{x})\mapsto P^{X,\widetilde{X}}_{x,\widetilde{x}}\Bigl[\exp\Bigl(\kappa_2\int_0^t\delta(X_s-\widetilde{X}_s)\,ds\Bigr)\,f(X_t,\widetilde{X}_t)\Bigr],\qquad f(x,\widetilde{x})\mapsto P^{Y,\widetilde{Y}}_{x,\widetilde{x}}\Bigl[\exp\Bigl(\kappa_2\int_0^t\delta(Y_s-\widetilde{Y}_s)\,ds\Bigr)\,f(Y_t,\widetilde{Y}_t)\Bigr]$$
are dual to each other with respect to the counting measure on $\mathbb{Z}^d\times\mathbb{Z}^d$. Therefore, RHS of (1) = RHS of (2.9).
Taking $g(x,\widetilde{x})=f(x-\widetilde{x})$ in particular, we have by (2.9) and Lemma 2.1.3 that
$$\text{LHS of (2.10)}=\sum_{x,\widetilde{x}\in\mathbb{Z}^d}\eta_{0,x}\,\eta_{0,\widetilde{x}}\,P^{Y,\widetilde{Y}}_{x,\widetilde{x}}\Bigl[\exp\Bigl(\kappa_2\int_0^t\delta(Y_s-\widetilde{Y}_s)\,ds\Bigr)\,f(Y_t-\widetilde{Y}_t)\Bigr]=\sum_{x,\widetilde{x}\in\mathbb{Z}^d}\eta_{0,x}\,\eta_{0,\widetilde{x}}\,P^S_{x-\widetilde{x}}\Bigl[\exp\Bigl(\kappa_2\int_0^t\delta(S_{2u})\,du\Bigr)\,f(S_{2t})\Bigr]=\text{RHS of (2.10)}.$$
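The last equality is just the change of variables $v=2u$ in the exponent:
$$\kappa_2\int_0^t\delta(S_{2u})\,du=\frac{\kappa_2}{2}\int_0^{2t}\delta(S_v)\,dv.$$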
Remark: In the case of BCPP, D. Griffeath obtained a Feynman-Kac formula for $\sum_{y\in\mathbb{Z}^d}P[\eta_{t,x}\,\eta_{t,\widetilde{x}+y}]$ [4, proof of Theorem 1]. However, this does not seem to be enough for our purpose. Note that the Feynman-Kac formulae in the present paper (Lemma 2.1.1 and Lemma 2.1.4) are stronger, since they give an expression for each summand of the above summation.
2.2 Central limit theorems for Markov chains