so that (1.28) becomes:
\[
L_S h(x)+\tfrac12\,\delta_{0,x}\sum_{y,z}h(y-z)\,\beta_{y,z}=0,\qquad x\in\mathbb{Z}^d,
\]
cf. (1.21). Under the assumption (1.26), a choice of such function $h$ is given by $h=1+cG$, where
\[
c=\frac{E\bigl[(|K|-1)^2\bigr]}{1-\tfrac12\sum_{x,y\in\mathbb{Z}^d}G(x-y)\,\beta_{x,y}}.
\]
3) Let $\pi_d$ be the return probability for the simple random walk on $\mathbb{Z}^d$. Also, let $\langle\cdot,\cdot\rangle$ and $*$ be the inner product of $\ell^2(\mathbb{Z}^d)$ and the discrete convolution, respectively. We then have that
\[
(1.22)\iff
\lambda<\frac{1}{2d(1-2\pi_d)}\ \text{for BCPP},
\qquad
E[W^2]>\frac{(2|k|-1)\,G(0)}{\langle G*k,\,k\rangle}\ \text{for the potlatch process}.
\tag{1.29}
\]
For BCPP, (1.29) can be seen from the facts that (cf. [NY09a, p. 965])
\[
\beta_{x,y}=\frac{1\{x=0\}+\lambda 1\{|x|=1\}}{2d\lambda+1}\,\delta_{x,y},
\qquad\text{and}\qquad
G(0)=\frac{2d\lambda+1}{2d\lambda}\cdot\frac{1}{1-\pi_d}.
\]
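As a cross-check on the BCPP case of (1.29), here is the short computation behind it. It assumes, as the potlatch computation below indicates, that (1.22) is the comparison $\sum_{x,y\in\mathbb{Z}^d}G(x-y)\beta_{x,y}>2$ (the display (1.22) itself is not reproduced in this section), and it uses $1-2\pi_d>0$, i.e. $d\ge3$.
% Assumption: (1.22) reads \sum_{x,y} G(x-y)\beta_{x,y} > 2; the direction is inferred, not quoted.
\begin{align*}
\sum_{x,y\in\mathbb{Z}^d}G(x-y)\,\beta_{x,y}
 &=\sum_{x\in\mathbb{Z}^d}G(0)\,\frac{1\{x=0\}+\lambda 1\{|x|=1\}}{2d\lambda+1}
  =G(0)\,\frac{1+2d\lambda}{2d\lambda+1}=G(0),\\
G(0)>2
 &\iff\frac{2d\lambda+1}{2d\lambda(1-\pi_d)}>2
  \iff 1>2d\lambda(1-2\pi_d)
  \iff\lambda<\frac{1}{2d(1-2\pi_d)}.
\end{align*}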
To see (1.29) for the potlatch process, we note that
\[
\tfrac12\,(k+\check{k})*G=|k|\,G-\delta,\qquad\text{with }\check{k}_x=k_{-x},
\]
and that
\[
\beta_{x,y}=E[W^2]\,k_xk_y-k_x\delta_{y,0}-k_y\delta_{x,0}+\delta_{x,0}\delta_{y,0}.
\]
Thus,
\[
\sum_{x,y\in\mathbb{Z}^d}G(x-y)\,\beta_{x,y}
=E[W^2]\,\langle G*k,\,k\rangle-\langle G,\,k+\check{k}\rangle+G(0)
=E[W^2]\,\langle G*k,\,k\rangle+2-(2|k|-1)\,G(0),
\]
from which (1.29) for the potlatch process follows.
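The middle equality above uses the value of $\langle G,\,k+\check{k}\rangle$, which follows from evaluating the stated identity $\tfrac12(k+\check{k})*G=|k|G-\delta$ at the origin (the symmetry of $G$ is assumed here, consistently with the use of $\langle G*k,k\rangle$ above):
\[
% G is assumed symmetric, so ((k+\check k)*G)(0) = \langle G, k+\check k\rangle
\tfrac12\,\langle G,\,k+\check{k}\rangle
=\tfrac12\,\bigl((k+\check{k})*G\bigr)(0)
=|k|\,G(0)-1,
\qquad\text{so}\qquad
-\langle G,\,k+\check{k}\rangle+G(0)=2-(2|k|-1)\,G(0).
\]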
1.4 SDE description of the process
We now give an alternative description of the process in terms of a stochastic differential equation (SDE). We introduce random measures on $[0,\infty)\times[0,\infty)^{\mathbb{Z}^d}$ by
\[
N^z(ds\,d\xi)=\sum_{i\ge1}1\{(T^{z,i},K^{z,i})\in ds\,d\xi\},
\qquad
N^z_t(ds\,d\xi)=1_{\{s\le t\}}\,N^z(ds\,d\xi).
\tag{1.30}
\]
Then, $N^z$, $z\in\mathbb{Z}^d$, are independent Poisson random measures on $[0,\infty)\times[0,\infty)^{\mathbb{Z}^d}$ with the intensity $ds\times P(K\in\cdot\,)$. The precise definition of the process $(\eta_t)_{t\ge0}$ is then given by the following stochastic differential equation:
\[
\eta_{t,x}=\eta_{0,x}+\sum_{z\in\mathbb{Z}^d}\int N^z_t(ds\,d\xi)\,\bigl(\xi_{x-z}-\delta_{x,z}\bigr)\,\eta_{s-,z}.
\tag{1.31}
\]
By (1.3), it is standard to see that (1.31) defines a unique process $\eta_t=(\eta_{t,x})$, $t\ge0$, and that $(\eta_t)$ is Markovian.
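To make the jump mechanism encoded in (1.30)-(1.31) concrete, here is a minimal simulation sketch (not part of the paper). It assumes a finite one-dimensional box with periodic boundary, and the kernel sampler sample_K is an illustrative placeholder, not the BCPP or potlatch kernel; only the update rule $\eta_x\leftarrow\eta_x+(\xi_{x-z}-\delta_{x,z})\,\eta_z$ at the ring of a rate-one Poisson clock at site $z$ is taken from (1.31).

import random

L = 50        # box size (an assumption of this sketch; the paper works on all of Z^d)
T = 10.0      # time horizon

def sample_K():
    """Placeholder kernel sampler (NOT the BCPP/potlatch kernel of the paper):
    with prob. 1/2 duplicate the mass at z onto a random neighbour, with
    prob. 1/2 wipe it out. Returns xi as {relative site: value}."""
    if random.random() < 0.5:
        return {0: 1.0, random.choice([-1, 1]): 1.0}   # xi = delta_0 + delta_e
    return {}                                          # xi = 0

def simulate(eta0, sample_xi, horizon):
    """Simulate (1.31) on the periodic box: each site carries a rate-1 Poisson
    clock; when the clock at z rings with mark xi, the configuration jumps by
    eta[x] += (xi[x-z] - delta_{x,z}) * eta[z] for every x."""
    eta, t = list(eta0), 0.0
    while True:
        t += random.expovariate(len(eta))   # superposed clocks ring at total rate L
        if t > horizon:
            return eta
        z = random.randrange(len(eta))      # the ringing site is uniform on the box
        old = eta[z]
        xi = sample_xi()
        eta[z] = 0.0                        # the "-delta_{x,z} * eta[z]" part (x = z)
        for rel, val in xi.items():
            eta[(z + rel) % len(eta)] += val * old   # the "xi_{x-z} * eta[z]" part

if __name__ == "__main__":
    eta0 = [0.0] * L
    eta0[L // 2] = 1.0                      # one unit of mass at the centre of the box
    print(sum(simulate(eta0, sample_K, T)))  # |eta_T| for one realisation

The Gillespie-style loop only uses the fact that the superposition of the rate-one clocks at the $L$ sites is a Poisson process of rate $L$ whose ringing site is uniform; everything model-specific is confined to the kernel sampler.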
2 Proofs
It is convenient to introduce the following notation:
\[
\nu=P(K\in\cdot\,)\in\mathcal{P}\bigl([0,\infty)^{\mathbb{Z}^d}\bigr),\quad\text{the law of }K.
\tag{2.1}
\]
\[
\widetilde{N}^z(ds\,d\xi)=N^z(ds\,d\xi)-ds\,\nu(d\xi),
\qquad
\widetilde{N}^z_t(ds\,d\xi)=1_{\{s\le t\}}\,\widetilde{N}^z(ds\,d\xi).
\tag{2.2}
\]
2.1 Proof of Theorem 1.3.1
The proof of Theorem 1.3.1 is based on the following
Lemma 2.1.1.
\[
\{\,|\eta_\infty|=0,\ \text{survival}\,\}=\Bigl\{\int_0^\infty R_s\,ds=\infty\Bigr\},\quad\text{a.s.}
\tag{2.3}
\]
Moreover, there exists a constant $c>0$ such that (1.18) holds a.s. on the event $\bigl\{\int_0^\infty R_s\,ds=\infty\bigr\}$.
Proof: We see from (1.31) that
\[
|\eta_t|=|\eta_0|+\sum_z\int\widetilde{N}^z_t(ds\,d\xi)\,|\eta_{s-}|\,(|\xi|-1)\,\rho_{s-,z}
\quad(\text{cf. (2.2)})
\quad=|\eta_0|+\int_0^t|\eta_{s-}|\,dM_s,
\]
where
\[
M_t=\sum_z\int\widetilde{N}^z_t(ds\,d\xi)\,(|\xi|-1)\,\rho_{s-,z}.
\]
Then, by the Doléans-Dade exponential formula (e.g., [HWY92, p. 248, 9.39]),
\[
|\eta_t|=\exp(M_t)\,D_t,
\quad\text{where}\quad
D_t=\prod_{s\le t}(1+\Delta M_s)\exp(-\Delta M_s),
\quad\text{with }\Delta M_t=M_t-M_{t-}.
\]
Note also that the predictable quadratic variation of $M$ is given by
\[
(1)\qquad \langle M\rangle_t=E\bigl[(|K|-1)^2\bigr]\int_0^t R_s\,ds.
\]
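For completeness, (1) can be read off from the intensity of the driving Poisson measures: the integrand of $M$ is $(|\xi|-1)\rho_{s-,z}$ and each $N^z$ has intensity $ds\,\nu(d\xi)$, so, writing $R_s=\sum_z\rho_{s,z}^2$ (the reading of $R_s$ that (1) presupposes),
\[
% assumes R_s = \sum_z \rho_{s,z}^2 and \nu = law of K, cf. (2.1)
\langle M\rangle_t
=\sum_z\int_0^t ds\int\nu(d\xi)\,\bigl((|\xi|-1)\,\rho_{s,z}\bigr)^2
=E\bigl[(|K|-1)^2\bigr]\int_0^t\sum_z\rho_{s,z}^2\,ds
=E\bigl[(|K|-1)^2\bigr]\int_0^t R_s\,ds.
\]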
Since $-1\le\Delta M_t\le b_K-1<\infty$, we have that (see e.g. [HWY92, p. 222, 8.32])
\[
(2)\qquad \{\langle M\rangle_\infty<\infty\}\subset\{[M]_\infty<\infty,\ M_t\text{ converges as }t\nearrow\infty\}\quad\text{a.s.},
\]
\[
(3)\qquad \{\langle M\rangle_\infty=\infty\}\subset\Bigl\{\lim_{t\to\infty}\tfrac{\langle M\rangle_t}{[M]_t}=1,\ \lim_{t\to\infty}\tfrac{M_t}{\langle M\rangle_t}=0\Bigr\}\quad\text{a.s.},
\]
where $[M]_t=\sum_{s\le t}(\Delta M_s)^2$.
We start with the “$\supset$” part of (2.3): Note that $(1+u)e^{-u}\le e^{-c_1u^2}$ for $-1\le u\le b_K-1$, where $c_1$ is a constant. We suppose that $\int_0^\infty R_s\,ds=\infty$, or, equivalently, that $\langle M\rangle_\infty=\infty$. Then, for large $t$,
\[
\exp(M_t)\,D_t\le\exp\bigl(M_t-c_1[M]_t\bigr)
\overset{(3)}{\le}\exp\Bigl(-\tfrac{c_1}{2}\langle M\rangle_t\Bigr)
\overset{(1)}{\le}\exp\Bigl(-c_2\int_0^t R_s\,ds\Bigr).
\]
(For the middle inequality, (3) gives $[M]_t\ge\tfrac34\langle M\rangle_t$ and $M_t\le\tfrac{c_1}{4}\langle M\rangle_t$ for all large $t$.) This shows that $\int_0^\infty R_s\,ds=\infty$ implies $|\eta_\infty|=0$, together with the bound (1.18). We now turn to the “$\subset$” part of (2.3): We need to prove that
\[
(4)\qquad \Bigl\{\int_0^\infty R_s\,ds<\infty,\ \text{survival}\Bigr\}\overset{\text{a.s.}}{\subset}\{|\eta_\infty|>0\}.
\]
We have
\[
(5)\qquad \Bigl\{\int_0^\infty R_s\,ds<\infty\Bigr\}\overset{(1)\text{–}(2)}{\subset}\{M_t\text{ converges as }t\nearrow\infty\}\quad\text{a.s.}
\]
On the other hand,
\[
\sum_{s\le t}\bigl|(1+\Delta M_s)\exp(-\Delta M_s)-1\bigr|\le\frac{e}{2}\,[M]_t,
\]
since $|(1+u)e^{-u}-1|\le\frac{e\,u^2}{2}$ for $u\ge-1$. Thus,
\[
(6)\qquad \Bigl\{\int_0^\infty R_s\,ds<\infty,\ \text{survival}\Bigr\}\subset\bigl\{D_t\text{ converges to a positive limit as }t\nearrow\infty\bigr\}\quad\text{a.s.}
\]
We now obtain (4) by (5)–(6).
We state one more technical lemma:
Lemma 2.1.2. Suppose that
\[
P\Bigl(\lim_{t\to\infty}r^{-t}|\eta_t|>0\Bigr)>0
\tag{2.4}
\]
for some $r>0$. Then,
\[
\{\text{survival}\}=\Bigl\{\lim_{t\to\infty}r^{-t}|\eta_t|>0\Bigr\},\quad P\text{-a.s.}
\tag{2.5}
\]
Proof: We follow the argument in [CY10, Lemma 4.3.1], which goes back to [Gri83, p. 701]. For $(s,y)\in[0,\infty)\times\mathbb{Z}^d$, let $\eta^{(s,y)}_t=(\eta^{(s,y)}_{t,x})_{x\in\mathbb{Z}^d}$, $t\in[0,\infty)$, be the process starting from time $s$, with one particle at $y$:
\[
\eta^{(s,y)}_{t,x}=\delta_{x,y}+\sum_{z\in\mathbb{Z}^d}\int N^z_{(s,s+t]}(du\,d\xi)\,\bigl(\xi_{x-z}-\delta_{x,z}\bigr)\,\eta^{(s,y)}_{u-,z},
\]
where $N^z_{(s,s+t]}=N^z_{s+t}-N^z_s$. Then, for all $t\ge s$,
\[
\eta_{t,x}=\sum_y\eta_{s,y}\,\eta^{(s,y)}_{t-s,x}
\quad\text{and hence}\quad
|\eta_t|=\sum_y\eta_{s,y}\,\bigl|\eta^{(s,y)}_{t-s}\bigr|.
\]
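In particular, since every term of the last sum is non-negative (the state space is $[0,\infty)^{\mathbb{Z}^d}$), keeping a single term gives, for each fixed $y$, the bound used below with $(s,y)=(\sigma_\ell,Y_\ell)$:
\[
% single-term lower bound; only the non-negativity of \eta is used
|\eta_t|\ \ge\ \eta_{s,y}\,\bigl|\eta^{(s,y)}_{t-s}\bigr|,\qquad t\ge s.
\]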
The assumption (2.4) implies that
\[
P\Bigl(\inf_{t\ge0}r^{-t}|\eta_t|>0\Bigr)>0,
\]
and hence that
\[
(1)\qquad \delta\overset{\mathrm{def}}{=}P\Bigl(\inf_{t\ge0}r^{-t}|\eta_t|>\varepsilon\Bigr)>0\quad\text{for some }\varepsilon\in(0,1).
\]
We now define a sequence of stopping times $\sigma_1<\sigma_2<\dots$ as follows:
\[
\sigma_1=\inf\{t>0\ ;\ 0<|\eta_t|\le\varepsilon r^t\}.
\]
Note at this point that
\[
(2)\qquad P(\sigma_1=\infty)\ge\delta,
\]
thanks to (1). Suppose that $\sigma_1,\dots,\sigma_\ell$ ($\ell\ge1$) have already been defined. If $\sigma_\ell=\infty$, we set $\sigma_n=\infty$ for all $n\ge\ell+1$. Suppose that $\sigma_\ell<\infty$. Then $\eta_{\sigma_\ell}\not\equiv0$. Let $Y_\ell$ be the minimum, in the lexicographical order, of $y\in\mathbb{Z}^d$ such that $\eta_{\sigma_\ell,y}\ne0$. We now define $\sigma_{\ell+1}$ by
\[
\sigma_{\ell+1}=\sigma_\ell+\inf\bigl\{t>0\ ;\ 0<\bigl|\eta^{(\sigma_\ell,Y_\ell)}_t\bigr|\le\varepsilon r^t\bigr\}.
\]
It is easy to see from the construction that
\[
(3)\qquad P(\sigma_\ell<\infty\ \text{i.o.})=0.
\]
Indeed, we have
\[
P(\sigma_{\ell+1}<\infty\mid\mathcal{F}_{\sigma_\ell})=P(\sigma_1<\infty)\overset{(2)}{\le}1-\delta,
\]
and hence
\[
P(\sigma_{\ell+1}<\infty)=P(\sigma_\ell<\infty,\ \sigma_{\ell+1}<\infty)
=P\bigl[\sigma_\ell<\infty\ ;\ P(\sigma_{\ell+1}<\infty\mid\mathcal{F}_{\sigma_\ell})\bigr]
\le(1-\delta)\,P(\sigma_\ell<\infty)\le(1-\delta)^{\ell+1}
\]
by induction. Then, (3) follows from the Borel-Cantelli lemma. By (3), we can pick a random $\ell\in\mathbb{N}$ such that $P(\sigma_\ell<\infty,\ \sigma_{\ell+1}=\infty)=1$. Let us focus on the event $\{\sigma_\ell<\infty,\ \sigma_{\ell+1}=\infty\}$. Then, $\eta^{(\sigma_\ell,Y_\ell)}_t$ is defined and $|\eta_t|\ge\eta_{\sigma_\ell,Y_\ell}\,\bigl|\eta^{(\sigma_\ell,Y_\ell)}_{t-\sigma_\ell}\bigr|$ for all $t\ge\sigma_\ell$. Note also that, on the event of survival, $\sigma_{\ell+1}=\infty$ implies that $\bigl|\eta^{(\sigma_\ell,Y_\ell)}_{t-\sigma_\ell}\bigr|\ge\varepsilon r^{t-\sigma_\ell}$ for $t\ge\sigma_\ell$. Thus, a.s. on the event of survival,
\[
|\eta_t|\ \ge\ \eta_{\sigma_\ell,Y_\ell}\,\bigl|\eta^{(\sigma_\ell,Y_\ell)}_{t-\sigma_\ell}\bigr|\ \ge\ \eta_{\sigma_\ell,Y_\ell}\,\varepsilon r^{t-\sigma_\ell}\qquad\text{for }t\ge\sigma_\ell,
\]
hence $\{\text{survival}\}\overset{\text{a.s.}}{\subset}\bigl\{\lim_{t\to\infty}r^{-t}|\eta_t|>0\bigr\}$. This proves (2.5).
Proof of Theorem 1.3.1: (a): If $P(|\eta_\infty|>0)>0$, then, by Lemma 2.1.2 (applied with $r=1$), $\{\text{survival}\}=\{|\eta_\infty|>0\}$ a.s. We see from this and (2.3) that $\int_0^\infty R_s\,ds<\infty$ a.s. on the event of survival, while $\int_0^\infty R_s\,ds<\infty$ is obvious outside the event of survival. (b): This follows from Lemma 2.1.1.
2.2 Proof of Theorem 1.3.2