Let $\lambda' > 1$. Since $g_{1,n} \to \tilde\theta_1$ and $g_{2,n} \to \tilde\theta_2$, there is an $m_2 \ge m_1$ such that the following bounds apply:
$$1 + \tilde\theta_2(\lambda')^{-1}\sqrt{\eta_n} \le \frac{z_{2,n}}{z_{2,n-1}} \qquad\text{and}\qquad \frac{z_{1,n}}{z_{1,n-1}} \le 1 + \lambda'\tilde\theta_1\sqrt{\eta_n}$$
for all $n \ge m_2$. Consequently, for all $n \ge m_2$, we have that
$$\log\frac{b_{n+k}}{b_n} \le \log\frac{z_{1,n+k}}{z_{1,n}} \le \sum_{j=n+1}^{n+k}\log\Big(1 + \lambda'\tilde\theta_1\sqrt{\eta_j}\Big) \le \lambda'\tilde\theta_1\sum_{j=n+1}^{n+k}\sqrt{\eta_j}.$$
Similarly, by the mean value theorem,
$$\log\frac{b_{n+k}}{b_n} \ge \sum_{j=n+1}^{n+k}\log\Big(1 + \tilde\theta_2(\lambda')^{-1}\sqrt{\eta_j}\Big) \ge \frac{\tilde\theta_2(\lambda')^{-1}}{1 + (\lambda')^{-1}\tilde\theta_2\sqrt{\eta_n}}\sum_{j=n+1}^{n+k}\sqrt{\eta_j}$$
since $(\eta_n)$ is decreasing. By letting the constant $m_1$ above be sufficiently large, the difference $|\tilde\theta_2 - \theta|$ can be made arbitrarily small, and by increasing $m_2$, the constant $\lambda' > 1$ can be chosen arbitrarily close to one.
3.3 Uniform target: path-wise behaviour
Section 3.2 characterised the behaviour of the sequence $\mathbb{E}\,S_n$ when the chain $(X_n)_{n\ge 2}$ follows the 'adaptive random walk' recursion (6). In this section, we shall verify that almost every sample path $(S_n)_{n\ge 1}$ of the same process is increasing. Let us start by expressing the process $(S_n)$ in terms of an auxiliary process $(Z_n)_{n\ge 1}$.
Lemma 13. Let $u \in \mathbb{R}^d$ be a unit vector and suppose the process $(X_n, M_n, S_n)_{n\ge 1}$ is defined through (3), (4) and (6), where $(W_n)_{n\ge 1}$ are i.i.d. following a spherically symmetric, non-degenerate distribution. Define the scalar process $(Z_n)_{n\ge 2}$ through
$$Z_{n+1} := \frac{u^T\big(X_{n+1} - M_n\big)}{\big\|S_n^{1/2}u\big\|} \tag{13}$$
where $\|x\| := \sqrt{x^T x}$ stands for the Euclidean norm. Then, the process $(Z_n, S_n)_{n\ge 2}$ follows
$$u^T S_{n+1} u = \big[1 + \eta_{n+1}\big(Z_{n+1}^2 - 1\big)\big]\, u^T S_n u \tag{14}$$
$$Z_{n+1} = \theta \tilde W_{n+1} + U_n Z_n \tag{15}$$
where $(\tilde W_n)_{n\ge 2}$ are non-degenerate i.i.d. random variables and $U_n := (1-\eta_n)\big[1 + \eta_n\big(Z_n^2 - 1\big)\big]^{-1/2}$. The proof of Lemma 13 is given in Appendix B.
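For illustration, the scalar recursions (14)–(15) can be iterated numerically. In the sketch below, the standard Gaussian increments, the value $\theta = 1$, the initial condition $Z_2 \sim N(0,1)$, $u^T S_1 u = 1$, and the weights $\eta_n = n^{-2/3}$ are illustrative assumptions, not choices made in the text.

```python
import math
import random

def simulate(n_steps=10_000, theta=1.0, gamma=2/3, seed=1):
    """Iterate the scalar recursions (14)-(15) with eta_n = n**(-gamma).

    Assumes standard Gaussian increments W ~ N(0,1); theta, gamma and the
    initial values Z_2 ~ N(0,1), u^T S_1 u = 1 are illustrative choices.
    Returns log(u^T S_n u) after n_steps iterations.
    """
    rng = random.Random(seed)
    log_s = 0.0                      # log of u^T S_n u, starting from 1
    z = rng.gauss(0.0, 1.0)          # Z_2
    for n in range(2, n_steps + 1):
        eta = n ** (-gamma)
        # U_n := (1 - eta_n) [1 + eta_n (Z_n^2 - 1)]^(-1/2), cf. (15)
        u_n = (1.0 - eta) / math.sqrt(1.0 + eta * (z * z - 1.0))
        z = theta * rng.gauss(0.0, 1.0) + u_n * z
        # u^T S_{n+1} u = [1 + eta_{n+1} (Z_{n+1}^2 - 1)] u^T S_n u, cf. (14)
        log_s += math.log(1.0 + (n + 1) ** (-gamma) * (z * z - 1.0))
    return log_s
```

Along typical realisations the accumulated log-growth is positive, in line with the path-wise results of this section.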
It is immediate from (14) that only values $|Z_n| < 1$ can decrease $u^T S_n u$. On the other hand, if both $\eta_n$ and $\eta_n Z_n^2$ are small, then the variable $U_n$ is clearly close to unity. This suggests a nearly random walk behaviour of $Z_n$. Let us consider an auxiliary result quantifying the behaviour of this random walk.
Lemma 14. Let $n_0 \ge 2$, suppose $\tilde Z_{n_0-1}$ is an $\mathcal{F}_{n_0-1}$-measurable random variable and suppose $(\tilde W_n)_{n\ge n_0}$ are respectively $(\mathcal{F}_n)_{n\ge n_0}$-measurable and non-degenerate i.i.d. random variables. Define $\tilde Z_n$ for $n \ge n_0$ through $\tilde Z_{n+1} = \tilde Z_n + \theta \tilde W_{n+1}$. Then, for any $N, \delta_1, \delta_2 > 0$, there is a $k_0 \ge 1$ such that
$$P\bigg(\frac{1}{k}\sum_{j=1}^{k} \mathbf{1}_{\{|\tilde Z_{n+j}| \le N\}} \ge \delta_1 \;\bigg|\; \mathcal{F}_n\bigg) \le \delta_2$$
a.s. for all $n \ge n_0$ and $k \ge k_0$.

Proof. From the Kolmogorov–Rogozin inequality (Theorem 36 in Appendix C),
$$P\big(\tilde Z_{n+j} - \tilde Z_n \in [x, x+2N] \mid \mathcal{F}_n\big) \le c_1 j^{-1/2}$$
for any $x \in \mathbb{R}$, where the constant $c_1 > 0$ depends on $N$, $\theta$ and on the distribution of $\tilde W_j$. In particular, since $\tilde Z_{n+j} - \tilde Z_n$ is independent of $\tilde Z_n$, one may set $x = -\tilde Z_n - N$ above, and thus $P\big(|\tilde Z_{n+j}| \le N \mid \mathcal{F}_n\big) \le c_1 j^{-1/2}$. The estimate
$$\mathbb{E}\bigg[\frac{1}{k}\sum_{j=1}^{k} \mathbf{1}_{\{|\tilde Z_{n+j}| \le N\}} \;\bigg|\; \mathcal{F}_n\bigg] \le \frac{c_1}{k}\sum_{j=1}^{k} j^{-1/2} \le c_2 k^{-1/2}$$
implies $P\big(k^{-1}\sum_{j=1}^{k}\mathbf{1}_{\{|\tilde Z_{n+j}|\le N\}} \ge \delta_1 \mid \mathcal{F}_n\big) \le \delta_1^{-1} c_2 k^{-1/2}$ by the conditional Markov inequality, concluding the proof.

The technical estimate in the next Lemma 16 makes use of the above-mentioned random walk
approximation and ultimately guarantees a positive 'drift' for the eigenvalues of $S_n$. The result requires that the adaptation sequence $(\eta_n)_{n\ge 2}$ is 'smooth' in the sense that the quotients $\eta_{n+1}/\eta_n$ converge to one.
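The occupation bound of Lemma 14 can also be checked by simulation: the fraction of the first $k$ steps a random walk spends in $[-N, N]$ shrinks (roughly like $k^{-1/2}$) as $k$ grows. The parameters below ($N = 3$, $\theta = 1$, Gaussian increments) are illustrative assumptions.

```python
import random

def occupation_fraction(k, n_paths=2000, big_n=3.0, theta=1.0, seed=0):
    """Monte Carlo estimate of E[(1/k) sum_j 1{|Z_j| <= N}] for the
    random walk Z_{j+1} = Z_j + theta*W, W ~ N(0,1), Z_0 = 0.
    All parameter values are illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = 0.0
        inside = 0
        for _ in range(k):
            z += theta * rng.gauss(0.0, 1.0)
            inside += abs(z) <= big_n
        total += inside / k
    return total / n_paths
```

Comparing small and large horizons (e.g. `occupation_fraction(25)` versus `occupation_fraction(400)`) exhibits the decay quantified in the lemma.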
Assumption 15. The adaptation weight sequence $(\eta_n)_{n\ge 2} \subset (0,1)$ satisfies $\lim_{n\to\infty} \eta_{n+1}/\eta_n = 1$.
Lemma 16. Let $n_0 \ge 2$, suppose $Z_{n_0-1}$ is $\mathcal{F}_{n_0-1}$-measurable, and assume $(Z_n)_{n\ge n_0}$ follows (15) with non-degenerate i.i.d. variables $(\tilde W_n)_{n\ge n_0}$ measurable with respect to $(\mathcal{F}_n)_{n\ge n_0}$, respectively, and the adaptation weights $(\eta_n)_{n\ge n_0}$ satisfy Assumption 15. Then, for any $C \ge 1$ and $\varepsilon > 0$, there are indices $k \ge 1$ and $n_1 \ge n_0$ such that $P(L_{n,k} \mid \mathcal{F}_n) \le \varepsilon$ a.s. for all $n \ge n_1$, where
$$L_{n,k} := \bigg\{\sum_{j=1}^{k} \log\big[1 + \eta_{n+j}\big(Z_{n+j}^2 - 1\big)\big] < kC\eta_n\bigg\}.$$
Proof. Fix a $\gamma \in (0, 2/3)$. Define the sets $A_{n:j} := \bigcap_{i=n+1}^{j} \{Z_i^2 \le \eta_i^{-\gamma}\}$ and $A'_i := \{Z_i^2 > \eta_i^{-\gamma}\}$. Write the conditional probability in parts as follows,
$$P(L_{n,k} \mid \mathcal{F}_n) = P(L_{n,k}, A_{n:n+k} \mid \mathcal{F}_n) + P(L_{n,k}, A'_n \mid \mathcal{F}_n) + \sum_{i=n+1}^{n+k} P(L_{n,k}, A_{n:i-1}, A'_i \mid \mathcal{F}_n). \tag{16}$$
Let $\omega \in A'_i$ for any $n \le i \le n+k$ and compute
$$\log\big(1 + \eta_i\big(Z_i^2 - 1\big)\big) \ge \log\big(1 + \eta_i\big(\eta_i^{-\gamma} - 1\big)\big) \ge \log\big(1 + 2\eta_i kC\big) \ge \frac{2\eta_i kC}{1 + 2\eta_i kC} \ge kC\eta_n$$
whenever $n \ge n_0$ is sufficiently large, since $\eta_n \to 0$, and by Assumption 15. That is, if $n$ is sufficiently large, all but the first term on the right hand side of (16) are a.s. zero. It remains to show the
inequality for the first. Suppose now that $Z_n^2 \le \eta_n^{-\gamma}$. One may estimate
$$U_n = (1-\eta_n)^{1/2}\bigg(1 - \frac{\eta_n Z_n^2}{1 - \eta_n + \eta_n Z_n^2}\bigg)^{1/2} \ge (1-\eta_n)^{1/2}\bigg(1 - \frac{\eta_n^{1-\gamma}}{1 - \eta_n}\bigg)^{1/2} \ge \big(1 - \eta_n^{1-\gamma}\big)^{1/2}\bigg(1 - \frac{2\eta_n^{1-\gamma}}{1 - \eta_n}\bigg)^{1/2} \ge 1 - c_1\eta_n^{1-\gamma}$$
where $c_1 := 2\sup_{n\ge n_0}(1-\eta_n)^{-1/2} < \infty$. Observe also that $U_n$
$\le 1$. Let $k_0 \ge 1$ be from Lemma 14 applied with $N = \sqrt{8C+1}+1$, $\delta_1 = 1/8$ and $\delta_2 = \varepsilon$, and fix $k \ge k_0+1$. Let $n \ge n_0$ and define an auxiliary process $(\tilde Z^{(n)}_j)_{j\ge n-1}$ as $\tilde Z^{(n)}_j \equiv Z_j$ for $n-1 \le j \le n+1$, and for $j > n+1$ through
$$\tilde Z^{(n)}_j = Z_{n+1} + \theta\sum_{i=n+2}^{j}\tilde W_i.$$
For any $n+2 \le j \le n+k$ and $\omega \in A_{n:j}$, the difference of $\tilde Z^{(n)}_j$ and $Z_j$ can be bounded by
$$\big|\tilde Z^{(n)}_{j+1} - Z_{j+1}\big| \le |Z_j|\,|1 - U_j| + \big|\tilde Z^{(n)}_j - Z_j\big| \le c_1\eta_j^{1-\frac{3}{2}\gamma} + \big|\tilde Z^{(n)}_j - Z_j\big| \le \cdots \le c_1\sum_{i=n+1}^{j}\eta_i^{1-\frac{3}{2}\gamma} \le c_1\eta_n^{1-\frac{3}{2}\gamma}\sum_{i=n+1}^{j}\Big(\frac{\eta_i}{\eta_n}\Big)^{1-\frac{3}{2}\gamma} \le c_2\,(j-n)\,\eta_n^{1-\frac{3}{2}\gamma}$$
by Assumption 15. Therefore, for sufficiently large $n \ge n_0$, the inequality $|\tilde Z^{(n)}_j - Z_j| \le 1$ holds for all $n \le j \le n+k$ and $\omega \in A_{n:n+k}$
. Now, if $\omega \in A_{n:n+k}$, the following bound holds
$$\log\big[1 + \eta_j\big(Z_j^2 - 1\big)\big] \ge \log\big(1 + \eta_j\big(\min\{N, |Z_j|\}^2 - 1\big)\big) \ge \mathbf{1}_{\{|\tilde Z^{(n)}_j| > N\}}\log\big(1 + \eta_j\big((N-1)^2 - 1\big)\big) + \mathbf{1}_{\{|\tilde Z^{(n)}_j| \le N\}}\log(1 - \eta_j) \ge \mathbf{1}_{\{|\tilde Z^{(n)}_j| > N\}}(1 - \beta_j)\,\eta_j\, 8C - \mathbf{1}_{\{|\tilde Z^{(n)}_j| \le N\}}(1 + \beta_j)\,\eta_j$$
by the mean value theorem, where the constant $\beta_j = \beta_j(C, \eta_j) \in (0,1)$ can be selected arbitrarily small whenever $j$ is sufficiently large. Using this estimate, one can write for $\omega \in A_{n:n+k}$
$$\sum_{j=1}^{k}\log\big[1 + \eta_{n+j}\big(Z_{n+j}^2 - 1\big)\big] \ge (1 - \beta_n)\sum_{j \in I^+_{n+1:k}} \eta_{n+j}\, 8C - (1 + \beta_n)\sum_{j=1}^{k}\eta_{n+j}$$
where $I^+_{n+1:k} := \{j \in [1,k] : |\tilde Z^{(n)}_{n+j}| > N\}$. Define the sets
$$B_{n,k} := \bigg\{\frac{1}{k-1}\sum_{j=1}^{k-1}\mathbf{1}_{\{|\tilde Z^{(n)}_{n+j+1}| \le N\}} \le \delta_1\bigg\}.$$
Within $B_{n,k}$, it clearly holds that $|I^+_{n+1:k}| \ge (k-1) - (k-1)\delta_1 = 7(k-1)/8$. Thereby, for all $\omega \in B_{n,k} \cap A_{n:n+k}$,
$$\sum_{j=1}^{k}\log\big[1 + \eta_{n+j}\big(Z_{n+j}^2 - 1\big)\big] \ge \eta_n k\bigg[(1-\beta_n)\,\frac{7}{2}\inf_{1\le j\le k}\frac{\eta_{n+j}}{\eta_n}\, C - (1+\beta_n)\sup_{1\le j\le k}\frac{\eta_{n+j}}{\eta_n}\bigg] \ge kC\eta_n$$
for sufficiently large $n \ge 1$, as then the constant $\beta_n$ can be chosen small enough, and by Assumption 15. In other words, if $n \ge 1$ is sufficiently large, then $B_{n,k} \cap A_{n:n+k} \cap L_{n,k} = \emptyset$. Now, Lemma 14 yields
$$P(L_{n,k}, A_{n:n+k} \mid \mathcal{F}_n) = P(L_{n,k}, A_{n:n+k}, B_{n,k} \mid \mathcal{F}_n) + P(L_{n,k}, A_{n:n+k}, B^{\complement}_{n,k} \mid \mathcal{F}_n) \le P(B^{\complement}_{n,k} \mid \mathcal{F}_n) \le \varepsilon.$$
Using the estimate of Lemma 16, it is relatively easy to show that $u^T S_n u$ tends to infinity, if the adaptation weights satisfy an additional assumption.
Assumption 17. The adaptation weight sequence $(\eta_n)_{n\ge 2} \subset (0,1)$ is in $\ell^2$ but not in $\ell^1$, that is,
$$\sum_{n=2}^{\infty}\eta_n = \infty \qquad\text{and}\qquad \sum_{n=2}^{\infty}\eta_n^2 < \infty.$$
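A standard family satisfying Assumptions 15 and 17 is $\eta_n = n^{-\alpha}$ with $\alpha \in (1/2, 1]$. The following sketch checks both conditions numerically for an illustrative exponent; the cut-offs and the value $\alpha = 0.7$ are assumptions made for the demonstration.

```python
def eta(n, alpha):
    """Polynomially decaying adaptation weights n**(-alpha), a standard
    family satisfying Assumptions 15 and 17 when 1/2 < alpha <= 1."""
    return float(n) ** (-alpha)

alpha = 0.7  # illustrative exponent in (1/2, 1]

# Assumption 15: the quotients eta_{n+1}/eta_n approach one
quotients = [eta(n + 1, alpha) / eta(n, alpha) for n in (10, 100, 1000, 10_000)]

# Assumption 17: partial sums of eta_n grow without bound,
# while partial sums of eta_n**2 stay bounded
s1 = sum(eta(n, alpha) for n in range(2, 100_000))
s2 = sum(eta(n, alpha) ** 2 for n in range(2, 100_000))
```

For this family the quotient is $(1 + 1/n)^{-\alpha} \to 1$, the first partial sum behaves like $n^{1-\alpha}$, and the second converges since $2\alpha > 1$.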
Theorem 18. Assume that $(X_n)_{n\ge 2}$ follows the 'adaptive random walk' recursion (6) and the adaptation weights $(\eta_n)_{n\ge 2}$ satisfy Assumptions 15 and 17. Then, for any unit vector $u \in \mathbb{R}^d$, the process $u^T S_n u \to \infty$ almost surely.
Proof. The proof is based on the estimate of Lemma 16, applied with a similar martingale argument as in Vihola (2009).
Let $k \ge 2$ be from Lemma 16 applied with $C = 4$ and $\varepsilon = 1/2$. Denote $\ell_i := ki + 1$ for $i \ge 0$ and, inspired by (14), define the random variables $(T_i)_{i\ge 1}$ by
$$T_i := \min\bigg\{kC\eta_{\ell_{i-1}},\; \sum_{j=\ell_{i-1}+1}^{\ell_i}\log\big[1 + \eta_j\big(Z_j^2 - 1\big)\big]\bigg\}$$
with the convention that $\eta_1 = 1$. Form a martingale $(Y_i, \mathcal{G}_i)_{i\ge 1}$ with $Y_1 \equiv 0$ and having differences $\mathrm{d}Y_i := T_i - \mathbb{E}[T_i \mid \mathcal{G}_{i-1}]$, and where $\mathcal{G}_1 \equiv \{\emptyset, \Omega\}$ and $\mathcal{G}_i := \mathcal{F}_{\ell_i}$ for $i > 1$. By Assumption 17,
$$\sum_{i=2}^{\infty}\mathbb{E}\big[\mathrm{d}Y_i^2\big] \le c\sum_{i=1}^{\infty}\eta_{\ell_i}^2 < \infty$$
with a constant $c = c(k, C) > 0$, so $(Y_i)$ is an $L^2$-martingale and converges a.s. to a finite limit $M_\infty$ (e.g. Hall and Heyde 1980, Theorem 2.15).
By Lemma 16, the conditional expectation satisfies
$$\mathbb{E}[T_{i+1} \mid \mathcal{G}_i] \ge kC\eta_{\ell_i}(1 - \varepsilon) + \varepsilon\sum_{j=\ell_i+1}^{\ell_{i+1}}\log(1 - \eta_j) \ge k\eta_{\ell_i}$$
when $i$ is large enough, and where the second inequality is due to Assumption 15. This implies, with Assumption 17, that $\sum_i \mathbb{E}[T_i \mid \mathcal{G}_{i-1}] = \infty$ a.s., and since $(Y_i)$ converges a.s. to a finite limit, it holds that $\sum_i T_i = \infty$ a.s. By (14), one may estimate for any $n = \ell_m$ with $m \ge 1$ that
$$\log u^T S_n u \ge \log u^T S_1 u + \sum_{i=1}^{m} T_i \to \infty$$
as $m \to \infty$. Simple deterministic estimates conclude the proof for the intermediate values of $n$.
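As a numerical illustration of Theorem 18, one may simulate the process in dimension one, where $S_n$ is scalar. The recursions in the sketch below are an assumed standard form of (3), (4) and (6), which are stated earlier in the chapter and not reproduced here; the Gaussian proposals and the weights $\eta_n = n^{-0.7}$ are likewise illustrative choices.

```python
import math
import random

def adaptive_random_walk(n_steps=2000, alpha=0.7, seed=3):
    """One-dimensional sketch of the adaptive random walk with mean and
    covariance adaptation. Assumed standard form of (3), (4) and (6):
        X_{n+1} = X_n + sqrt(S_n) W_{n+1}
        M_{n+1} = M_n + eta_{n+1} (X_{n+1} - M_n)
        S_{n+1} = S_n + eta_{n+1} ((X_{n+1} - M_n)^2 - S_n)
    with W ~ N(0,1) and eta_n = n**(-alpha). Returns the final S_n."""
    rng = random.Random(seed)
    x, m, s = 0.0, 0.0, 1.0
    for n in range(2, n_steps + 1):
        eta = n ** (-alpha)
        x = x + math.sqrt(s) * rng.gauss(0.0, 1.0)
        d = x - m
        s = s + eta * (d * d - s)  # stays positive: s -> (1-eta)s + eta d^2
        m = m + eta * d
    return s
```

Typical realisations of this recursion exhibit the unbounded growth of $S_n$ asserted by the theorem.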
3.4 Stability with one-dimensional uniformly continuous log-density