Noting that $\mu$ is a probability measure and passing to the complementary events, we also get
\[
\limsup_{n\to\infty}\ \sum_{k\in A} X^n_k \le \mu(A).
\]
Hence, given $\varepsilon>0$, we can pick $N\in\mathbb{N}$ so large that $\mu((N,\infty))\le\varepsilon$, and obtain for the total variation norm
\[
\limsup_{n\uparrow\infty}\ \frac12\sum_{k=0}^{\infty}\bigl|X^n_k-\mu_k\bigr|
\le \limsup_{n\uparrow\infty}\ \frac12\sum_{k=0}^{N}\bigl|X^n_k-\mu_k\bigr|
+\frac12\lim_{n\uparrow\infty}\sum_{k=N+1}^{\infty}X^n_k
+\frac12\,\mu((N,\infty))\le\varepsilon.
\]
This establishes almost sure convergence of $X^n$ to $\mu$ in the total variation norm.
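Although not part of the proof, this total variation convergence is easy to observe numerically. The sketch below (illustrative only; all function names are ours) simulates the indegree evolutions for the affine rule $f(k)=\eta k+1$ used in Section 3.2, where each existing node $m$ receives an edge from the new node with probability $f(Z[m,n])/n$, and compares the empirical indegree distribution $X^n$ with the invariant $\mu$ obtained from the recursion $\mu_k\,(1+f(k))=f(k-1)\,\mu_{k-1}$ stated later in this section.

```python
import random

def attachment(k, eta):
    """Affine attachment rule f(k) = eta*k + 1."""
    return eta * k + 1.0

def limit_mu(eta, tol=1e-12):
    """Invariant distribution from mu_k (1 + f(k)) = f(k-1) mu_{k-1},
    truncated once the mass per state drops below tol."""
    mu = [1.0 / (1.0 + attachment(0, eta))]
    while mu[-1] > tol:
        k = len(mu)
        mu.append(attachment(k - 1, eta) * mu[-1] / (1.0 + attachment(k, eta)))
    return mu

def empirical_indegrees(eta, n_nodes, seed=0):
    """Simulate Z[m, n]: node n+1 attaches to each old node m
    independently with probability f(Z[m, n]) / n."""
    rng = random.Random(seed)
    indeg = [0]                      # node 1 present at time n = 1
    for n in range(1, n_nodes):
        for m in range(len(indeg)):
            if rng.random() < attachment(indeg[m], eta) / n:
                indeg[m] += 1
        indeg.append(0)              # node n+1 arrives with indegree 0
    return indeg

def tv_to_limit(eta=0.5, n_nodes=2000, seed=0):
    """Total variation distance between the empirical indegree
    distribution X^n and the limit mu."""
    indeg = empirical_indegrees(eta, n_nodes, seed)
    mu = limit_mu(eta)
    emp = {}
    for k in indeg:
        emp[k] = emp.get(k, 0.0) + 1.0 / len(indeg)
    support = max(max(emp), len(mu) - 1) + 1
    return 0.5 * sum(abs(emp.get(k, 0.0) - (mu[k] if k < len(mu) else 0.0))
                     for k in range(support))
```

For moderate network sizes the distance is already small, consistent with the almost sure convergence above.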
3.2 Proof of Theorem 1.1(b)
We now show that the conditional law of the outdegree of a new node converges almost surely in the weak topology to a Poisson distribution. In the first step we will prove that, for $\eta\in(0,1)$ and the affine linear attachment rule $f(k)=\eta k+1$, one has almost sure convergence of
\[
Y_n := \frac1n\sum_{m=1}^{n} Z[m,n] = \langle X^n,\mathrm{id}\rangle
\]
to $y:=\frac{1}{1-\eta}$. First observe that
\[
Y_{n+1}=\frac{1}{n+1}\Bigl(nY_n+\sum_{m=1}^{n}\Delta Z[m,n]\Bigr)
= Y_n+\frac{1}{n+1}\Bigl(-Y_n+\sum_{m=1}^{n}\Delta Z[m,n]\Bigr),
\]
where $\Delta Z[m,n] := Z[m,n+1]-Z[m,n]$. Given the past $\mathcal F_n$ of the network formation, each $\Delta Z[m,n]$ is independent Bernoulli distributed with success probability $\frac1n\bigl(\eta Z[m,n]+1\bigr)$. Consequently,
\[
\mathbb E[Y_{n+1}\mid\mathcal F_n]
= Y_n+\frac{1}{n+1}\Bigl(-Y_n+\sum_{m=1}^{n}\frac1n\bigl(\eta Z[m,n]+1\bigr)\Bigr)
= Y_n+\frac{1}{n+1}\bigl(-(1-\eta)Y_n+1\bigr),
\]
and
\[
\langle Y\rangle_{n+1}-\langle Y\rangle_n
\le \Bigl(\frac{1}{n+1}\Bigr)^{2}\sum_{m=1}^{n}\frac1n\bigl(\eta Z[m,n]+1\bigr)
= \Bigl(\frac{1}{n+1}\Bigr)^{2}\bigl[\eta Y_n+1\bigr]. \tag{12}
\]
Now note that (due to Theorem 1.5, which can be used here as it will be proved independently of this section) there is a single node that has maximal indegree for all but finitely many times. Let $m^*$ denote the random node with this property. With Remark 1.11 we conclude that almost surely
\[
\log Z[m^*,n]\sim\log n^{\eta}. \tag{13}
\]
Since for sufficiently large $n$
\[
Y_n=\frac1n\sum_{m=1}^{n}Z[m,n]\le Z[m^*,n],
\]
equations (12) and (13) imply that $\langle Y\rangle_\cdot$ converges almost surely to a finite random variable.
Next, represent the increment $Y_{n+1}-Y_n$ as
\[
Y_{n+1}-Y_n=\frac{1}{n+1}\bigl(-(1-\eta)Y_n+1\bigr)+\Delta M_{n+1}, \tag{14}
\]
where $\Delta M_{n+1}$ denotes a martingale difference. We shall denote by $(M_n)_{n\in\mathbb N}$ the corresponding martingale, that is $M_n=\sum_{m=2}^{n}\Delta M_m$. Since $\langle Y\rangle_\cdot$ is convergent, the martingale $(M_n)$ converges almost surely. Next, we represent (14) in terms of $\bar Y_n=Y_n-y$ as the following inhomogeneous linear difference equation of first order:
\[
\bar Y_{n+1}=\Bigl(1-\frac{1-\eta}{n+1}\Bigr)\bar Y_n+\Delta M_{n+1}.
\]
The corresponding starting value is $\bar Y_1=Y_1-y=-y$, and we can represent its solution as
\[
\bar Y_n=-y\,h^1_n+\sum_{m=2}^{n}\Delta M_m\,h^m_n
\qquad\text{for}\qquad
h^m_n:=\begin{cases}
1 & \text{if } n<m,\\[2pt]
\displaystyle\prod_{l=m+1}^{n}\Bigl(1-\frac{1-\eta}{l}\Bigr) & \text{if } n\ge m.
\end{cases}
\]
Setting $\Delta h^m_n=h^m_n-h^{m-1}_n$ we conclude with an integration by parts argument that
\[
\begin{aligned}
\sum_{m=2}^{n}\Delta M_m\,h^m_n
&=\sum_{m=2}^{n}\Delta M_m\Bigl(1-\sum_{k=m+1}^{n}\Delta h^k_n\Bigr)
= M_n-\sum_{m=2}^{n}\sum_{k=m+1}^{n}\Delta M_m\,\Delta h^k_n\\
&= M_n-\sum_{k=3}^{n}\sum_{m=2}^{k-1}\Delta M_m\,\Delta h^k_n
= M_n-\sum_{k=3}^{n}M_{k-1}\,\Delta h^k_n.
\end{aligned}\tag{15}
\]
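The chain of equalities in (15) is a purely algebraic summation by parts, so it can be sanity checked with arbitrary inputs; the sketch below (our own illustration) draws random martingale differences and verifies the identity for the weights $h^m_n$ defined above.

```python
import random

def check_summation_by_parts(eta=0.3, n=50, seed=2):
    """Check sum_{m=2}^n dM_m h^m_n = M_n - sum_{k=3}^n M_{k-1} dh^k_n,
    where h^m_n = prod_{l=m+1}^n (1 - (1-eta)/l) and
    dh^k_n = h^k_n - h^{k-1}_n; returns |lhs - rhs|."""
    rng = random.Random(seed)
    # dM[0], dM[1] are unused placeholders; differences start at m = 2
    dM = [0.0, 0.0] + [rng.uniform(-1.0, 1.0) for _ in range(2, n + 1)]

    def h(m):
        prod = 1.0
        for l in range(m + 1, n + 1):
            prod *= 1.0 - (1.0 - eta) / l
        return prod

    M = [0.0] * (n + 1)
    for m in range(2, n + 1):
        M[m] = M[m - 1] + dM[m]
    lhs = sum(dM[m] * h(m) for m in range(2, n + 1))
    rhs = M[n] - sum(M[k - 1] * (h(k) - h(k - 1)) for k in range(3, n + 1))
    return abs(lhs - rhs)
```

The identity holds exactly (up to floating point roundoff) for any choice of the differences, as the derivation of (15) uses no probabilistic structure.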
Note that $h^m_n$ and $\Delta h^m_n$ tend to $0$ as $n$ tends to infinity, so that $\sum_{k=m+1}^{n}\Delta h^k_n=1-h^m_n$ tends to $1$. With $M_\infty:=\lim_{n\to\infty}M_n$ and $\epsilon_m=\sup_{n\ge m}|M_n-M_\infty|$ we derive, for $m\le n$,
\[
\Bigl|M_n-\sum_{k=3}^{m}M_{k-1}\,\Delta h^k_n-\sum_{k=m+1}^{n}M_{k-1}\,\Delta h^k_n\Bigr|
\le \underbrace{|M_n-M_\infty|}_{\to 0}
+\underbrace{\sum_{k=3}^{m}|M_{k-1}|\,\Delta h^k_n}_{\to 0}
+\underbrace{\Bigl|\sum_{k=m+1}^{n}\bigl(M_\infty-M_{k-1}\bigr)\Delta h^k_n\Bigr|}_{\le\,\epsilon_m}
+\underbrace{\Bigl(1-\sum_{k=m+1}^{n}\Delta h^k_n\Bigr)|M_\infty|}_{\to 0}.
\]
Since $\lim_{m\to\infty}\epsilon_m=0$, almost surely, we thus conclude with (15) that $\sum_{m=2}^{n}\Delta M_m\,h^m_n$ tends to $0$. Consequently, $\lim_{n\to\infty}Y_n=y$, almost surely. Next, we show that also $\langle\mu,\mathrm{id}\rangle=y$. Recall that $\mu$ is the unique invariant distribution satisfying $\mu Q=0$ (see (11) for the definition of $Q$). This implies that for any $k\in\mathbb N$
\[
f(k-1)\,\mu_{k-1}-\bigl(f(k)+1\bigr)\mu_k=0,
\]
or equivalently,
\[
\mu_k=f(k-1)\,\mu_{k-1}-f(k)\,\mu_k.
\]
Thus
\[
\langle\mu,\mathrm{id}\rangle=\sum_{k=1}^{\infty}k\,\mu_k
=\sum_{k=1}^{\infty}k\,\bigl(f(k-1)\,\mu_{k-1}-f(k)\,\mu_k\bigr).
\]
One cannot split the sum into two sums since the individual sums are not summable. However, noticing that the individual term $k\,f(k)\,\mu_k\approx k^2\mu_k$ tends to $0$, we can rearrange the summands to obtain
\[
\langle\mu,\mathrm{id}\rangle=f(0)\,\mu_0+\sum_{k=1}^{\infty}f(k)\,\mu_k
=\langle\mu,f\rangle=\eta\,\langle\mu,\mathrm{id}\rangle+1.
\]
This implies that $\langle\mu,\mathrm{id}\rangle=y$ and that for any $m\in\mathbb N$
\[
\langle X^n,\mathbb 1_{[m,\infty)}\cdot\mathrm{id}\rangle
=\langle X^n,\mathrm{id}\rangle-\langle X^n,\mathbb 1_{[0,m)}\cdot\mathrm{id}\rangle
\to \langle\mu,\mathbb 1_{[m,\infty)}\cdot\mathrm{id}\rangle,
\quad\text{almost surely.}
\]
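The identity $\langle\mu,\mathrm{id}\rangle=1/(1-\eta)$ can also be confirmed numerically from the recursion $\mu_k\,(1+f(k))=f(k-1)\,\mu_{k-1}$ (an illustrative sketch with our own naming; the truncation tolerance is arbitrary):

```python
def mean_indegree(eta, tol=1e-14):
    """Return <mu, id> = sum_k k mu_k, computed from the recursion
    mu_k = f(k-1) mu_{k-1} / (1 + f(k)) with f(k) = eta*k + 1;
    the proof above shows this equals y = 1/(1 - eta)."""
    f = lambda k: eta * k + 1.0
    mu_k = 1.0 / (1.0 + f(0))        # mu_0, fixed by normalization
    k, mean = 0, 0.0
    while mu_k > tol:                # mu_k decays polynomially; truncate
        mean += k * mu_k
        k += 1
        mu_k = f(k - 1) * mu_k / (1.0 + f(k))
    return mean
```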
Now, we switch to general attachment rules. We denote by $f$ an arbitrary attachment rule that is dominated by an affine attachment rule $f^a$. The corresponding degree evolutions will be denoted by $Z[m,n]$ and $Z^a[m,n]$, respectively. Moreover, we denote by $\mu$ and $\mu^a$ the limit distributions of the empirical indegree distributions. Since by assumption $f\le f^a$, one can couple both degree evolutions such that $Z[m,n]\le Z^a[m,n]$ for all $n\ge m\ge 0$.
[m, n] for all n ¾ m ¾ 0. Now 〈X n, f 〉 ¶ 〈X n, 1l
[0,m
· f 〉 + 〈X
a
n, 1l
[m,∞
· f
a
〉 so that almost surely
lim sup
n →∞
〈X n, f 〉 ¶ 〈µ, 1l
[0,m
· f 〉 + 〈µ
a
, 1l
[m,∞
· f
a
〉. Since m can be chosen arbitrarily large we conclude that
lim sup
n →∞
〈X n, f 〉 ¶ 〈µ, f 〉. The converse estimate is an immediate consequence of Fatou’s lemma. Hence,
\[
\lim_{n\to\infty}\mathbb E\Bigl[\sum_{m=1}^{n}\Delta Z[m,n]\,\Big|\,\mathcal F_n\Bigr]=\langle\mu,f\rangle.
\]
Since, conditional on $\mathcal F_n$, $\sum_{m=1}^{n}\Delta Z[m,n]$ is a sum of independent Bernoulli variables with success probabilities tending uniformly to $0$, we finally get that $\mathcal L\bigl(\sum_{m=1}^{n}\Delta Z[m,n]\,\big|\,\mathcal F_n\bigr)$ converges in the weak topology to a Poisson distribution with parameter $\langle\mu,f\rangle$.
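This last step is the standard Poisson approximation for sums of independent Bernoulli variables: by Le Cam's inequality, the total variation distance to the Poisson distribution with the same mean is at most $\sum_m p_m^2$, which vanishes when the $p_m$ tend to $0$ uniformly. A small self-contained check (our own illustration):

```python
import math

def bernoulli_sum_vs_poisson(probs):
    """Exact law of a sum of independent Bernoulli(p) variables (computed
    by convolution over the probs), compared in total variation with
    Poisson(sum(probs)). Poisson mass beyond len(probs) is ignored,
    a negligible tail for moderate means."""
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)
            new[k + 1] += q * p
        dist = new
    lam = sum(probs)
    pois_k = math.exp(-lam)          # Poisson pmf at k = 0, then recurse
    tv = 0.0
    for k, q in enumerate(dist):
        tv += abs(q - pois_k)
        pois_k *= lam / (k + 1)
    return 0.5 * tv
```

With 200 success probabilities of size 0.01 (mean 2), the distance is below the Le Cam bound $\sum_m p_m^2 = 0.02$.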
4 Large deviations
In this section we derive tools to analyse rare events in the random network. We provide large and moderate deviation principles for the temporal development of the indegree of a given vertex. This will allow us to describe the indegree evolution of the node with maximal indegree in the case of weak preferential attachment. The large and moderate deviation principles are based on an exponential approximation to the indegree evolution processes, which we first discuss.
4.1 Exponentially good approximation