place depends on the whole composition of the urn. This includes all the models discussed so far. For example, if in the 3-type 2-tuple model one lets the rate at
which one color replaces another depend linearly on the amount of the third color present, then one arrives at the 3-type 3-tuple model. If in the 2-type 2-tuple model
one lets the resampling rate depend on the amount of both types present in the urn, then one finds the operator
\[
A f(x) := x^2 (1-x)^2 \frac{\partial^2}{\partial x^2} f(x), \qquad (1.1.64)
\]
known as Kimura's random selection model. One of the main goals in this dissertation is to show that for infinite systems of interacting diffusions such modifications of the diffusion function do not influence the behavior of the system on large scales, both in space and time.
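As an added illustration (a sketch, not part of the original text): under the convention, used in (1.2.9) below, that the diffusion coefficient of the generator is one half of the squared noise coefficient, the operator (1.1.64) corresponds to the one-dimensional SDE $dX_t = \sqrt{2}\, X_t (1-X_t)\, dB_t$. The following minimal Euler--Maruyama sketch simulates this diffusion; the step size, time horizon, and clipping to $[0,1]$ are ad hoc choices.
\begin{verbatim}
import numpy as np

def simulate_kimura(x0=0.5, dt=1e-3, steps=10_000, seed=0):
    """Euler-Maruyama sketch of dX = sqrt(2) * X * (1 - X) dB, i.e. Kimura's
    random selection model, assuming the convention w = sigma^2 / 2."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for n in range(steps):
        noise = rng.normal(scale=np.sqrt(dt))
        x[n + 1] = x[n] + np.sqrt(2.0) * x[n] * (1.0 - x[n]) * noise
        x[n + 1] = min(max(x[n + 1], 0.0), 1.0)  # keep the path in [0, 1]
    return x

if __name__ == "__main__":
    path = simulate_kimura()
    print(path[::2000])  # a few snapshots of the color frequency
\end{verbatim}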
1.2 Overview of the three articles
1.2.1 Renormalization theory
Renormalization theory is one of the most successful techniques for understanding universal large-scale behavior of interacting particle systems, at least on the level
of heuristic and non-rigorous calculations. The basic idea of the theory is quite simple. First, one needs to find a way to describe a system on a series of ever larger
scales. A usual way is to group the particles into blocks, consisting of a particle and a few of its neighbours, then group these blocks into larger blocks, and so on. With
each scale is associated a set of variables describing the system as if viewed from ever larger distances, where details of the local behavior become ever less visible.
For example, the first set of variables may give the precise state of each particle, the second set only the average value of all particles in a block, and the third set
only averages over blocks of blocks, etc. Each time one goes to a larger scale, the probability law describing the new variables is a marginal of the law describing the
old variables. Thus, in principle, one has a map describing how to go from the old variables to the new larger scale variables. This map is called a renormalization
transformation. If it is the case that under iteration of this transformation different local laws converge to one and the same global law, then one has universal behavior
on large scales.
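As a schematic illustration (not part of the text): the grouping into blocks described above can be mimicked on a one-dimensional array of particle states, where one renormalization step replaces each block of neighbouring values by its average. The sketch below, with an arbitrary block size and initial law, shows how iterating this map produces descriptions on ever larger scales.
\begin{verbatim}
import numpy as np

def renormalize(states, block_size):
    """One renormalization step: replace each block of `block_size`
    neighbouring values by the block average (a coarser description)."""
    usable = len(states) - len(states) % block_size
    blocks = states[:usable].reshape(-1, block_size)
    return blocks.mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    states = rng.uniform(0.0, 1.0, size=3**6)  # the local variables
    scale = states
    for level in range(1, 4):
        scale = renormalize(scale, block_size=3)  # blocks of blocks, etc.
        print(level, scale.mean(), scale.var())
\end{verbatim}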
In practice it is not so easy to realize this renormalization scheme. In order to make it work, one needs an efficient way to describe the probability law of
the renormalized variables. However, it often happens that while the law of the local variables has nice properties, the renormalized law has not: for example, it
may be non-Markovian or non-Gibbsian. In such cases a rigorous study of the renormalization transformation is very hard and frequently impossible.
In some special cases we are lucky and the renormalized system admits a nice description. Apart from the fact that it is nice that at least sometimes rigorous
renormalization calculations are possible, the study of these cases is also interesting from a more fundamental point of view. If we understand better why universality occurs here, we may also find ways to understand systems for which the renormalization scheme does not work so nicely.
The research contained in this dissertation started in 1995, inspired by just such an example of a system for which the renormalization scheme works. This is
a system of linearly interacting diffusions on the hierarchical group, introduced in the next section.
1.2.2 Renormalization of interacting diffusions
By definition, the $N$-dimensional hierarchical group is
\[
\Omega_N := \big\{ i = (i_k)_{k=1,2,\ldots} : i_k \in \{0, \ldots, N-1\},\ i_k \neq 0 \text{ finitely often} \big\}. \qquad (1.2.1)
\]
With componentwise addition modulo $N$, this is a countable Abelian group. We denote the origin by $0 = (0, 0, \ldots)$. Think of $i \in \Omega_N$ as an address: then $i_1$ is the house number, $i_2$ the street, $i_3$ the town, and so on. $\Omega_N$ is ordered in a hierarchical way, where $N$ houses form a street, $N$ streets form a town, $N$ towns form a province, and so on. One defines
\[
\|i\| := \min\{k \in \mathbb{N} : i_l = 0 \ \forall l > k\}. \qquad (1.2.2)
\]
We call $\|i - j\|$ the hierarchical distance between $i$ and $j$. For example, if $i$ and $j$ are in the same town, but not in the same street, they are at hierarchical distance 2.
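As a small illustration (a sketch, not from the text): one can represent an element of $\Omega_N$ by its finitely many nonzero coordinates and compute hierarchical norms and distances directly; the encoding as a finite list of digits below is only for convenience.
\begin{verbatim}
def hnorm(i):
    """Hierarchical norm ||i||: the largest index k with i_k != 0
    (coordinates are given as a finite list (i_1, i_2, ...))."""
    return max((k + 1 for k, digit in enumerate(i) if digit != 0), default=0)

def hdist(i, j, N):
    """Hierarchical distance ||i - j||, componentwise subtraction mod N."""
    length = max(len(i), len(j))
    i = list(i) + [0] * (length - len(i))
    j = list(j) + [0] * (length - len(j))
    return hnorm([(a - b) % N for a, b in zip(i, j)])

# Example: same town (i_3 equal), different street (i_2 differs) => distance 2.
print(hdist((1, 0, 4), (2, 3, 4), N=5))  # prints 2
\end{verbatim}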
Now let us imagine that each $i \in \Omega_N$ represents an urn with balls of $p$ colors, and let us write $X^\alpha_i(t)$ for the relative frequency of color $\alpha$ in urn $i$ at time $t$. We consider the process
\[
X^N = \big( X^N(t) \big)_{t \geq 0} = \Big( \big( X^{N,\alpha}_i(t) \big)_{\alpha=1,\ldots,p-1} \Big)_{t \geq 0,\ i \in \Omega_N}, \qquad (1.2.3)
\]
where $X^N_i(t)$ takes values in $\hat K^p$. For reasons that will become clear later we choose to denote the $N$-dependence of our process explicitly. We assume that the initial frequencies of the colors are described by some $\theta \in \hat K^p$:
\[
X^N_i(0) = \theta \qquad (i \in \Omega_N). \qquad (1.2.4)
\]
If the urns are subject to a resampling mechanism and a migration mechanism as described in section 1.1.6, and the total number of balls in each urn is large, then we expect $X^N$ to solve the martingale problem for an operator of the form
\[
A f(x) := \sum_{ij\alpha} a(j-i)\, (x^\alpha_j - x^\alpha_i)\, \frac{\partial}{\partial x^\alpha_i} f(x) + \sum_{i\alpha\beta} w^{\alpha\beta}(x_i)\, \frac{\partial^2}{\partial x^\alpha_i \, \partial x^\beta_i} f(x), \qquad (1.2.5)
\]
with domain the $\mathcal{C}^2$-functions $f$ that depend on finitely many $x^\alpha_i$ only. Here the diffusion matrix $w$ can be the $p$-type $q$-tuple diffusion matrix $w_{p,q}$, originating from a $q$-tuple resampling mechanism, but we also allow for more general $w$, originating from a composition-dependent resampling mechanism.
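As an added illustration (not part of the original text), consider the simplest case $p = q = 2$. Writing $x_i$ for the frequency of the first color in urn $i$ and assuming, as the discussion of (1.1.64) suggests, that $w_{2,2}(x) = x(1-x)$ up to a constant factor, (1.2.5) reduces to
\[
A f(x) = \sum_{ij} a(j-i)\, (x_j - x_i)\, \frac{\partial}{\partial x_i} f(x) + \sum_i x_i (1 - x_i)\, \frac{\partial^2}{\partial x_i^2} f(x),
\]
i.e., a system of hierarchically interacting Wright--Fisher-type diffusions.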
We choose the migration kernel $a$ in such a way that the strength of the migration between two urns depends only on their hierarchical distance. The collection of all urns at hierarchical distance at most $k$ from an urn $i$,
\[
\{ j \in \Omega_N : \|j - i\| \leq k \}, \qquad (1.2.6)
\]
we call the $k$-block around $i$. We fix constants $c_1, c_2, \ldots \in (0, \infty)$ and for all $k = 1, 2, \ldots$ we let the balls in our urns be subject to the following migration mechanism: With rate $c_k / N^{k-1}$ each ball in an urn $i$ chooses a random urn in the $k$-block around $i$ (possibly itself) and migrates to that urn. This means that the migration kernel $a$ is given by
\[
a(i) = \sum_{k=\|i\|}^{\infty} \frac{c_k}{N^{2k-1}}. \qquad (1.2.7)
\]
To understand why (1.2.7) is the correct formula, note that a ball in urn $i$ decides with rate $c_k / N^{k-1}$ to jump to another urn in the $k$-block around $i$. If $k \geq \|i\|$, this urn is the origin with probability $N^{-k}$.
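As a numerical sanity check (a sketch under assumptions: the sums are truncated at a finite level $K$, and the constants $c_k$ below are arbitrary), one can verify that summing the kernel (1.2.7) over all $j \neq i$ reproduces the total rate $\sum_k c_k N^{-(k-1)} (1 - N^{-k})$ at which a ball jumps to an urn other than its own, using the fact that exactly $N^m - N^{m-1}$ sites lie at hierarchical distance $m \geq 1$ from $i$.
\begin{verbatim}
# Sanity check for (1.2.7): sum_{j != i} a(j - i) equals the total rate of
# jumping to a different urn, sum_k c_k N^{-(k-1)} (1 - N^{-k}).
# Truncation level K and constants c_k are arbitrary choices for the check.
N = 3
K = 12                                  # truncate the hierarchy at K levels
c = [1.0 / (k + 1) for k in range(K)]   # c_1, ..., c_K (hypothetical values)

def a(norm):
    """Migration kernel a(i) as a function of the hierarchical norm ||i|| >= 1."""
    return sum(c[k - 1] / N ** (2 * k - 1) for k in range(norm, K + 1))

lhs = sum((N ** m - N ** (m - 1)) * a(m) for m in range(1, K + 1))
rhs = sum(c[k - 1] / N ** (k - 1) * (1 - N ** (-k)) for k in range(1, K + 1))
print(abs(lhs - rhs) < 1e-12)  # True
\end{verbatim}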
The process $X^N$ can be represented, on an appropriately chosen probability space equipped with $(p-1)$-dimensional independent Brownian motions $(B_i)_{i \in \Omega_N}$, as a solution to the following system of stochastic differential equations:
\[
d X^{N,\alpha}_i(t) = \sum_{k=1}^{\infty} \frac{c_k}{N^{k-1}} \big( X^{N,k,\alpha}_i(t) - X^{N,\alpha}_i(t) \big)\, dt + \sum_\beta \sigma^{\alpha\beta}(X^N_i(t))\, d B^\beta_i(t) \qquad (i \in \Omega_N,\ \alpha = 1, \ldots, p-1,\ t \geq 0), \qquad (1.2.8)
\]
where
\[
\tfrac{1}{2} \sum_\gamma \sigma^{\alpha\gamma}(x)\, \sigma^{\beta\gamma}(x) = w^{\alpha\beta}(x) \qquad (1.2.9)
\]
and $X^{N,k}_i(t)$ is the $k$-block average around $i$:
\[
X^{N,k,\alpha}_i(t) := N^{-k} \sum_{j : \|j-i\| \leq k} X^{N,\alpha}_j(t). \qquad (1.2.10)
\]
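To make the above concrete, here is a hedged simulation sketch (not from the text): it truncates $\Omega_N$ to the $K$-block around the origin ($N^K$ sites), takes $p = 2$ with the assumed diffusion function $w(x) = x(1-x)$, so that $\sigma(x) = \sqrt{2x(1-x)}$ by (1.2.9), and discretizes (1.2.8) by Euler--Maruyama; the constants $c_k$, the step size, and the truncation are arbitrary choices.
\begin{verbatim}
import numpy as np

N, K = 3, 4                  # hierarchy width and truncation depth (N**K sites)
c = [1.0, 0.5, 0.25, 0.125]  # c_1, ..., c_K (hypothetical values)
theta, dt, steps = 0.3, 1e-3, 2000
rng = np.random.default_rng(0)

# Site i is encoded by its digits (i_1, ..., i_K); index = sum_k i_k N^(k-1).
x = np.full(N ** K, theta)   # initial condition (1.2.4)

def block_averages(x, k):
    """k-block averages (1.2.10): average x over blocks of N**k consecutive
    indices (digits i_1, ..., i_k free, higher digits fixed)."""
    avg = x.reshape(-1, N ** k).mean(axis=1)
    return np.repeat(avg, N ** k)

for _ in range(steps):
    drift = np.zeros_like(x)
    for k in range(1, K + 1):
        # migration towards the k-block average, at rate c_k / N^(k-1)
        drift += c[k - 1] / N ** (k - 1) * (block_averages(x, k) - x)
    noise = rng.normal(scale=np.sqrt(dt), size=x.shape)
    x = x + drift * dt + np.sqrt(2.0 * x * (1.0 - x)) * noise
    x = np.clip(x, 0.0, 1.0)  # keep frequencies in [0, 1]

print(x.mean(), block_averages(x, K)[0])  # global average is roughly preserved
\end{verbatim}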