Proposition 2.11 (Product formula). Let $f \in L^2_s(\mu^p)$ and $g \in L^2_s(\mu^q)$, $p, q \geq 1$, and suppose moreover that $f \star_r^l g \in L^2(\mu^{p+q-r-l})$ for every $r = 1, \ldots, p \wedge q$ and $l = 1, \ldots, r$ such that $l \neq r$. Then,
\[
I_p(f)\, I_q(g) = \sum_{r=0}^{p \wedge q} r! \binom{p}{r} \binom{q}{r} \sum_{l=0}^{r} \binom{r}{l} I_{p+q-r-l}\big(\widetilde{f \star_r^l g}\big), \qquad (6)
\]
with the tilde $\sim$ indicating a symmetrization, that is,
\[
\widetilde{f \star_r^l g}(x_1, \ldots, x_{p+q-r-l}) = \frac{1}{(p+q-r-l)!} \sum_{\sigma} f \star_r^l g(x_{\sigma(1)}, \ldots, x_{\sigma(p+q-r-l)}),
\]
where $\sigma$ runs over all $(p+q-r-l)!$ permutations of the set $\{1, \ldots, p+q-r-l\}$.
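For instance (using, purely for illustration, the conventions $f \star_0^0 g = f \otimes g$, $f \star_1^0 g = fg$ and $f \star_1^1 g = \int_Z f g \, d\mu$ for the contractions appearing in (6)), specializing the product formula to $p = q = 1$ gives, whenever $fg \in L^2(\mu)$,
\[
I_1(f)\, I_1(g) = I_2\big(\widetilde{f \otimes g}\big) + I_1(f g) + \int_Z f(z) g(z)\, \mu(dz).
\]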
2.4 Stein’s method: measuring the distance between random vectors
We write $g \in C^k(\mathbb{R}^d)$ if the function $g : \mathbb{R}^d \to \mathbb{R}$ admits continuous partial derivatives up to the order $k$.
Definition 2.12. 1. The Hilbert-Schmidt inner product and the Hilbert-Schmidt norm on the class of $d \times d$ real matrices, denoted respectively by $\langle \cdot, \cdot \rangle_{H.S.}$ and $\|\cdot\|_{H.S.}$, are defined as follows: for every pair of matrices $A$ and $B$,
\[
\langle A, B \rangle_{H.S.} := \mathrm{Tr}(A B^T) \quad \text{and} \quad \|A\|_{H.S.} = \sqrt{\langle A, A \rangle_{H.S.}},
\]
where $\mathrm{Tr}(\cdot)$ indicates the usual trace operator.

2. The operator norm of a $d \times d$ real matrix $A$ is given by $\|A\|_{op} := \sup_{\|x\|_{\mathbb{R}^d} = 1} \|Ax\|_{\mathbb{R}^d}$.

3. For every function $g : \mathbb{R}^d \to \mathbb{R}$, let
\[
\|g\|_{Lip} := \sup_{x \neq y} \frac{|g(x) - g(y)|}{\|x - y\|_{\mathbb{R}^d}},
\]
where $\|\cdot\|_{\mathbb{R}^d}$ is the usual Euclidean norm on $\mathbb{R}^d$. If $g \in C^1(\mathbb{R}^d)$, we also write
\[
M_2(g) := \sup_{x \neq y} \frac{\|\nabla g(x) - \nabla g(y)\|_{\mathbb{R}^d}}{\|x - y\|_{\mathbb{R}^d}};
\]
if $g \in C^2(\mathbb{R}^d)$,
\[
M_3(g) := \sup_{x \neq y} \frac{\|\mathrm{Hess}\, g(x) - \mathrm{Hess}\, g(y)\|_{op}}{\|x - y\|_{\mathbb{R}^d}},
\]
where $\mathrm{Hess}\, g(z)$ stands for the Hessian matrix of $g$ evaluated at a point $z$.

4. For a positive integer $k$ and a function $g \in C^k(\mathbb{R}^d)$, we set
\[
\|g^{(k)}\|_\infty = \max_{1 \leq i_1 \leq \ldots \leq i_k \leq d} \, \sup_{x \in \mathbb{R}^d} \left| \frac{\partial^k}{\partial x_{i_1} \cdots \partial x_{i_k}} g(x) \right|.
\]
In particular, by specializing this definition to $g^{(2)} = g''$ and $g^{(3)} = g'''$, we obtain
\[
\|g''\|_\infty = \max_{1 \leq i_1 \leq i_2 \leq d} \, \sup_{x \in \mathbb{R}^d} \left| \frac{\partial^2}{\partial x_{i_1} \partial x_{i_2}} g(x) \right|, \qquad
\|g'''\|_\infty = \max_{1 \leq i_1 \leq i_2 \leq i_3 \leq d} \, \sup_{x \in \mathbb{R}^d} \left| \frac{\partial^3}{\partial x_{i_1} \partial x_{i_2} \partial x_{i_3}} g(x) \right|.
\]
Remark 2.13. 1. The norm $\|g\|_{Lip}$ is written $M_1(g)$ in [5].

2. If $g \in C^1(\mathbb{R}^d)$, then $\|g\|_{Lip} = \sup_{x \in \mathbb{R}^d} \|\nabla g(x)\|_{\mathbb{R}^d}$. If $g \in C^2(\mathbb{R}^d)$, then $M_2(g) = \sup_{x \in \mathbb{R}^d} \|\mathrm{Hess}\, g(x)\|_{op}$.
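As a simple illustration of these quantities (this example is not used in the sequel), consider the quadratic function $g(x) = \frac{1}{2} \langle Bx, x \rangle_{\mathbb{R}^d}$ associated with a symmetric $d \times d$ matrix $B$: then $\nabla g(x) = Bx$ and $\mathrm{Hess}\, g(x) = B$, so that $M_2(g) = \|B\|_{op}$, $M_3(g) = 0$ and $\|g''\|_\infty = \max_{1 \leq i_1 \leq i_2 \leq d} |B_{i_1 i_2}|$, whereas $\|g\|_{Lip} = \sup_{x \in \mathbb{R}^d} \|Bx\|_{\mathbb{R}^d} = +\infty$ as soon as $B \neq 0$.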
Definition 2.14. The distance $d_2$ between the laws of two $\mathbb{R}^d$-valued random vectors $X$ and $Y$ such that $E\|X\|_{\mathbb{R}^d}, E\|Y\|_{\mathbb{R}^d} < \infty$, written $d_2(X, Y)$, is given by
\[
d_2(X, Y) = \sup_{g \in \mathscr{H}} |E[g(X)] - E[g(Y)]|,
\]
where $\mathscr{H}$ indicates the collection of all functions $g \in C^2(\mathbb{R}^d)$ such that $\|g\|_{Lip} \leq 1$ and $M_2(g) \leq 1$.
Definition 2.15. The distance $d_3$ between the laws of two $\mathbb{R}^d$-valued random vectors $X$ and $Y$ such that $E\|X\|^2_{\mathbb{R}^d}, E\|Y\|^2_{\mathbb{R}^d} < \infty$, written $d_3(X, Y)$, is given by
\[
d_3(X, Y) = \sup_{g \in \mathscr{H}} |E[g(X)] - E[g(Y)]|,
\]
where $\mathscr{H}$ indicates the collection of all functions $g \in C^3(\mathbb{R}^d)$ such that $\|g''\|_\infty \leq 1$ and $\|g'''\|_\infty \leq 1$.
Remark 2.16. The distances $d_2$ and $d_3$ are related, respectively, to the estimates of Section 3 and Section 4. Let $j = 2, 3$. It is easily seen that, if $d_j(F_n, F) \to 0$, where $F_n, F$ are random vectors in $\mathbb{R}^d$, then necessarily $F_n$ converges in distribution to $F$. It will also become clear later on that, in the definitions of $d_2$ and $d_3$, the choice of the constant 1 as a bound for $\|g\|_{Lip}$, $M_2(g)$, $\|g''\|_\infty$, $\|g'''\|_\infty$ is arbitrary and immaterial for the derivation of our main results (indeed, we defined $d_2$ and $d_3$ in order to obtain bounds as simple as possible). See the two tables in Section 4.2 for a list of available bounds involving more general test functions.

The following result is a $d$-dimensional version of Stein's Lemma; analogous statements can be found in [5, 11, 23] – see also Barbour [1] and Götze [6], in connection with the so-called “generator approach” to Stein's method. As anticipated, Stein's Lemma will be used to deduce an explicit bound on the distance $d_2$ between the law of a vector of functionals of $\hat{N}$ and the law of a Gaussian vector. To this end, we need the two estimates (7) (which is proved in [11]) and (8) (which is new).

From now on, given a $d \times d$ nonnegative definite matrix $C$, we write $N_d(0, C)$ to indicate the law of a centered $d$-dimensional Gaussian vector with covariance $C$.
Lemma 2.17 (Stein's Lemma and estimates). Fix an integer $d \geq 2$ and let $C = \{C(i, j) : i, j = 1, \ldots, d\}$ be a $d \times d$ nonnegative definite symmetric real matrix.

1. Let $Y$ be a random variable with values in $\mathbb{R}^d$. Then $Y \sim N_d(0, C)$ if and only if, for every twice differentiable function $f : \mathbb{R}^d \to \mathbb{R}$ such that $E|\langle C, \mathrm{Hess}\, f(Y) \rangle_{H.S.}| + E|\langle Y, \nabla f(Y) \rangle_{\mathbb{R}^d}| < \infty$, it holds that
\[
E\big[ \langle Y, \nabla f(Y) \rangle_{\mathbb{R}^d} - \langle C, \mathrm{Hess}\, f(Y) \rangle_{H.S.} \big] = 0.
\]

2. Assume in addition that $C$ is positive definite and consider a Gaussian random vector $X \sim N_d(0, C)$. Let $g : \mathbb{R}^d \to \mathbb{R}$ belong to $C^2(\mathbb{R}^d)$ with first and second bounded derivatives. Then, the function $U_g$ defined by
\[
U_g(x) := \int_0^1 \frac{1}{2t} E\big[ g(\sqrt{t}\, x + \sqrt{1-t}\, X) - g(X) \big]\, dt
\]
is a solution to the following partial differential equation (with unknown function $f$):
\[
g(x) - E[g(X)] = \langle x, \nabla f(x) \rangle_{\mathbb{R}^d} - \langle C, \mathrm{Hess}\, f(x) \rangle_{H.S.}, \quad x \in \mathbb{R}^d.
\]
Moreover, one has that
\[
\sup_{x \in \mathbb{R}^d} \|\mathrm{Hess}\, U_g(x)\|_{H.S.} \leq \|C^{-1}\|_{op}\, \|C\|_{op}^{1/2}\, \|g\|_{Lip}, \qquad (7)
\]
and
\[
M_3(U_g) \leq \frac{\sqrt{2\pi}}{4} \|C^{-1}\|_{op}^{3/2}\, \|C\|_{op}\, M_2(g). \qquad (8)
\]
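As an elementary sanity check of Point 1 (not part of the original statement), one can take $f(x) = \frac{1}{2} \|x\|_{\mathbb{R}^d}^2$, so that $\nabla f(x) = x$ and $\mathrm{Hess}\, f(x) = I_d$: the characterizing identity then reduces to $E\|Y\|_{\mathbb{R}^d}^2 = \langle C, I_d \rangle_{H.S.} = \mathrm{Tr}(C)$, which is indeed the second moment of a vector with law $N_d(0, C)$.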
Proof. We shall only show relation (8), as the proof of the remaining points in the statement can be found in [11]. Since $C$ is a positive definite matrix, there exists a non-singular symmetric matrix $A$ such that $A^2 = C$, and $A^{-1}X \sim N_d(0, I_d)$. Let $U_g(x) = h(A^{-1}x)$, where
\[
h(x) = \int_0^1 \frac{1}{2t} E\big[ g_A(\sqrt{t}\, x + \sqrt{1-t}\, A^{-1}X) - g_A(A^{-1}X) \big]\, dt
\]
and $g_A(x) = g(Ax)$. As $A^{-1}X \sim N_d(0, I_d)$, the function $h$ solves Stein's equation
\[
\langle x, \nabla h(x) \rangle_{\mathbb{R}^d} - \Delta h(x) = g_A(x) - E[g_A(Y)],
\]
where $Y \sim N_d(0, I_d)$ and $\Delta$ is the Laplacian. On the one hand, as $\mathrm{Hess}\, g_A(x) = A\, \mathrm{Hess}\, g(Ax)\, A$ (recall that $A$ is symmetric), we have
\[
M_2(g_A) = \sup_{x \in \mathbb{R}^d} \|\mathrm{Hess}\, g_A(x)\|_{op} = \sup_{x \in \mathbb{R}^d} \|A\, \mathrm{Hess}\, g(Ax)\, A\|_{op} = \sup_{x \in \mathbb{R}^d} \|A\, \mathrm{Hess}\, g(x)\, A\|_{op} \leq \|A\|_{op}^2\, M_2(g) = \|C\|_{op}\, M_2(g),
\]
where the inequality above follows from the well-known relation $\|AB\|_{op} \leq \|A\|_{op} \|B\|_{op}$. Now write $h_{A^{-1}}(x) = h(A^{-1}x)$: it is easily seen that
\[
\mathrm{Hess}\, U_g(x) = \mathrm{Hess}\, h_{A^{-1}}(x) = A^{-1}\, \mathrm{Hess}\, h(A^{-1}x)\, A^{-1}.
\]
. 1495
It follows that
\[
\begin{aligned}
M_3(U_g) = M_3(h_{A^{-1}}) &= \sup_{x \neq y} \frac{\|\mathrm{Hess}\, h_{A^{-1}}(x) - \mathrm{Hess}\, h_{A^{-1}}(y)\|_{op}}{\|x - y\|} \\
&= \sup_{x \neq y} \frac{\|A^{-1}\, \mathrm{Hess}\, h(A^{-1}x)\, A^{-1} - A^{-1}\, \mathrm{Hess}\, h(A^{-1}y)\, A^{-1}\|_{op}}{\|x - y\|} \\
&\leq \|A^{-1}\|_{op}^2 \times \sup_{x \neq y} \frac{\|\mathrm{Hess}\, h(A^{-1}x) - \mathrm{Hess}\, h(A^{-1}y)\|_{op}}{\|x - y\|} \times \frac{\|A^{-1}x - A^{-1}y\|}{\|A^{-1}x - A^{-1}y\|} \\
&\leq \|A^{-1}\|_{op}^2 \times \sup_{x \neq y} \frac{\|\mathrm{Hess}\, h(A^{-1}x) - \mathrm{Hess}\, h(A^{-1}y)\|_{op}}{\|A^{-1}x - A^{-1}y\|} \times \|A^{-1}\|_{op} \\
&= \|C^{-1}\|_{op}^{3/2}\, M_3(h).
\end{aligned}
\]
Since $M_3(h) \leq \frac{\sqrt{2\pi}}{4} M_2(g_A)$ (according to [5, Lemma 3]), relation (8) follows immediately.
3 Upper bounds obtained by Malliavin-Stein methods
We will now deduce one of the main findings of the present paper, namely Theorem 3.3. This result allows one to estimate the distance between the law of a vector of Poisson functionals and the law of a Gaussian vector, by combining the multi-dimensional Stein's Lemma 2.17 with the algebra of the Malliavin operators. Note that, in this section, all Gaussian vectors are supposed to have a positive definite covariance matrix.
We start by proving a technical lemma, which is a crucial element in most of our proofs.
Lemma 3.1. Fix $d \geq 1$ and consider a vector of random variables $F := (F_1, \ldots, F_d) \subset L^2(P)$. Assume that, for all $1 \leq i \leq d$, $F_i \in \mathrm{dom}\, D$ and $E[F_i] = 0$. For all $\phi \in C^2(\mathbb{R}^d)$ with bounded derivatives, one has that
\[
D_z \phi(F_1, \ldots, F_d) = \sum_{i=1}^d \frac{\partial}{\partial x_i} \phi(F)\, D_z F_i + \sum_{i,j=1}^d R_{ij}(D_z F_i, D_z F_j), \quad z \in Z,
\]
where the mappings $R_{ij}$ satisfy
\[
|R_{ij}(y_1, y_2)| \leq \frac{1}{2} \sup_{x \in \mathbb{R}^d} \left| \frac{\partial^2}{\partial x_i \partial x_j} \phi(x) \right| \times |y_1 y_2| \leq \frac{1}{2} \|\phi''\|_\infty\, |y_1 y_2|. \qquad (9)
\]
Proof. By the multivariate Taylor theorem and Lemma 2.5,
\[
\begin{aligned}
D_z \phi(F_1, \ldots, F_d) &= \phi(F_1, \ldots, F_d)(\omega + \delta_z) - \phi(F_1, \ldots, F_d)(\omega) \\
&= \phi(F_1(\omega + \delta_z), \ldots, F_d(\omega + \delta_z)) - \phi(F_1(\omega), \ldots, F_d(\omega)) \\
&= \sum_{i=1}^d \frac{\partial}{\partial x_i} \phi(F_1(\omega), \ldots, F_d(\omega)) \big( F_i(\omega + \delta_z) - F_i(\omega) \big) + R \\
&= \sum_{i=1}^d \frac{\partial}{\partial x_i} \phi(F)\, D_z F_i + R,
\end{aligned}
\]
where the term $R$ represents the residue:
\[
R = R(D_z F_1, \ldots, D_z F_d) = \sum_{i,j=1}^d R_{ij}(D_z F_i, D_z F_j),
\]
and the mapping $(y_1, y_2) \mapsto R_{ij}(y_1, y_2)$ verifies (9).
Remark 3.2. Lemma 3.1 is the Poisson counterpart of the multi-dimensional “chain rule” verified by the Malliavin derivative on a Gaussian space (see [9, 11]). Notice that the term $R$ does not appear
in the Gaussian framework.
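For instance (taking $d = 1$ and disregarding the boundedness assumption, which plays no role in this elementary computation), let $\phi(x) = x^2$: since $D_z$ acts as an add-one-point difference operator, $D_z(F^2) = (F + D_z F)^2 - F^2 = 2F\, D_z F + (D_z F)^2$, so that the residue equals $(D_z F)^2$, in accordance with (9) (here $\|\phi''\|_\infty = 2$); the Gaussian chain rule would yield the term $2F\, D_z F$ only.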
The following result uses Lemmas 2.17 and 3.1 in order to compute explicit bounds on the distance between the law of a vector of Poisson functionals and the law of a Gaussian vector.
Theorem 3.3 (Malliavin-Stein inequalities on the Poisson space). Fix $d \geq 2$ and let $C = \{C(i, j) : i, j = 1, \ldots, d\}$ be a $d \times d$ positive definite matrix. Suppose that $X \sim N_d(0, C)$ and that $F = (F_1, \ldots, F_d)$ is an $\mathbb{R}^d$-valued random vector such that $E[F_i] = 0$ and $F_i \in \mathrm{dom}\, D$, $i = 1, \ldots, d$. Then,
\[
\begin{aligned}
d_2(F, X) \leq{} & \|C^{-1}\|_{op}\, \|C\|_{op}^{1/2} \sqrt{ \sum_{i,j=1}^d E\big[ (C(i, j) - \langle DF_i, -DL^{-1}F_j \rangle_{L^2(\mu)})^2 \big] } \qquad (10) \\
& + \frac{\sqrt{2\pi}}{8} \|C^{-1}\|_{op}^{3/2}\, \|C\|_{op} \int_Z \mu(dz)\, E\left[ \Big( \sum_{i=1}^d |D_z F_i| \Big)^2 \Big( \sum_{i=1}^d |D_z L^{-1} F_i| \Big) \right]. \qquad (11)
\end{aligned}
\]
Proof. If either one of the expectations in (10) and (11) is infinite, there is nothing to prove: we shall therefore work under the assumption that both expressions (10)–(11) are finite. By the definition of the distance $d_2$, and by using an interpolation argument identical to the one used at the beginning of the proof of Theorem 4 in [5], we need only show the following inequality:
\[
\begin{aligned}
|E[g(X)] - E[g(F)]| \leq{} & A \|C^{-1}\|_{op}\, \|C\|_{op}^{1/2} \sqrt{ \sum_{i,j=1}^d E\big[ (C(i, j) - \langle DF_i, -DL^{-1}F_j \rangle_{L^2(\mu)})^2 \big] } \qquad (12) \\
& + \frac{\sqrt{2\pi}}{8} B \|C^{-1}\|_{op}^{3/2}\, \|C\|_{op} \int_Z \mu(dz)\, E\left[ \Big( \sum_{i=1}^d |D_z F_i| \Big)^2 \Big( \sum_{i=1}^d |D_z L^{-1} F_i| \Big) \right]
\end{aligned}
\]
for any $g \in C^\infty(\mathbb{R}^d)$ with first and second bounded derivatives, such that $\|g\|_{Lip} \leq A$ and $M_2(g) \leq B$. To prove (12), we use Point 2 in Lemma 2.17 to deduce that
\[
\begin{aligned}
|E[g(X)] - E[g(F)]| &= \big| E[\langle C, \mathrm{Hess}\, U_g(F) \rangle_{H.S.} - \langle F, \nabla U_g(F) \rangle_{\mathbb{R}^d}] \big| \\
&= \left| E\left[ \sum_{i,j=1}^d C(i, j) \frac{\partial^2}{\partial x_i \partial x_j} U_g(F) - \sum_{k=1}^d F_k \frac{\partial}{\partial x_k} U_g(F) \right] \right| \\
&= \left| \sum_{i,j=1}^d E\left[ C(i, j) \frac{\partial^2}{\partial x_i \partial x_j} U_g(F) \right] + \sum_{k=1}^d E\left[ \delta(DL^{-1}F_k) \frac{\partial}{\partial x_k} U_g(F) \right] \right| \\
&= \left| \sum_{i,j=1}^d E\left[ C(i, j) \frac{\partial^2}{\partial x_i \partial x_j} U_g(F) \right] - \sum_{k=1}^d E\left[ \Big\langle D \frac{\partial}{\partial x_k} U_g(F), -DL^{-1}F_k \Big\rangle_{L^2(\mu)} \right] \right|.
\end{aligned}
\]
We write $\frac{\partial}{\partial x_k} U_g(F) := \phi_k(F_1, \ldots, F_d) = \phi_k(F)$. By using Lemma 3.1, we infer
\[
D_z \phi_k(F_1, \ldots, F_d) = \sum_{i=1}^d \frac{\partial}{\partial x_i} \phi_k(F)\, D_z F_i + R_k,
\]
with $R_k = \sum_{i,j=1}^d R_{i,j,k}(D_z F_i, D_z F_j)$, and
\[
|R_{i,j,k}(y_1, y_2)| \leq \frac{1}{2} \sup_{x \in \mathbb{R}^d} \left| \frac{\partial^2}{\partial x_i \partial x_j} \phi_k(x) \right| \times |y_1 y_2|.
\]
It follows that
\[
\begin{aligned}
|E[g(X)] - E[g(F)]| ={} & \bigg| \sum_{i,j=1}^d E\left[ C(i, j) \frac{\partial^2}{\partial x_i \partial x_j} U_g(F) \right] - \sum_{i,k=1}^d E\left[ \frac{\partial^2}{\partial x_i \partial x_k} U_g(F)\, \langle DF_i, -DL^{-1}F_k \rangle_{L^2(\mu)} \right] \\
& + \sum_{i,j,k=1}^d E\big[ \langle R_{i,j,k}(DF_i, DF_j), -DL^{-1}F_k \rangle_{L^2(\mu)} \big] \bigg| \\
\leq{} & \sqrt{ E\big[ \|\mathrm{Hess}\, U_g(F)\|_{H.S.}^2 \big] } \times \sqrt{ \sum_{i,j=1}^d E\Big[ \big( C(i, j) - \langle DF_i, -DL^{-1}F_j \rangle_{L^2(\mu)} \big)^2 \Big] } + |R_2|,
\end{aligned}
\]
where
\[
R_2 = \sum_{i,j,k=1}^d E\big[ \langle R_{i,j,k}(DF_i, DF_j), -DL^{-1}F_k \rangle_{L^2(\mu)} \big].
\]
Note that (7) implies that $\|\mathrm{Hess}\, U_g(F)\|_{H.S.} \leq \|C^{-1}\|_{op} \|C\|_{op}^{1/2} \|g\|_{Lip}$. By using (8) and the fact that $\|g'''\|_\infty \leq M_3(g)$ (here applied to $U_g$), we have
\[
|R_{i,j,k}(y_1, y_2)| \leq \frac{1}{2} \sup_{x \in \mathbb{R}^d} \left| \frac{\partial^3}{\partial x_i \partial x_j \partial x_k} U_g(x) \right| \times |y_1 y_2| \leq \frac{\sqrt{2\pi}}{8} M_2(g) \|C^{-1}\|_{op}^{3/2} \|C\|_{op} \times |y_1 y_2| \leq \frac{\sqrt{2\pi}}{8} B \|C^{-1}\|_{op}^{3/2} \|C\|_{op} \times |y_1 y_2|.
\]
Consequently,
\[
|R_2| \leq \sum_{i,j,k=1}^d E \int_Z |R_{i,j,k}(D_z F_i, D_z F_j)|\, |D_z L^{-1} F_k|\, \mu(dz) \leq \frac{\sqrt{2\pi}}{8} B \|C^{-1}\|_{op}^{3/2} \|C\|_{op} \int_Z \mu(dz)\, E\left[ \Big( \sum_{i=1}^d |D_z F_i| \Big)^2 \Big( \sum_{i=1}^d |D_z L^{-1} F_i| \Big) \right],
\]
from which we deduce the desired conclusion.
Now recall that, for a random variable $F = \hat{N}(h) = I_1(h)$ in the first Wiener chaos of $\hat{N}$, one has that $DF = h$ and $L^{-1}F = -F$. By virtue of Remark 2.16, we immediately deduce the following
consequence of Theorem 3.3.
Corollary 3.4. For a fixed $d \geq 2$, let $X \sim N_d(0, C)$, with $C$ positive definite, and let
\[
F_n = (F_{n,1}, \ldots, F_{n,d}) = (\hat{N}(h_{n,1}), \ldots, \hat{N}(h_{n,d})), \quad n \geq 1,
\]
be a collection of $d$-dimensional random vectors living in the first Wiener chaos of $\hat{N}$. Call $K_n$ the covariance matrix of $F_n$, that is: $K_n(i, j) = E[\hat{N}(h_{n,i}) \hat{N}(h_{n,j})] = \langle h_{n,i}, h_{n,j} \rangle_{L^2(\mu)}$. Then,
\[
d_2(F_n, X) \leq \|C^{-1}\|_{op} \|C\|_{op}^{1/2} \|C - K_n\|_{H.S.} + \frac{d^2 \sqrt{2\pi}}{8} \|C^{-1}\|_{op}^{3/2} \|C\|_{op} \sum_{i=1}^d \int_Z |h_{n,i}(z)|^3 \mu(dz).
\]
In particular, if
\[
K_n(i, j) \to C(i, j) \quad \text{and} \quad \int_Z |h_{n,i}(z)|^3 \mu(dz) \to 0 \qquad (13)
\]
as $n \to \infty$ and for every $i, j = 1, \ldots, d$, then $d_2(F_n, X) \to 0$ and $F_n$ converges in distribution to $X$.
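As a simple illustration (not contained in the original statement), let $A_{n,1}, \ldots, A_{n,d}$ be disjoint measurable subsets of $Z$ with $\mu(A_{n,i}) = \lambda_n \in (0, \infty)$, and set $h_{n,i} = \lambda_n^{-1/2} \mathbf{1}_{A_{n,i}}$, so that each $F_{n,i} = \lambda_n^{-1/2} \hat{N}(\mathbf{1}_{A_{n,i}})$ is a normalized compensated Poisson count. Then $K_n = I_d$ and $\int_Z |h_{n,i}(z)|^3 \mu(dz) = \lambda_n^{-1/2}$, so that, choosing $C = I_d$, Corollary 3.4 gives $d_2(F_n, X) \leq \frac{d^3 \sqrt{2\pi}}{8 \sqrt{\lambda_n}}$: one recovers a Berry-Esseen type rate of order $\lambda_n^{-1/2}$ for the multivariate CLT of normalized Poisson counts as $\lambda_n \to \infty$.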
Remark 3.5. 1. The conclusion of Corollary 3.4 is by no means trivial. Indeed, apart from the requirement on the asymptotic behavior of covariances, the statement of Corollary 3.4 does not contain any assumption on the joint distribution of the components of the random vectors $F_n$. We will see in Section 5 that analogous results can be deduced for vectors of multiple integrals of arbitrary orders. We will also see in Corollary 4.3 that one can relax the assumption that $C$ is positive definite.

2. The inequality appearing in the statement of Corollary 3.4 should also be compared with the following result, proved in [11], yielding a bound on the Wasserstein distance between the laws of two Gaussian vectors of dimension $d \geq 2$. Let $Y \sim N_d(0, K)$ and $X \sim N_d(0, C)$, where $K$ and $C$ are two positive definite covariance matrices. Then $d_W(Y, X) \leq Q(C, K) \times \|C - K\|_{H.S.}$, where
\[
Q(C, K) := \min\big\{ \|C^{-1}\|_{op} \|C\|_{op}^{1/2},\; \|K^{-1}\|_{op} \|K\|_{op}^{1/2} \big\},
\]
and $d_W$ denotes the Wasserstein distance between the laws of random variables with values in $\mathbb{R}^d$.
4 Upper bounds obtained by interpolation methods
4.1 Main estimates