Electronic Journal of Probability, Vol. 14 (2009), Paper no. 35, pages 978–1011.
Journal URL
http://www.math.washington.edu/~ejpecp/
Rates of convergence for minimal distances in the central
limit theorem under projective criteria
Jérôme Dedecker∗, Florence Merlevède†, Emmanuel Rio‡

Abstract
In this paper, we give estimates of ideal or minimal distances between the distribution of the normalized partial sum and the limiting Gaussian distribution for stationary martingale difference sequences or stationary sequences satisfying projective criteria. Applications to functions of linear processes and to functions of expanding maps of the interval are given.
Key words: Minimal and ideal distances, rates of convergence, martingale difference sequences, stationary sequences, projective criteria, weak dependence, uniform mixing.
AMS 2000 Subject Classification: Primary 60F05.
Submitted to EJP on November 5, 2007, final version accepted June 6, 2008.
∗Université Paris 6, LSTA, 175 rue du Chevaleret, 75013 Paris, FRANCE. E-mail: jerome.dedecker@upmc.fr
†Université Paris Est, Laboratoire de mathématiques, UMR 8050 CNRS, Bâtiment Copernic, 5 Boulevard Descartes,
77435 Champs-Sur-Marne, FRANCE. E-mail: florence.merlevede@univ-mlv.fr
‡Université de Versailles, Laboratoire de mathématiques, UMR 8100 CNRS, Bâtiment Fermat, 45 Avenue des
1 Introduction and Notations

Let $X_1, X_2, \ldots$ be a strictly stationary sequence of real-valued random variables (r.v.) with mean zero and finite variance. Set $S_n = X_1 + X_2 + \cdots + X_n$. By $P_{n^{-1/2}S_n}$ we denote the law of $n^{-1/2}S_n$ and by $G_{\sigma^2}$ the normal distribution $N(0,\sigma^2)$. In this paper, we shall give quantitative estimates of the approximation of $P_{n^{-1/2}S_n}$ by $G_{\sigma^2}$ in terms of minimal or ideal metrics.
Let $\mathcal{L}(\mu,\nu)$ be the set of probability laws on $\mathbb{R}^2$ with marginals $\mu$ and $\nu$. Let us consider the following minimal distances (sometimes called Wasserstein distances of order $r$):
$$
W_r(\mu,\nu) =
\begin{cases}
\inf\Big\{ \displaystyle\int |x-y|^r\,P(dx,dy) : P \in \mathcal{L}(\mu,\nu) \Big\} & \text{if } 0 < r < 1,\\[2mm]
\inf\Big\{ \Big(\displaystyle\int |x-y|^r\,P(dx,dy)\Big)^{1/r} : P \in \mathcal{L}(\mu,\nu) \Big\} & \text{if } r \geq 1.
\end{cases}
$$
It is well known that for two probability measures $\mu$ and $\nu$ on $\mathbb{R}$ with respective distribution functions (d.f.) $F$ and $G$,
$$
W_r(\mu,\nu) = \Big(\int_0^1 |F^{-1}(u)-G^{-1}(u)|^r\,du\Big)^{1/r} \quad \text{for any } r \geq 1. \tag{1.1}
$$
We consider also the following ideal distances of order $r$ (Zolotarev distances of order $r$). For two probability measures $\mu$ and $\nu$, and $r$ a positive real, let
$$
\zeta_r(\mu,\nu) = \sup\Big\{ \int f\,d\mu - \int f\,d\nu : f \in \Lambda_r \Big\},
$$
where $\Lambda_r$ is defined as follows: denoting by $l$ the natural integer such that $l < r \leq l+1$, $\Lambda_r$ is the class of real functions $f$ which are $l$-times continuously differentiable and such that
$$
|f^{(l)}(x)-f^{(l)}(y)| \leq |x-y|^{r-l} \quad \text{for any } (x,y) \in \mathbb{R}\times\mathbb{R}. \tag{1.2}
$$
It follows from the Kantorovich-Rubinstein theorem (1958) that for any $0 < r \leq 1$,
$$
W_r(\mu,\nu) = \zeta_r(\mu,\nu). \tag{1.3}
$$
For probability laws on the real line, Rio (1998) proved that for any $r > 1$,
$$
W_r(\mu,\nu) \leq c_r \big(\zeta_r(\mu,\nu)\big)^{1/r}, \tag{1.4}
$$
where $c_r$ is a constant depending only on $r$.
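As a quick numerical illustration of the quantile representation (1.1) (our sketch, not part of the paper's argument), the following code computes $W_r$ between two empirical measures supported on samples of equal size. For such measures the monotone (sorted) coupling is optimal, so the integral in (1.1) reduces to an average over sorted pairs; the function name `wasserstein_r` is ours.

```python
import numpy as np

def wasserstein_r(x, y, r=1.0):
    """Empirical Wasserstein distance of order r between two samples of
    equal size, via the quantile representation (1.1): for r >= 1,
    W_r = (int_0^1 |F^{-1}(u) - G^{-1}(u)|^r du)^{1/r}, while for
    0 < r < 1 the outer power 1/r is dropped.  For equal-size samples
    the empirical quantile functions are just the sorted samples."""
    xs = np.sort(np.asarray(x, dtype=float))
    ys = np.sort(np.asarray(y, dtype=float))
    assert xs.shape == ys.shape, "samples must have equal size"
    m = np.mean(np.abs(xs - ys) ** r)
    return m ** (1.0 / r) if r >= 1 else m
```

For instance, translating a sample by a constant $c$ leaves $|F^{-1}(u)-G^{-1}(u)| = c$ for every $u$, so $W_r = c$ for all $r \geq 1$, matching the coupling $(X, X+c)$.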
For independent random variables, Ibragimov (1966) established that if $X_1 \in \mathbb{L}^p$ for $p \in ]2,3]$, then $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$ (see his Theorem 4.3). Still in the case of independent r.v.'s, Zolotarev (1976) obtained the following upper bound for the ideal distance: if $X_1 \in \mathbb{L}^p$ for $p \in ]2,3]$, then $\zeta_p(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$. From (1.4), the result of Zolotarev entails that, for $p \in ]2,3]$, $W_p(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1/p-1/2})$ (which was obtained by Sakhanenko (1985) for any $p > 2$). From (1.1) and Hölder's inequality, we easily get that for independent random variables in $\mathbb{L}^p$ with $p \in ]2,3]$,
$$
W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-(p-2)/2r}) \quad \text{for any } 1 \leq r \leq p. \tag{1.5}
$$
In this paper, we are interested in extensions of (1.5) to sequences of dependent random variables. More precisely, for $X_1 \in \mathbb{L}^p$ and $p$ in $]2,3]$, we shall give $\mathbb{L}^p$-projective criteria under which: for $r \in [p-2,p]$ and $(r,p) \neq (1,3)$,
$$
W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O\big(n^{-(p-2)/2\max(1,r)}\big). \tag{1.6}
$$
As we shall see in Remark 2.4, (1.6) applied to $r = p-2$ provides the rate of convergence $O(n^{-\frac{p-2}{2(p-1)}})$ in the Berry-Esseen theorem.

When $(r,p) = (1,3)$, Dedecker and Rio (2008) obtained that $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2})$ for stationary sequences of random variables in $\mathbb{L}^3$ satisfying $\mathbb{L}^1$ projective criteria or weak dependence assumptions (a similar result was obtained by Pène (2005) in the case where the variables are bounded). In this particular case our approach provides a new criterion under which $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$.
Our paper is organized as follows. In Section 2, we give projective conditions for stationary martingale difference sequences to satisfy (1.6) in the case $(r,p) \neq (1,3)$. To be more precise, let $(X_i)_{i\in\mathbb{Z}}$ be a stationary sequence of martingale differences with respect to some $\sigma$-algebras $(\mathcal{F}_i)_{i\in\mathbb{Z}}$ (see Section 1.1 below for the definition of $(\mathcal{F}_i)_{i\in\mathbb{Z}}$). As a consequence of our Theorem 2.1, we obtain that if $(X_i)_{i\in\mathbb{Z}}$ is in $\mathbb{L}^p$ with $p \in ]2,3]$ and satisfies
$$
\sum_{n=1}^{\infty} \frac{1}{n^{2-p/2}} \Big\| \mathbb{E}\Big(\frac{S_n^2}{n}\Big|\mathcal{F}_0\Big) - \sigma^2 \Big\|_{p/2} < \infty, \tag{1.7}
$$
then the upper bound (1.6) holds provided that $(r,p) \neq (1,3)$. In the case $r = 1$ and $p = 3$, we obtain the upper bound $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$.

In Section 3, starting from the coboundary decomposition going back to Gordin (1969), and using the results of Section 2, we obtain $\mathbb{L}^p$-projective criteria ensuring (1.6) (if $(r,p) \neq (1,3)$). For instance, if $(X_i)_{i\in\mathbb{Z}}$ is a stationary sequence of $\mathbb{L}^p$ random variables adapted to $(\mathcal{F}_i)_{i\in\mathbb{Z}}$, we obtain (1.6) for any $p \in ]2,3[$ and any $r \in [p-2,p]$ provided that (1.7) holds and the series $\sum_{n>0}\mathbb{E}(X_n|\mathcal{F}_0)$ converges in $\mathbb{L}^p$. In the case where $p = 3$, this last condition has to be strengthened. Our approach also makes it possible to treat the case of non-adapted sequences.
Section 4 is devoted to applications. In particular, we give sufficient conditions for some functions of Harris recurrent Markov chains and for functions of linear processes to satisfy the bound (1.6) in the case $(r,p) \neq (1,3)$ and the rate $O(n^{-1/2}\log n)$ when $r = 1$ and $p = 3$. Since projective criteria are verified under weak dependence assumptions, we give an application to functions of $\phi$-dependent sequences in the sense of Dedecker and Prieur (2007). These conditions apply to unbounded functions of uniformly expanding maps.
1.1 Preliminary notations

Throughout the paper, $Y$ is a $N(0,1)$-distributed random variable. We shall also use the following notations. Let $(\Omega,\mathcal{A},\mathbb{P})$ be a probability space, and $T : \Omega \mapsto \Omega$ be a bijective bimeasurable transformation preserving the probability $\mathbb{P}$. For a $\sigma$-algebra $\mathcal{F}_0$ satisfying $\mathcal{F}_0 \subseteq T^{-1}(\mathcal{F}_0)$, we define the nondecreasing filtration $(\mathcal{F}_i)_{i\in\mathbb{Z}}$ by $\mathcal{F}_i = T^{-i}(\mathcal{F}_0)$. Let $\mathcal{F}_{-\infty} = \bigcap_{k\in\mathbb{Z}} \mathcal{F}_k$ and $\mathcal{F}_{\infty} = \bigvee_{k\in\mathbb{Z}} \mathcal{F}_k$. We shall sometimes denote by $\mathbb{E}_i$ the conditional expectation with respect to $\mathcal{F}_i$. Let $X_0$ be a zero mean
2 Stationary sequences of martingale differences

In this section we give bounds for the ideal distance of order $r$ in the central limit theorem for stationary martingale difference sequences $(X_i)_{i\in\mathbb{Z}}$ under projective conditions.

Notation 2.1. For any $p > 2$, define the envelope norm $\|\cdot\|_{1,\Phi,p}$ by
$$
\|X\|_{1,\Phi,p} = \int_0^1 \big(1 \vee \Phi^{-1}(1-u/2)\big)^{p-2} Q_X(u)\,du,
$$
where $\Phi$ denotes the d.f. of the $N(0,1)$ law, and $Q_X$ denotes the quantile function of $|X|$, that is the cadlag inverse of the tail function $x \to \mathbb{P}(|X| > x)$.
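To make Notation 2.1 concrete, here is a small numerical sketch (ours, not from the paper) evaluating $\|X\|_{1,\Phi,p}$ by midpoint quadrature, with a bisection-based inverse of $\Phi$ and the empirical quantile function of $|X|$. The names `phi_inv` and `envelope_norm` are illustrative.

```python
import math

def phi_inv(q):
    """Inverse of the standard normal d.f., by bisection on math.erf."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def envelope_norm(sample, p, grid=2000):
    """Midpoint-rule approximation of
    ||X||_{1,Phi,p} = int_0^1 (1 v Phi^{-1}(1 - u/2))^{p-2} Q_X(u) du,
    where Q_X(u) is approximated by the empirical (1-u)-quantile of |X|
    (Q_X is the cadlag inverse of the tail function of |X|)."""
    q = sorted(abs(v) for v in sample)
    n = len(q)
    total = 0.0
    for k in range(grid):
        u = (k + 0.5) / grid
        Qx = q[min(int((1.0 - u) * n), n - 1)]
        total += max(1.0, phi_inv(1.0 - u / 2.0)) ** (p - 2) * Qx / grid
    return total
```

For $|X| \equiv 1$ and $p = 3$ the integral can be computed in closed form as $2\varphi(1) + 2\Phi(1) - 1 \approx 1.1666$, which the quadrature reproduces.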
Theorem 2.1. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary martingale difference sequence with respect to $(\mathcal{F}_i)_{i\in\mathbb{Z}}$. Let $\sigma$ denote the standard deviation of $X_0$. Let $p \in ]2,3]$. Assume that $\mathbb{E}|X_0|^p < \infty$ and that
$$
\sum_{n=1}^{\infty} \frac{1}{n^{2-p/2}} \Big\| \mathbb{E}\Big(\frac{S_n^2}{n}\Big|\mathcal{F}_0\Big) - \sigma^2 \Big\|_{1,\Phi,p} < \infty, \tag{2.1}
$$
and
$$
\sum_{n=1}^{\infty} \frac{1}{n^{2/p}} \Big\| \mathbb{E}\Big(\frac{S_n^2}{n}\Big|\mathcal{F}_0\Big) - \sigma^2 \Big\|_{p/2} < \infty. \tag{2.2}
$$
Then, for any $r \in [p-2,p]$ with $(r,p) \neq (1,3)$, $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$, and for $p = 3$, $\zeta_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$.
Remark 2.1. Let $a > 1$ and $p > 2$. Applying Hölder's inequality, we see that there exists a positive constant $C(p,a)$ such that $\|X\|_{1,\Phi,p} \leq C(p,a)\|X\|_a$. Consequently, if $p \in ]2,3]$, the two conditions (2.1) and (2.2) are implied by the condition (1.7) given in the introduction.
Remark 2.2. Under the assumptions of Theorem 2.1, $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-r/2})$ if $r < p-2$. Indeed, let $p' = r+2$. Since $p' < p$, if the conditions (2.1) and (2.2) are satisfied for $p$, they also hold for $p'$. Hence Theorem 2.1 applies with $p'$.
From (1.3) and (1.4), the following result holds for the Wasserstein distances of order $r$.

Corollary 2.1. Under the conditions of Theorem 2.1, $W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-(p-2)/2\max(1,r)})$ for any $r$ in $[p-2,p]$, provided that $(r,p) \neq (1,3)$.
Remark 2.3. For $p$ in $]2,3]$, $W_p(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-(p-2)/2p})$. This bound was obtained by Sakhanenko (1985) in the independent case. For $p < 3$, we have $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$. This bound was obtained by Ibragimov (1966) in the independent case.
Remark 2.4. Recall that for two real valued random variables $X, Y$, the Ky Fan metric $\alpha(X,Y)$ is defined by $\alpha(X,Y) = \inf\{\epsilon > 0 : \mathbb{P}(|X-Y| > \epsilon) \leq \epsilon\}$. Let $\Pi(\mu,\nu)$ be the Prokhorov distance between $\mu$ and $\nu$. By Theorem 11.3.5 in Dudley (1989) and Markov's inequality, one has, for any $r > 0$,
$$
\Pi(\mathcal{L}(X),\mathcal{L}(Y)) \leq \alpha(X,Y) \leq \big(\mathbb{E}|X-Y|^r\big)^{1/(r+1)}.
$$
Taking the minimum over the random couples $(X,Y)$ with law in $\mathcal{L}(\mu,\nu)$, we obtain that, for any $0 < r \leq 1$, $\Pi(\mu,\nu) \leq (W_r(\mu,\nu))^{1/(r+1)}$. Hence, if $\Pi_n$ is the Prokhorov distance between the law of $n^{-1/2}S_n$ and the normal distribution $N(0,\sigma^2)$, then $\Pi_n \leq (W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}))^{1/(r+1)}$ for any $0 < r \leq 1$. Taking $r = p-2$, it follows that under the assumptions of Theorem 2.1,
$$
\Pi_n = O\big(n^{-\frac{p-2}{2(p-1)}}\big) \ \text{ if } p < 3 \quad\text{and}\quad \Pi_n = O\big(n^{-1/4}\sqrt{\log n}\big) \ \text{ if } p = 3. \tag{2.3}
$$
For $p$ in $]2,4]$, under (2.2), we have that $\|\sum_{i=1}^n \mathbb{E}(X_i^2-\sigma^2|\mathcal{F}_{i-1})\|_{p/2} = O(n^{2/p})$ (apply Theorem 2 in Wu and Zhao (2006)). Applying then the result in Heyde and Brown (1970), we get that if $(X_i)_{i\in\mathbb{Z}}$ is a stationary martingale difference sequence in $\mathbb{L}^p$ such that (2.2) is satisfied, then
$$
\|F_n - \Phi_\sigma\|_\infty = O\big(n^{-\frac{p-2}{2(p+1)}}\big),
$$
where $F_n$ is the distribution function of $n^{-1/2}S_n$ and $\Phi_\sigma$ is the d.f. of $G_{\sigma^2}$. Now
$$
\|F_n - \Phi_\sigma\|_\infty \leq \big(1 + \sigma^{-1}(2\pi)^{-1/2}\big)\,\Pi_n.
$$
Consequently the bounds obtained in (2.3) improve the one given in Heyde and Brown (1970), provided that (2.1) holds.
Remark 2.5. If $(X_i)_{i\in\mathbb{Z}}$ is a stationary martingale difference sequence in $\mathbb{L}^3$ such that $\mathbb{E}(X_0^2) = \sigma^2$ and
$$
\sum_{k>0} k^{-1/2}\|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2\|_{3/2} < \infty, \tag{2.4}
$$
then, according to Remark 2.1, the conditions (2.1) and (2.2) hold for $p = 3$. Consequently, if (2.4) holds, then Remark 2.4 gives $\|F_n - \Phi_\sigma\|_\infty = O(n^{-1/4}\sqrt{\log n})$. This result has to be compared with Theorem 6 in Jan (2001), which states that $\|F_n - \Phi_\sigma\|_\infty = O(n^{-1/4})$ if $\sum_{k>0} \|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2\|_{3/2} < \infty$.
Remark 2.6. Notice that if $(X_i)_{i\in\mathbb{Z}}$ is a stationary martingale difference sequence, then the conditions (2.1) and (2.2) are respectively equivalent to
$$
\sum_{j\geq 0} 2^{j(p/2-1)}\big\|2^{-j}\mathbb{E}(S_{2^j}^2|\mathcal{F}_0) - \sigma^2\big\|_{1,\Phi,p} < \infty \quad\text{and}\quad \sum_{j\geq 0} 2^{j(1-2/p)}\big\|2^{-j}\mathbb{E}(S_{2^j}^2|\mathcal{F}_0) - \sigma^2\big\|_{p/2} < \infty.
$$
To see this, let $A_n = \|\mathbb{E}(S_n^2|\mathcal{F}_0) - \mathbb{E}(S_n^2)\|_{1,\Phi,p}$ and $B_n = \|\mathbb{E}(S_n^2|\mathcal{F}_0) - \mathbb{E}(S_n^2)\|_{p/2}$. We first show that $A_n$ and $B_n$ are subadditive sequences. Indeed, by the martingale property and the stationarity of the sequence, for all positive $i$ and $j$,
$$
A_{i+j} = \big\|\mathbb{E}\big(S_i^2 + (S_{i+j}-S_i)^2\big|\mathcal{F}_0\big) - \mathbb{E}\big(S_i^2 + (S_{i+j}-S_i)^2\big)\big\|_{1,\Phi,p} \leq A_i + \big\|\mathbb{E}\big((S_{i+j}-S_i)^2 - \mathbb{E}(S_j^2)\big|\mathcal{F}_0\big)\big\|_{1,\Phi,p}.
$$
Proceeding as in the proof of (4.6), p. 65 in Rio (2000), one can prove that, for any $\sigma$-field $\mathcal{A}$ and any integrable random variable $X$, $\|\mathbb{E}(X|\mathcal{A})\|_{1,\Phi,p} \leq \|X\|_{1,\Phi,p}$. Hence
$$
\big\|\mathbb{E}\big((S_{i+j}-S_i)^2 - \mathbb{E}(S_j^2)\big|\mathcal{F}_0\big)\big\|_{1,\Phi,p} \leq \big\|\mathbb{E}\big((S_{i+j}-S_i)^2 - \mathbb{E}(S_j^2)\big|\mathcal{F}_i\big)\big\|_{1,\Phi,p}.
$$
By stationarity, it follows that $A_{i+j} \leq A_i + A_j$. Similarly $B_{i+j} \leq B_i + B_j$. The proof of the equivalences then follows by using the same arguments as in the proof of Lemma 2.7 in Peligrad and Utev (2005).
3 Rates of convergence for stationary sequences

In this section, we give estimates for the ideal distances of order $r$ for stationary sequences which are not necessarily adapted to $(\mathcal{F}_i)_{i\in\mathbb{Z}}$.
Theorem 3.1. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary sequence of centered random variables in $\mathbb{L}^p$ with $p \in ]2,3[$, and let $\sigma_n^2 = n^{-1}\mathbb{E}(S_n^2)$. Assume that
$$
\sum_{n>0} \mathbb{E}(X_n|\mathcal{F}_0) \quad\text{and}\quad \sum_{n>0} \big(X_{-n} - \mathbb{E}(X_{-n}|\mathcal{F}_0)\big) \ \text{ converge in } \mathbb{L}^p, \tag{3.1}
$$
and
$$
\sum_{n\geq 1} n^{-2+p/2}\big\|n^{-1}\mathbb{E}(S_n^2|\mathcal{F}_0) - \sigma_n^2\big\|_{p/2} < \infty. \tag{3.2}
$$
Then the series $\sum_{k\in\mathbb{Z}} \mathrm{Cov}(X_0,X_k)$ converges to some nonnegative $\sigma^2$, and

1. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$ for $r \in [p-2,2]$,
2. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) = O(n^{1-p/2})$ for $r \in ]2,p]$.
Remark 3.1. According to the bound (5.40), we infer that, under the assumptions of Theorem 3.1, the condition (3.2) is equivalent to
$$
\sum_{n\geq 1} n^{-2+p/2}\big\|n^{-1}\mathbb{E}(S_n^2|\mathcal{F}_0) - \sigma^2\big\|_{p/2} < \infty. \tag{3.3}
$$
The same remark applies to the next theorem with $p = 3$.

Remark 3.2. The result of item 1 is valid with $\sigma_n$ instead of $\sigma$. On the contrary, the result of item 2 is no longer true if $\sigma_n$ is replaced by $\sigma$, because for $r \in ]2,3]$, a necessary condition for $\zeta_r(\mu,\nu)$ to be finite is that the two first moments of $\nu$ and $\mu$ are equal. Note that under the assumptions of Theorem 3.1, both $W_r(P_{n^{-1/2}S_n}, G_{\sigma^2})$ and $W_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2})$ are of the order of $n^{-(p-2)/2\max(1,r)}$. Indeed, in the case where $r \in ]2,p]$, one has that
$$
W_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) \leq W_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) + W_r(G_{\sigma_n^2}, G_{\sigma^2}),
$$
and the second term is of order $|\sigma - \sigma_n| = O(n^{-1/2})$.
In the case where $p = 3$, the condition (3.1) has to be strengthened.

Theorem 3.2. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary sequence of centered random variables in $\mathbb{L}^3$, and let $\sigma_n^2 = n^{-1}\mathbb{E}(S_n^2)$. Assume that (3.1) holds for $p = 3$ and that
$$
\sum_{n\geq 1} \frac{1}{n}\Big\|\sum_{k\geq n} \mathbb{E}(X_k|\mathcal{F}_0)\Big\|_3 < \infty \quad\text{and}\quad \sum_{n\geq 1} \frac{1}{n}\Big\|\sum_{k\geq n} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3 < \infty. \tag{3.4}
$$
Assume in addition that
$$
\sum_{n\geq 1} n^{-1/2}\big\|n^{-1}\mathbb{E}(S_n^2|\mathcal{F}_0) - \sigma_n^2\big\|_{3/2} < \infty. \tag{3.5}
$$
Then the series $\sum_{k\in\mathbb{Z}} \mathrm{Cov}(X_0,X_k)$ converges to some nonnegative $\sigma^2$, and

1. $\zeta_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$,
2. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2})$ for $r \in ]1,2]$,
3. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) = O(n^{-1/2})$ for $r \in ]2,3]$.
4 Applications

4.1 Martingale difference sequences and functions of Markov chains
Recall that the strong mixing coefficient of Rosenblatt (1956) between two $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$ is defined by $\alpha(\mathcal{A},\mathcal{B}) = \sup\{|\mathbb{P}(A\cap B) - \mathbb{P}(A)\mathbb{P}(B)| : (A,B) \in \mathcal{A}\times\mathcal{B}\}$. For a strictly stationary sequence $(X_i)_{i\in\mathbb{Z}}$, let $\mathcal{F}_i = \sigma(X_k, k \leq i)$. Define the mixing coefficients $\alpha_1(n)$ of the sequence $(X_i)_{i\in\mathbb{Z}}$ by
$$
\alpha_1(n) = \alpha(\mathcal{F}_0, \sigma(X_n)).
$$
For the sake of brevity, let $Q = Q_{X_0}$ (see Notation 2.1 for the definition). According to the results of Section 2, the following proposition holds.
Proposition 4.1. Let $(X_i)_{i\in\mathbb{Z}}$ be a stationary martingale difference sequence in $\mathbb{L}^p$ with $p \in ]2,3]$. Assume moreover that the series
$$
\sum_{k\geq 1} \frac{1}{k^{2-p/2}} \int_0^{\alpha_1(k)} \big(1\vee\log(1/u)\big)^{(p-2)/2} Q^2(u)\,du \quad\text{and}\quad \sum_{k\geq 1} \frac{1}{k^{2/p}} \Big(\int_0^{\alpha_1(k)} Q^p(u)\,du\Big)^{2/p} \tag{4.1}
$$
are convergent. Then the conclusions of Theorem 2.1 hold.
Remark 4.1. From Theorem 2.1(b) in Dedecker and Rio (2008), a sufficient condition to get $W_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$ is
$$
\sum_{k\geq 0} \int_0^{\alpha_1(k)} Q^3(u)\,du < \infty.
$$
This condition is always strictly stronger than the condition (4.1) when $p = 3$.
We now give an example. Consider the homogeneous Markov chain $(Y_i)_{i\in\mathbb{Z}}$ with state space $\mathbb{Z}$ described at page 320 in Davydov (1973). The transition probabilities are given by $p_{n,n+1} = p_{-n,-n-1} = a_n$ for $n \geq 0$, $p_{n,0} = p_{-n,0} = 1-a_n$ for $n > 0$, $p_{0,0} = 0$, $a_0 = 1/2$ and $1/2 \leq a_n < 1$ for $n \geq 1$. This chain is irreducible and aperiodic. It is Harris positively recurrent as soon as $\sum_{n\geq 2}\prod_{k=1}^{n-1} a_k < \infty$. In that case the stationary chain is strongly mixing in the sense of Rosenblatt (1956).

Denote by $K$ the Markov kernel of the chain $(Y_i)_{i\in\mathbb{Z}}$. The functions $f$ such that $K(f) = 0$ almost everywhere are obtained by linear combinations of the two functions $f_1$ and $f_2$ given by $f_1(1) = 1$, $f_1(-1) = -1$ and $f_1(n) = f_1(-n) = 0$ if $n \neq 1$, and $f_2(0) = 1$, $f_2(1) = f_2(-1) = 0$ and $f_2(n+1) = f_2(-n-1) = 1-a_n^{-1}$ if $n > 0$. Hence the functions $f$ such that $K(f) = 0$ are bounded.
If $(X_i)_{i\in\mathbb{Z}}$ is defined by $X_i = f(Y_i)$, with $K(f) = 0$, then Proposition 4.1 applies if the condition (4.2) holds, which is the case as soon as $\mathbb{P}_0(\tau = n) = O(n^{-1-p/2}(\log n)^{-p/2-\epsilon})$, where $\mathbb{P}_0$ is the probability of the chain starting from 0, and $\tau = \inf\{n > 0 : X_n = 0\}$. Now $\mathbb{P}_0(\tau = n) = (1-a_n)\prod_{i=1}^{n-1} a_i$ for $n \geq 2$. Consequently, if
$$
a_i = 1 - \frac{p}{2i}\Big(1 + \frac{1+\epsilon}{\log i}\Big) \quad \text{for } i \text{ large enough},
$$
the condition (4.2) is satisfied and the conclusion of Theorem 2.1 holds.
Remark 4.2. If $f$ is bounded and $K(f) \neq 0$, the central limit theorem may fail to hold for $S_n = \sum_{i=1}^n (f(Y_i) - \mathbb{E}(f(Y_i)))$. We refer to Example 2, page 321, given by Davydov (1973), where $S_n$ properly normalized converges to a stable law with exponent strictly less than 2.
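The excursion structure of this chain is easy to simulate. The sketch below (our illustration; the function names are hypothetical) implements the transition kernel described above and measures the length of one excursion away from 0. With $a_n \equiv 0$ for $n \geq 1$ every excursion is $0 \to \pm 1 \to 0$, so the return time is deterministically 2; choices of $a_n$ close to 1 produce the polynomial return-time tails used above.

```python
import random

def davydov_step(state, a, rng=random):
    """One transition of the chain on Z described above:
    from 0, jump to +1 or -1 with probability 1/2 each (a_0 = 1/2);
    for |state| = n > 0, climb one step away from 0 with probability
    a(n), otherwise fall back to 0."""
    if state == 0:
        return 1 if rng.random() < 0.5 else -1
    if rng.random() < a(abs(state)):
        return state + (1 if state > 0 else -1)  # climb away from 0
    return 0                                     # fall back to 0

def return_time(a, rng=random):
    """Length tau of one excursion of the chain away from 0."""
    state, t = davydov_step(0, a, rng), 1
    while state != 0:
        state, t = davydov_step(state, a, rng), t + 1
    return t
```

Since the chain always leaves 0 on the first step, every simulated return time is at least 2, whatever the choice of `a`.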
Proof of Proposition 4.1. Let $B_p(\mathcal{F}_0)$ be the set of $\mathcal{F}_0$-measurable random variables $Z$ such that $\|Z\|_p \leq 1$. We first notice that
$$
\|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2\|_{p/2} = \sup_{Z\in B_{p/(p-2)}(\mathcal{F}_0)} \mathrm{Cov}(Z, X_k^2).
$$
Applying Rio's covariance inequality (1993), we get that
$$
\|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2\|_{p/2} \leq 2\Big(\int_0^{\alpha_1(k)} Q^p(u)\,du\Big)^{2/p},
$$
which shows that the convergence of the second series in (4.1) implies (2.2). Now, from Fréchet (1957), we have that
$$
\|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2\|_{1,\Phi,p} = \sup\big\{\mathbb{E}\big((1\vee|Z|^{p-2})\,|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2|\big) : Z \ \mathcal{F}_0\text{-measurable},\ Z \sim N(0,1)\big\}.
$$
Hence, setting $\epsilon_k = \mathrm{sign}(\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2)$,
$$
\|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2\|_{1,\Phi,p} = \sup\big\{\mathrm{Cov}\big(\epsilon_k(1\vee|Z|^{p-2}), X_k^2\big) : Z \ \mathcal{F}_0\text{-measurable},\ Z \sim N(0,1)\big\}.
$$
Applying again Rio's covariance inequality (1993), we get that
$$
\|\mathbb{E}(X_k^2|\mathcal{F}_0) - \sigma^2\|_{1,\Phi,p} \leq C\int_0^{\alpha_1(k)} \big(1\vee\log(u^{-1})\big)^{(p-2)/2} Q^2(u)\,du,
$$
which shows that the convergence of the first series in (4.1) implies (2.1).
4.2 Linear processes and functions of linear processes

In what follows we say that the series $\sum_{i\in\mathbb{Z}} a_i$ converges if the two series $\sum_{i\geq 0} a_i$ and $\sum_{i<0} a_i$ converge.
Theorem 4.1. Let $(a_i)_{i\in\mathbb{Z}}$ be a sequence of real numbers in $\ell^2$ such that $\sum_{i\in\mathbb{Z}} a_i$ converges to some real $A$. Let $(\epsilon_i)_{i\in\mathbb{Z}}$ be a stationary sequence of martingale differences in $\mathbb{L}^p$ for $p \in ]2,3]$. Let $X_k = \sum_{j\in\mathbb{Z}} a_j\epsilon_{k-j}$, and $\sigma_n^2 = n^{-1}\mathbb{E}(S_n^2)$. Let $b_0 = a_0 - A$ and $b_j = a_j$ for $j \neq 0$. Let $A_n = \sum_{j\in\mathbb{Z}} \big(\sum_{k=1}^n b_{k-j}\big)^2$.
If $A_n = o(n)$, then $\sigma_n^2$ converges to $\sigma^2 = A^2\mathbb{E}(\epsilon_0^2)$. If moreover
$$
\sum_{n=1}^{\infty} \frac{1}{n^{2-p/2}} \Big\| \mathbb{E}\Big(\frac{1}{n}\Big(\sum_{j=1}^n \epsilon_j\Big)^2\Big|\mathcal{F}_0\Big) - \mathbb{E}(\epsilon_0^2) \Big\|_{p/2} < \infty, \tag{4.3}
$$
then we have

1. If $A_n = O(1)$, then $\zeta_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$, for $p = 3$,
2. If $A_n = O(n^{(r+2-p)/r})$, then $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$, for $r \in [p-2,1]$ and $p \neq 3$,
3. If $A_n = O(n^{3-p})$, then $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$, for $r \in ]1,2]$,
4. If $A_n = O(n^{3-p})$, then $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) = O(n^{1-p/2})$, for $r \in ]2,p]$.
Remark 4.3. If the condition given by Heyde (1975) holds, that is
$$
\sum_{n=1}^{\infty} \Big(\sum_{k\geq n} a_k^2\Big)^{1/2} < \infty \quad\text{and}\quad \sum_{n=1}^{\infty} \Big(\sum_{k\leq -n} a_k^2\Big)^{1/2} < \infty, \tag{4.4}
$$
then $A_n = O(1)$, so that the conditions of items 1-4 are all satisfied.
Remark 4.4. Under the additional assumption $\sum_{i\in\mathbb{Z}} |a_i| < \infty$, one has the bound
$$
A_n \leq 4B_n, \quad\text{where } B_n = \sum_{k=1}^n \Big(\Big(\sum_{j\geq k} |a_j|\Big)^2 + \Big(\sum_{j\leq -k} |a_j|\Big)^2\Big). \tag{4.5}
$$
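The quantity $A_n$ is straightforward to evaluate for finitely supported coefficients, which gives a cheap numerical check of Remarks 4.3 and 4.4 (this sketch and the name `compute_An` are ours). For instance, $a_0 = a_1 = 1$ yields $A = 2$, $b_0 = -1$, $b_1 = 1$, and $A_n = 2$ for every $n$, consistent with $A_n = O(1)$ under (4.4).

```python
def compute_An(a, n):
    """A_n = sum_{j in Z} (sum_{k=1}^n b_{k-j})^2 for finitely supported
    coefficients, where A = sum_j a_j, b_0 = a_0 - A and b_j = a_j for
    j != 0.  `a` is a dict {j: a_j}."""
    A = sum(a.values())
    supp = set(a) | {0}          # support of b is supp(a) together with 0
    b = lambda j: a.get(j, 0.0) - (A if j == 0 else 0.0)
    lo, hi = min(supp), max(supp)
    total = 0.0
    # the inner sum is nonzero only for 1 - hi <= j <= n - lo
    for j in range(1 - hi, n - lo + 1):
        s = sum(b(k - j) for k in range(1, n + 1))
        total += s * s
    return total
```

In the iid case $a = \{0: 1\}$, all $b_j$ vanish and $A_n = 0$, as expected since then $S_n = A\sum_{j\le n}\epsilon_j$ exactly.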
Proof of Theorem 4.1. We start with the following decomposition:
$$
S_n = A\sum_{j=1}^n \epsilon_j + \sum_{j=-\infty}^{\infty} \Big(\sum_{k=1}^n b_{k-j}\Big)\epsilon_j. \tag{4.6}
$$
Let $R_n = \sum_{j=-\infty}^{\infty} \big(\sum_{k=1}^n b_{k-j}\big)\epsilon_j$. Since $\|R_n\|_2^2 = A_n\|\epsilon_0\|_2^2$ and since $|\sigma_n - \sigma| \leq n^{-1/2}\|R_n\|_2$, the fact that $A_n = o(n)$ implies that $\sigma_n$ converges to $\sigma$. We now give an upper bound for $\|R_n\|_p$. From Burkholder's inequality, there exists a constant $C$ such that
$$
\|R_n\|_p \leq C\Big\{\Big\|\sum_{j=-\infty}^{\infty} \Big(\sum_{k=1}^n b_{k-j}\Big)^2\epsilon_j^2\Big\|_{p/2}\Big\}^{1/2} \leq C\|\epsilon_0\|_p\sqrt{A_n}. \tag{4.7}
$$
According to Remark 2.1, since (4.3) holds, the two conditions (2.1) and (2.2) of Theorem 2.1 are satisfied by the martingale $M_n = A\sum_{k=1}^n \epsilon_k$. To conclude the proof, we use Lemma 5.2 given in Section 5.2, with the upper bound (4.7).
Proof of Remarks 4.3 and 4.4. To prove Remark 4.3, note first that
$$
A_n = \sum_{j=1}^n \Big(\sum_{l=-\infty}^{-j} a_l + \sum_{l=n+1-j}^{\infty} a_l\Big)^2 + \sum_{i=1}^{\infty} \Big(\sum_{l=i}^{n+i-1} a_l\Big)^2 + \sum_{i=1}^{\infty} \Big(\sum_{l=-i-n+1}^{-i} a_l\Big)^2.
$$
It follows easily that $A_n = O(1)$ under (4.4). To prove the bound (4.5), note first that
$$
A_n \leq 3B_n + \sum_{i=n+1}^{\infty}\Big(\sum_{l=i}^{n+i-1}|a_l|\Big)^2 + \sum_{i=n+1}^{\infty}\Big(\sum_{l=-i-n+1}^{-i}|a_l|\Big)^2.
$$
Let $T_i = \sum_{l\geq i} |a_l|$ and $Q_i = \sum_{l\leq -i} |a_l|$. We have that
$$
\sum_{i=n+1}^{\infty}\Big(\sum_{l=i}^{n+i-1}|a_l|\Big)^2 \leq T_{n+1}\sum_{i=n+1}^{\infty}(T_i - T_{n+i}) \leq nT_{n+1}^2 \quad\text{and}\quad \sum_{i=n+1}^{\infty}\Big(\sum_{l=-i-n+1}^{-i}|a_l|\Big)^2 \leq Q_{n+1}\sum_{i=n+1}^{\infty}(Q_i - Q_{n+i}) \leq nQ_{n+1}^2.
$$
Since $n(T_{n+1}^2 + Q_{n+1}^2) \leq B_n$, (4.5) follows.
In the next result, we shall focus on functions of real-valued linear processes
$$
X_k = h\Big(\sum_{i\in\mathbb{Z}} a_i\epsilon_{k-i}\Big) - \mathbb{E}\,h\Big(\sum_{i\in\mathbb{Z}} a_i\epsilon_{k-i}\Big), \tag{4.8}
$$
where $(\epsilon_i)_{i\in\mathbb{Z}}$ is a sequence of iid random variables. Denote by $w_h(\cdot,M)$ the modulus of continuity of the function $h$ on the interval $[-M,M]$, that is
$$
w_h(t,M) = \sup\{|h(x)-h(y)| : |x-y| \leq t,\ |x| \leq M,\ |y| \leq M\}.
$$
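The modulus of continuity $w_h(t,M)$ can be approximated by brute force on a grid, which is handy for checking the Hölder-type bound $w_h(t,M) \leq Ct^{\gamma}M^{\alpha}$ assumed below (an illustration of ours; `modulus` is a hypothetical name). For $h(x) = x^2$ one gets $w_h(t,M) = 2Mt - t^2 \leq 2tM$, i.e. $\gamma = \alpha = 1$ and $C = 2$.

```python
def modulus(h, t, M, grid=400):
    """Brute-force approximation of
    w_h(t, M) = sup{ |h(x)-h(y)| : |x-y| <= t, |x| <= M, |y| <= M }
    over a uniform grid on [-M, M]."""
    xs = [-M + 2.0 * M * k / grid for k in range(grid + 1)]
    best = 0.0
    for x in xs:
        for y in xs:
            if abs(x - y) <= t:
                best = max(best, abs(h(x) - h(y)))
    return best
```

The grid search only yields a lower approximation of the supremum, but for smooth $h$ and a fine grid it recovers the exact value when the extremal pair lies on the grid.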
Theorem 4.2. Let $(a_i)_{i\in\mathbb{Z}}$ be a sequence of real numbers in $\ell^2$ and $(\epsilon_i)_{i\in\mathbb{Z}}$ be a sequence of iid random variables in $\mathbb{L}^2$. Let $X_k$ be defined as in (4.8) and $\sigma_n^2 = n^{-1}\mathbb{E}(S_n^2)$. Assume that $h$ is $\gamma$-Hölder on any compact set, with $w_h(t,M) \leq Ct^{\gamma}M^{\alpha}$, for some $C > 0$, $\gamma \in ]0,1]$ and $\alpha \geq 0$. If for some $p \in ]2,3]$,
$$
\mathbb{E}\big(|\epsilon_0|^{2\vee(\alpha+\gamma)p}\big) < \infty \quad\text{and}\quad \sum_{i\geq 1} i^{p/2-1}\Big(\sum_{|j|\geq i} a_j^2\Big)^{\gamma/2} < \infty, \tag{4.9}
$$
then the series $\sum_{k\in\mathbb{Z}} \mathrm{Cov}(X_0,X_k)$ converges to some nonnegative $\sigma^2$, and

1. $\zeta_1(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{-1/2}\log n)$, for $p = 3$,
2. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma^2}) = O(n^{1-p/2})$ for $r \in [p-2,2]$ and $(r,p) \neq (1,3)$,
3. $\zeta_r(P_{n^{-1/2}S_n}, G_{\sigma_n^2}) = O(n^{1-p/2})$ for $r \in ]2,p]$.
Proof of Theorem 4.2. Theorem 4.2 is a consequence of the following proposition.

Proposition 4.2. Let $(a_i)_{i\in\mathbb{Z}}$, $(\epsilon_i)_{i\in\mathbb{Z}}$ and $(X_i)_{i\in\mathbb{Z}}$ be as in Theorem 4.2. Let $(\epsilon_i')_{i\in\mathbb{Z}}$ be an independent copy of $(\epsilon_i)_{i\in\mathbb{Z}}$. Let $V_0 = \sum_{i\in\mathbb{Z}} a_i\epsilon_{-i}$ and
$$
M_{1,i} = |V_0| \vee \Big|\sum_{j<i} a_j\epsilon_{-j} + \sum_{j\geq i} a_j\epsilon_{-j}'\Big| \quad\text{and}\quad M_{2,i} = |V_0| \vee \Big|\sum_{j<i} a_j\epsilon_{-j}' + \sum_{j\geq i} a_j\epsilon_{-j}\Big|.
$$
If for some $p \in ]2,3]$,
$$
\sum_{i\geq 1} i^{p/2-1}\Big\|w_h\Big(\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|, M_{1,i}\Big)\Big\|_p < \infty \quad\text{and}\quad \sum_{i\geq 1} i^{p/2-1}\Big\|w_h\Big(\Big|\sum_{j<-i} a_j\epsilon_{-j}\Big|, M_{2,-i}\Big)\Big\|_p < \infty, \tag{4.10}
$$
then the conclusions of Theorem 4.2 hold.
To prove Theorem 4.2, it remains to check (4.10). We only check the first condition. Since $w_h(t,M) \leq Ct^{\gamma}M^{\alpha}$ and the random variables $\epsilon_i$ are iid, we have
$$
\Big\|w_h\Big(\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|, M_{1,i}\Big)\Big\|_p \leq C\Big\|\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|^{\gamma}|V_0|^{\alpha}\Big\|_p + C\Big\|\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|^{\gamma}\Big\|_p\big\||V_0|^{\alpha}\big\|_p,
$$
so that
$$
\Big\|w_h\Big(\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|, M_{1,i}\Big)\Big\|_p \leq C2^{\alpha}\Big(\Big\|\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|^{\alpha+\gamma}\Big\|_p + \Big\|\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|^{\gamma}\Big\|_p\Big(\big\||V_0|^{\alpha}\big\|_p + 2^{\alpha}\Big\|\Big|\sum_{j<i} a_j\epsilon_{-j}\Big|^{\alpha}\Big\|_p\Big)\Big).
$$
From Burkholder’s inequality, for anyβ >0,
X
j≥i
ajǫ−j
β p= X
j≥i
ajǫ−j
β βp≤K
X
j≥i
a2j
β/2
kǫ0k
β
2∨βp.
Applying this inequality withβ=γorβ =α+γ, we infer that the first part of (4.10) holds under (4.9). The second part can be handled in the same way.
Proof of Proposition 4.2. Let $\mathcal{F}_i = \sigma(\epsilon_k, k \leq i)$. We shall first prove that the condition (3.2) of Theorem 3.1 holds. We write
$$
\|\mathbb{E}(S_n^2|\mathcal{F}_0) - \mathbb{E}(S_n^2)\|_{p/2} \leq 2\sum_{i=1}^n \sum_{k=0}^{n-i} \|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} \leq 4\sum_{i=1}^n \sum_{k=i}^n \|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0)\|_{p/2} + 2\sum_{i=1}^n \sum_{k=1}^i \|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2}.
$$
We first control the second term. Let $\epsilon'$ be an independent copy of $\epsilon$, and denote by $\mathbb{E}_{\epsilon}(\cdot)$ the conditional expectation with respect to $\epsilon$. Define
$$
Y_i = \sum_{j<i} a_j\epsilon_{i-j}, \quad Y_i' = \sum_{j<i} a_j\epsilon_{i-j}', \quad Z_i = \sum_{j\geq i} a_j\epsilon_{i-j}, \quad\text{and}\quad Z_i' = \sum_{j\geq i} a_j\epsilon_{i-j}'.
$$
Taking $\mathcal{F}_{\ell} = \sigma(\epsilon_i, i \leq \ell)$, and setting $h_0 = h - \mathbb{E}(h(\sum_{i\in\mathbb{Z}} a_i\epsilon_i))$, we have
$$
\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} = \big\|\mathbb{E}_{\epsilon}\big(h_0(Y_i'+Z_i)h_0(Y_{k+i}'+Z_{k+i})\big) - \mathbb{E}_{\epsilon}\big(h_0(Y_i'+Z_i')h_0(Y_{k+i}'+Z_{k+i}')\big)\big\|_{p/2}.
$$
Applying first the triangle inequality, and next Hölder's inequality, we get that
$$
\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} \leq \|h_0(Y_{k+i}'+Z_{k+i})\|_p\|h_0(Y_i'+Z_i) - h_0(Y_i'+Z_i')\|_p + \|h_0(Y_i'+Z_i')\|_p\|h_0(Y_{k+i}'+Z_{k+i}) - h_0(Y_{k+i}'+Z_{k+i}')\|_p.
$$
Let $m_{1,i} = |Y_i'+Z_i| \vee |Y_i'+Z_i'|$. Since $w_{h_0}(t,M) = w_h(t,M)$, it follows that
$$
\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} \leq \|h_0(Y_{k+i}'+Z_{k+i})\|_p\Big\|w_h\Big(\Big|\sum_{j\geq i} a_j(\epsilon_{i-j}-\epsilon_{i-j}')\Big|, m_{1,i}\Big)\Big\|_p + \|h_0(Y_i'+Z_i')\|_p\Big\|w_h\Big(\Big|\sum_{j\geq k+i} a_j(\epsilon_{k+i-j}-\epsilon_{k+i-j}')\Big|, m_{1,k+i}\Big)\Big\|_p.
$$
By subadditivity, we obtain that
$$
\Big\|w_h\Big(\Big|\sum_{j\geq i} a_j(\epsilon_{i-j}-\epsilon_{i-j}')\Big|, m_{1,i}\Big)\Big\|_p \leq \Big\|w_h\Big(\Big|\sum_{j\geq i} a_j\epsilon_{i-j}\Big|, m_{1,i}\Big)\Big\|_p + \Big\|w_h\Big(\Big|\sum_{j\geq i} a_j\epsilon_{i-j}'\Big|, m_{1,i}\Big)\Big\|_p.
$$
Since the three couples $(\sum_{j\geq i} a_j\epsilon_{i-j}, m_{1,i})$, $(\sum_{j\geq i} a_j\epsilon_{i-j}', m_{1,i})$ and $(\sum_{j\geq i} a_j\epsilon_{-j}, M_{1,i})$ are identically distributed, it follows that
$$
\Big\|w_h\Big(\Big|\sum_{j\geq i} a_j(\epsilon_{i-j}-\epsilon_{i-j}')\Big|, m_{1,i}\Big)\Big\|_p \leq 2\Big\|w_h\Big(\Big|\sum_{j\geq i} a_j\epsilon_{-j}\Big|, M_{1,i}\Big)\Big\|_p.
$$
In the same way,
$$
\Big\|w_h\Big(\Big|\sum_{j\geq k+i} a_j(\epsilon_{k+i-j}-\epsilon_{k+i-j}')\Big|, m_{1,k+i}\Big)\Big\|_p \leq 2\Big\|w_h\Big(\Big|\sum_{j\geq k+i} a_j\epsilon_{-j}\Big|, M_{1,k+i}\Big)\Big\|_p.
$$
Consequently
$$
\sum_{n\geq 1} \frac{1}{n^{3-p/2}}\sum_{i=1}^n \sum_{k=1}^i \|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} < \infty
$$
provided that the first condition in (4.10) holds.
We turn now to the control of $\sum_{i=1}^n \sum_{k=i}^n \|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0)\|_{p/2}$. We first write that
$$
\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0)\|_{p/2} \leq \big\|\mathbb{E}\big((X_i - \mathbb{E}(X_i|\mathcal{F}_{i+[k/2]}))X_{k+i}\big|\mathcal{F}_0\big)\big\|_{p/2} + \big\|\mathbb{E}\big(\mathbb{E}(X_i|\mathcal{F}_{i+[k/2]})X_{k+i}\big|\mathcal{F}_0\big)\big\|_{p/2} \leq \|X_0\|_p\|X_i - \mathbb{E}(X_i|\mathcal{F}_{i+[k/2]})\|_p + \|X_0\|_p\|\mathbb{E}(X_{k+i}|\mathcal{F}_{i+[k/2]})\|_p.
$$
Let $b(k) = k - [k/2]$. Since $\|\mathbb{E}(X_{k+i}|\mathcal{F}_{i+[k/2]})\|_p = \|\mathbb{E}(X_{b(k)}|\mathcal{F}_0)\|_p$, we have that
$$
\|\mathbb{E}(X_{k+i}|\mathcal{F}_{i+[k/2]})\|_p = \Big\|\mathbb{E}_{\epsilon}\Big(h\Big(\sum_{j<b(k)} a_j\epsilon_{b(k)-j}' + \sum_{j\geq b(k)} a_j\epsilon_{b(k)-j}\Big) - h\Big(\sum_{j<b(k)} a_j\epsilon_{b(k)-j}' + \sum_{j\geq b(k)} a_j\epsilon_{b(k)-j}'\Big)\Big)\Big\|_p.
$$
Using the same arguments as before, we get that
$$
\|\mathbb{E}(X_{k+i}|\mathcal{F}_{i+[k/2]})\|_p = \|\mathbb{E}(X_{b(k)}|\mathcal{F}_0)\|_p \leq 2\Big\|w_h\Big(\Big|\sum_{j\geq b(k)} a_j\epsilon_{-j}\Big|, M_{1,b(k)}\Big)\Big\|_p. \tag{4.11}
$$
In the same way,
$$
\|X_i - \mathbb{E}(X_i|\mathcal{F}_{i+[k/2]})\|_p = \Big\|\mathbb{E}_{\epsilon}\Big(h\Big(\sum_{j<-[k/2]} a_j\epsilon_{i-j} + \sum_{j\geq-[k/2]} a_j\epsilon_{i-j}\Big) - h\Big(\sum_{j<-[k/2]} a_j\epsilon_{i-j}' + \sum_{j\geq-[k/2]} a_j\epsilon_{i-j}\Big)\Big)\Big\|_p.
$$
Let
$$
m_{2,i,k} = \Big|\sum_{j\in\mathbb{Z}} a_j\epsilon_{i-j}\Big| \vee \Big|\sum_{j<-[k/2]} a_j\epsilon_{i-j}' + \sum_{j\geq-[k/2]} a_j\epsilon_{i-j}\Big|.
$$
Using again the subadditivity of $t \to w_h(t,M)$, and the fact that $(\sum_{j<-[k/2]} a_j\epsilon_{i-j}, m_{2,i,k})$, $(\sum_{j<-[k/2]} a_j\epsilon_{i-j}', m_{2,i,k})$ and $(\sum_{j<-[k/2]} a_j\epsilon_{-j}, M_{2,-[k/2]})$ are identically distributed, we obtain that
$$
\|X_i - \mathbb{E}(X_i|\mathcal{F}_{i+[k/2]})\|_p = \|X_{-[k/2]} - \mathbb{E}(X_{-[k/2]}|\mathcal{F}_0)\|_p \leq 2\Big\|w_h\Big(\Big|\sum_{j<-[k/2]} a_j\epsilon_{-j}\Big|, M_{2,-[k/2]}\Big)\Big\|_p. \tag{4.12}
$$
Consequently
$$
\sum_{n\geq 1} \frac{1}{n^{3-p/2}}\sum_{i=1}^n \sum_{k=i}^n \|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0)\|_{p/2} < \infty
$$
provided that (4.10) holds. This completes the proof of (3.2).

Using the bounds (4.11) and (4.12) (taking $b(k) = n$ in (4.11) and $[k/2] = n$ in (4.12)), we see that the condition (3.1) of Theorem 3.1 (and also the condition (3.4) of Theorem 3.2 in the case $p = 3$) holds under (4.10).
4.3 Functions of $\phi$-dependent sequences

In order to include examples of dynamical systems satisfying some correlation inequalities, we introduce a weak version of the uniform mixing coefficients (see Dedecker and Prieur (2007)).

Definition 4.1. For any random variable $Y = (Y_1,\ldots,Y_k)$ with values in $\mathbb{R}^k$, define the function $g_{x,j}(t) = \mathbf{1}_{t\leq x} - \mathbb{P}(Y_j \leq x)$. For any $\sigma$-algebra $\mathcal{F}$, let
$$
\phi(\mathcal{F},Y) = \sup_{(x_1,\ldots,x_k)\in\mathbb{R}^k} \Big\|\mathbb{E}\Big(\prod_{j=1}^k g_{x_j,j}(Y_j)\Big|\mathcal{F}\Big) - \mathbb{E}\Big(\prod_{j=1}^k g_{x_j,j}(Y_j)\Big)\Big\|_{\infty}.
$$
For a sequence $\mathbf{Y} = (Y_i)_{i\in\mathbb{Z}}$, where $Y_i = Y_0\circ T^i$ and $Y_0$ is an $\mathcal{F}_0$-measurable and real-valued r.v., let
$$
\phi_{k,\mathbf{Y}}(n) = \max_{1\leq l\leq k}\ \sup_{i_l>\cdots>i_1\geq n} \phi\big(\mathcal{F}_0,(Y_{i_1},\ldots,Y_{i_l})\big).
$$
Definition 4.2. For any $p \geq 1$, let $\mathcal{C}(p,M,P_X)$ be the closed convex envelope of the set of functions $f$ which are monotonic on some open interval of $\mathbb{R}$ and null elsewhere, and such that $\mathbb{E}(|f(X)|^p) < M$.

Proposition 4.3. Let $p \in ]2,3]$ and $s \geq p$. Let $X_i = f(Y_i) - \mathbb{E}(f(Y_i))$, where $Y_i = Y_0\circ T^i$ and $f$ belongs to $\mathcal{C}(s,M,P_{Y_0})$. Assume that
$$
\sum_{i\geq 1} i^{(p-4)/2+(s-2)/(s-1)}\,\phi_{2,\mathbf{Y}}(i)^{(s-2)/s} < \infty. \tag{4.13}
$$
Then the conclusions of Theorem 4.2 hold.
Remark 4.5. Notice that if $s = p = 3$, the condition (4.13) becomes $\sum_{i\geq 1}\phi_{2,\mathbf{Y}}(i)^{1/3} < \infty$, and if
Proof of Proposition 4.3. Let $B_p(\mathcal{F}_0)$ be the set of $\mathcal{F}_0$-measurable random variables $Z$ such that $\|Z\|_p \leq 1$. We first notice that
$$
\|\mathbb{E}(X_k|\mathcal{F}_0)\|_p \leq \|\mathbb{E}(X_k|\mathcal{F}_0)\|_s = \sup_{Z\in B_{s/(s-1)}(\mathcal{F}_0)} \mathrm{Cov}(Z,f(Y_k)).
$$
Applying Corollary 6.2 with $k = 2$ to the covariance on the right-hand side (take $f_1 = \mathrm{Id}$ and $f_2 = f$), we obtain that
$$
\|\mathbb{E}(X_k|\mathcal{F}_0)\|_s \leq \sup_{Z\in B_{s/(s-1)}(\mathcal{F}_0)} 8\big(\phi(\sigma(Z),Y_k)\big)^{(s-1)/s}\|Z\|_{s/(s-1)}\big(\phi(\sigma(Y_k),Z)\big)^{1/s}M^{1/s} \leq 8\big(\phi_{1,\mathbf{Y}}(k)\big)^{(s-1)/s}M^{1/s}, \tag{4.14}
$$
the last inequality being true because $\phi(\sigma(Z),Y_k) \leq \phi_{1,\mathbf{Y}}(k)$ and $\phi(\sigma(Y_k),Z) \leq 1$. It follows that the conditions (3.1) (for $p \in ]2,3]$) and (3.4) (for $p = 3$) are satisfied under (4.13). The condition (3.2) follows from the following lemma by taking $b = (4-p)/2$.
Lemma 4.1. Let $X_i$ be as in Proposition 4.3, and let $b \in ]0,1[$. If
$$
\sum_{i\geq 1} i^{-b+(s-2)/(s-1)}\,\phi_{2,\mathbf{Y}}(i)^{(s-2)/s} < \infty, \quad\text{then}\quad \sum_{n>1} \frac{1}{n^{1+b}}\big\|\mathbb{E}(S_n^2|\mathcal{F}_0) - \mathbb{E}(S_n^2)\big\|_{p/2} < \infty.
$$
Proof of Lemma 4.1. Since
$$
\|\mathbb{E}(S_n^2|\mathcal{F}_0) - \mathbb{E}(S_n^2)\|_{p/2} \leq 2\sum_{i=1}^n \sum_{k=0}^{n-i} \|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2},
$$
we infer that there exists $C > 0$ such that
$$
\sum_{n>1} \frac{1}{n^{1+b}}\|\mathbb{E}(S_n^2|\mathcal{F}_0) - \mathbb{E}(S_n^2)\|_{p/2} \leq C\sum_{i>0}\sum_{k\geq 0} \frac{1}{(i+k)^b}\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2}. \tag{4.15}
$$
We shall bound $\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2}$ in two ways. First, using the stationarity and the upper bound (4.14), we have that
$$
\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} \leq 2\|X_0\mathbb{E}(X_k|\mathcal{F}_0)\|_{p/2} \leq 16\|X_0\|_p M^{1/s}\big(\phi_{1,\mathbf{Y}}(k)\big)^{(s-1)/s}. \tag{4.16}
$$
Next, note that
$$
\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} = \sup_{Z\in B_{s/(s-2)}(\mathcal{F}_0)} \mathrm{Cov}(Z,X_iX_{k+i}) = \sup_{Z\in B_{s/(s-2)}(\mathcal{F}_0)} \mathbb{E}\big((Z-\mathbb{E}(Z))X_iX_{k+i}\big).
$$
Applying Corollary 6.2 with $k = 3$ to the term $\mathbb{E}((Z-\mathbb{E}(Z))X_iX_{k+i})$ (take $f_1 = \mathrm{Id}$, $f_2 = f_3 = f$), we obtain that $\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2}$ is smaller than
$$
\sup_{Z\in B_{s/(s-2)}(\mathcal{F}_0)} 32\big(\phi(\sigma(Z),Y_i,Y_{k+i})\big)^{(s-2)/s}\big(\phi(\sigma(Y_i),Z,Y_{k+i})\big)^{1/s}\big(\phi(\sigma(Y_{k+i}),Z,Y_i)\big)^{1/s}M^{2/s}.
$$
Since $\phi(\sigma(Z),Y_i,Y_{k+i}) \leq \phi_{2,\mathbf{Y}}(i)$ and $\phi(\sigma(Y_i),Z,Y_{k+i}) \leq 1$, $\phi(\sigma(Y_{k+i}),Z,Y_i) \leq 1$, we infer that
$$
\|\mathbb{E}(X_iX_{k+i}|\mathcal{F}_0) - \mathbb{E}(X_iX_{k+i})\|_{p/2} \leq 32\big(\phi_{2,\mathbf{Y}}(i)\big)^{(s-2)/s}M^{2/s}. \tag{4.17}
$$
From (4.15), (4.16) and (4.17), we infer that the conclusion of Lemma 4.1 holds provided that
$$
\sum_{i>0} \sum_{k=1}^{[i^{(s-2)/(s-1)}]} \frac{1}{(i+k)^b}\big(\phi_{2,\mathbf{Y}}(i)\big)^{(s-2)/s} + \sum_{k\geq 0} \sum_{i=1}^{[k^{(s-1)/(s-2)}]} \frac{1}{(i+k)^b}\big(\phi_{1,\mathbf{Y}}(k)\big)^{(s-1)/s} < \infty.
$$
Here, note that
$$
\sum_{k=1}^{[i^{(s-2)/(s-1)}]} \frac{1}{(i+k)^b} \leq i^{-b+\frac{s-2}{s-1}} \quad\text{and}\quad \sum_{i=1}^{[k^{(s-1)/(s-2)}]} \frac{1}{(i+k)^b} \leq \sum_{m=1}^{[2k^{(s-1)/(s-2)}]} \frac{1}{m^b} \leq Dk^{\frac{(1-b)(s-1)}{s-2}},
$$
for some $D > 0$. Since $\phi_{1,\mathbf{Y}}(k) \leq \phi_{2,\mathbf{Y}}(k)$, the conclusion of Lemma 4.1 holds provided
$$
\sum_{i\geq 1} i^{-b+\frac{s-2}{s-1}}\,\phi_{2,\mathbf{Y}}(i)^{\frac{s-2}{s}} < \infty \quad\text{and}\quad \sum_{k\geq 1} k^{\frac{(1-b)(s-1)}{s-2}}\,\phi_{2,\mathbf{Y}}(k)^{\frac{s-1}{s}} < \infty.
$$
To complete the proof, it remains to prove that the second series converges provided the first one does. If the first series converges, then
$$
\lim_{n\to\infty} \sum_{i=n+1}^{2n} i^{-b+\frac{s-2}{s-1}}\,\phi_{2,\mathbf{Y}}(i)^{\frac{s-2}{s}} = 0. \tag{4.18}
$$
Since $\phi_{2,\mathbf{Y}}(i)$ is nonincreasing, we infer from (4.18) that $\phi_{2,\mathbf{Y}}(i)^{1/s} = o\big(i^{-1/(s-1)-(1-b)/(s-2)}\big)$. It follows that $\phi_{2,\mathbf{Y}}(k)^{(s-1)/s} \leq C\phi_{2,\mathbf{Y}}(k)^{(s-2)/s}k^{-1/(s-1)-(1-b)/(s-2)}$ for some positive constant $C$, and the second series converges.
4.3.1 Application to expanding maps

Let $BV$ be the class of bounded variation functions from $[0,1]$ to $\mathbb{R}$. For any $h \in BV$, denote by $\|dh\|$ the variation norm of the measure $dh$. Let $T$ be a map from $[0,1]$ to $[0,1]$ preserving a probability $\mu$ on $[0,1]$, and let
$$
S_n(f) = \sum_{k=1}^n \big(f\circ T^k - \mu(f)\big).
$$
Define the Perron-Frobenius operator $K$ from $\mathbb{L}^2([0,1],\mu)$ to $\mathbb{L}^2([0,1],\mu)$ via the equality
$$
\int_0^1 (Kh)(x)f(x)\,\mu(dx) = \int_0^1 h(x)(f\circ T)(x)\,\mu(dx). \tag{4.19}
$$
A Markov kernel $K$ is said to be $BV$-contracting if there exist $C > 0$ and $\rho \in [0,1[$ such that
Notice first that
$$
\big\|\mathbb{E}\big(S_n(S_n - \mathbb{E}(S_n|\mathcal{F}_n))\big|\mathcal{F}_0\big)\big\|_{3/2} = \big\|\mathbb{E}\big((S_n - \mathbb{E}(S_n|\mathcal{F}_n))^2\big|\mathcal{F}_0\big)\big\|_{3/2} \leq \|S_n - \mathbb{E}(S_n|\mathcal{F}_n)\|_3^2,
$$
which is bounded under (3.1). Now, for $p_n = [\sqrt{n}]$, we write
$$
\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_n) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big) = \sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_n) - \mathbb{E}(X_{-k}|\mathcal{F}_{p_n})\big) + \sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big).
$$
Note that
$$
\Big\|\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3 = \Big\|\sum_{k\geq 0} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big) - \sum_{k\geq 0} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_{p_n})\big)\Big\|_3 \leq \Big\|\sum_{k\geq 0} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3 + \Big\|\sum_{k\geq p_n} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3,
$$
which is bounded under (3.1). Next, since the random variable $\sum_{k\geq 0}(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0))$ is $\mathcal{F}_{p_n}$-measurable, we get
$$
\Big\|\mathbb{E}\Big(S_n\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big|\mathcal{F}_0\Big)\Big\|_{3/2} \leq \Big\|\mathbb{E}\Big(S_{p_n}\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big|\mathcal{F}_0\Big)\Big\|_{3/2} + \|\mathbb{E}(S_n - S_{p_n}|\mathcal{F}_{p_n})\|_3\Big\|\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3
$$
$$
\leq \big(\|S_{p_n}\|_3 + \|\mathbb{E}(S_{n-p_n}|\mathcal{F}_0)\|_3\big)\Big\|\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3 \leq C\sqrt{p_n},
$$
by using (3.1) and (5.45). Hence, since $p_n = [\sqrt{n}]$, we get that
$$
\sum_{n=1}^{\infty} \frac{1}{n^{3/2}}\Big\|\mathbb{E}\Big(S_n\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_{p_n}) - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big|\mathcal{F}_0\Big)\Big\|_{3/2} < \infty.
$$
It remains to show that
$$
\sum_{n=1}^{\infty} \frac{1}{n^{3/2}}\Big\|\mathbb{E}\Big(S_n\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_n) - \mathbb{E}(X_{-k}|\mathcal{F}_{p_n})\big)\Big|\mathcal{F}_0\Big)\Big\|_{3/2} < \infty. \tag{5.55}
$$
Note first that
$$
\Big\|\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_n) - \mathbb{E}(X_{-k}|\mathcal{F}_{p_n})\big)\Big\|_3 = \Big\|\sum_{k\geq 0} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_n)\big) - \sum_{k\geq 0} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_{p_n})\big)\Big\|_3 \leq \Big\|\sum_{k\geq n} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3 + \Big\|\sum_{k\geq p_n} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3.
$$
It follows that
$$
\Big\|\mathbb{E}\Big(S_n\sum_{k\geq 0} \big(\mathbb{E}(X_{-k}|\mathcal{F}_n) - \mathbb{E}(X_{-k}|\mathcal{F}_{p_n})\big)\Big|\mathcal{F}_0\Big)\Big\|_{3/2} \leq C\sqrt{n}\Big(\Big\|\sum_{k\geq p_n} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3 + \Big\|\sum_{k\geq n} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3\Big),
$$
by taking into account (5.45). Consequently (5.55) will follow as soon as
$$
\sum_{n\geq 1} \frac{1}{n}\Big\|\sum_{k\geq[\sqrt{n}]} \big(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0)\big)\Big\|_3 < \infty,
$$
which holds under the second part of the condition (3.4), by applying Lemma 5.3 with $h(x) = \|\sum_{k\geq[x]}(X_{-k} - \mathbb{E}(X_{-k}|\mathcal{F}_0))\|_3$. This ends the proof of the theorem.
6 Appendix

6.1 A smoothing lemma

Lemma 6.1. Let $r > 0$ and $f$ be a function such that $|f|_{\Lambda_r} < \infty$ (see Notation 5.1 for the definition of the seminorm $|\cdot|_{\Lambda_r}$). Let $\phi_t$ be the density of the law $N(0,t^2)$. For any real $p \geq r$ and any positive $t$, $|f*\phi_t|_{\Lambda_p} \leq c_{r,p}t^{r-p}|f|_{\Lambda_r}$ for some positive constant $c_{r,p}$ depending only on $r$ and $p$. Furthermore $c_{r,r} = 1$.
Remark 6.1. In the case where $p$ is a positive integer, the result of Lemma 6.1 can be written as $\|(f*\phi_t)^{(p)}\|_{\infty} \leq c_{r,p}t^{r-p}|f|_{\Lambda_r}$.
Proof of Lemma 6.1. Let $j$ be the integer such that $j < r \leq j+1$. In the case where $p$ is a positive integer, we have
$$
(f*\phi_t)^{(p)}(x) = \int \big(f^{(j)}(u) - f^{(j)}(x)\big)\phi_t^{(p-j)}(x-u)\,du \quad \text{since } p-j \geq 1.
$$
Since $|f^{(j)}(u) - f^{(j)}(x)| \leq |x-u|^{r-j}|f|_{\Lambda_r}$, we obtain that
$$
|(f*\phi_t)^{(p)}(x)| \leq |f|_{\Lambda_r}\int |x-u|^{r-j}|\phi_t^{(p-j)}(x-u)|\,du \leq |f|_{\Lambda_r}\int |u|^{r-j}|\phi_t^{(p-j)}(u)|\,du.
$$
Using that $\phi_t^{(p-j)}(x) = t^{-p+j-1}\phi_1^{(p-j)}(x/t)$, we conclude that Lemma 6.1 holds with the constant $c_{r,p} = \int |z|^{r-j}|\phi_1^{(p-j)}(z)|\,dz$.
The case $p = r$ is straightforward. In the case where $p$ is such that $j < r < p < j+1$, by definition
$$
|f^{(j)}*\phi_t(x) - f^{(j)}*\phi_t(y)| \leq |f|_{\Lambda_r}|x-y|^{r-j}.
$$
Also, by Lemma 6.1 applied with $p = j+1$,
$$
|f^{(j)}*\phi_t(x) - f^{(j)}*\phi_t(y)| \leq |x-y|\,\|f^{(j+1)}*\phi_t\|_{\infty} \leq |f|_{\Lambda_r}c_{r,j+1}t^{r-j-1}|x-y|.
$$
Hence by interpolation,
$$
|f^{(j)}*\phi_t(x) - f^{(j)}*\phi_t(y)| \leq |f|_{\Lambda_r}t^{r-p}c_{r,j+1}^{(p-r)/(j+1-r)}|x-y|^{p-j}.
$$
It remains to consider the case where $r \leq i < p \leq i+1$. By Lemma 6.1 applied successively with $p = i$ and $p = i+1$, we obtain that
$$
|f^{(i)}*\phi_t(x)| \leq |f|_{\Lambda_r}c_{r,i}t^{r-i} \quad\text{and}\quad |f^{(i+1)}*\phi_t(x)| \leq |f|_{\Lambda_r}c_{r,i+1}t^{r-i-1}.
$$
Consequently
$$
|f^{(i)}*\phi_t(x) - f^{(i)}*\phi_t(y)| \leq |f|_{\Lambda_r}t^{r-i}\big(2c_{r,i}\wedge c_{r,i+1}t^{-1}|x-y|\big),
$$
and by interpolation,
$$
|f^{(i)}*\phi_t(x) - f^{(i)}*\phi_t(y)| \leq |f|_{\Lambda_r}t^{r-p}(2c_{r,i})^{1-p+i}c_{r,i+1}^{p-i}|x-y|^{p-i}.
$$
6.2 Covariance inequalities

In this section, we give an upper bound for the expectation of the product of $k$ centered random variables $\prod_{i=1}^k (X_i - \mathbb{E}(X_i))$.
Proposition 6.1. Let $X = (X_1,\ldots,X_k)$ be a random variable with values in $\mathbb{R}^k$. Define the number
$$
\phi(i) = \phi\big(\sigma(X_i),X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_k\big) = \sup_{x\in\mathbb{R}^k}\Big\|\mathbb{E}\Big(\prod_{j=1,j\neq i}^k \big(\mathbf{1}_{X_j>x_j} - \mathbb{P}(X_j>x_j)\big)\Big|\sigma(X_i)\Big) - \mathbb{E}\Big(\prod_{j=1,j\neq i}^k \big(\mathbf{1}_{X_j>x_j} - \mathbb{P}(X_j>x_j)\big)\Big)\Big\|_{\infty}. \tag{6.1}
$$
Let $F_i$ be the distribution function of $X_i$ and $Q_i$ be the quantile function of $|X_i|$ (see Section 4.1 for the definition). Let $F_i^{-1}$ be the generalized inverse of $F_i$ and let $D_i(u) = (F_i^{-1}(1-u) - F_i^{-1}(u))_+$. We have the inequalities
$$
\Big|\mathbb{E}\prod_{i=1}^k \big(X_i - \mathbb{E}(X_i)\big)\Big| \leq \int_0^1 \prod_{i=1}^k D_i(u/\phi(i))\,du \tag{6.2}
$$
and
$$
\Big|\mathbb{E}\prod_{i=1}^k \big(X_i - \mathbb{E}(X_i)\big)\Big| \leq 2^k\int_0^1 \prod_{i=1}^k Q_i(u/\phi(i))\,du. \tag{6.3}
$$
In addition, for any $k$-tuple $(p_1,\ldots,p_k)$ such that $1/p_1+\cdots+1/p_k = 1$, we have
$$
\Big|\mathbb{E}\prod_{i=1}^k \big(X_i - \mathbb{E}(X_i)\big)\Big| \leq 2^k\prod_{i=1}^k \big(\phi(i)\big)^{1/p_i}\|X_i\|_{p_i}. \tag{6.4}
$$
Proof of Proposition 6.1.We have that
E k Y
i=1
Xi−E(Xi)
= Z
E k Y
i=1
1IXi>xi−P(Xi>xi)
(4)
Now for alli, E k Y i=1 1IX
i>xi−P(Xi >xi)
=E
1IXi>xi
E
Yk
j=1,j6=i
(1IXj>xj−P(Xj>xj))|σ(Xi)
−E
Yk
j=1,j6=i
(1IXj>xj−P(Xj>xj))
=E 1IXi≤xi
E
Yk
j=1,j6=i
(1IXj>xj−P(Xj>xj))|σ(Xi)
−E
Yk
j=1,j6=i
(1IXj>xj−P(Xj>xj))
.
Consequently, for all $i$,
$$\Big| \mathbf{E} \prod_{i=1}^k (\mathbf{1}_{X_i > x_i} - \mathbf{P}(X_i > x_i)) \Big| \le \phi(i) \big( \mathbf{P}(X_i \le x_i) \wedge \mathbf{P}(X_i > x_i) \big). \qquad (6.6)$$
Hence, we obtain from (6.5) and (6.6) that
$$\Big| \mathbf{E} \prod_{i=1}^k (X_i - \mathbf{E}(X_i)) \Big| \le \int_0^1 \prod_{i=1}^k \Big( \int \mathbf{1}_{u/\phi(i) < \mathbf{P}(X_i > x_i)} \, \mathbf{1}_{u/\phi(i) \le \mathbf{P}(X_i \le x_i)} \, dx_i \Big) \, du \le \int_0^1 \prod_{i=1}^k \Big( \int \mathbf{1}_{F_i^{-1}(u/\phi(i)) \le x_i < F_i^{-1}(1 - u/\phi(i))} \, dx_i \Big) \, du,$$
and (6.2) follows. Now (6.3) comes from (6.2) and the fact that $D_i(u) \le 2 Q_i(u)$ (see Lemma 6.1 in Dedecker and Rio (2008)). Finally (6.4) follows by applying Hölder's inequality to (6.3).
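For $k = 2$, the identity (6.5) is the classical Hoeffding formula $\mathrm{Cov}(X_1, X_2) = \int\!\!\int \big( \mathbf{P}(X_1 > x_1, X_2 > x_2) - \mathbf{P}(X_1 > x_1) \mathbf{P}(X_2 > x_2) \big) \, dx_1 \, dx_2$, which can be verified exactly for discrete bounded variables since the integrand is piecewise constant. The sketch below (an illustrative check, not part of the paper; the joint law on $\{0,1,2\}^2$ is an arbitrary choice) compares both sides.

```python
import numpy as np

# Hypothetical joint pmf of (X1, X2) on {0, 1, 2}^2 (rows: X1, cols: X2).
P = np.array([[0.20, 0.05, 0.05],
              [0.05, 0.20, 0.05],
              [0.05, 0.05, 0.30]])
vals = np.array([0.0, 1.0, 2.0])

p1, p2 = P.sum(axis=1), P.sum(axis=0)
# Left-hand side of (6.5) for k = 2: E[X1 X2] - E[X1] E[X2].
cov = vals @ P @ vals - (vals @ p1) * (vals @ p2)

# Right-hand side: the integrand
#   P(X1 > x1, X2 > x2) - P(X1 > x1) P(X2 > x2)
# is constant on each unit cell [i, i+1) x [j, j+1), so the integral
# over [0, 2]^2 reduces to a finite sum over the four cells.
integral = 0.0
for i in range(2):
    for j in range(2):
        joint_tail = P[i + 1:, j + 1:].sum()        # P(X1 > i, X2 > j)
        integral += joint_tail - p1[i + 1:].sum() * p2[j + 1:].sum()

assert abs(cov - integral) < 1e-12
```

The same cell-by-cell reduction works for any finitely supported law; for general integrable variables the identity follows by Fubini's theorem applied to the centered indicators.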
Definition 6.1. For a quantile function $Q$ in $L^1([0,1], \lambda)$, let $\mathcal{F}(Q, P_X)$ be the set of functions $f$ which are nondecreasing on some open interval of $\mathbb{R}$ and null elsewhere, and such that $Q_{|f(X)|} \le Q$. Let $\mathcal{C}(Q, P_X)$ denote the set of convex combinations $\sum_{i=1}^\infty \lambda_i f_i$ of functions $f_i$ in $\mathcal{F}(Q, P_X)$, where $\sum_{i=1}^\infty |\lambda_i| \le 1$ (note that the series $\sum_{i=1}^\infty \lambda_i f_i(X)$ converges almost surely and in $L^1(P_X)$).
Corollary 6.1. Let $X = (X_1, \ldots, X_k)$ be a random variable with values in $\mathbb{R}^k$ and let the $\phi(i)$'s be defined by (6.1). Let $(f_i)_{1 \le i \le k}$ be $k$ functions from $\mathbb{R}$ to $\mathbb{R}$ such that $f_i \in \mathcal{C}(Q_i, P_{X_i})$. We have the inequality
$$\Big| \mathbf{E} \prod_{i=1}^k (f_i(X_i) - \mathbf{E}(f_i(X_i))) \Big| \le 2^{2k-1} \int_0^1 \prod_{i=1}^k Q_i\Big( \frac{u}{\phi(i)} \Big) \, du.$$
Proof of Corollary 6.1. Write, for all $1 \le i \le k$, $f_i = \sum_{j=1}^\infty \lambda_{j,i} f_{j,i}$ where $\sum_{j=1}^\infty |\lambda_{j,i}| \le 1$ and $f_{j,i} \in \mathcal{F}(Q_i, P_{X_i})$. Clearly
$$\Big| \mathbf{E} \prod_{i=1}^k (f_i(X_i) - \mathbf{E}(f_i(X_i))) \Big| \le \sum_{j_1=1}^\infty \cdots \sum_{j_k=1}^\infty \Big( \prod_{i=1}^k |\lambda_{j_i,i}| \Big) \Big| \mathbf{E} \prod_{i=1}^k (f_{j_i,i}(X_i) - \mathbf{E}(f_{j_i,i}(X_i))) \Big| \le \sup_{j_1 \ge 1, \ldots, j_k \ge 1} \Big| \mathbf{E} \prod_{i=1}^k (f_{j_i,i}(X_i) - \mathbf{E}(f_{j_i,i}(X_i))) \Big|. \qquad (6.7)$$
Since each $f_{j_i,i}$ is nondecreasing on some interval and null elsewhere,
$$\phi\big( \sigma(f_{j_i,i}(X_i)), f_{j_1,1}(X_1), \ldots, f_{j_{i-1},i-1}(X_{i-1}), f_{j_{i+1},i+1}(X_{i+1}), \ldots, f_{j_k,k}(X_k) \big) \le 2^{k-1} \phi(i).$$
Applying (6.3) to the right-hand side of (6.7), we then derive that
$$\Big| \mathbf{E} \prod_{i=1}^k (f_i(X_i) - \mathbf{E}(f_i(X_i))) \Big| \le 2^k \int_0^1 \prod_{i=1}^k Q_i\Big( \frac{u}{2^{k-1} \phi(i)} \Big) \, du,$$
and the result follows by a change of variables.
Recall that for any $p \ge 1$, the class $\mathcal{C}(p, M, P_X)$ has been introduced in Definition 4.2.
Corollary 6.2. Let $X = (X_1, \ldots, X_k)$ be a random variable with values in $\mathbb{R}^k$ and let the $\phi(i)$'s be defined by (6.1). Let $(p_1, \ldots, p_k)$ be a $k$-tuple such that $1/p_1 + \cdots + 1/p_k = 1$ and let $(f_i)_{1 \le i \le k}$ be $k$ functions from $\mathbb{R}$ to $\mathbb{R}$ such that $f_i \in \mathcal{C}(p_i, M_i, P_{X_i})$. We have the inequality
$$\Big| \mathbf{E} \prod_{i=1}^k (f_i(X_i) - \mathbf{E}(f_i(X_i))) \Big| \le 2^{2k-1} \prod_{i=1}^k (\phi(i))^{1/p_i} M_i^{1/p_i}.$$
Acknowledgments. The authors are indebted to the referee for carefully reading the manuscript
and for helpful comments.
References

[1] Broise, A., Transformations dilatantes de l'intervalle et théorèmes limites. Études spectrales d'opérateurs de transfert et applications. Astérisque 238, 1-109. (1996). MR1634271
[2] Davydov, Yu. A., Mixing conditions for Markov chains. Theor. Probab. Appl. 18, 312-328. (1973). MR0321183
[3] Dedecker, J. and Prieur, C., An empirical central limit theorem for dependent sequences. Stoch. Processes Appl. 117, 121-142. (2007). MR2287106
[4] Dedecker, J. and Rio, E., On mean central limit theorems for stationary sequences. Ann. Inst. Henri Poincaré Probab. Statist. 44, 683-726. (2008). MR2446294
[5] Dudley, R. M., Real analysis and probability. Wadsworth Inc., Belmont, California. (1989). MR0982264
[6] Fréchet, M., Sur la distance de deux lois de probabilités. C. R. Acad. Sci. Paris 244, 689-692. (1957). MR0083210
[7] Gordin, M. I., The central limit theorem for stationary processes. Dokl. Akad. Nauk SSSR 188, 739-741. (1969). MR0251785
[8] Heyde, C. C. and Brown, B. M., On the departure from normality of a certain class of martingales. Ann. Math. Statist. 41, 2161-2165. (1970). MR0293702
[9] Ibragimov, I. A., On the accuracy of Gaussian approximation to the distribution functions of sums of independent variables. Theor. Probab. Appl. 11, 559-579. (1966). MR0212853
[10] Jan, C., Vitesse de convergence dans le TCL pour des processus associés à des systèmes dynamiques et aux produits de matrices aléatoires. Thèse de l'université de Rennes 1. (2001).
[11] Kantorovich, L. V. and Rubinstein, G. Sh., Sur un espace de fonctions complètement additives. Vestnik Leningrad Univ. 13 (Ser. Mat. Astron. 2), 52-59. (1958).
[12] Peligrad, M. and Utev, S., A new maximal inequality and invariance principle for stationary sequences. The Annals of Probability 33, 798-815. (2005). MR2123210
[13] Pène, F., Rate of convergence in the multidimensional central limit theorem for stationary processes. Application to the Knudsen gas and to the Sinai billiard. Annals of Applied Probability 15, no. 4, 2331-2392. (2005). MR2187297
[14] Rio, E., Covariance inequalities for strongly mixing processes. Ann. Inst. H. Poincaré Probab. Statist. 29, 587-597. (1993). MR1251142
[15] Rio, E., Distances minimales et distances idéales. C. R. Acad. Sci. Paris Série I 326, 1127-1130. (1998). MR1647215
[16] Rio, E., Théorie asymptotique des processus aléatoires faiblement dépendants. Mathématiques et applications de la SMAI 31. Springer. (2000). MR2117923
[17] Rio, E., Upper bounds for minimal distances in the central limit theorem. Submitted to Ann. Inst. H. Poincaré Probab. Statist. (2007).
[18] Rosenblatt, M., A central limit theorem and a strong mixing condition. Proc. Nat. Acad. Sci. U. S. A. 42, 43-47. (1956). MR0074711
[19] Sakhanenko, A. I., Estimates in an invariance principle. Trudy Inst. Mat. 5, "Nauka" Sibirsk. Otdel., Akad. Nauk SSSR, Novosibirsk, 27-44. (1985). MR0821751
[20] Volný, D., Approximating martingales and the central limit theorem for strictly stationary processes. Stoch. Processes Appl. 44, 41-74. (1993). MR1198662
[21] Wu, W. B. and Zhao, Z., Moderate deviations for stationary processes. Technical Report 571, University of Chicago. (2006).
[22] Zolotarev, V. M., Metric distances in spaces of random variables and their distributions. Math. USSR Sbornik 30, 373-401. (1976). MR0467869