Stochastic Processes and their Applications 91 (2001) 115–149
www.elsevier.com/locate/spa
Generalization of Itô's formula for smooth nondegenerate martingales
S. Moret, D. Nualart*,1
Facultat de Matemàtiques, Universitat de Barcelona, Gran Via 585, 08007 Barcelona, Spain
Received 12 May 1999; received in revised form 4 May 2000; accepted 22 June 2000
Abstract
In this paper we prove the existence of the quadratic covariation $[(\partial F/\partial x_k)(X),X^k]$ for all $1\le k\le d$, where $F$ belongs locally to the Sobolev space $W^{1,p}(\mathbb{R}^d)$ for some $p>d$ and $X$ is a $d$-dimensional smooth nondegenerate martingale adapted to a $d$-dimensional Brownian motion. This result is based on some moment estimates for Riemann sums which are established by means of the techniques of the Malliavin calculus. As a consequence we obtain an extension of Itô's formula where the complementary term is one-half the sum of the quadratic covariations above. © 2001 Elsevier Science B.V. All rights reserved.
MSC: 60H05; 60H07
Keywords: Itô's formula; Malliavin calculus; Quadratic covariation
1. Introduction
Let $W=\{W_t,\ t\in[0,T]\}$ be a $d$-dimensional Brownian motion, with $d>1$. Consider a $d$-dimensional square integrable martingale $X=\{X_t,\ t\in[0,T]\}$. It is well known that $X$ has a representation of the form $X_t^k=\sum_{i=1}^d\int_0^tu_s^{k,i}\,dW_s^i$, where for all $k,i=1,\dots,d$, the $u^{k,i}$ are continuous and adapted stochastic processes satisfying $E\int_0^T|u_s^{k,i}|^2\,ds<\infty$.
Let $F$ be a function which belongs locally to the Sobolev space $W^{1,p}(\mathbb{R}^d)$ for some $p>d$. The purpose of this paper is to prove the existence of the quadratic covariation of the processes $X^k$ and $(\partial F/\partial x_k)(X)$ for all $k=1,\dots,d$, defined as the following limit in probability:
$$\left[\frac{\partial F}{\partial x_k}(X),X^k\right]_t=\lim_n\sum_{t_i\in D_n,\ t_i<t}\left(\frac{\partial F}{\partial x_k}(X_{t_{i+1}})-\frac{\partial F}{\partial x_k}(X_{t_i})\right)(X_{t_{i+1}}^k-X_{t_i}^k),\qquad(1.1)$$
∗ Corresponding author. Fax: +343-4021601.
E-mail address: [email protected] (D. Nualart).
1 Supported by the DGYCIT grant no. PB96-0087.
where $D_n$ is a sequence of partitions of $[0,T]$ such that
$$\lim_n\ \sup_{t_i\in D_n}(t_{i+1}-t_i)=0,\qquad \sup_n\ \sup_{t_i\in D_n}\frac{t_{i+1}}{t_i}<\infty.$$
The existence of this limit will allow us to prove the following extension of Itô's formula:
$$F(X_t)=F(0)+\sum_{k=1}^d\int_0^t\frac{\partial F}{\partial x_k}(X_s)\,dX_s^k+\frac12\sum_{k=1}^d\left[\frac{\partial F}{\partial x_k}(X),X^k\right]_t.\qquad(1.2)$$
Notice that for a smooth function $F$ on $\mathbb{R}^d$ we have
$$\sum_{k=1}^d\left[\frac{\partial F}{\partial x_k}(X),X^k\right]_t=\sum_{j,k=1}^d\int_0^t\frac{\partial^2F}{\partial x_j\,\partial x_k}(X_s)\,d\langle X^j,X^k\rangle_s.$$
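As an illustration of the Riemann-sum definition (1.1), the following short Python sketch (not part of the original paper; the test martingale, the function $f$ and all variable names are our own illustrative choices) simulates a one-dimensional example and compares the Riemann sums with the classical expression $\int_0^tf'(X_s)u_s^2\,ds$, which is what the quadratic covariation equals for smooth $f$.

import numpy as np

# Illustrative only: a one-dimensional martingale X_t = int_0^t u_s dW_s with
# u_s = 1 + 0.5*sin(W_s) (adapted, bounded, bounded away from zero), and a
# smooth test function f.  For smooth f the Riemann sums of (1.1) should be
# close to int_0^t f'(X_s) u_s^2 ds for a fine partition.
rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))
u = 1.0 + 0.5 * np.sin(W[:-1])
X = np.concatenate(([0.0], np.cumsum(u * dW)))

f = np.tanh
fp = lambda x: 1.0 / np.cosh(x) ** 2  # derivative of f

# Riemann sum appearing in (1.1) over the fine partition
riemann = np.sum((f(X[1:]) - f(X[:-1])) * (X[1:] - X[:-1]))
# Classical expression of the quadratic covariation for smooth f
classical = np.sum(fp(X[:-1]) * u ** 2 * dt)

print(riemann, classical)  # the two values should be close for large n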
The result (existence of the quadratic covariation and Itô's formula) in the one-dimensional case holds for any absolutely continuous function $F$ such that its derivative $f$ belongs to $L^2_{loc}(\mathbb{R})$ (Moret and Nualart, 2000), assuming suitable nondegeneracy and regularity properties on the martingale $X$. The proof is based on the estimate
$$E(f(X_t)^2Z)\le\frac{c}{\sqrt t}\,\|f\|_2^2\qquad(1.3)$$
for any nice random variable $Z$ and any function $f\in L^2(\mathbb{R})$, which is derived using the techniques of Malliavin calculus. Clearly, inequality (1.3) implies
$$\int_0^TE(f(X_t)^2Z)\,dt\le2\sqrt T\,c\,\|f\|_2^2.$$
When $d>1$, (1.3) is replaced by $E(f(X_t)^2Z)\le c't^{-d/2}\|f\|_2^2$, and the right-hand side of this inequality is not integrable. However, using exponential estimates for the law of $X_t$ and applying Hölder's inequality for some $p>d$ we can show, for some constant $M$,
$$E(f(X_t)^2Z)\le c'\int_{\mathbb{R}^d}f(x)^2\,t^{-d/2}e^{-|x|^2/2tM}\,dx\le c''\,t^{-d/p}\|f\|_p^2,$$
and hence, if $f\in L^p(\mathbb{R}^d)$, the right-hand side of this inequality is integrable. In this paper we will make use of this argument, and for this reason we are forced to assume that the partial derivatives of our function $F$ are locally in $L^p(\mathbb{R}^d)$ for some $p>d$.
The approach we use in this paper was introduced by Föllmer et al. (1995) to treat the case $F(B_t)$, where $F$ is an absolutely continuous function with locally square integrable derivative and $B$ is a one-dimensional Brownian motion. The results of Föllmer et al. (1995) have been extended to elliptic diffusions by Bardina and Jolis (1997) and to nondegenerate diffusion processes with nonsmooth coefficients in a recent work of Flandoli et al. (2000).
In the $d$-dimensional case, Föllmer and Protter (2000) obtained an Itô formula for functions $F\in W^{1,2}_{loc}$ of a Brownian motion starting at $x_0$, where $x_0$ must be outside of some polar set. There are also results for multidimensional diffusion processes when $F\in W^{1,p}_{loc}$ with $p>2\vee d$ (Rozkosz, 1996) and for càdlàg processes, when $F\in C^1$ and has a locally Hölder continuous derivative (Errami et al., 1999).
Using integration with respect to the local time, Wolf (1997) established an extension of Itô's formula for semimartingales and absolutely continuous functions with derivative in $L^1_{loc}$ satisfying some technical assumptions. Eisenbaum (1997) has proved a generalization of Itô's formula to time-dependent functions of a Brownian motion, where the complementary term is a two-parameter integral with respect to the local time.
By means of a regularization approach, Russo and Vallois (1995) obtained an Itô formula for $C^1$ transformations of time-reversible continuous semimartingales. In the framework of Dirichlet forms, an extension of Itô's formula has been established by Lyons and Zhang (1994).
The paper is organized as follows. Section 2 contains some basic material on Malliavin calculus. In Section 3 we show a general result on the existence of the quadratic covariation and Itô's formula for $d$-dimensional martingales (Theorem 5). Section 4 is devoted to proving basic estimates for stochastic integrals and their derivatives, and for the Sobolev norm of the inverse of the Malliavin matrix. Finally, in Section 5 we apply these results to estimate the Riemann sums and deduce the main result of the paper.
2. Preliminaries
Let $W=\{W_t,\ t\in[0,T]\}$ be a $d$-dimensional Brownian motion defined on the canonical probability space $(\Omega,\mathcal{F},P)$. That is, $\Omega$ is the space of continuous functions from $[0,T]$ to $\mathbb{R}^d$ which vanish at zero, $\mathcal{F}$ is the Borel $\sigma$-field on $\Omega$ completed with respect to $P$, and $P$ is the Wiener measure. For every $t\in[0,T]$ we denote by $\mathcal{F}_t$ the $\sigma$-algebra generated by the random variables $\{W_s,\ s\le t\}$ and the $P$-null sets. Let $H=L^2([0,T];\mathbb{R}^d)$. For any $h\in H$ we denote the Wiener integral of $h$ by $W(h)=\sum_{i=1}^d\int_0^Th_t^i\,dW_t^i$.
We will use the notation $|x|$ (resp. $|t|$) for the Euclidean norm of a vector $x$ in $\mathbb{R}^d$ (resp. a tensor $t$ in $\mathbb{R}^d\otimes\stackrel{j}{\cdots}\otimes\mathbb{R}^d$); that is, $|t|^2=\sum_{i_1,\dots,i_j=1}^d(t^{i_1,\dots,i_j})^2$. We will also make use of the notation $\langle x,y\rangle$ for the scalar product in $\mathbb{R}^d$.
Let us first introduce the derivative operator $D$. We denote by $C_b^\infty(\mathbb{R}^n)$ the set of all infinitely differentiable functions $f:\mathbb{R}^n\to\mathbb{R}$ such that $f$ and all of its partial derivatives are bounded.
Let $\mathcal{S}$ denote the class of smooth cylindrical random variables of the form
$$F=f(W(h_1),\dots,W(h_n)),\qquad(2.1)$$
where $f$ belongs to $C_b^\infty(\mathbb{R}^n)$ and $h_1,\dots,h_n\in H$. If $F$ has the form (2.1) we define its derivative $DF$ as the $d$-dimensional stochastic process given by
$$D_tF=\sum_{i=1}^n\frac{\partial f}{\partial x_i}(W(h_1),\dots,W(h_n))\,h_i(t).\qquad(2.2)$$
We will denote by $D^{(k)}F$ the $k$th component of $DF$.
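For instance (a standard example, not taken from the paper), if $F=W_T^1W_T^2=W(h_1)W(h_2)$ with $h_i(t)=e_i\mathbf{1}_{[0,T]}(t)$, then formally applying (2.2) (the function $f(x_1,x_2)=x_1x_2$ is not bounded, but $F$ belongs to the closure $\mathbb{D}^{1,p}$ defined below and the same formula holds) gives
$$D_tF=W_T^2\,h_1(t)+W_T^1\,h_2(t),\qquad\text{so}\qquad D_t^{(1)}F=W_T^2\,\mathbf{1}_{[0,T]}(t),\quad D_t^{(2)}F=W_T^1\,\mathbf{1}_{[0,T]}(t).$$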
The operator $D$ is closable from $\mathcal{S}\subset L^p(\Omega)$ into $L^p(\Omega;H)$ for each $p\ge1$. We will denote by $\mathbb{D}^{1,p}$ the closure of the class of smooth random variables $\mathcal{S}$ with respect to the norm
$$\|F\|_{1,p}^p=E(|F|^p)+E(\|DF\|_H^p).$$
We can define the iteration of the operator $D$ in such a way that for a smooth random variable $F$, the derivative $D^k_{t_1,\dots,t_k}F$ is a $k$-parameter process. Then, for every $p\ge1$ and any natural number $k$ we introduce the space $\mathbb{D}^{k,p}$ as the completion of the family of smooth random variables $\mathcal{S}$ with respect to the norm
$$\|F\|_{k,p}^p=E(|F|^p)+\sum_{j=1}^kE(\|D^jF\|_{H^{\otimes j}}^p).\qquad(2.3)$$
Let $V$ be a real separable Hilbert space. We can also introduce the corresponding Sobolev spaces $\mathbb{D}^{k,p}(V)$ of $V$-valued random variables. More precisely, if $\mathcal{S}_V$ denotes the family of $V$-valued random variables of the form
$$F=\sum_{j=1}^nF_jv_j,\qquad v_j\in V,\ F_j\in\mathcal{S},$$
we define $D^kF=\sum_{j=1}^nD^kF_j\otimes v_j$, $k\ge1$. Then $D^k$ is a closable operator from $\mathcal{S}_V\subset L^p(\Omega;V)$ into $L^p(\Omega;H^{\otimes k}\otimes V)$ for any $p\ge1$. For any integer $k\ge1$ and any real number $p\ge1$ we can define a seminorm on $\mathcal{S}_V$ by
$$\|F\|_{k,p,V}^p=E(\|F\|_V^p)+\sum_{j=1}^kE(\|D^jF\|_{H^{\otimes j}\otimes V}^p).$$
We denote by $\mathbb{D}^{k,p}(V)$ the completion of $\mathcal{S}_V$ with respect to the seminorm $\|\cdot\|_{k,p,V}$.
We will denote by $\delta$ the adjoint of the operator $D$ as an unbounded operator from $L^2(\Omega)$ into $L^2(\Omega;H)$. That is, the domain of $\delta$, denoted by $\mathrm{Dom}\,\delta$, is the set of $H$-valued square integrable random variables $u$ such that there exists a square integrable random variable $\delta(u)$ verifying
$$E(F\delta(u))=E(\langle DF,u\rangle_H)\qquad(2.4)$$
for any $F\in\mathcal{S}$. We will make use of the notation $\delta(u)=\int_0^Tu_s\,dW_s$. We refer to Nualart (1995a,b) for a detailed account of the basic properties of the operators $D$ and $\delta$.
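For example (a standard illustration, not from the paper), when $h\in H$ is deterministic the divergence coincides with the Wiener integral, $\delta(h)=W(h)$, and for $F=f(W(h))$ with $f\in C_b^\infty(\mathbb{R})$ the duality (2.4) reduces to the Gaussian integration by parts formula
$$E\bigl(f(W(h))\,W(h)\bigr)=E\bigl(\langle Df(W(h)),h\rangle_H\bigr)=\|h\|_H^2\,E\bigl(f'(W(h))\bigr).$$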
The following integration by parts formula will be one of the main ingredients in
the proof of our results.
Proposition 1. Fix $m\ge1$ and $0\le a<b\le T$. Let $Y=(Y^1,\dots,Y^d)$ be a random vector in the space $(\mathbb{D}^{m+1,p})^d$ for all $p>1$. Define the matrix
$$\gamma_Y^{a,b}=\left(\sum_{k=1}^d\int_a^bD_t^{(k)}Y^i\,D_t^{(k)}Y^j\,dt\right)_{1\le i,j\le d}.\qquad(2.5)$$
Suppose that $\gamma_Y^{a,b}$ is invertible a.s. and $(\det\gamma_Y^{a,b})^{-1}\in\bigcap_{p\ge1}L^p(\Omega)$. Let $Z\in\mathbb{D}^{m,p}$ for all $p>1$. Then, for any function $f\in C_b^1(\mathbb{R}^d)$ and for any multi-index $\alpha\in\{1,\dots,d\}^m$ we have
$$E((\partial_\alpha f)(Y)Z)=E(f(Y)H_\alpha^{a,b}(Y,Z)),\qquad(2.6)$$
where $H_\alpha^{a,b}(Y,Z)$ is recursively given by
$$H_{(i)}^{a,b}(Y,Z)=\sum_{j=1}^d\int_a^bZ\,(\gamma_Y^{a,b})^{-1}_{ij}\,D_sY^j\,dW_s,\qquad(2.7)$$
$$H_\alpha^{a,b}(Y,Z)=H_{(\alpha_k)}^{a,b}\bigl(Y,H_{(\alpha_1,\dots,\alpha_{k-1})}^{a,b}(Y,Z)\bigr).$$
Proof. By the chain rule we have
$$D_s(f(Y))=\sum_{i=1}^d\partial_if(Y)\,D_sY^i,$$
and as a consequence we obtain
$$\int_a^b\langle D_sY^j,D_s(f(Y))\rangle\,ds=\sum_{i=1}^d\partial_if(Y)\,(\gamma_Y^{a,b})_{ij}.$$
Hence, using the duality relationship (2.4) for the operator $\delta$ yields
$$E(\partial_if(Y)Z)=E\left(\sum_{j=1}^d(\gamma_Y^{a,b})^{-1}_{ij}Z\int_a^b\langle D_sY^j,D_s(f(Y))\rangle\,ds\right)=E(f(Y)H_{(i)}^{a,b}(Y,Z)).$$
We complete the proof by means of a recurrence argument.
Notice that by the Bouleau and Hirsch criterion (Bouleau and Hirsch, 1986) the condition on $\gamma_Y^{a,b}$ implies that the law of $Y$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$. Moreover, if the assumptions of Proposition 1 hold with $m=d$, then the density of $Y$ is given by
$$p(x)=E\bigl(\mathbf{1}_{\{Y>x\}}H_{(1,\dots,d)}^{a,b}(Y,1)\bigr).\qquad(2.8)$$
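To see the mechanism of Proposition 1 in the simplest situation (our own illustration, with all choices hypothetical), take $d=m=1$, $Y=W_b$ and $Z=1$. Then $D_sY=\mathbf{1}_{[0,b]}(s)$, so $\gamma_Y^{a,b}=b-a$ and (2.7) gives
$$H_{(1)}^{a,b}(W_b,1)=\int_a^b\frac{1}{b-a}\,dW_s=\frac{W_b-W_a}{b-a},$$
so that (2.6) reduces to the classical Gaussian identity $E(f'(W_b))=E\bigl(f(W_b)(W_b-W_a)\bigr)/(b-a)$, and (2.8) recovers the density of $W_b$.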
Corollary 2. Let $Y=(Y^1,\dots,Y^d)$ be a random vector and $Z$ a random variable satisfying the assumptions of Proposition 1 with $m=d$. Suppose also $E|Zf(Y)^2|<\infty$, where $f\in L^2(\mathbb{R}^d)$. Then, we have
$$E(f(Y)^2Z)=\int_{\mathbb{R}^d}f(x)^2\,E\bigl(\mathbf{1}_{\{Y>x\}}H_{(1,\dots,d)}^{a,b}(Y,Z)\bigr)\,dx.\qquad(2.9)$$
Proof. We can assume that $f$ is bounded, by replacing $f^2$ by $f^2\wedge M$ and letting $M$ tend to infinity. By the Lebesgue differentiation theorem and using that $Y$ has an absolutely continuous probability distribution we obtain
$$\left(\frac n2\right)^d\int_{Y^1-(1/n)}^{Y^1+(1/n)}\cdots\int_{Y^d-(1/n)}^{Y^d+(1/n)}f(x_1,\dots,x_d)^2\,dx_1\dots dx_d\ \to\ f(Y^1,\dots,Y^d)^2,$$
a.s. as $n$ tends to infinity. For any fixed $x\in\mathbb{R}^d$ set
$$f_n(x,y)=\left(\frac n2\right)^d\prod_{i=1}^d\mathbf{1}_{[x_i-1/n,\,x_i+1/n]}(y_i),\qquad(2.10)$$
$$g_n(x,y)=\int_{-\infty}^{y_1}\cdots\int_{-\infty}^{y_d}f_n(x,\eta_1,\dots,\eta_d)\,d\eta_1\dots d\eta_d.$$
Then, by the dominated convergence theorem and applying Proposition 1 with $\alpha=(1,\dots,d)$ to the function $g_n(x,\cdot)$ we get
$$E(f(Y)^2Z)=\lim_n\int_{\mathbb{R}^d}f(x)^2\,E\!\left(Z\left(\frac n2\right)^d\prod_{i=1}^d\mathbf{1}_{[x_i-1/n,\,x_i+1/n]}(Y^i)\right)dx
=\lim_n\int_{\mathbb{R}^d}f(x)^2\,E(f_n(x,Y)Z)\,dx$$
$$=\lim_n\int_{\mathbb{R}^d}f(x)^2\,E((\partial_\alpha g_n)(x,Y)Z)\,dx
=\lim_n\int_{\mathbb{R}^d}f(x)^2\,E\bigl(g_n(x,Y)H_{(1,\dots,d)}^{a,b}(Y,Z)\bigr)\,dx
=\int_{\mathbb{R}^d}f(x)^2\,E\bigl(\mathbf{1}_{\{Y>x\}}H_{(1,\dots,d)}^{a,b}(Y,Z)\bigr)\,dx,$$
which completes the proof.
We will make use of the following estimate for the $\|\cdot\|_{k,p}$-norm of the divergence operator (Nualart, 1995a,b).
Proposition 3. The operator $\delta$ is continuous from $\mathbb{D}^{k+1,p}(V\otimes H)$ into $\mathbb{D}^{k,p}(V)$ for all $p>1$, $k\ge0$. Hence, for any $u\in\mathbb{D}^{k+1,p}(V\otimes H)$ we have
$$\|\delta(u)\|_{k,p,V}\le c_{k,p}\|u\|_{k+1,p,V\otimes H}\qquad(2.11)$$
for some constant $c_{k,p}$.
For any fixed $0\le a<T$ the following conditional version of the duality relationship between the derivative and divergence operators holds:
$$E\left(F\int_a^Tu_r\,dW_r\,\Big|\,\mathcal{F}_a\right)=E\left(\int_a^T\langle D_rF,u_r\rangle\,dr\,\Big|\,\mathcal{F}_a\right)\qquad(2.12)$$
for all $F\in\mathbb{D}^{1,2}$ and $u$ such that $u\mathbf{1}_{[a,T]}\in\mathrm{Dom}\,\delta$. Using this duality formula we can formulate the following conditional version of equality (2.9).
Proposition 4. Let $Y=(Y^1,\dots,Y^d)$ be a random vector and $Z$ a random variable satisfying the assumptions of Proposition 1 with $m=d$. Let $A$ be an $\mathcal{F}_a$-measurable random variable. Suppose also $E|Zf(Y)^2|<\infty$, where $f\in L^2(\mathbb{R}^d)$. Then, we have
$$E(f(Y)^2Z|\mathcal{F}_a)=\sum_{\sigma\subset\{1,\dots,d\}}(-1)^{d-|\sigma|}\int_{Q_\sigma(A)}f(x)^2\,E\bigl(\mathbf{1}_{\{Y^i>x_i,\ i\in\sigma;\ Y^i<x_i,\ i\notin\sigma\}}H_{(1,\dots,d)}^{a,b}(Y,Z)\,\big|\,\mathcal{F}_a\bigr)\,dx,\qquad(2.13)$$
where $Q_\sigma(A)=\{x\in\mathbb{R}^d:\ A^i<x_i,\ i\in\sigma;\ A^i>x_i,\ i\notin\sigma\}$ and $|\sigma|$ is the cardinal of $\sigma$.
As a consequence, taking $\sigma=\{1,\dots,d\}$, the conditional density of $Y$ given $\mathcal{F}_a$ has the following expression:
$$p_a(x)=E\bigl(\mathbf{1}_{\{Y>x\}}H_{(1,\dots,d)}^{a,b}(Y,1)\,\big|\,\mathcal{F}_a\bigr).$$
Proof. As in Corollary 2 we have
$$E(f(Y)^2Z|\mathcal{F}_a)=\lim_n\int_{\mathbb{R}^d}f(x)^2\,E(f_n(x,Y)Z|\mathcal{F}_a)\,dx
=\lim_n\sum_{\sigma\subset\{1,\dots,d\}}\int_{Q_\sigma(A)}f(x)^2\,E(f_n(x,Y)Z|\mathcal{F}_a)\,dx,\qquad(2.14)$$
where $f_n$ is defined by (2.10). For any $\sigma=\{i_1,\dots,i_j\}\subset\{1,\dots,d\}$ consider the function
$$g_n(x,y)=\int_{-\infty}^{y_{i_1}}\cdots\int_{-\infty}^{y_{i_j}}\int_{y_{i_{j+1}}}^{\infty}\cdots\int_{y_{i_d}}^{\infty}f_n(x,\eta_1,\dots,\eta_d)\,d\eta_1\dots d\eta_d.$$
We have the following relationship between the functions $f_n$ and $g_n$:
$$\partial_\alpha g_n(x,\cdot)=(-1)^{d-|\sigma|}f_n(x,\cdot)\qquad\text{with }\alpha=(1,\dots,d).$$
From (2.14), using a conditional version of Proposition 1, which can be proved easily using (2.12), we obtain
$$E(f(Y)^2Z|\mathcal{F}_a)=\sum_{\sigma\subset\{1,\dots,d\}}(-1)^{d-|\sigma|}\lim_n\int_{Q_\sigma(A)}f(x)^2\,E(\partial_\alpha g_n(x,Y)Z|\mathcal{F}_a)\,dx$$
$$=\sum_{\sigma\subset\{1,\dots,d\}}(-1)^{d-|\sigma|}\lim_n\int_{Q_\sigma(A)}f(x)^2\,E\bigl(g_n(x,Y)H_\alpha^{a,b}(Y,Z)\big|\mathcal{F}_a\bigr)\,dx$$
$$=\sum_{\sigma\subset\{1,\dots,d\}}(-1)^{d-|\sigma|}\int_{Q_\sigma(A)}f(x)^2\,E\bigl(\mathbf{1}_{\{Y^i>x_i,\ i\in\sigma;\ Y^i<x_i,\ i\notin\sigma\}}H_\alpha^{a,b}(Y,Z)\big|\mathcal{F}_a\bigr)\,dx,$$
which completes the proof.
Definition 1. For any function $f\in L^2([0,T]^n)$, any random variable $F\in\mathbb{D}^{k,p}$, and any process $u$ such that $u_t\in\mathbb{D}^{k,p}$ for all $t\in[0,T]$, we define $\|\cdot\|_{a,H^{\otimes n}}$, $\|\cdot\|_{k,p}^{\mathcal{F}_a}$ and $\|\cdot\|_{k,p,H}^{\mathcal{F}_a}$ as
$$\|f\|_{a,H^{\otimes n}}=\left(\int_{[a,T]^n}f(s)^2\,ds\right)^{1/2},$$
$$\|F\|_{k,p}^{\mathcal{F}_a}=\left(E(|F|^p|\mathcal{F}_a)+\sum_{j=1}^kE(\|D^jF\|_{a,H^{\otimes j}}^p|\mathcal{F}_a)\right)^{1/p},$$
$$\|u\|_{k,p,H}^{\mathcal{F}_a}=\left(E(\|u\|_{a,H}^p|\mathcal{F}_a)+\sum_{j=1}^kE(\|D^ju\|_{a,H^{\otimes(j+1)}}^p|\mathcal{F}_a)\right)^{1/p}.$$
Then the following conditional version of inequality (2.11) holds:
$$\|\delta(u\mathbf{1}_{[a,T]})\|_{k,p}^{\mathcal{F}_a}\le c_p\|u\|_{k+1,p,H}^{\mathcal{F}_a}.\qquad(2.15)$$
3. Existence of the quadratic covariation and an extension of Itô's formula
Let $X=\{X_t,\ t\in[0,T]\}$ be a $d$-dimensional continuous and adapted stochastic process. Consider a sequence $D_n$ of partitions of $[0,T]$. The points of a partition $D_n$ will be denoted by $0=t_0<t_1<\cdots<t_{k(n)}<t_{k(n)+1}=T$. We will assume that this sequence satisfies the following conditions:
$$\lim_n\ \sup_{t_i\in D_n}(t_{i+1}-t_i)=0,\qquad L:=\sup_n\ \sup_{t_i\in D_n}\frac{t_{i+1}}{t_i}<\infty.\qquad(3.1)$$
Definition 2. Given two stochastic processes $Y=\{Y_t,\ t\in[0,T]\}$ and $Z=\{Z_t,\ t\in[0,T]\}$ we define their quadratic covariation as the stochastic process $[Y,Z]$ given by the following limit in probability, if it exists:
$$[Y,Z]_t=\lim_n\sum_{t_i\in D_n,\ t_i<t}(Y_{t_{i+1}}-Y_{t_i})(Z_{t_{i+1}}-Z_{t_i}).$$
Let $W^{1,p}(\mathbb{R}^d)$ denote the Sobolev space of functions in $L^p(\mathbb{R}^d)$ whose weak first derivatives belong to $L^p(\mathbb{R}^d)$. We denote by $W^{1,p}_{loc}(\mathbb{R}^d)$ the space of functions that coincide on each compact set with a function in $W^{1,p}(\mathbb{R}^d)$. For any $F\in W^{1,p}_{loc}(\mathbb{R}^d)$ we denote by $f_k=\partial F/\partial x_k$ the $k$th weak partial derivative of $F$.
The next result provides sufficient conditions for the existence of the quadratic covariation $[f(X),X^k]$ for all $k=1,\dots,d$, when $f:\mathbb{R}^d\to\mathbb{R}$ is in $L^p_{loc}(\mathbb{R}^d)$ and $X$ is a $d$-dimensional martingale. Under these conditions we can write a change-of-variable formula for a process of the form $F(X_t)$ with $F$ in $W^{1,p}(\mathbb{R}^d)$, where the last term of the formula is the sum over $k$ of the quadratic covariations $\tfrac12[f_k(X),X^k]$, $f_k$ being the $k$th weak partial derivative of $F$.
Theorem 5. Let $X=\{(X_t^1,\dots,X_t^d),\ t\in[0,T]\}$ be a continuous and adapted stochastic process of the form $X_t^k=\sum_{i=1}^d\int_0^tu_s^{k,i}\,dW_s^i$, where for all $k,i=1,\dots,d$, $u^{k,i}$ is adapted and $\int_0^T(u_s^{k,i})^2\,ds<\infty$ a.s. Suppose that for all $\varepsilon>0$ there exist constants $c_j^\varepsilon$, $j=1,2$, such that for any $n$, for any $k$ and for any $t\in[0,T]$ we have
$$P\left(\int_0^Tf(X_s)^2|u_s^k|^2\,ds>\varepsilon\right)\le c_1^\varepsilon\|f\|_p^2,\qquad(3.2)$$
$$P\left(\Big|\sum_{t_i\in D_n,\ t_i<t}[f(X_{t_{i+1}})-f(X_{t_i})](X_{t_{i+1}}^k-X_{t_i}^k)\Big|>\varepsilon\right)\le c_2^\varepsilon\|f\|_p,\qquad(3.3)$$
for any function $f$ in $C_K^\infty(\mathbb{R}^d)$ (infinitely differentiable with compact support). Then the quadratic covariation $[f(X),X^k]$ exists for any function $f$ in $L^p_{loc}(\mathbb{R}^d)$ and for any $k$. Moreover, for any function $F\in W^{1,p}_{loc}(\mathbb{R}^d)$, the following Itô formula holds:
$$F(X_t)=F(0)+\sum_{k=1}^d\int_0^tf_k(X_s)\,dX_s^k+\frac12\sum_{k=1}^d[f_k(X),X^k]_t\qquad(3.4)$$
for all $t\in[0,T]$, where $f_k$ denotes the $k$th weak partial derivative of $F$.
Proof. Notice that by an easy approximation argument inequalities (3.2) and (3.3) hold for any function $f$ in $L^p(\mathbb{R}^d)$.
Fix $t\in[0,T]$, and set for all $k=1,\dots,d$
$$V_n^k(f)=\sum_{t_i\in D_n,\ t_i<t}[f(X_{t_{i+1}})-f(X_{t_i})](X_{t_{i+1}}^k-X_{t_i}^k).$$
For each $n\ge0$ set $K_n=\{x\in\mathbb{R}^d:\ |x|\le n\}$ and consider the stopping time $T_n=\inf\{t:\ X_t\notin K_n\}$. Let $\varepsilon>0$ and take $n_0$ in such a way that $P(T_{n_0}\le t)\le\varepsilon$. Let $g$ be an infinitely differentiable function with support included in $K_{n_0}$ such that
$$\int_{K_{n_0}}|g(x)-f(x)|^p\,dx\le\varepsilon^p.$$
For all $k=1,\dots,d$ and $n,m\ge n_0$ we have that
$$P(|V_n^k(f)-V_m^k(f)|>\varepsilon)\le P(T_{n_0}\le t)+P\Bigl(T_{n_0}>t,\ |V_n^k(f-g)|>\frac\varepsilon3\Bigr)+P\Bigl(T_{n_0}>t,\ |V_m^k(f-g)|>\frac\varepsilon3\Bigr)+P\Bigl(T_{n_0}>t,\ |V_n^k(g)-V_m^k(g)|>\frac\varepsilon3\Bigr)$$
$$\le\varepsilon+2c_2^{\varepsilon/3}\varepsilon+P\Bigl(|V_n^k(g)-V_m^k(g)|>\frac\varepsilon3\Bigr).$$
We know that $\lim_{n,m}P(|V_n^k(g)-V_m^k(g)|>\varepsilon/3)=0$ for all $k=1,\dots,d$. As a consequence, the quadratic covariation $[f(X),X^k]$ exists for any function $f$ in $L^p_{loc}(\mathbb{R}^d)$, and
$$P(|[f(X),X^k]_t|>\varepsilon)\le c_2^\varepsilon\|f\|_p\qquad(3.5)$$
for all $k=1,\dots,d$ and for any $f$ in $L^p(\mathbb{R}^d)$.
We can approximate $F$ by functions $F_n\in C^2(\mathbb{R}^d)\cap L^p(\mathbb{R}^d)$ in such a way that the partial derivatives satisfy that $\|f_k^n-f_k\|_p$ converges to zero as $n$ tends to infinity. In order to show Itô's formula we can assume, by a localization argument, that the process $X_t$ takes values in a compact set $K\subset\mathbb{R}^d$ and that $F$ and $f_k$ have support in this set. We know that for each $n$ Itô's formula holds, that is,
$$F_n(X_t)=F_n(0)+\sum_{k=1}^d\int_0^tf_k^n(X_s)\,dX_s^k+\frac12\sum_{k=1}^d[f_k^n(X),X^k]_t.\qquad(3.6)$$
By (3.5) we have that $[f_k^n(X),X^k]_t$ converges in probability to $[f_k(X),X^k]_t$ for all $k=1,\dots,d$, as $n$ tends to infinity. On the other hand, we need to prove that $\int_0^tf_k^n(X_s)\,dX_s^k$ converges in probability to $\int_0^tf_k(X_s)\,dX_s^k$. This follows from the inequalities
$$P\left(\Big|\int_0^t(f_k^n-f_k)(X_s)\,dX_s^k\Big|>\varepsilon\right)\le\frac{M}{\varepsilon^2}+P\left(\int_0^t(f_k^n-f_k)^2(X_s)|u_s^k|^2\,ds>M\right)\le\frac{M}{\varepsilon^2}+c_1^M\|f_k^n-f_k\|_p^2.$$
Then taking the limit in (3.6) we obtain (3.4), and this completes the proof of the theorem.
4. Basic estimates for stochastic integrals and the Malliavin matrix
Let $u=(u^{i,j})_{1\le i,j\le d}$ be a matrix of adapted processes $u^{i,j}=\{u_t^{i,j},\ t\in[0,T]\}$ such that $E\int_0^T|u_s|^2\,ds<\infty$. Set $X_t^k=\sum_{i=1}^d\int_0^tu_s^{k,i}\,dW_s^i$. Let us introduce the following hypotheses on the process $u$:
(H1)$_{n,p}$ For each $t\in[0,T]$ we have $u_t\in\mathbb{D}^{n,2}(\mathbb{R}^{d^2})$, and for some $p\ge2$ we have
$$E|u_r|^p+E|D_{t_1}u_r|^p+\cdots+E|D_{t_1,t_2,\dots,t_n}u_r|^p\le K_{n,p}$$
for any $r,t_1,\dots,t_n\in[0,T]$.
(H2) $\sum_{i=1}^d\bigl|\sum_{k=1}^du_t^{k,i}v_k\bigr|^2\ge\varepsilon^2>0$ for some constant $\varepsilon$, for all $t\in[0,T]$ and for all $v\in\mathbb{R}^d$ such that $|v|=1$.
(H3) $|u_t|\le M$ for some constant $M$, for all $t\in[0,T]$.
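For instance (our own illustrative example, not from the paper), the diagonal matrix $u_t^{k,i}=\delta_{ki}\bigl(1+\tfrac12\sin W_t^1\bigr)$ satisfies these hypotheses: it is bounded, so (H3) holds with $M=\tfrac32\sqrt d$; for $|v|=1$ one has $\sum_i|\sum_ku_t^{k,i}v_k|^2=(1+\tfrac12\sin W_t^1)^2\ge\tfrac14$, so (H2) holds with $\varepsilon=\tfrac12$; and its iterated Malliavin derivatives involve only derivatives of the sine evaluated at $W_t^1$ and indicator functions, hence are bounded, so (H1)$_{n,p}$ holds for every $n$ and $p$.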
This section is devoted to obtaining some estimates of the $\|\cdot\|_{n,p}$-norm of the inverse of the Malliavin matrix $\gamma_{X_b}^{a,b}$ (Lemma 10 for $n=0$ and Lemma 11) and of $H_\alpha^{a,b}(X_b,Z)$ (Lemma 12), together with the conditional versions of these results (Lemmas 13 and 14). Lemmas 6--9 are preliminary estimates which are needed in order to prove the above-mentioned results.
For the proof of the following results we will need Burkholder's inequality for Hilbert space valued martingales (see Métivier, 1982, E.2, p. 212). That is, if $\{M_t,\ t\in[0,T]\}$ is a continuous local martingale with values in a Hilbert space $H$, then for any $p>0$ we have
$$E\|M_t\|_H^p\le b_pE([M]_t^{p/2}),\qquad(4.1)$$
where
$$[M]_t=\sum_{i=1}^\infty[\langle M,e_i\rangle_H]_t,$$
$\{e_i,\ i\ge1\}$ being a complete orthonormal system in $H$.
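In the applications below, (4.1) is used for martingales of the form $M_t=\sum_{i=1}^d\int_0^tv_s^i\,dW_s^i$ with $H$-valued integrands $v^i$; in that case (a routine computation, spelled out here for the reader's convenience) $[M]_t=\sum_{i=1}^d\int_0^t\|v_s^i\|_H^2\,ds$, so (4.1) reads
$$E\Bigl\|\sum_{i=1}^d\int_0^tv_s^i\,dW_s^i\Bigr\|_H^p\le b_pE\Bigl(\sum_{i=1}^d\int_0^t\|v_s^i\|_H^2\,ds\Bigr)^{p/2}.$$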
Lemma 6. Assume that $u$ satisfies condition (H1)$_{n,p}$ for some $p\ge2$ and $n\ge1$. Then, we have
$$\sup_{t_1,\dots,t_n\in[0,T]}E|D_{t_1,\dots,t_n}X_s|^p\le c_{n,p}^1,\qquad(4.2)$$
for some constant $c_{n,p}^1$ of the form $c_pK_{n,p}$, where $c_p$ depends on $p$ and $T$.
Proof. Using the properties of the derivative operator we can write, for $t_1,\dots,t_n\le s$,
$$D_{t_1,\dots,t_n}X_s=\sum_{i=1}^nD_{t_1\dots t_{i-1},t_{i+1}\dots t_n}u_{t_i}+\int_{t_1\vee\cdots\vee t_n}^sD_{t_1,\dots,t_n}u_r\,dW_r.$$
As a consequence, applying Burkholder's inequality (4.1) we obtain
$$E|D_{t_1,\dots,t_n}X_s|^p\le2^{p-1}\left(E\Big|\sum_{i=1}^nD_{t_1\dots t_{i-1},t_{i+1}\dots t_n}u_{t_i}\Big|^p+E\Big|\int_{t_1\vee\cdots\vee t_n}^sD_{t_1,\dots,t_n}u_r\,dW_r\Big|^p\right)$$
$$\le2^{p-1}n^p\sup_{t_1,\dots,t_n}E|D_{t_1,\dots,t_{n-1}}u_{t_n}|^p+2^{p-1}b_pE\left(\int_{t_1\vee\cdots\vee t_n}^s|D_{t_1,\dots,t_n}u_r|^2\,dr\right)^{p/2}
\le2^{p-1}K_{n,p}(n^p+b_pT^{p/2}),$$
where $b_p$ is the Burkholder constant. This proves (4.2).
Remark 1. The following inequality can be proved in an analogous way for any $s\ge a$:
$$E\left(\int_{[a,T]^n}|D_{t_1,\dots,t_n}X_s|^p\,dt_1\dots dt_n\,\Big|\,\mathcal{F}_a\right)
\le\beta_{n,p}^1E\left(\int_{[a,T]^n}|D_{t_1,\dots,t_{n-1}}u_{t_n}|^p\,dt_1\dots dt_n\,\Big|\,\mathcal{F}_a\right)
+\beta_{n,p}^2E\left(\int_{[a,T]^{n+1}}|D_{t_1,\dots,t_n}u_r|^p\,dt_1\dots dt_n\,dr\,\Big|\,\mathcal{F}_a\right),$$
where $\beta_{n,p}^1=2^{p-1}n^p$ and $\beta_{n,p}^2=2^{p-1}b_pT^{p/2-1}$.
Lemma 7. Fix $0\le a<b\le T$. We define $\Delta_{a,b}X^k=X_b^k-X_a^k$. Then:
(i) If $u$ satisfies condition (H1)$_{1,p}$ for some $p\ge2$, then
$$\sup_{t\in[0,a]}E|D_t(\Delta_{a,b}X)|^p\le c_1(b-a)^{p/2}\qquad(4.3)$$
for some constant $c_1$ depending on $K_{1,p}$, $p$ and $T$.
(ii) If $u$ satisfies condition (H1)$_{2,p}$ for some $p\ge2$, then
$$\sup_{t\in[0,a]}E\left(\int_0^T|D_sD_t(\Delta_{a,b}X)|^2\,ds\right)^{p/2}\le c_2(b-a)^{p/2}\qquad(4.4)$$
for some constant $c_2$ depending on $K_{2,p}$, $p$ and $T$.
Proof. Let us show (4.3). We know that for $t\le a$
$$D_t(\Delta_{a,b}X)=\int_a^bD_tu_s\,dW_s.\qquad(4.5)$$
Hence, we can write, using Burkholder's inequality (4.1),
$$E|D_t(\Delta_{a,b}X)|^p\le b_pE\left(\int_a^b|D_tu_s|^2\,ds\right)^{p/2}\le b_p(b-a)^{p/2}\sup_{t,s}E|D_tu_s|^p,$$
and (4.3) holds. In the same way, from (4.5) we have that if $t<a$
$$E\left(\int_0^T|D_sD_t(\Delta_{a,b}X)|^2\,ds\right)^{p/2}
=E\left(\int_0^T\Big|D_tu_s\mathbf{1}_{[a,b]}(s)+\int_a^bD_sD_tu_\theta\,dW_\theta\Big|^2\,ds\right)^{p/2}$$
$$\le2^{p-1}\left\{(b-a)^{p/2}\sup_{s\in[0,T]}E|D_su_t|^p+T^{p/2-1}E\int_0^T\Big|\int_a^bD_sD_tu_\theta\,dW_\theta\Big|^p\,ds\right\}$$
$$\le2^{p-1}\left\{K_{2,p}(b-a)^{p/2}+T^{p/2-1}b_pE\int_0^T\left(\int_a^b|D_sD_tu_\theta|^2\,d\theta\right)^{p/2}\,ds\right\}
\le2^{p-1}K_{2,p}(b-a)^{p/2}(1+T^{p/2}b_p),$$
and we obtain (4.4).
Lemma 8. Assume $u$ satisfies condition (H1)$_{n,p}$ for some $n\ge0$ and some $p\ge2$. Then, we have
$$\|\Delta_{a,b}X^k\|_{n,p}\le c_{n,p}^2(b-a)^{1/2},\qquad(4.6)$$
for some constant $c_{n,p}^2$ depending on $K_{n,p}$, $n$, $p$ and $T$.
Proof. By the definition of the $\|\cdot\|_{n,p}$-norm we have
$$\|\Delta_{a,b}X^k\|_{n,p}^p=E|\Delta_{a,b}X^k|^p+\sum_{j=1}^nE\|D^j(\Delta_{a,b}X^k)\|_{H^{\otimes j}}^p.\qquad(4.7)$$
For the first summand, using Burkholder's inequality (4.1) yields
$$E|\Delta_{a,b}X^k|^p=E\Big|\int_a^bu_s^k\,dW_s\Big|^p\le b_pE\left(\int_a^b|u_s^k|^2\,ds\right)^{p/2}\le b_p(b-a)^{p/2}\sup_{0\le s\le T}E|u_s^k|^p.\qquad(4.8)$$
For the other terms, using again Burkholder's inequality, we have that for all $1\le j\le n$
$$E\left\|D^j\Big(\int_a^bu_s^k\,dW_s\Big)\right\|_{H^{\otimes j}}^p
=E\left\|\sum_{i=1}^jD_{r_1,\dots,r_{i-1},r_{i+1},\dots,r_j}u_{r_i}^k\mathbf{1}_{[a,b]}(r_i)+\int_a^bD_{r_1,\dots,r_j}u_\theta^k\,dW_\theta\right\|_{H^{\otimes j}}^p$$
$$\le2^{p-1}j^pE\int_{[0,T]^{j-1}}\left(\int_a^b|D_{r_1,\dots,r_{j-1}}u_{r_j}^k|^2\,dr_j\right)^{p/2}dr_1\dots dr_{j-1}
+2^{p-1}b_pE\left(\int_a^b\|D^ju_\theta^k\|_{H^{\otimes j}\otimes\mathbb{R}^d}^2\,d\theta\right)^{p/2}$$
$$\le2^{p-1}(b-a)^{p/2}T^{(j-1)p/2}\Bigl(j^pK_{n,p}+b_p\sup_\theta E\|D^ju_\theta^k\|_{H^{\otimes j}\otimes\mathbb{R}^d}^p\Bigr)
\le2^{p-1}K_{n,p}T^{(j-1)p/2}(b-a)^{p/2}(j^p+b_pT^{p/2}).\qquad(4.9)$$
Finally, from (4.8) and (4.9) we obtain (4.7).
Lemma 9. Let $x=\{x_s,\ s\in[0,T]\}$ and $y=\{y_s,\ s\in[0,T]\}$ be $d$-dimensional stochastic processes satisfying
$$K_{n,2p}^x:=\sup_{s\in[a,b]}\sum_{i=0}^nE(\|D^ix_s\|_{H^{\otimes i}\otimes\mathbb{R}^d}^{2p})<\infty,\qquad
K_{n,2p}^y:=\sup_{s\in[a,b]}\sum_{i=0}^nE(\|D^iy_s\|_{H^{\otimes i}\otimes\mathbb{R}^d}^{2p})<\infty\qquad(4.10)$$
for some $n\ge0$ and some $p\ge1$. Then, for any $0\le a<b\le T$ we have
$$\left\|\int_a^b\langle x_s,y_s\rangle\,ds\right\|_{n,p}\le c_{n,p}^3(K_{n,2p}^xK_{n,2p}^y)^{1/2p}(b-a)\qquad(4.11)$$
for some constant $c_{n,p}^3$ depending on $n$ and $p$.
Proof. In order to simplify the proof we will suppose $d=1$. On one hand we have that
$$E\Big|\int_a^bx_sy_s\,ds\Big|^p\le\left(E\Big|\int_a^bx_s^2\,ds\Big|^p\right)^{1/2}\left(E\Big|\int_a^by_s^2\,ds\Big|^p\right)^{1/2}
\le(b-a)^p\sup_{s\in[a,b]}(E|x_s|^{2p})^{1/2}\sup_{s\in[a,b]}(E|y_s|^{2p})^{1/2}
\le(K_{n,2p}^xK_{n,2p}^y)^{1/2}(b-a)^p,$$
where $K_{n,2p}^x$ and $K_{n,2p}^y$ are the constants defined by (4.10). On the other hand, for each $1\le j\le n$ we have
$$E\left\|D^j\Big(\int_a^bx_sy_s\,ds\Big)\right\|_{H^{\otimes j}}^p
=E\left\|\int_a^b\sum_{i=0}^j\binom jiD^ix_s\,D^{j-i}y_s\,ds\right\|_{H^{\otimes j}}^p
\le(j+1)^{p-1}\sum_{i=0}^j\binom ji^pE\left\|\int_a^bD^ix_s\,D^{j-i}y_s\,ds\right\|_{H^{\otimes j}}^p$$
$$\le(j+1)^{p-1}(b-a)^p\sum_{i=0}^j\binom ji^p\left(\sup_{s\in[a,b]}E\|D^ix_s\|_{H^{\otimes i}}^{2p}\ \sup_{s\in[a,b]}E\|D^{j-i}y_s\|_{H^{\otimes(j-i)}}^{2p}\right)^{1/2}
\le(j+1)^{p-1}\sum_{i=0}^j\binom ji^p(K_{n,2p}^xK_{n,2p}^y)^{1/2}(b-a)^p,$$
and (4.11) holds.
Lemma 10. Let $(\gamma_{X_b}^{a,b})^{-1}$ be the inverse of the matrix $\gamma_{X_b}^{a,b}$ defined by (2.5). Suppose that $u$ satisfies hypotheses (H1)$_{1,p'}$ for some $p'\ge12$, and (H2). Then, for any $1\le p<(p'-4)/4d$ we have
$$E|(\gamma_{X_b}^{a,b})^{-1}_{ij}|^p\le\left(k_1+k_2E\int_{[0,T]^2}|D_tu_r|^{p'}\,dt\,dr\right)(b-a)^{-p},\qquad(4.12)$$
for some constants $k_1,k_2$ depending on $p$, $p'$, $T$, $d$ and $\varepsilon$.
Proof. We have that
$$|(\gamma_{X_b}^{a,b})^{-1}_{ij}|=|A_{ij}(\det\gamma_{X_b}^{a,b})^{-1}|,$$
where $A_{ij}$ is the adjoint of $(\gamma_{X_b}^{a,b})_{ij}$. Hence,
$$E|(\gamma_{X_b}^{a,b})^{-1}_{ij}|^p\le c_{d,p}\,E[(\det\gamma_{X_b}^{a,b})^{-2p}]^{1/2}\,E[\|DX_b\mathbf{1}_{[a,b]}\|_H^{4p(d-1)}]^{1/2}.\qquad(4.13)$$
For the second factor, using Lemma 6 with $n=1$ and exponent $4p(d-1)$ yields
$$E(\|DX_b\mathbf{1}_{[a,b]}\|_H^{4p(d-1)})=E\left(\int_a^b|D_sX_b|^2\,ds\right)^{2p(d-1)}
\le(b-a)^{2p(d-1)}\sup_sE|D_sX_b|^{4p(d-1)}\le c_{1,4p(d-1)}^1(b-a)^{2p(d-1)}.\qquad(4.14)$$
In order to estimate the first factor we write
$$\det\gamma_{X_b}^{a,b}\ge\Bigl(\inf_{|v|=1}(v^T\gamma_{X_b}^{a,b}v)\Bigr)^d=\Bigl(\inf_{|v|=1}\int_a^b\sum_{k=1}^d\Bigl(\sum_{j=1}^dD_s^{(k)}X_b^j\,v_j\Bigr)^2\,ds\Bigr)^d.$$
Then we have for any $h\in[0,1]$, using (H2),
$$\int_a^b\sum_{k=1}^d\Bigl(\sum_{j=1}^dD_s^{(k)}X_b^jv_j\Bigr)^2\,ds
=\int_a^b\sum_{k=1}^d\Bigl|\sum_{j=1}^dv_j\Bigl(u_s^{j,k}+\sum_{i=1}^d\int_s^bD_s^{(k)}u_r^{j,i}\,dW_r^i\Bigr)\Bigr|^2\,ds$$
$$\ge\frac12\int_{a+(b-a)(1-h)}^b\sum_{k=1}^d\Bigl|\sum_{j=1}^dv_ju_s^{j,k}\Bigr|^2\,ds
-\int_{a+(b-a)(1-h)}^b\sum_{k=1}^d\Bigl|\sum_{i,j=1}^dv_j\int_s^bD_s^{(k)}u_r^{j,i}\,dW_r^i\Bigr|^2\,ds
\ge\frac{\varepsilon^2(b-a)h}{2}-I_h,$$
where
$$I_h=\int_{a+(b-a)(1-h)}^b\Bigl|\int_s^bD_su_r\,dW_r\Bigr|^2\,ds.$$
We choose $h=4/(\varepsilon^2(b-a)y^{1/d})$, where $y\ge c:=4^d/((b-a)^d\varepsilon^{2d})$. Then, we can write for any $q\ge2$
$$E|I_h|^q\le b_q(b-a)^{q-1}h^{q-1}E\int_{a+(b-a)(1-h)}^b\Bigl(\int_s^b|D_su_r|^2\,dr\Bigr)^q\,ds
\le b_q(b-a)^{2q-2}h^{2q-2}E\int_{[0,T]^2}|D_su_r|^{2q}\,dr\,ds.$$
As a consequence,
$$E[(\det\gamma_{X_b}^{a,b})^{-2p}]=\int_0^\infty2py^{2p-1}P\{(\det\gamma_{X_b}^{a,b})^{-1}>y\}\,dy
\le c^{2p}+2p\int_c^\infty y^{2p-1}P\Bigl(\det\gamma_{X_b}^{a,b}<\frac1y\Bigr)\,dy$$
$$\le\Bigl(\frac{4^d}{(b-a)^d\varepsilon^{2d}}\Bigr)^{2p}+2p\int_c^\infty E|I_h|^q\,y^{2p-1+q/d}\,dy
\le(b-a)^{-2dp}4^{2dp}\varepsilon^{-4dp}\Bigl(1+2pb_q4^q\varepsilon^{-2q}(b-a)^{q-2}\frac{d}{q-2dp-2}\,E\int_{[0,T]^2}|D_su_r|^{2q}\,ds\,dr\Bigr)$$
$$\le\Bigl(c_1+c_2E\int_{[0,T]^2}|D_su_r|^{2q}\,ds\,dr\Bigr)(b-a)^{-2dp},\qquad(4.15)$$
where $c_1$ and $c_2$ are constants depending on $q$, $\varepsilon$, $T$, $d$ and $p$, provided $q>2dp+2$. We will take $p'=2q>4(dp+1)\ge12$.
Finally, from (4.13)--(4.15) we get (4.12).
Remark 2. With the additional hypothesis (H3) the following conditional version of the previous result holds:
$$E[((\gamma_{X_b}^{a,b})^{-1}_{ij})^p\,|\,\mathcal{F}_a]\le(b-a)^{-p}\Bigl(a_1+a_2E\Bigl(\int_{[0,T]^2}|D_tu_r|^{p'}\,dt\,dr\,\Big|\,\mathcal{F}_a\Bigr)\Bigr)^{1/2}\Bigl(b_1+b_2E\Bigl(\int_{[0,T]^2}|D_su_r|^{4p(d-1)}\,ds\,dr\,\Big|\,\mathcal{F}_a\Bigr)\Bigr)$$
for some constants $a_1,a_2,b_1,b_2$ depending on $M$, $\varepsilon$, $d$, $T$, $p$ and $p'$.
Proof. The proof is similar to that of Lemma 10. We have that
$$E[((\gamma_{X_b}^{a,b})^{-1}_{ij})^p\,|\,\mathcal{F}_a]\le c_{d,p}\,E[(\det\gamma_{X_b}^{a,b})^{-2p}|\mathcal{F}_a]^{1/2}\,E[\|DX_b\mathbf{1}_{[a,b]}\|_H^{4p(d-1)}|\mathcal{F}_a]^{1/2}.$$
The following conditional version of inequality (4.15) holds:
$$E[(\det\gamma_{X_b}^{a,b})^{-2p}|\mathcal{F}_a]\le(b-a)^{-2dp}\Bigl(k_1+k_2E\Bigl(\int_{[0,T]^2}|D_su_r|^{p'}\,ds\,dr\,\Big|\,\mathcal{F}_a\Bigr)\Bigr).\qquad(4.16)$$
On the other hand, we have for some $q\ge4$
$$E[\|DX_b\mathbf{1}_{[a,b]}\|_H^q\,|\,\mathcal{F}_a]=E\Bigl(\Bigl(\int_a^b|D_sX_b|^2\,ds\Bigr)^{q/2}\Big|\mathcal{F}_a\Bigr)
=E\Bigl(\Bigl(\int_a^b\Bigl|u_s+\int_s^bD_su_\theta\,dW_\theta\Bigr|^2\,ds\Bigr)^{q/2}\Big|\mathcal{F}_a\Bigr)$$
$$\le2^{q-1}\Bigl\{M^q(b-a)^{q/2}+(b-a)^{q/2-1}E\Bigl(\int_a^b\Bigl|\int_s^bD_su_\theta\,dW_\theta\Bigr|^q\,ds\Big|\mathcal{F}_a\Bigr)\Bigr\}$$
$$\le2^{q-1}(b-a)^{q/2}\Bigl\{M^q+b_qT^{q/2-2}E\Bigl(\int_a^b\int_a^b|D_su_\theta|^q\,d\theta\,ds\Big|\mathcal{F}_a\Bigr)\Bigr\}.\qquad(4.17)$$
Hence, from (4.16) and using (4.17) with $q=4p(d-1)$ we obtain
$$E[((\gamma_{X_b}^{a,b})^{-1}_{ij})^p|\mathcal{F}_a]\le(b-a)^{-p}\Bigl(k_1+k_2E\Bigl(\int_{[0,T]^2}|D_su_r|^{p'}\,ds\,dr\Big|\mathcal{F}_a\Bigr)\Bigr)^{1/2}\Bigl(k_1'+k_2'E\Bigl(\int_{[0,T]^2}|D_su_r|^{4p(d-1)}\,ds\,dr\Big|\mathcal{F}_a\Bigr)\Bigr)^{1/2}$$
$$\le(b-a)^{-p}\Bigl(a_1+a_2E\Bigl(\int_{[0,T]^2}|D_su_r|^{p'}\,ds\,dr\Big|\mathcal{F}_a\Bigr)\Bigr)^{1/2}\Bigl(b_1+b_2E\Bigl(\int_{[0,T]^2}|D_su_r|^{4p(d-1)}\,ds\,dr\Big|\mathcal{F}_a\Bigr)\Bigr),$$
where for the last inequality we have used the fact that $\sqrt x\le1+x$.
Lemma 11. Assume $u$ satisfies (H1)$_{n+1,p}$ for all $p\ge2$ and some fixed $n\ge0$, and (H2). Then, for all $p\ge2$,
$$\|(\gamma_{X_b}^{a,b})^{-1}\|_{n,p}\le c_{n,p}^4(b-a)^{-1}$$
for some constant $c_{n,p}^4$ depending on $n$, $p$, $\varepsilon$, $T$ and $K_{n+1,p'}$, where $p'>4dp(n+1)^2+4$.
Proof. For any $0\le k\le n$ we can write
$$E\|D^k((\gamma_{X_b}^{a,b})^{-1})\|_{H^{\otimes k}}^p
\le c\sum_{i_1+\cdots+i_r=k}E\bigl(\|(\gamma_{X_b}^{a,b})^{-1}D^{i_1}\gamma_{X_b}^{a,b}\cdots(\gamma_{X_b}^{a,b})^{-1}D^{i_r}\gamma_{X_b}^{a,b}(\gamma_{X_b}^{a,b})^{-1}\|_{H^{\otimes k}}^p\bigr)$$
$$\le c\sum_{i_1+\cdots+i_r=k}E\bigl(\|D^{i_1}\gamma_{X_b}^{a,b}\|_{H^{\otimes i_1}}^p\cdots\|D^{i_r}\gamma_{X_b}^{a,b}\|_{H^{\otimes i_r}}^p\,|(\gamma_{X_b}^{a,b})^{-1}|^{p(1+r)}\bigr)$$
$$\le c\sum_{i_1+\cdots+i_r=k}E\bigl(\|D^{i_1}\gamma_{X_b}^{a,b}\|_{H^{\otimes i_1}}^{p(r+1)}\bigr)^{1/(r+1)}\cdots E\bigl(\|D^{i_r}\gamma_{X_b}^{a,b}\|_{H^{\otimes i_r}}^{p(1+r)}\bigr)^{1/(r+1)}E\bigl(|(\gamma_{X_b}^{a,b})^{-1}|^{p(r+1)^2}\bigr)^{1/(r+1)}.\qquad(4.18)$$
In order to estimate the first factors we write
$$E\|D^k(\gamma_{X_b}^{a,b})_{ij}\|_{H^{\otimes k}}^p\le\|(\gamma_{X_b}^{a,b})_{ij}\|_{k,p}^p
=\Bigl\|\int_a^b\langle D_sX_b^i,D_sX_b^j\rangle\,ds\Bigr\|_{k,p}^p\le c'(b-a)^p,\qquad(4.19)$$
where the last inequality has been obtained using Lemma 9 with $x=DX_b^i$ and $y=DX_b^j$ ($x$ and $y$ satisfy the required hypotheses due to Lemma 6). Finally, from Lemma 10, (4.18) and (4.19) we obtain the desired result.
Lemma 12. Fix $n,m\ge1$, $p\ge2$ and $0\le a<b\le T$. Suppose that $u$ satisfies hypotheses (H1)$_{n+m+1,p'}$ for all $p'\ge2$ and (H2). Let $Z\in\mathbb{D}^{n+m,2^mp}$. Then, for any multi-index $\alpha\in\{1,\dots,d\}^m$ we have
$$\|H_\alpha^{a,b}(X_b,Z)\|_{n,p}\le c_{n,p}^5(b-a)^{-m/2}\|Z\|_{n+m,2^mp},\qquad(4.20)$$
where $c_{n,p}^5$ is a constant depending on $p$, $T$, $d$ and $\varepsilon$.
Proof. Using the continuity of the operator $\delta$ we have
$$\|H_\alpha^{a,b}(X_b,Z)\|_{n,p}=\|H_{(\alpha_m)}^{a,b}(X_b,H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,Z))\|_{n,p}
=\Bigl\|\sum_{j=1}^d\delta\bigl(H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,Z)(\gamma_{X_b}^{a,b})^{-1}_{\alpha_mj}DX_b^j\mathbf{1}_{[a,b]}\bigr)\Bigr\|_{n,p}$$
$$\le d^{p-1}\|H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,Z)\|_{n+1,2p}\sum_{j=1}^d\|(\gamma_{X_b}^{a,b})^{-1}_{\alpha_mj}\|_{n+1,4p}\|DX_b^j\mathbf{1}_{[a,b]}\|_{n+1,4p}$$
$$\le d^p\|H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,Z)\|_{n+1,2p}\|(\gamma_{X_b}^{a,b})^{-1}\|_{n+1,4p}\|DX_b\mathbf{1}_{[a,b]}\|_{n+1,4p}.\qquad(4.21)$$
Using Lemma 6 it is easy to see that
$$\|DX_b\mathbf{1}_{[a,b]}\|_{n+1,4p}\le c(b-a)^{1/2},\qquad(4.22)$$
and then, by Lemma 11, (4.21) and (4.22) we get
$$\|H_{(\alpha_1,\dots,\alpha_m)}^{a,b}(X_b,Z)\|_{n,p}\le c'(b-a)^{-1/2}\|H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,Z)\|_{n+1,2p}.$$
By an iteration procedure we obtain (4.20).
Now we will deduce the conditional versions of the last two results.
Lemma 13. Fix $n\ge0$, $p\ge2$. Assume $u$ satisfies (H1)$_{n+1,p'}$ for all $p'\ge2$, (H2) and (H3). Let $0\le a<b\le T$. Then there exists a random variable $Z_{n,p}^a$ such that
$$\|(\gamma_{X_b}^{a,b})^{-1}\|_{n,p}^{\mathcal{F}_a}\le(b-a)^{-1}Z_{n,p}^a,\qquad(4.23)$$
where $Z_{n,p}^a$ has the form
$$Z_{n,p}^a=c\sum_{r=1}^n\Bigl(1+\eta_r^1E\Bigl(\int_{[0,T]^2}|D_tu_s|^{p_r'}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)
\Bigl(1+\eta_r^2E\Bigl(\int_{[0,T]^2}|D_tu_s|^{4p(r+1)^2(d-1)}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)$$
$$\times\Bigl(1+\sum_{m=0}^{n+1}\eta_{r,m}E\Bigl(\int_{[a,T]^{m+1}}|D_{s_1,\dots,s_m}u_s|^{p(r+1)}\,ds_1\dots ds_m\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)\qquad(4.24)$$
for some constants $p_r'$ such that $p(r+1)^2<(p_r'-4)/4d$ and constants $\eta_r^1$, $\eta_r^2$, $\eta_{r,m}$ depending on $\varepsilon$, $M$, $p$, $d$, $r$, $p_r'$ and $T$.
Proof. As in the proof of (4.18) in Lemma 11 we can write, for $1\le k\le n$,
$$E(\|D^k((\gamma_{X_b}^{a,b})^{-1})\|_{a,H^{\otimes k}}^p|\mathcal{F}_a)
\le c\sum_{i_1+\cdots+i_r=k}\Bigl\{E(\|D^{i_1}\gamma_{X_b}^{a,b}\|_{a,H^{\otimes i_1}}^{(r+1)p}|\mathcal{F}_a)\cdots E(\|D^{i_r}\gamma_{X_b}^{a,b}\|_{a,H^{\otimes i_r}}^{(r+1)p}|\mathcal{F}_a)\,E(|(\gamma_{X_b}^{a,b})^{-1}|^{p(r+1)^2}|\mathcal{F}_a)\Bigr\}^{1/(r+1)}.\qquad(4.25)$$
We can obtain the following conditional version of inequality (4.19):
$$E(\|D^k(\gamma_{X_b}^{a,b})_{ij}\|_{a,H^{\otimes k}}^q|\mathcal{F}_a)
=E\Bigl(\Bigl\|\int_a^b\sum_{m=0}^k\binom km\langle D^mD_sX_b^i,D^{k-m}D_sX_b^j\rangle\,ds\Bigr\|_{a,H^{\otimes k}}^q\Big|\mathcal{F}_a\Bigr)
\le(k+1)^{q-1}\sum_{m=0}^k\binom km^qE\Bigl(\Bigl\|\int_a^b\langle D^mD_sX_b^i,D^{k-m}D_sX_b^j\rangle\,ds\Bigr\|_{a,H^{\otimes k}}^q\Big|\mathcal{F}_a\Bigr)$$
$$\le c(b-a)^{q-1}(b-a)^{k(q/2-1)}\sum_{m=0}^kE\Bigl(\int_{[a,T]^{m+1}}|D_{s_1,\dots,s_{m+1}}X_b^i|^{2q}\,ds_1\dots ds_{m+1}\Big|\mathcal{F}_a\Bigr)^{1/2}
E\Bigl(\int_{[a,T]^{k-m+1}}|D_{s_1,\dots,s_{k-m+1}}X_b^j|^{2q}\,ds_1\dots ds_{k-m+1}\Big|\mathcal{F}_a\Bigr)^{1/2}$$
$$\le c(b-a)^qT^{(kq/2)-k-1}\sum_{m=0}^{k+1}E\Bigl(\int_{[a,T]^m}|D_{s_1,\dots,s_m}X_b|^{2q}\,ds_1\dots ds_m\Big|\mathcal{F}_a\Bigr).\qquad(4.26)$$
Notice that we need $q\ge4$. From (4.26), taking $q=(r+1)p$ and using Remark 1, we obtain
$$E(\|D^k\gamma_{X_b}^{a,b}\|_{a,H^{\otimes k}}^q|\mathcal{F}_a)\le(b-a)^q\sum_{m=0}^{k+1}\beta_{m,q}E\Bigl(\int_{[a,T]^{m+1}}|D_{s_1,\dots,s_m}u_s|^q\,ds_1\dots ds_m\,ds\Big|\mathcal{F}_a\Bigr),\qquad(4.27)$$
where the constants $\beta_{m,q}$ have the form $\beta_{m,q}=C(\beta_{m-1,q}^1+\beta_{m,q}^2)$, with $C$ depending on $k$, $q$ and $T$, and $\beta_{m,q}^1,\beta_{m,q}^2$ the constants of Remark 1.
Using Remark 2 with exponent $p(r+1)^2$ we have that there exist constants $p_r'$, $a_{1,r}$, $a_{2,r}$, $b_{1,r}$, $b_{2,r}$, such that $p(r+1)^2<(p_r'-4)/4d$ and
$$E(|(\gamma_{X_b}^{a,b})^{-1}|^{p(r+1)^2}|\mathcal{F}_a)\le(b-a)^{-p(r+1)^2}\Bigl(a_{1,r}+a_{2,r}E\Bigl(\int_{[0,T]^2}|D_tu_s|^{p_r'}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)\Bigl(b_{1,r}+b_{2,r}E\Bigl(\int_{[0,T]^2}|D_tu_s|^{q_r}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr),\qquad(4.28)$$
where $q_r=4p(r+1)^2(d-1)$. Hence, from (4.25), (4.27) and (4.28) we obtain, for any $1\le j\le n$,
$$E(\|D^j((\gamma_{X_b}^{a,b})^{-1})\|_{a,H^{\otimes j}}^p|\mathcal{F}_a)
\le c(b-a)^{-p}\sum_{r=1}^j\Bigl(1+a_{1,r}+a_{2,r}E\Bigl(\int_{[0,T]^2}|D_tu_s|^{p_r'}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)^{1/(r+1)}
\Bigl(1+b_{1,r}+b_{2,r}E\Bigl(\int_{[0,T]^2}|D_tu_s|^{4p(r+1)^2(d-1)}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)^{1/(r+1)}$$
$$\times\Bigl(\sum_{m=0}^{j+1}\eta_{r,m}E\Bigl(\int_{[a,T]^{m+1}}|D_{s_1,\dots,s_m}u_s|^{p(r+1)}\,ds_1\dots ds_m\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)^{r/(r+1)}$$
$$\le c(b-a)^{-p}\sum_{r=1}^n\Bigl(1+a_{1,r}+a_{2,r}E\Bigl(\int_{[0,T]^2}|D_tu_s|^{p_r'}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)
\Bigl(1+b_{1,r}+b_{2,r}E\Bigl(\int_{[0,T]^2}|D_tu_s|^{4p(r+1)^2(d-1)}\,dt\,ds\Big|\mathcal{F}_a\Bigr)\Bigr)
\Bigl(1+\sum_{m=0}^{n+1}\eta_{r,m}E\Bigl(\int_{[a,T]^{m+1}}|D_{s_1,\dots,s_m}u_s|^{p(r+1)}\,ds_1\dots ds_m\,ds\Big|\mathcal{F}_a\Bigr)\Bigr),\qquad(4.29)$$
where $\eta_{r,m}=\beta_{m,p(r+1)}$ and for the last inequality we have used that $x^\lambda\le1+x$ for all $\lambda<1$ and $x\ge0$. From (4.29) and Remark 2 we obtain (4.23).
Lemma 14. Fix $n\ge1$, $p\ge2$. Suppose that $u$ satisfies hypotheses (H2) and (H1)$_{n+m+1,p'}$ for all $p'\ge2$. Then, for any multi-index $\alpha\in\{1,\dots,d\}^m$ we have
$$\|H_\alpha^{a,b}(X_b,1)\|_{n,p}^{\mathcal{F}_a}\le c(b-a)^{-m/2}Y_{n,p}^a,\qquad(4.30)$$
where
$$Y_{n,p}^a=\prod_{i=1}^mZ_{n,p}^iV_{n,p}^i\qquad(4.31)$$
with $Z_{n,p}^i=Z_{n+i,2^{i+1}p}^a$, which is defined by (4.24), and $V_{n,p}^i$ defined as
$$V_{n,p}^i=\kappa_0^i+\sum_{j=1}^{n+i+1}\kappa_j^iE\Bigl(1+\int_{[a,b]^{j+1}}|D_{t_1\dots t_j}u_r|^{2^{i+1}p}\,dt_1\dots dt_j\,dr\Big|\mathcal{F}_a\Bigr)$$
for some constants $\kappa_j^i$ depending on $M$, $d$, $T$, $n$ and $p$.
Proof. In the same way as in Lemma 12, and using also Lemma 13, we can obtain the following inequality:
$$\|H_\alpha^{a,b}(X_b,1)\|_{n,p}^{\mathcal{F}_a}\le d^p\|H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,1)\|_{n+1,2p}^{\mathcal{F}_a}\,\|(\gamma_{X_b}^{a,b})^{-1}\|_{n+1,4p}^{\mathcal{F}_a}\,\|DX_b\mathbf{1}_{[a,b]}\|_{n+1,4p,H}^{\mathcal{F}_a}$$
$$\le d^p(b-a)^{-1}Z_{n+1,4p}^a\,\|H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,1)\|_{n+1,2p}^{\mathcal{F}_a}\,\|DX_b\mathbf{1}_{[a,b]}\|_{n+1,4p,H}^{\mathcal{F}_a},\qquad(4.32)$$
where $Z_{n+1,4p}^a$ is the random variable defined in Lemma 13. On the other hand, we have
$$\|DX_b\mathbf{1}_{[a,b]}\|_{n+1,4p,H}^{\mathcal{F}_a}=\Bigl(E\Bigl(\Bigl(\int_a^b|D_sX_b|^2\,ds\Bigr)^{2p}\Big|\mathcal{F}_a\Bigr)+\sum_{j=2}^{n+2}E(\|D^jX_b\|_{a,H^{\otimes j}}^{4p}|\mathcal{F}_a)\Bigr)^{1/4p}.$$
For the first term, using inequality (4.17) with the exponent $4p$ yields
$$E\Bigl(\Bigl(\int_a^b|D_sX_b|^2\,ds\Bigr)^{2p}\Big|\mathcal{F}_a\Bigr)\le2^{4p-1}(b-a)^{2p}\Bigl\{M^{4p}+b_{4p}T^{2p-2}E\Bigl(\int_a^b\int_a^b|D_su_\theta|^{4p}\,d\theta\,ds\Big|\mathcal{F}_a\Bigr)\Bigr\}.\qquad(4.33)$$
For the other terms we make use of Remark 1. Then, for all $2\le j\le n+2$ we get
$$E(\|D^jX_b\|_{a,H^{\otimes j}}^{4p}|\mathcal{F}_a)\le(b-a)^{j(2p-1)}E\Bigl(\int_{[a,b]^j}|D_{s_1,\dots,s_j}X_b|^{4p}\,ds_1\dots ds_j\Big|\mathcal{F}_a\Bigr)$$
$$\le(b-a)^{2p}T^{j(2p-1)-2p}\Bigl\{\beta_{j,4p}^1E\Bigl(\int_{[a,T]^j}|D_{t_1,\dots,t_{j-1}}u_{t_j}|^{4p}\,dt_1\dots dt_j\Big|\mathcal{F}_a\Bigr)+\beta_{j,4p}^2E\Bigl(\int_{[a,T]^{j+1}}|D_{t_1,\dots,t_j}u_r|^{4p}\,dt_1\dots dt_j\,dr\Big|\mathcal{F}_a\Bigr)\Bigr\}.\qquad(4.34)$$
Hence, from (4.33) and (4.34) we obtain
$$\|DX_b\mathbf{1}_{[a,b]}\|_{n+1,4p,H}^{\mathcal{F}_a}\le(b-a)^{1/2}V_{n,p}^1,$$
where
$$V_{n,p}^1=\kappa_0^1+\sum_{j=1}^{n+2}\kappa_j^1E\Bigl(1+\int_{[a,b]^{j+1}}|D_{t_1\dots t_j}u_r|^{4p}\,dt_1\dots dt_j\,dr\Big|\mathcal{F}_a\Bigr)\qquad(4.35)$$
for some constants $\kappa_j^1$ depending on $T$, $j$, $M$ and $p$. Then, from (4.32) we have
$$\|H_\alpha^{a,b}(X_b,1)\|_{n,p}^{\mathcal{F}_a}\le d^p(b-a)^{-1/2}Z_{n+1,4p}^aV_{n,p}^1\,\|H_{(\alpha_1,\dots,\alpha_{m-1})}^{a,b}(X_b,1)\|_{n+1,2p}^{\mathcal{F}_a}.$$
Applying the last inequality recursively we obtain (4.30).
Remark 3. Notice that if $u$ satisfies (H1)$_{n+m+d+1,p'}$ for all $p'\ge1$, then by the properties of the derivative operator we have that $Y_{k,p}^a\in\mathbb{D}^{d,p'}$ for all $p'\ge2$.
5. Existence of quadratic covariation and Itô's formula for Brownian martingales
Let $u=(u^{i,j})_{1\le i,j\le d}$ be a matrix of adapted processes $u^{i,j}=\{u_t^{i,j},\ t\in[0,T]\}$ such that $E\int_0^T|u_s|^2\,ds<\infty$. Set $X_t^k=\sum_{i=1}^d\int_0^tu_s^{k,i}\,dW_s^i$.
We will assume henceforth that $u$ satisfies hypothesis (H1)$_{n,p}$ for all $p\ge2$ and all $0\le n\le2d+1$; we will call that hypothesis (H1). We will also suppose henceforth that $u$ satisfies (H2) and (H3).
Consider a partition $\pi=\{0=t_0<t_1<\cdots<t_{n+1}=t\}$ of the interval $[0,t]$, for some fixed $t\in[0,T]$, satisfying
$$L:=\sup_{0\le i\le n}\frac{t_{i+1}}{t_i}<\infty.\qquad(5.1)$$
Set $\Delta_iX^k=X_{t_{i+1}}^k-X_{t_i}^k$ for $0\le i\le n$. The main results of this section are the following estimates, which, as we have seen before, imply the existence of the quadratic covariation and the Itô formula for the process $X$.
We will denote by $c$ and $c_p$ generic constants which may change from line to line throughout this section.
Lemma 15. There exists a constant $c$ such that for any function $f\in L^p(\mathbb{R}^d)$ for some $p>2$ and any $G\in\mathbb{D}^{d,2^{d+1}}$ we have
$$E(f(X_t)^2G)\le ct^{-d/p}\|f\|_p^2\|G\|_{d,2^{d+1}}.\qquad(5.2)$$
Proof. By Corollary 2 with $a=0$ and $b=t$ we have
$$E(f(X_t)^2G)=\int_{\mathbb{R}^d}f(x)^2E\bigl(H_{(1,\dots,d)}^{0,t}(X_t,G)\mathbf{1}_{\{X_t>x\}}\bigr)\,dx
\le\int_{\mathbb{R}^d}f(x)^2\bigl(E(H_{(1,\dots,d)}^{0,t}(X_t,G))^2\bigr)^{1/2}\bigl(E\mathbf{1}_{\{X_t>x\}}\bigr)^{1/2}\,dx.$$
Applying Lemma 12 with $a=0$, $b=t$, $n=0$ and $p=2$ yields
$$E\bigl(H_{(1,\dots,d)}^{0,t}(X_t,G)^2\bigr)^{1/2}\le ct^{-d/2}\|G\|_{d,2^{d+1}}.\qquad(5.3)$$
On the other hand, using the exponential inequality for martingales and Hölder's inequality we have that
$$E(\mathbf{1}_{\{X_t>x\}})\le\prod_{k=1}^dP(X_t^k>x_k)^{1/d}\le\prod_{k=1}^de^{-(x_k)^2/2tM^2d}=e^{-\|x\|^2/2tM^2d},\qquad(5.4)$$
where $M$ is the constant of hypothesis (H3). Then, from (5.3) and (5.4) we obtain
$$E(f(X_t)^2G)\le ct^{-d/2}\|G\|_{d,2^{d+1}}\int_{\mathbb{R}^d}f(x)^2e^{-\|x\|^2/2tM^2d}\,dx
\le ct^{-d/2}\|G\|_{d,2^{d+1}}\|f\|_p^2\left(\int_{\mathbb{R}^d}e^{-\|x\|^2q/2tM^2d}\,dx\right)^{1/q}$$
$$\le c\Bigl(\frac{2M^2d}{q}\Bigr)^{d/2q}t^{-d/2+d/2q}\|G\|_{d,2^{d+1}}\|f\|_p^2,\qquad(5.5)$$
where $q$ is such that $(2/p)+(1/q)=1$, and as a consequence (5.2) holds.
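For the reader's convenience (a routine computation, not spelled out in the original), the Gaussian integral appearing in (5.5) can be evaluated explicitly, which is where the exponent $-d/p$ in (5.2) comes from:
$$\Bigl(\int_{\mathbb{R}^d}e^{-\|x\|^2q/2tM^2d}\,dx\Bigr)^{1/q}=\Bigl(\frac{2\pi tM^2d}{q}\Bigr)^{d/2q},\qquad\text{so}\qquad t^{-d/2}\cdot t^{d/2q}=t^{-\frac d2\bigl(1-\frac1q\bigr)}=t^{-d/p},$$
using $(2/p)+(1/q)=1$.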
Corollary 16. There exists a constant $c$ such that for any function $f\in L^p(\mathbb{R}^d)$ with $p>d$ we have
$$E\int_0^Tf(X_s)^2|u_s^k|^2\,ds\le c\|f\|_p^2.\qquad(5.6)$$
Proof. Applying Lemma 15 with $G=|u_s^k|^2$ yields
$$E\int_0^Tf(X_s)^2|u_s^k|^2\,ds\le c\|f\|_p^2\int_0^Ts^{-d/p}\||u_s^k|^2\|_{d,2^{d+1}}\,ds\le c\|f\|_p^2\sup_s\||u_s^k|^2\|_{d,2^{d+1}}\int_0^Ts^{-d/p}\,ds.$$
It only remains to prove that $\sup_s\||u_s^k|^2\|_{d,2^{d+1}}<\infty$. For any $1\le j\le d$ we have
$$E\|D^j(|u_s^k|^2)\|_{H^{\otimes j}}^p=E\Bigl\|\sum_{i=1}^d\sum_{m=0}^j\binom jmD^mu_s^{k,i}\,D^{j-m}u_s^{k,i}\Bigr\|_{H^{\otimes j}}^p
\le c\sum_{i=1}^d\sum_{m=0}^jE(\|D^mu_s^{k,i}\|_{H^{\otimes m}}^{2p})^{1/2}E(\|D^{j-m}u_s^{k,i}\|_{H^{\otimes(j-m)}}^{2p})^{1/2}\le c'K_{d,2p},$$
and that completes the proof.
Proposition 17. There exists a constant $c$ such that for any function $f\in L^p(\mathbb{R}^d)$ with $p>d$ we have
$$E\Bigl|\sum_{i=0}^nf(X_{t_i})\Delta_iX^k\Bigr|^2\le c\|f\|_p^2.\qquad(5.7)$$
Proof. By the isometry of the Itô stochastic integral we have
$$E\Bigl|\sum_{i=0}^nf(X_{t_i})\Delta_iX^k\Bigr|^2=E\Bigl(\sum_{i=0}^nf(X_{t_i})^2\int_{t_i}^{t_{i+1}}|u_s^k|^2\,ds\Bigr).$$
Then using Lemma 15 with $G=\int_{t_i}^{t_{i+1}}|u_s^k|^2\,ds$ yields
$$E\Bigl(\sum_{i=0}^nf(X_{t_i})^2\int_{t_i}^{t_{i+1}}|u_s^k|^2\,ds\Bigr)\le c\|f\|_p^2\sum_{i=0}^nt_i^{-d/p}\Bigl\|\int_{t_i}^{t_{i+1}}|u_s^k|^2\,ds\Bigr\|_{d,2^{d+1}}.\qquad(5.8)$$
In order to estimate the last factor we make use of Lemma 9 with $x=y=u^k$, $a=t_i$ and $b=t_{i+1}$, and we get
$$\Bigl\|\int_{t_i}^{t_{i+1}}|u_s^k|^2\,ds\Bigr\|_{d,2^{d+1}}\le c'(t_{i+1}-t_i).\qquad(5.9)$$
Finally, from (5.8) and (5.9) we obtain
$$E\Bigl|\sum_{i=0}^nf(X_{t_i})\Delta_iX^k\Bigr|^2\le cc'\|f\|_p^2\sum_{i=0}^nt_i^{-d/p}(t_{i+1}-t_i)\le cc'L^{d/p}\|f\|_p^2\int_0^ts^{-d/p}\,ds\le c''\|f\|_p^2,$$
where $L$ is the constant appearing in condition (5.1).
Proposition 18. There exists a constant $c$ such that for any function $f\in C_K^\infty(\mathbb{R}^d)$ we have
$$E\Bigl|\sum_{i=0}^nf(X_{t_{i+1}})\Delta_iX^k\Bigr|^2\le c\|f\|_p^2.\qquad(5.10)$$
Proof. In order to prove the proposition we will establish the following inequalities:
$$S_1^k:=E\sum_{i=0}^nf(X_{t_{i+1}})^2(\Delta_iX^k)^2\le c_1\|f\|_p^2,\qquad(5.11)$$
$$S_2^k:=E\sum_{i<j}f(X_{t_{i+1}})f(X_{t_{j+1}})\Delta_iX^k\Delta_jX^k\le c_2\|f\|_p^2.\qquad(5.12)$$
Proof of (5.11). Using Lemma 15 with $G=(\Delta_iX^k)^2$ and $t=t_{i+1}$ we have
$$S_1^k\le c\|f\|_p^2\sum_{i=0}^nt_{i+1}^{-d/p}\|(\Delta_iX^k)^2\|_{d,2^{d+1}}.\qquad(5.13)$$
Using Hölder's inequality for the $\|\cdot\|_{k,p}$-norms and Lemma 8 with $p=2^{d+2}$ and $n=d$ we obtain
$$\|(\Delta_iX^k)^2\|_{d,2^{d+1}}\le\|\Delta_iX^k\|_{d,2^{d+2}}^2\le c(t_{i+1}-t_i),$$
and hence, from (5.13) we get
$$S_1^k\le c\|f\|_p^2\sum_{i=0}^n(t_{i+1}-t_i)t_{i+1}^{-d/p}\le c\|f\|_p^2\int_0^ts^{-d/p}\,ds\le c'\|f\|_p^2.$$
Proof of (5.12). Our objective is to transform the martingale increments $\Delta_iX^k$ and $\Delta_jX^k$ into terms which involve only Lebesgue integrals. More precisely, if $S_{ij}^k:=E(f(X_{t_{i+1}})f(X_{t_{j+1}})\Delta_iX^k\Delta_jX^k)$, we derive an equality of the form $S_{ij}^k=E(f(X_{t_{i+1}})f(X_{t_{j+1}})C_{ij}^k)$, where
$$\|C_{ij}^k\|_2\le c\,\frac{(t_{i+1}-t_i)(t_{j+1}-t_j)}{\sqrt{t_{i+1}(t_{j+1}-t_{i+1})}}.\qquad(5.14)$$
Using the duality relationship between the derivative operator $D$ and the Itô stochastic integral we can write, for $i<j$,
$$S_{ij}^k=E(f(X_{t_{i+1}})f(X_{t_{j+1}})\Delta_iX^k\Delta_jX^k)
=E\Bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})\Delta_iX^k\int_{t_j}^{t_{j+1}}u_t^k\,dW_t\Bigr)$$
$$=E\Bigl(\int_{t_j}^{t_{j+1}}\sum_{l=1}^du_t^{k,l}D_t^{(l)}\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})\Delta_iX^k\bigr)\,dt\Bigr)
=\sum_{m=1}^dE\bigl(f(X_{t_{i+1}})\Delta_iX^k(\partial_mf)(X_{t_{j+1}})(\nabla_j^{u^k}X_{t_{j+1}}^m)\bigr),$$
where for any random variable $F$ we write
$$\nabla_j^{u^k}F=\sum_{l=1}^d\int_{t_j}^{t_{j+1}}u_t^{k,l}D_t^{(l)}F\,dt.$$
We now apply Proposition 1 to $Y=X_{t_{j+1}}$, to $Z=f(X_{t_{i+1}})\Delta_iX^k\nabla_j^{u^k}X_{t_{j+1}}^m$ and to the interval $[a,b]=[t_{i+1},t_{j+1}]$, in order to get rid of the partial derivatives of $f$. Of course, new derivatives will appear from the Skorohod integral $H_{(m)}^{t_{i+1},t_{j+1}}(X_{t_{j+1}},f(X_{t_{i+1}})\Delta_iX^k\nabla_j^{u^k}X_{t_{j+1}}^m)$, and a further analysis will be necessary. Then, Proposition 1 yields
$$E\bigl(f(X_{t_{i+1}})\Delta_iX^k(\partial_mf)(X_{t_{j+1}})(\nabla_j^{u^k}X_{t_{j+1}}^m)\bigr)
=E\bigl(f(X_{t_{j+1}})H_{(m)}^{t_{i+1},t_{j+1}}(X_{t_{j+1}},f(X_{t_{i+1}})\Delta_iX^k(\nabla_j^{u^k}X_{t_{j+1}}^m))\bigr)
=E\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})\Delta_iX^kB_{ij}^{k,m}\bigr),$$
where
$$B_{ij}^{k,m}=H_{(m)}^{t_{i+1},t_{j+1}}(X_{t_{j+1}},\nabla_j^{u^k}X_{t_{j+1}}^m).$$
Applying again the duality relationship to the increment $\Delta_iX^k$ we obtain
$$E\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})\Delta_iX^kB_{ij}^{k,m}\bigr)
=E\Bigl(\int_{t_i}^{t_{i+1}}\sum_{l=1}^du_t^{k,l}D_t^{(l)}\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})B_{ij}^{k,m}\bigr)\,dt\Bigr)$$
$$=\sum_{n=1}^dE\bigl((\partial_nf)(X_{t_{i+1}})f(X_{t_{j+1}})B_{ij}^{k,m}(\nabla_i^{u^k}X_{t_{i+1}}^n)\bigr)
+\sum_{n=1}^dE\bigl(f(X_{t_{i+1}})(\partial_nf)(X_{t_{j+1}})B_{ij}^{k,m}(\nabla_i^{u^k}X_{t_{j+1}}^n)\bigr)
+E\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})(\nabla_i^{u^k}B_{ij}^{k,m})\bigr).\qquad(5.15)$$
Notice that we still have the derivative of the function $f$ appearing twice, and it must be eliminated. In order to do this, we write
$$D_t\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})\bigr)=f(X_{t_{j+1}})\sum_{n=1}^d(\partial_nf)(X_{t_{i+1}})D_tX_{t_{i+1}}^n
+f(X_{t_{i+1}})\sum_{n=1}^d(\partial_nf)(X_{t_{j+1}})D_tX_{t_{j+1}}^n.\qquad(5.16)$$
Multiplying both members of (5.16) by $D_tX_{t_{j+1}}^m$ and integrating over the interval $[t_{i+1},t_{j+1}]$ yields
$$\int_{t_{i+1}}^{t_{j+1}}\langle D_tX_{t_{j+1}}^m,D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt
=f(X_{t_{i+1}})\sum_{n=1}^d(\partial_nf)(X_{t_{j+1}})(\gamma_{X_{t_{j+1}}}^{t_{i+1},t_{j+1}})_{mn},$$
and as a consequence
$$f(X_{t_{i+1}})(\nabla f)(X_{t_{j+1}})=(\gamma_{X_{t_{j+1}}}^{t_{i+1},t_{j+1}})^{-1}\int_{t_{i+1}}^{t_{j+1}}\langle D_tX_{t_{j+1}},D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt,\qquad(5.17)$$
where $\nabla f=(\partial_1f,\dots,\partial_df)'$.
Multiplying both members of (5.16) by $D_tX_{t_{i+1}}^m$ and integrating over the interval $[0,t_{i+1}]$ yields
$$\int_0^{t_{i+1}}\langle D_tX_{t_{i+1}}^m,D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt
=f(X_{t_{j+1}})\sum_{n=1}^d(\partial_nf)(X_{t_{i+1}})(\gamma_{X_{t_{i+1}}}^{0,t_{i+1}})_{mn}
+f(X_{t_{i+1}})\sum_{n=1}^d(\partial_nf)(X_{t_{j+1}})\int_0^{t_{i+1}}\langle D_tX_{t_{i+1}}^m,D_tX_{t_{j+1}}^n\rangle\,dt,$$
and as a consequence
$$(\nabla f)(X_{t_{i+1}})f(X_{t_{j+1}})=(\gamma_{X_{t_{i+1}}}^{0,t_{i+1}})^{-1}\Bigl(\int_0^{t_{i+1}}\langle D_tX_{t_{i+1}},D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt-f(X_{t_{i+1}})\Phi_{ij}(\nabla f)(X_{t_{j+1}})\Bigr),\qquad(5.18)$$
where the matrix $\Phi_{ij}$ is defined by
$$(\Phi_{ij})_{mn}=\int_0^{t_{i+1}}\langle D_tX_{t_{i+1}}^m,D_tX_{t_{j+1}}^n\rangle\,dt.$$
Substituting (5.17) into (5.18) we get
$$(\nabla f)(X_{t_{i+1}})f(X_{t_{j+1}})=(\gamma_{X_{t_{i+1}}}^{0,t_{i+1}})^{-1}\Bigl(\int_0^{t_{i+1}}\langle D_tX_{t_{i+1}},D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt
-\Phi_{ij}(\gamma_{X_{t_{j+1}}}^{t_{i+1},t_{j+1}})^{-1}\int_{t_{i+1}}^{t_{j+1}}\langle D_tX_{t_{j+1}},D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt\Bigr).\qquad(5.19)$$
From (5.15) we have
$$S_{ij}^k=\sum_{m=1}^d\Bigl\{E\bigl((\nabla f)'(X_{t_{i+1}})(\nabla_i^{u^k}X_{t_{i+1}})f(X_{t_{j+1}})B_{ij}^{k,m}\bigr)
+E\bigl((\nabla f)'(X_{t_{j+1}})(\nabla_i^{u^k}X_{t_{j+1}})f(X_{t_{i+1}})B_{ij}^{k,m}\bigr)
+E\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})(\nabla_i^{u^k}B_{ij}^{k,m})\bigr)\Bigr\},\qquad(5.20)$$
where $\nabla_i^{u^k}X_{t_{j+1}}=(\nabla_i^{u^k}X_{t_{j+1}}^1,\dots,\nabla_i^{u^k}X_{t_{j+1}}^d)$ and $\nabla_i^{u^k}X_{t_{i+1}}=(\nabla_i^{u^k}X_{t_{i+1}}^1,\dots,\nabla_i^{u^k}X_{t_{i+1}}^d)$. Now substituting (5.19) and (5.17) into (5.20) we obtain
$$S_{ij}^k=\sum_{m=1}^dE\Bigl\{\Bigl(\int_{t_{i+1}}^{t_{j+1}}\langle D_tX_{t_{j+1}},D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt\Bigr)'(\gamma_{X_{t_{j+1}}}^{t_{i+1},t_{j+1}})^{-1}G_{ij}^kB_{ij}^{k,m}\Bigr\}$$
$$+\sum_{m=1}^dE\Bigl\{\Bigl(\int_0^{t_{i+1}}\langle D_tX_{t_{i+1}},D_t(f(X_{t_{i+1}})f(X_{t_{j+1}}))\rangle\,dt\Bigr)'(\gamma_{X_{t_{i+1}}}^{0,t_{i+1}})^{-1}(\nabla_i^{u^k}X_{t_{i+1}})B_{ij}^{k,m}\Bigr\}
+\sum_{m=1}^dE\bigl(f(X_{t_{i+1}})f(X_{t_{j+1}})(\nabla_i^{u^k}B_{ij}^{k,m})\bigr),$$
where
$$G_{ij}^k=\nabla_i^{u^k}X_{t_{j+1}}-\Phi_{ij}'(\gamma_{X_{t_{i+1}}}^{0,t_{i+1}})^{-1}\nabla_i^{u^k}X_{t_{i+1}}
=\nabla_i^{u^k}(X_{t_{j+1}}-X_{t_{i+1}})-\Psi_{ij}(\gamma_{X_{t_{i+1}}}^{0,t_{i+1}})^{-1}\nabla_i^{u^k}X_{t_{i+1}}\qquad(5.21)$$
and $\Psi_{ij}$ is the matrix defined as
$$(\Psi_{ij})_{mn}=\int_0^{t_{i+1}}\langle D_t(X_{t_{j+1}}^m-X_{t_{i+1}}^m),D_tX_{t_{i+1}}^n\rangle\,dt.\qquad(5.22)$$
Applying the duality relationship we obtain
$$S_{ij}^k=E(f(X_{t_{i+1}})f(X_{t_{j+1}})C_{ij}^k),$$
where
$$C_{ij}^k=\sum_{m,n=1}^d\Bigl\{H_{(n)}^{t_{i+1},t_{j+1}}(X_{t_{j+1}},G_{ij}^{k,n}B_{ij}^{k,m})+H_{(n)}^{0,t_{i+1}}(X_{t_{i+1}},\nabla_i^{u^k}X_{t_{i+1}}^nB_{ij}^{k,m})+\nabla_i^{u^k}B_{ij}^{k,m}\Bigr\}.\qquad(5.23)$$
Let us prove that the terms $C_{ij}^k$ satisfy condition (5.14). Applying Lemma 12 with $n=0$, $p=2$ and $m=1$ yields
$$E|C_{ij}^k|^2\le c_1\sum_{m,n=1}^d\Bigl\{(t_{j+1}-t_{i+1})^{-1}\|B_{ij}^{k,m}\|_{1,8}^2\|G_{ij}^{k,n}\|_{1,8}^2
+t_{i+1}^{-1}\|B_{ij}^{k,m}\|_{1,8}^2\|\nabla_i^{u^k}X_{t_{i+1}}^n\|_{1,8}^2+E|\nabla_i^{u^k}B_{ij}^{k,m}|^2\Bigr\}.\qquad(5.24)$$
Then, we will make use of
www.elsevier.com/locate/spa
Generalization of Itˆo’s formula for smooth nondegenerate
martingales
S. Moret, D. Nualart ∗;1
Facultat de Matematiques, Universitat de Barcelona, Gran Via 585, 08007, Barcelona, Spain
Received 12 May 1999; received in revised form 4 May 2000; accepted 22 June 2000
Abstract
In this paper we prove the existence of the quadratic covariation [(@F=@xk )(X ); X k ] for all
16k6d, where F belongs locally to the Sobolev space W1; p (Rd ) for some p ¿ d and X is a
d-dimensional smooth nondegenerate martingale adapted to a d-dimensional Brownian motion.
This result is based on some moment estimates for Riemann sums which are established by
means of the techniques of the Malliavin calculus. As a consequence we obtain an extension of
Itˆo’s formula where the complementary term is one-half the sum of the quadratic covariations
c 2001 Elsevier Science B.V. All rights reserved.
above.
MSC: 60H05; 60H07
Keywords: Itˆo’s formula; Malliavin calculus; Quadratic covariation
1. Introduction
Let W = {Wt , t ∈ [0; T ]} be a d-dimensional Brownian motion, with d ¿ 1. Consider a d-dimensional square integrable martingale X = {Xt ; Rt ∈ [0; T ]}. It is well
Pd
t
known that X has a representation of the form Xtk = i=1 0 usk; i dWsi ; where for
all
1; : : : ; d; uk; i are continuous and adapted stochastic processes satisfying
R T k; ik;=
i 2
E|us | ds ¡ ∞.
0
Let F be a function which belongs locally to the Sobolev space W1; p (Rd ) for some
p ¿ d. The purpose of this paper is to prove the existence of the quadratic covariation
of the processes X k and (@F=@xk )(X ) for all k = 1; : : : ; d; dened as the following limit
in probability:
X
@F
@F
@F
k
k
k
(X ); X
(Xti+1 − Xti )
(Xt ) −
(Xt ) ;
(1.1)
= lim
n
@xk
@xk i+1
@xk i
t
t ∈D ; t ¡t
i
n i
∗ Corresponding author. Fax: +343-4021601.
E-mail address: [email protected] (D. Nualart).
1 Supported by the DGYCIT grant no. PB96-0087.
c 2001 Elsevier Science B.V. All rights reserved.
0304-4149/01/$ - see front matter
PII: S 0 3 0 4 - 4 1 4 9 ( 0 0 ) 0 0 0 5 8 - 2
116
S. Moret, D. Nualart / Stochastic Processes and their Applications 91 (2001) 115–149
where Dn is a sequence of partitions of [0; T ] such that
lim sup (ti+1 − ti ) = 0;
n
ti ∈Dn
sup sup
n
ti ∈Dn
ti+1
¡ ∞:
ti
The existence of this limit will allow us to prove the following extension of the Itˆo’s
formula:
d Z t
d
X
1 X @F
@F
(1.2)
(Xs ) dXsk +
(X ); X k :
F(Xt ) = F(0) +
2
@xk
0 @xk
t
k=1
k=1
Notice that for a smooth function F on Rd we have
Z t
d
X
@F
(X ); X k =
F(Xs ) ds:
@xk
0
t
k=1
The result (existence of the quadratic covariation and Itˆo’s formula) in the onedimensional case holds for any absolutely continuous function F such that its derivative
f belongs to L2loc (R) (Moret and Nualart, 2000), assuming suitable nondegeneracy and
regularity properties on the martingale X . The proof is based on the estimate
c
(1.3)
E(f(Xt )2 Z)6 √ kfk22
t
for any nice random variable Z, and for any function f ∈ L2 (R), which is derived
using the techniques of Malliavin calculus. Clearly, inequality (1.3) implies
Z T
√
E(f(Xt )2 Z) dt62 T ckfk22 :
0
When d ¿ 1, (1.3) is replaced by E(f(Xt )2 Z)6c′ t −d=2 kfk22 , and the right-hand side
of this inequality is not integrable. However, using exponential estimates for the law
of Xt and applying Holder’s inequality for some p ¿ d we can show, for some
constant M ,
Z
2
f(x)2 t −d=2 e−|x| =2tM d x6c′′ t −d=p kfk2p
E(f(Xt )2 Z)6c′
Rd
and hence, if f ∈ Lp (Rd ), the right-hand side of this inequality is integrable. In this
paper we will make use of this argument and for this reason we are forced to assume
that the partial derivatives of our function F are locally in Lp (Rd ) for some p ¿ d.
The approach we use in this paper was introduced by Follmer et al. (1995) to
treat the case F(Bt ); where F is an absolutely continuous function with locally square
integrable derivative and B is a one-dimensional Brownian motion. The results of
Follmer et al. (1995) have been extended to elliptic diusions by Bardina and Jolis
(1997) and to nondegenerate diusion processes with nonsmooth coecients in a recent
work of Flandoli et al. (2000).
In the d-dimensional case Follmer and Protter (2000), obtained an Itˆo’s formula for
1; 2
of a Brownian motion starting at x0 ; where x0 must be outside of
functions F ∈ Wloc
some polar set. There are also results for multidimensional diusion processes when
1; p
with p ¿ 2 ∨ d (Rozkosz, 1996) and for cadlag processes, when F ∈ C1 and
F ∈ Wloc
has a locally Holder continuous derivative (Errami et al., 1999).
S. Moret, D. Nualart / Stochastic Processes and their Applications 91 (2001) 115–149
117
Using integration with respect to the local time Wolf (1997) established an extension of Itˆo’s formula for semimartingales and absolutely continuous functions with
derivative in L1loc satisfying some technical assumptions. Eisenbaum (1997) has proved
a generalization of Itˆo’s formula to time-dependent functions of a Brownian motion, where the complementary term is a two-parameter integral with respect to the
local time.
By means of a regularization approach, Russo and Vallois (1995) obtained and Itˆo’s
formula for C1 transformations of time reversible continuous semimartingales. In the
framework of Dirichlet forms, an extension of Itˆo’s formula has been established by
Lyons and Zhang (1994).
The paper is organized as follows. Section 2 contains some basic material on
Malliavin calculus. In Section 3 we show a general result on the existence of the
quadratic covariation and Itˆo’s formula for d-dimensional martingales (Theorem 5).
Section 4 is devoted to prove basic estimates for stochastic integrals and its derivatives, and on the Sobolev norm of the inverse of the Malliavin matrix. Finally, in
Section 5 we apply these results to estimate the Riemann sums and deduce the main
result of the paper.
2. Preliminaries
Let W = {Wt , t ∈ [0; T ]} be a d-dimensional Brownian motion dened on the canonical probability space (
; F; P). That is,
is the space of continuous functions from
[0; T ] to Rd which vanish at zero, F is the Borel -eld on
completed with
respect to P, and P is the Wiener measure. For every t ∈ [0; T ] we denote by Ft
the -algebra generated by the random variables {Ws ; s6t} and the P-null sets. Let
H = LR2 ([0; T ]; Rd ): For any h ∈ H we denote the Wiener integral of h by W (h) =
Pd
T i
i
i=1 0 ht dWt :
We will use the notation |x| (resp. |t|) for the Euclidian norm of a vector x in Rd
Pd
(resp. a tensor t in Rd ⊗ : : j: :: ⊗Rd ). That is |t|2 = i1 ;:::;ij =1 (t i1 ; :::; ij )2 : We will also make
use of the notation hx; yi for the scalar product in Rd .
Let us rst introduce the derivative operator D. We denote by Cb∞ (Rn ) the set of all
innitely dierentiable functions f: Rn → R such that f and all of its partial derivatives
are bounded.
Let S denote the class of smooth cylindrical random variables of the form
F = f(W (h1 ); : : : ; W (hn ));
(2.1)
Cb∞ (Rn ),
and h1 ; : : : ; hn ∈ H . If F has form (2.1) we dene its
where f belongs to
derivative DF as the d-dimensional stochastic process given by
Dt F =
n
X
@f
(W (h1 ); : : : ; W (hn ))hi (t):
@xi
(2.2)
i=1
We will denote D(k) F the kth component of DF.
The operator D is closable from S⊂Lp (
) into Lp (
; H ) for each p¿1. We will
denote by D1; p the closure of the class of smooth random variables S with respect to
118
S. Moret, D. Nualart / Stochastic Processes and their Applications 91 (2001) 115–149
the norm
kFkp1; p = E(|F|p ) + E(kDFkpH ):
We can dene the iteration of the operator D in such a way that for a smooth
random variable F, the derivative Dtk1 ;:::;tk F is a k-parameter process. Then, for every
p¿1 and any natural number k we introduce the space Dk; p as the completion of the
family of smooth random variables S with respect to the norm
kFkpk; p = E(|F|p ) +
k
X
E(kD j FkpH ⊗j ):
(2.3)
j=1
Let V be a real separable Hilbert space. We can also introduce the corresponding
Sobolev spaces Dk; p (V ) of V -valued random variables. More precisely, if SV denotes
the family of V -valued random variables of the form
F=
n
X
vj ∈ V; Fj ∈ S;
Fj vj ;
j=1
Pn
we dene Dk F = j=1 Dk Fj ⊗ vj ; k¿1. Then Dk is a closable operator from SV ⊂
Lp (
; V ) into Lp (
; H ⊗k ⊗V ) for any p¿1. For any integer k¿1 and any real number
p¿1 we can dene a seminorm on SV by
kFkpk; p; V = E(kFkpV ) +
k
X
E(kD j FkpH ⊗j ⊗V ):
j=1
k; p
We denote by D (V ) the completion of SV with respect the seminorm k : kk; p; V .
We will denote by the adjoint of the operator D as an unbounded operator from
L2 (
) into L2 (
; H ). That is, the domain of , denoted by Dom , is the set of H -valued
square integrable random variables u such that there exists a square integrable random
variable (u) verifying
E(F(u)) = E(hDF; uiH )
(2.4)
RT
for any F ∈ S. We will make use of the notation (u)= 0 us dWs . We refer to Nualart,
1995a,b for a detailed account of the basic properties of the operators D and .
The following integration by parts formula will be one of the main ingredients in
the proof of our results.
Proposition 1. Fix m¿1 and 06a ¡ b6T . Let Y = (Y 1 ; : : : ; Y d ) be a random vector
in the space (Dm+1; p )d ; for all p ¿ 1. Dene the matrix
!
d Z b
X
(k) i (k) j
a; b
Dt Y Dt Y dt
:
(2.5)
Y =
k=1
a
16i; j6d
b −1 T
∈ p¿1 Lp (
). Let Z ∈ Dm; p ; for
is invertible a.s and (det
a;
Suppose that
Y )
all p ¿ 1. Then; for any function f ∈ Cb1 (Rd ) and for any multi-index ∈ {1; : : : ; d}m
we have
b
a;
Y
E((@ f)(Y )Z) = E(f(Y )Ha; b (Y; Z));
(2.6)
S. Moret, D. Nualart / Stochastic Processes and their Applications 91 (2001) 115–149
119
where Ha; b (Y; Z) is recursively given by
a; b
H(i)
(Y; Z) =
d Z
X
j=1
b
a
b −1
j
Z(
a;
Y )ij Ds Y dWs ;
(2.7)
Ha; b (Y; Z) = Ha;k b (Y; H(a;1b;:::;k−1 ) (Y; Z)):
Proof. By the chain rule we have
Ds (f(Y )) =
d
X
@i f(Y )Ds Y i
i=1
and as a consequence we obtain,
Z
b
a
hDs Y j ; Ds (f(Y ))i ds =
d
X
b
@i f(Y )(
a;
Y )ij :
i=1
Hence, using the duality relationship (2.4) for the operator yields
Z b
d
X
b −1
hDs Y j ; Ds (f(Y ))i ds
E(@i f(Y )Z) = E (
a;
Y )ij Z
a
j=1
a; b
(Y; Z)):
= E(f(Y )H(i)
We complete the proof by means of a recurrence argument.
Notice that by the Bouleau and Hirsch criterion (Bouleau and Hirsch, 1986) the
b
implies that the law of Y is absolutely continuous with respect to
condition on
a;
Y
the Lebesgue measure on Rd . Moreover, if the assumptions of Proposition 1 hold with
m = d, then the density of Y is given by
a; b
p(x) = E(1{Y ¿x} H(1;
:::; d) (Y; 1)):
(2.8)
Corollary 2. Let Y = (Y 1 ; : : : ; Y d ) be a random vector and Z a random variable
satisfying the assumptions of Proposition 1 with m=d. Suppose also E|Zf(Y )2 | ¡ ∞;
where f ∈ L2 (Rd ). Then; we have
Z
a; b
2
E(f(Y ) Z) =
f(x)2 E(1{Y ¿x} H(1;
(2.9)
:::; d) (Y; Z)) d x:
Rd
Proof. We can assume that f is bounded by replacing f^2 by f^2 \wedge M and letting M tend
to infinity. By the Lebesgue differentiation theorem and using that Y has an absolutely
continuous probability distribution we obtain
\Bigl(\frac{n}{2}\Bigr)^d \int_{Y^1 - (1/n)}^{Y^1 + (1/n)} \cdots \int_{Y^d - (1/n)}^{Y^d + (1/n)} f(x_1, \dots, x_d)^2\, dx_1 \cdots dx_d \to f(Y^1, \dots, Y^d)^2,
a.s. as n tends to infinity. For any fixed x \in R^d set
f_n(x, y) = \Bigl(\frac{n}{2}\Bigr)^d \prod_{i=1}^{d} 1_{[x_i - 1/n,\, x_i + 1/n]}(y_i),
g_n(x, y) = \int_{-\infty}^{y_1} \cdots \int_{-\infty}^{y_d} f_n(x, \eta_1, \dots, \eta_d)\, d\eta_1 \cdots d\eta_d.   (2.10)
Then, by the dominated convergence theorem and applying Proposition 1 with \alpha =
(1, \dots, d) to the function g_n(x, \cdot) we get
E(f(Y)^2 Z) = \lim_n \int_{R^d} f(x)^2\, E\Bigl(Z \Bigl(\frac{n}{2}\Bigr)^d \prod_{i=1}^{d} 1_{[x_i - 1/n,\, x_i + 1/n]}(Y^i)\Bigr)\, dx
= \lim_n \int_{R^d} f(x)^2\, E(f_n(x, Y) Z)\, dx
= \lim_n \int_{R^d} f(x)^2\, E((\partial_\alpha g_n)(x, Y) Z)\, dx
= \lim_n \int_{R^d} f(x)^2\, E(g_n(x, Y) H^{a,b}_{(1,\dots,d)}(Y, Z))\, dx
= \int_{R^d} f(x)^2\, E(1_{\{Y > x\}} H^{a,b}_{(1,\dots,d)}(Y, Z))\, dx,
which completes the proof.
We will make use of the following estimate for the \|\cdot\|_{k,p}-norm of the divergence
operator \delta (Nualart, 1995a,b).

Proposition 3. The operator \delta is continuous from D^{k+1,p}(V \otimes H) into D^{k,p}(V) for
all p > 1, k \ge 0. Hence, for any u \in D^{k+1,p}(V \otimes H) we have
\|\delta(u)\|_{k,p,V} \le c_{k,p} \|u\|_{k+1,p,V \otimes H}   (2.11)
for some constant c_{k,p}.
For any fixed 0 \le a < T the following conditional version of the duality relationship
between the derivative and divergence operators holds
E\Bigl(F \int_a^T u_r\, dW_r \,\Big|\, \mathcal{F}_a\Bigr) = E\Bigl(\int_a^T \langle D_r F, u_r\rangle\, dr \,\Big|\, \mathcal{F}_a\Bigr)   (2.12)
for all F \in D^{1,2} and u such that u 1_{[a,T]} \in Dom\, \delta. Using this duality formula we can
formulate the following conditional version of equality (2.9).
Proposition 4. Let Y = (Y^1, \dots, Y^d) be a random vector and Z a random variable
satisfying the assumptions of Proposition 1 with m = d. Let A be an \mathcal{F}_a-measurable
random variable. Suppose also E|Z f(Y)^2| < \infty, where f \in L^2(R^d). Then, we have
E(f(Y)^2 Z \mid \mathcal{F}_a) = \sum_{\sigma \subset \{1,\dots,d\}} (-1)^{d - |\sigma|} \int_{Q_\sigma(A)} f(x)^2
\times E(1_{\{Y^i > x_i,\, i \in \sigma;\; Y^i < x_i,\, i \notin \sigma\}} H^{a,b}_{(1,\dots,d)}(Y, Z) \mid \mathcal{F}_a)\, dx,   (2.13)
where Q_\sigma(A) = \{x \in R^d : A^i < x_i,\, i \in \sigma;\; A^i > x_i,\, i \notin \sigma\} and |\sigma| is the cardinal of \sigma.
As a consequence, taking \sigma = \{1,\dots,d\}, the conditional density of Y given \mathcal{F}_a has
the following expression:
p_a(x) = E(1_{\{Y > x\}} H^{a,b}_{(1,\dots,d)}(Y, 1) \mid \mathcal{F}_a).
Proof. As in Corollary 2 we have
E(f(Y)^2 Z \mid \mathcal{F}_a) = \lim_n \int_{R^d} f(x)^2\, E(f_n(x, Y) Z \mid \mathcal{F}_a)\, dx
= \lim_n \sum_{\sigma \subset \{1,\dots,d\}} \int_{Q_\sigma(A)} f(x)^2\, E(f_n(x, Y) Z \mid \mathcal{F}_a)\, dx,   (2.14)
where f_n is defined by (2.10). For any \sigma = \{i_1, \dots, i_j\} \subset \{1,\dots,d\} consider the function
g_n^\sigma(x, y) = \int_{-\infty}^{y_{i_1}} \cdots \int_{-\infty}^{y_{i_j}} \int_{y_{i_{j+1}}}^{\infty} \cdots \int_{y_{i_d}}^{\infty} f_n(x, \eta_1, \dots, \eta_d)\, d\eta_1 \cdots d\eta_d.
We have the following relationship between the functions f_n and g_n^\sigma:
\partial_\alpha g_n^\sigma(x, \cdot) = (-1)^{d - |\sigma|} f_n(x, \cdot)   with \alpha = (1, \dots, d).
From (2.14), using a conditional version of Proposition 1, which can be proved easily
using (2.12), yields
E(f(Y)^2 Z \mid \mathcal{F}_a)
= \sum_{\sigma \subset \{1,\dots,d\}} (-1)^{d - |\sigma|} \lim_n \int_{Q_\sigma(A)} f(x)^2\, E((\partial_\alpha g_n^\sigma)(x, Y) Z \mid \mathcal{F}_a)\, dx
= \sum_{\sigma \subset \{1,\dots,d\}} (-1)^{d - |\sigma|} \lim_n \int_{Q_\sigma(A)} f(x)^2\, E(g_n^\sigma(x, Y) H^{a,b}_\alpha(Y, Z) \mid \mathcal{F}_a)\, dx
= \sum_{\sigma \subset \{1,\dots,d\}} (-1)^{d - |\sigma|} \int_{Q_\sigma(A)} f(x)^2\, E(1_{\{Y^i > x_i,\, i \in \sigma;\; Y^i < x_i,\, i \notin \sigma\}} H^{a,b}_\alpha(Y, Z) \mid \mathcal{F}_a)\, dx,
which completes the proof.
Definition 1. For any function f \in L^2([0,T]^n), any random variable F \in D^{k,p}, and
any process u such that u_t \in D^{k,p} for all t \in [0,T], we define \|\cdot\|_{a,H^{\otimes n}}, \|\cdot\|^{\mathcal{F}_a}_{k,p} and
\|\cdot\|^{\mathcal{F}_a}_{k,p,H} as
\|f\|_{a,H^{\otimes n}} = \Bigl( \int_{[a,T]^n} f(s)^2\, ds \Bigr)^{1/2},
\|F\|^{\mathcal{F}_a}_{k,p} = \Bigl( E(|F|^p \mid \mathcal{F}_a) + \sum_{j=1}^{k} E(\|D^j F\|^p_{a,H^{\otimes j}} \mid \mathcal{F}_a) \Bigr)^{1/p},
\|u\|^{\mathcal{F}_a}_{k,p,H} = \Bigl( E(\|u\|^p_{a,H} \mid \mathcal{F}_a) + \sum_{j=1}^{k} E(\|D^j u\|^p_{a,H^{\otimes (j+1)}} \mid \mathcal{F}_a) \Bigr)^{1/p}.
Then the following conditional version of inequality (2.11) holds
\|\delta(u 1_{[a,T]})\|^{\mathcal{F}_a}_{k,p} \le c_p \|u\|^{\mathcal{F}_a}_{k+1,p,H}.   (2.15)
3. Existence of the quadratic covariation and an extension of Itô's formula

Let X = \{X_t, t \in [0,T]\} be a d-dimensional continuous and adapted stochastic process.
Consider a sequence D_n of partitions of [0,T]. The points of a partition D_n will
be denoted by 0 = t_0 < t_1 < \cdots < t_{k(n)} < t_{k(n)+1} = T. We will assume that this sequence
satisfies the following conditions:
\lim_n \sup_{t_i \in D_n} (t_{i+1} - t_i) = 0,   L := \sup_n \sup_{t_i \in D_n} \frac{t_{i+1}}{t_i} < \infty.   (3.1)
Definition 2. Given two stochastic processes Y = \{Y_t, t \in [0,T]\} and Z = \{Z_t, t \in [0,T]\}
we define their quadratic covariation as the stochastic process [Y, Z] given by the
following limit in probability, if it exists,
[Y, Z]_t = \lim_n \sum_{t_i \in D_n,\, t_i < t} (Y_{t_{i+1}} - Y_{t_i})(Z_{t_{i+1}} - Z_{t_i}).
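As a purely illustrative aside, not part of the original paper, the limit in Definition 2 can be
checked numerically in the simplest smooth case. For a one-dimensional Brownian motion W and a
smooth function g, the classical theory gives [g(W), W]_t = \int_0^t g'(W_s)\, ds, so the Riemann
sums above should approach this integral along refining partitions. The sketch below, with
hypothetical names and a dyadic choice of partitions, only illustrates the definition; it is not
an implementation of the paper's results.

import numpy as np

rng = np.random.default_rng(0)
T, n_fine = 1.0, 2**18                       # fine simulation grid for one Brownian path
dt = T / n_fine
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_fine))))

g = np.tanh                                  # a smooth bounded test function
g_prime = lambda x: 1.0 - np.tanh(x)**2

def riemann_covariation(level):
    """Sum of products of increments of g(W) and W over the dyadic partition of mesh T/2**level."""
    idx = np.arange(0, n_fine + 1, n_fine // 2**level)
    return np.sum(np.diff(g(W[idx])) * np.diff(W[idx]))

limit = np.sum(g_prime(W[:-1])) * dt         # approximation of \int_0^T g'(W_s) ds on the fine grid
for level in (4, 8, 12, 16):
    print(level, riemann_covariation(level), limit)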
Let W^{1,p}(R^d) denote the Sobolev space of functions in L^p(R^d) such that the weak
first derivatives belong to L^p(R^d). We denote by W^{1,p}_{loc}(R^d) the space of functions that
coincide on each compact set with a function in W^{1,p}(R^d). For any F \in W^{1,p}_{loc}(R^d) we
denote by f_k = \partial F/\partial x_k the kth weak partial derivative of F.
The next result provides sufficient conditions for the existence of the quadratic
covariation [f(X), X^k] for all k = 1, \dots, d, when f: R^d \to R is in L^p_{loc}(R^d) and X is a
d-dimensional martingale. Under these conditions we can write a change-of-variable
formula for a process of the form F(X_t) with F in W^{1,p}(R^d), where the last term of
the formula is the sum with respect to k of the quadratic covariations \frac{1}{2}[f_k(X), X^k], f_k
being the kth weak partial derivative of F.
Theorem 5. Let X = \{(X_t^1, \dots, X_t^d), t \in [0,T]\} be a continuous and adapted stochastic
process of the form X_t^k = \sum_{i=1}^{d} \int_0^t u_s^{k,i}\, dW_s^i, where for all k, i = 1, \dots, d, u^{k,i} is adapted
and \int_0^T (u_s^{k,i})^2\, ds < \infty a.s. Suppose that for all \varepsilon > 0 there exist constants c_1^\varepsilon, c_2^\varepsilon
such that for any n, for any k and for any t \in [0,T], we have
P\Bigl( \int_0^T f(X_s)^2 |u_s^k|^2\, ds > \varepsilon \Bigr) \le c_1^\varepsilon \|f\|_p^2,   (3.2)
P\Bigl( \Bigl| \sum_{t_i \in D_n,\, t_i < t} [f(X_{t_{i+1}}) - f(X_{t_i})](X^k_{t_{i+1}} - X^k_{t_i}) \Bigr| > \varepsilon \Bigr) \le c_2^\varepsilon \|f\|_p,   (3.3)
for any function f in C_K^\infty(R^d) (infinitely differentiable with compact support). Then
the quadratic covariation [f(X), X^k] exists for any function f in L^p_{loc}(R^d) and for
any k. Moreover, for any function F \in W^{1,p}_{loc}(R^d), the following Itô's formula holds
F(X_t) = F(0) + \sum_{k=1}^{d} \int_0^t f_k(X_s)\, dX_s^k + \frac{1}{2} \sum_{k=1}^{d} [f_k(X), X^k]_t   (3.4)
for all t \in [0,T], where f_k denotes the kth weak partial derivative of F.
Proof. Notice that by an easy approximation argument inequalities (3.2) and (3.3)
hold for any function f in L^p(R^d).
Fix t \in [0,T], and set for all k = 1, \dots, d,
V_n^k(f) = \sum_{t_i \in D_n,\, t_i < t} [f(X_{t_{i+1}}) - f(X_{t_i})](X^k_{t_{i+1}} - X^k_{t_i}).
For each n \ge 0 set K_n = \{x \in R^d, |x| \le n\} and consider the stopping time T_n = \inf\{t: X_t \notin K_n\}.
Let \delta > 0 and take n_0 in such a way that P(T_{n_0} \le t) \le \delta. Let g be an infinitely
differentiable function with support included in K_{n_0} such that
\int_{K_{n_0}} |g(x) - f(x)|^p\, dx \le \delta^p.
For all k = 1, \dots, d, and n, m \ge n_0 we have that
P(|V_n^k(f) - V_m^k(f)| > \varepsilon) \le P(T_{n_0} \le t) + P\bigl(T_{n_0} > t,\; |V_n^k(f-g)| > \tfrac{\varepsilon}{3}\bigr)
+ P\bigl(T_{n_0} > t,\; |V_m^k(f-g)| > \tfrac{\varepsilon}{3}\bigr) + P\bigl(T_{n_0} > t,\; |V_n^k(g) - V_m^k(g)| > \tfrac{\varepsilon}{3}\bigr)
\le \delta + 2 c_2^{\varepsilon/3} \delta + P\bigl(|V_n^k(g) - V_m^k(g)| > \tfrac{\varepsilon}{3}\bigr).
We know that \lim_{n,m} P(|V_n^k(g) - V_m^k(g)| > \varepsilon/3) = 0 for all k = 1, \dots, d. As a
consequence, the quadratic covariation [f(X), X^k] exists for any function f in L^p_{loc}(R^d),
and
P(|[f(X), X^k]_t| > \varepsilon) \le c_2^\varepsilon \|f\|_p   (3.5)
for all k = 1, \dots, d and for any f in L^p(R^d).
We can approximate F by functions F_n \in C^2(R^d) \cap L^p(R^d) in such a way that the
partial derivatives satisfy that \|f_k^n - f_k\|_p converges to zero as n tends to infinity.
In order to show Itô's formula we can assume, by a localization argument, that the
process X_t takes values in a compact set K \subset R^d and that F and f_k have support in
this set. We know that for each n Itô's formula holds, that is,
F_n(X_t) = F_n(0) + \sum_{k=1}^{d} \int_0^t f_k^n(X_s)\, dX_s^k + \frac{1}{2} \sum_{k=1}^{d} [f_k^n(X), X^k]_t.   (3.6)
By (3.5) we have that [f_k^n(X), X^k]_t converges in probability to [f_k(X), X^k]_t for all
k = 1, \dots, d, as n tends to infinity. On the other hand, we need to prove that
\int_0^t f_k^n(X_s)\, dX_s^k converges in probability to \int_0^t f_k(X_s)\, dX_s^k. This follows from the
inequalities
P\Bigl( \Bigl| \int_0^t (f_k^n - f_k)(X_s)\, dX_s^k \Bigr| > \varepsilon \Bigr) \le \frac{M}{\varepsilon^2} + P\Bigl( \int_0^t (f_k^n - f_k)^2(X_s) |u_s^k|^2\, ds > M \Bigr)
\le \frac{M}{\varepsilon^2} + c_1^M \|f_k^n - f_k\|_p^2.
Then taking the limit in (3.6) we obtain (3.4) and this completes the proof of the
theorem.
4. Basic estimates for stochastic integrals and Malliavin matrix

Let u = (u^{i,j})_{1\le i,j\le d} be a matrix of adapted processes u^{i,j} = \{u_t^{i,j}, t \in [0,T]\} such
that E\int_0^T |u_s|^2\, ds < \infty. Set X_t^k = \sum_{i=1}^{d} \int_0^t u_s^{k,i}\, dW_s^i. Let us introduce the following
hypotheses on the process u:
(H1)_{n,p} For each t \in [0,T] we have u_t \in D^{n,2}(R^{d^2}), and for some p \ge 2 we have
E|u_r|^p + E|D_{t_1} u_r|^p + \cdots + E|D_{t_1,t_2,\dots,t_n} u_r|^p \le K_{n,p}
for any r, t_1, \dots, t_n \in [0,T].
(H2) \sum_{i=1}^{d} |\sum_{k=1}^{d} u_t^{k,i} v_k|^2 \ge \rho^2 > 0 for some constant \rho, for all t \in [0,T] and for all
v \in R^d such that |v| = 1.
(H3) |u_t| \le M for some constant M, for all t \in [0,T].
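As an elementary example, not given in the original text, any deterministic, bounded and
uniformly elliptic matrix-valued function u satisfies these hypotheses: if u_t is nonrandom then
all Malliavin derivatives of u_t vanish, so (H1)_{n,p} holds with K_{n,p} = \sup_r |u_r|^p, while
(H2) and (H3) are exactly the ellipticity bound u_t u_t^T \ge \rho^2 I_d and the uniform bound
|u_t| \le M. In particular u_t \equiv I_d, for which X = W, satisfies (H1)-(H3).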
This section will be devoted to obtaining some estimates of the \|\cdot\|_{n,p}-norm of the
inverse of the Malliavin matrix \gamma^{a,b}_{X_b} (Lemma 10 for n = 0 and Lemma 11) and of
H^{a,b}_\alpha(X_b, Z) (Lemma 12), plus the conditional versions of these results (Lemmas 13
and 14). Lemmas 6-9 are previous estimates which are needed in order to prove the
above-mentioned results.
For the proof of the following results we will need Burkholder's inequality for Hilbert
space valued martingales (see Metivier, 1982, E.2, p. 212). That is, if \{M_t, t \in [0,T]\}
is a continuous local martingale with values in a Hilbert space H, then for any p > 0
we have
E\|M_t\|_H^p \le b_p E([M]_t^{p/2}),   (4.1)
where
[M]_t = \sum_{i=1}^{\infty} [\langle M, e_i\rangle_H]_t,
\{e_i, i \ge 1\} being a complete orthonormal system in H.
Lemma 6. Assume that u satisfies condition (H1)_{n,p} for some p \ge 2 and n \ge 1. Then,
we have
\sup_{t_1,\dots,t_n \in [0,T]} E|D_{t_1,\dots,t_n} X_s|^p \le c^1_{n,p},   (4.2)
for some constant c^1_{n,p} of the form c_p K_{n,p}, where c_p depends on p and T.

Proof. Using the properties of the derivative operator we can write for t_1, \dots, t_n \le s
D_{t_1,\dots,t_n} X_s = \sum_{i=1}^{n} D_{t_1 \dots t_{i-1}, t_{i+1} \dots t_n} u_{t_i} + \int_{t_1 \vee \cdots \vee t_n}^{s} D_{t_1,\dots,t_n} u_r\, dW_r.
As a consequence, applying Burkholder's inequality (4.1) we obtain
E|D_{t_1,\dots,t_n} X_s|^p \le 2^{p-1} \Bigl( E\Bigl| \sum_{i=1}^{n} D_{t_1 \dots t_{i-1}, t_{i+1} \dots t_n} u_{t_i} \Bigr|^p + E\Bigl| \int_{t_1 \vee \cdots \vee t_n}^{s} D_{t_1,\dots,t_n} u_r\, dW_r \Bigr|^p \Bigr)
\le 2^{p-1} n^p \sup_{t_1,\dots,t_n} E|D_{t_1,\dots,t_{n-1}} u_{t_n}|^p + 2^{p-1} b_p\, E\Bigl( \int_{t_1 \vee \cdots \vee t_n}^{s} |D_{t_1,\dots,t_n} u_r|^2\, dr \Bigr)^{p/2}
\le 2^{p-1} K_{n,p} (n^p + b_p T^{p/2}),
where b_p is the Burkholder constant. This proves (4.2).
Remark 1. The following inequality can be proved in an analogous way for any s \ge a:
E\Bigl( \int_{[a,T]^n} |D_{t_1,\dots,t_n} X_s|^p\, dt_1 \cdots dt_n \,\Big|\, \mathcal{F}_a \Bigr)
\le \beta^1_{n,p}\, E\Bigl( \int_{[a,T]^n} |D_{t_1,\dots,t_{n-1}} u_{t_n}|^p\, dt_1 \cdots dt_n \,\Big|\, \mathcal{F}_a \Bigr)
+ \beta^2_{n,p}\, E\Bigl( \int_{[a,T]^{n+1}} |D_{t_1,\dots,t_n} u_r|^p\, dt_1 \cdots dt_n\, dr \,\Big|\, \mathcal{F}_a \Bigr),
where \beta^1_{n,p} = 2^{p-1} n^p and \beta^2_{n,p} = 2^{p-1} b_p T^{p/2 - 1}.
Lemma 7. Fix 0 \le a < b \le T. We define \Delta_{a,b} X^k = X_b^k - X_a^k. Then
(i) If u satisfies condition (H1)_{1,p} for some p \ge 2, then
\sup_{t \in [0,a]} E|D_t(\Delta_{a,b} X)|^p \le c_1 (b-a)^{p/2}   (4.3)
for some constant c_1 depending on K_{1,p}, p and T.
(ii) If u satisfies condition (H1)_{2,p} for some p \ge 2, then
\sup_{t \in [0,a]} E\Bigl( \int_0^T |D_s D_t(\Delta_{a,b} X)|^2\, ds \Bigr)^{p/2} \le c_2 (b-a)^{p/2}   (4.4)
for some constant c_2 depending on K_{2,p}, p and T.
Proof. Let us show (4.3). We know that for t \le a
D_t(\Delta_{a,b} X) = \int_a^b D_t u_s\, dW_s.   (4.5)
Hence, we can write using Burkholder's inequality (4.1)
E|D_t(\Delta_{a,b} X)|^p \le b_p\, E\Bigl( \int_a^b |D_t u_s|^2\, ds \Bigr)^{p/2} \le b_p (b-a)^{p/2} \sup_{t,s} E|D_t u_s|^p
and (4.3) holds. In the same way, from (4.5) we have that if t < a
E\Bigl( \int_0^T |D_s D_t(\Delta_{a,b} X)|^2\, ds \Bigr)^{p/2}
= E\Bigl( \int_0^T \Bigl| D_t u_s 1_{[a,b]}(s) + \int_a^b D_s D_t u_\theta\, dW_\theta \Bigr|^2 ds \Bigr)^{p/2}
\le 2^{p-1} \Bigl\{ (b-a)^{p/2} \sup_{s \in [0,T]} E|D_s u_t|^p + T^{p/2-1} E \int_0^T \Bigl| \int_a^b D_s D_t u_\theta\, dW_\theta \Bigr|^p ds \Bigr\}
\le 2^{p-1} \Bigl\{ K_{2,p} (b-a)^{p/2} + T^{p/2-1} b_p\, E \int_0^T \Bigl( \int_a^b |D_s D_t u_\theta|^2\, d\theta \Bigr)^{p/2} ds \Bigr\}
\le 2^{p-1} K_{2,p} (b-a)^{p/2} (1 + T^{p/2} b_p)
and we obtain (4.4).
Lemma 8. Assume u satisfies condition (H1)_{n,p} for some n \ge 0 and some p \ge 2. Then,
we have
\|\Delta_{a,b} X^k\|_{n,p} \le c^2_{n,p} (b-a)^{1/2},   (4.6)
for some constant c^2_{n,p} depending on K_{n,p}, n, p and T.

Proof. By the definition of the \|\cdot\|_{n,p}-norm we have
\|\Delta_{a,b} X^k\|^p_{n,p} = E|\Delta_{a,b} X^k|^p + \sum_{j=1}^{n} E\|D^j(\Delta_{a,b} X^k)\|^p_{H^{\otimes j}}.   (4.7)
For the first summand, using Burkholder's inequality (4.1) yields
E|\Delta_{a,b} X^k|^p = E\Bigl| \int_a^b u_s^k\, dW_s \Bigr|^p \le b_p\, E\Bigl( \int_a^b |u_s^k|^2\, ds \Bigr)^{p/2} \le b_p (b-a)^{p/2} \sup_{0 \le s \le T} E|u_s^k|^p.   (4.8)
For the other terms, using again Burkholder's inequality, we have that for all 1 \le j \le n
E\Bigl\| D^j \Bigl( \int_a^b u_s^k\, dW_s \Bigr) \Bigr\|^p_{H^{\otimes j}}
= E\Bigl\| \sum_{i=1}^{j} D_{r_1,\dots,r_{i-1},r_{i+1},\dots,r_j} u^k_{r_i}\, 1_{[a,b]}(r_i) + \int_a^b D_{r_1,\dots,r_j} u^k_\theta\, dW_\theta \Bigr\|^p_{H^{\otimes j}}
\le 2^{p-1} j^p\, E\Bigl( \int_{[0,T]^{j-1}} \int_a^b |D_{r_1,\dots,r_{j-1}} u^k_{r_j}|^2\, dr_1 \cdots dr_j \Bigr)^{p/2}
+ 2^{p-1} b_p\, E\Bigl( \int_a^b \|D^j u^k_\theta\|^2_{H^{\otimes j} \otimes R^d}\, d\theta \Bigr)^{p/2}
\le 2^{p-1} (b-a)^{p/2} \Bigl( j^p T^{(j-1)p/2} K_{n,p} + b_p \sup_\theta E\|D^j u^k_\theta\|^p_{H^{\otimes j} \otimes R^d} \Bigr)
\le 2^{p-1} K_{n,p} T^{(j-1)p/2} (b-a)^{p/2} (j^p + b_p T^{p/2}).   (4.9)
Finally, substituting (4.8) and (4.9) into (4.7) we obtain (4.6).
Lemma 9. Let x = \{x_s, s \in [0,T]\} and y = \{y_s, s \in [0,T]\} be d-dimensional stochastic
processes satisfying
K^x_{n,2p} := \sup_{s \in [a,b]} \sum_{i=0}^{n} E(\|D^i x_s\|^{2p}_{H^{\otimes i} \otimes R^d}) < \infty,   K^y_{n,2p} := \sup_{s \in [a,b]} \sum_{i=0}^{n} E(\|D^i y_s\|^{2p}_{H^{\otimes i} \otimes R^d}) < \infty   (4.10)
for some n \ge 0 and some p \ge 1. Then, for any 0 \le a < b \le T we have
\Bigl\| \int_a^b \langle x_s, y_s\rangle\, ds \Bigr\|_{n,p} \le c^3_{n,p} (K^x_{n,2p} K^y_{n,2p})^{1/2p} (b-a)   (4.11)
for some constant c^3_{n,p} depending on n and p.

Proof. In order to simplify the proof we will suppose d = 1. On one hand we have that
E\Bigl| \int_a^b x_s y_s\, ds \Bigr|^p \le \Bigl( E\Bigl| \int_a^b x_s^2\, ds \Bigr|^p \Bigr)^{1/2} \Bigl( E\Bigl| \int_a^b y_s^2\, ds \Bigr|^p \Bigr)^{1/2}
\le (b-a)^p \sup_{s \in [a,b]} (E|x_s|^{2p})^{1/2} \sup_{s \in [a,b]} (E|y_s|^{2p})^{1/2}
\le (K^x_{n,2p} K^y_{n,2p})^{1/2} (b-a)^p,
where K^x_{n,2p} and K^y_{n,2p} are the constants defined by (4.10). On the other hand, for each
1 \le j \le n we have
E\Bigl\| D^j \int_a^b x_s y_s\, ds \Bigr\|^p_{H^{\otimes j}} = E\Bigl\| \int_a^b \sum_{i=0}^{j} \binom{j}{i} D^i x_s\, D^{j-i} y_s\, ds \Bigr\|^p_{H^{\otimes j}}
\le (j+1)^{p-1} \sum_{i=0}^{j} \binom{j}{i}^p E\Bigl\| \int_a^b D^i x_s\, D^{j-i} y_s\, ds \Bigr\|^p_{H^{\otimes j}}
\le (j+1)^{p-1} (b-a)^p \sum_{i=0}^{j} \binom{j}{i}^p \Bigl( \sup_{s \in [a,b]} E\|D^i x_s\|^{2p}_{H^{\otimes i}}\, \sup_{s \in [a,b]} E\|D^{j-i} y_s\|^{2p}_{H^{\otimes (j-i)}} \Bigr)^{1/2}
\le (j+1)^{p-1} \sum_{i=0}^{j} \binom{j}{i}^p (K^x_{n,2p} K^y_{n,2p})^{1/2} (b-a)^p
and (4.11) holds.
Lemma 10. Let (\gamma^{a,b}_{X_b})^{-1} be the inverse of the matrix \gamma^{a,b}_{X_b} defined by (2.5). Suppose that
u satisfies hypotheses (H1)_{1,p'} for some p' \ge 12, and (H2). Then, for any 1 \le p <
(p'-4)/4d we have
E|((\gamma^{a,b}_{X_b})^{-1})_{ij}|^p \le \Bigl( k_1 + k_2\, E \int_{[0,T]^2} |D_t u_r|^{p'}\, dt\, dr \Bigr) (b-a)^{-p},   (4.12)
for some constants k_1, k_2 depending on p, p', T, d and \rho.
Proof. We have that
|((\gamma^{a,b}_{X_b})^{-1})_{ij}| = |A_{ij} (\det \gamma^{a,b}_{X_b})^{-1}|,
where A_{ij} is the adjoint of (\gamma^{a,b}_{X_b})_{ij}. Hence,
E|((\gamma^{a,b}_{X_b})^{-1})_{ij}|^p \le c_{d,p}\, E[(\det \gamma^{a,b}_{X_b})^{-2p}]^{1/2}\, E[\|DX_b 1_{[a,b]}\|_H^{4p(d-1)}]^{1/2}.   (4.13)
For the second factor, using Lemma 6 with n = 1 and p = 4p(d-1) yields
E(\|DX_b 1_{[a,b]}\|_H^{4p(d-1)}) = E\Bigl( \int_a^b |D_s X_b|^2\, ds \Bigr)^{2p(d-1)}
\le (b-a)^{2p(d-1)} \sup_s E|D_s X_b|^{4p(d-1)} \le c^1_{1,4p(d-1)} (b-a)^{2p(d-1)}.   (4.14)
In order to estimate the first factor we write
\det \gamma^{a,b}_{X_b} \ge \inf_{|v|=1} (v^T \gamma^{a,b}_{X_b} v)^d = \inf_{|v|=1} \Bigl( \int_a^b \sum_{k=1}^{d} \Bigl( \sum_{j=1}^{d} D_s^{(k)} X_b^j v_j \Bigr)^2 ds \Bigr)^d.
Then we have for any h \in [0,1], using (H2),
\int_a^b \sum_{k=1}^{d} \Bigl( \sum_{j=1}^{d} D_s^{(k)} X_b^j v_j \Bigr)^2 ds
= \int_a^b \sum_{k=1}^{d} \Bigl( \sum_{j=1}^{d} v_j \Bigl( u_s^{j,k} + \sum_{i=1}^{d} \int_s^b D_s^{(k)} u_r^{j,i}\, dW_r^i \Bigr) \Bigr)^2 ds
\ge \frac{1}{2} \int_{a+(b-a)(1-h)}^{b} \sum_{k=1}^{d} \Bigl( \sum_{j=1}^{d} v_j u_s^{j,k} \Bigr)^2 ds - \int_{a+(b-a)(1-h)}^{b} \sum_{k=1}^{d} \Bigl( \sum_{i,j=1}^{d} v_j \int_s^b D_s^{(k)} u_r^{j,i}\, dW_r^i \Bigr)^2 ds
\ge \frac{\rho^2 (b-a) h}{2} - I_h,
where
I_h = \int_{a+(b-a)(1-h)}^{b} \Bigl| \int_s^b D_s u_r\, dW_r \Bigr|^2 ds.
We choose h = 4/((b-a)\rho^2 y^{1/d}), where y \ge c := 4^d/((b-a)^d \rho^{2d}). Then, we can
write for any q \ge 2
E|I_h|^q \le b_q (b-a)^{q-1} h^{q-1}\, E \int_{a+(b-a)(1-h)}^{b} \Bigl( \int_s^b |D_s u_r|^2\, dr \Bigr)^q ds
\le b_q (b-a)^{2q-2} h^{2q-2}\, E \int_{[0,T]^2} |D_s u_r|^{2q}\, dr\, ds.
As a consequence,
E[(\det \gamma^{a,b}_{X_b})^{-2p}] = \int_0^\infty 2p\, y^{2p-1} P\{(\det \gamma^{a,b}_{X_b})^{-1} > y\}\, dy
\le c^{2p} + 2p \int_c^\infty y^{2p-1} P\Bigl\{ \det \gamma^{a,b}_{X_b} < \frac{1}{y} \Bigr\}\, dy
\le \Bigl( \frac{4^d}{(b-a)^d \rho^{2d}} \Bigr)^{2p} + 2p \int_c^\infty E|I_h|^q\, y^{2p-1+q/d}\, dy
\le (b-a)^{-2dp} 4^{2dp} \rho^{-4dp} + 2p\, b_q \Bigl( \frac{4}{\rho^2} \Bigr)^{2q-2} E\Bigl( \int_{[0,T]^2} |D_s u_r|^{2q}\, dr\, ds \Bigr) \int_c^\infty y^{2p-1-(q/d)+2/d}\, dy
\le (b-a)^{-2dp} 4^{2dp} \rho^{-4dp} \Bigl( 1 + 2p\, b_q 4^q \rho^{-2q} (b-a)^{q-2} \frac{d}{q-2dp-2}\, E \int_{[0,T]^2} |D_s u_r|^{2q}\, ds\, dr \Bigr)
\le \Bigl( c_1 + c_2\, E \int_{[0,T]^2} |D_s u_r|^{2q}\, ds\, dr \Bigr) (b-a)^{-2dp},   (4.15)
where c_1 and c_2 are constants depending on q, \rho, T, d and p, and provided q > 2dp + 2.
We will take p' = 2q > 4(dp+1) \ge 12.
Finally, from (4.13)-(4.15) we get (4.12).
Remark 2. With the additional hypothesis (H3) the following conditional version of
the previous result holds:
E[|((\gamma^{a,b}_{X_b})^{-1})_{ij}|^p \mid \mathcal{F}_a] \le (b-a)^{-p} \Bigl( a_1 + a_2\, E\Bigl( \int_{[0,T]^2} |D_t u_r|^{p'}\, dt\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)
\times \Bigl( b_1 + b_2\, E\Bigl( \int_{[0,T]^2} |D_s u_r|^{4p(d-1)}\, ds\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)
for some constants a_1, a_2, b_1, b_2 depending on M, \rho, d, T, p and p'.
Proof. The proof is similar to that of Lemma 10. We have that
E[|((\gamma^{a,b}_{X_b})^{-1})_{ij}|^p \mid \mathcal{F}_a] \le c_{d,p}\, E[(\det \gamma^{a,b}_{X_b})^{-2p} \mid \mathcal{F}_a]^{1/2}\, E[\|DX_b 1_{[a,b]}\|_H^{4p(d-1)} \mid \mathcal{F}_a]^{1/2}.
The following conditional version of inequality (4.15) holds:
E[(\det \gamma^{a,b}_{X_b})^{-2p} \mid \mathcal{F}_a] \le (b-a)^{-2dp} \Bigl( k_1 + k_2\, E\Bigl( \int_{[0,T]^2} |D_s u_r|^{p'}\, ds\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr).   (4.16)
On the other hand, we have for some q \ge 4,
E[\|DX_b 1_{[a,b]}\|_H^q \mid \mathcal{F}_a] = E\Bigl[ \Bigl( \int_a^b |D_s X_b|^2\, ds \Bigr)^{q/2} \,\Big|\, \mathcal{F}_a \Bigr]
= E\Bigl[ \Bigl( \int_a^b \Bigl| u_s + \int_s^b D_s u_\theta\, dW_\theta \Bigr|^2 ds \Bigr)^{q/2} \,\Big|\, \mathcal{F}_a \Bigr]
\le 2^{q-1} \Bigl\{ M^q (b-a)^{q/2} + (b-a)^{q/2-1} E\Bigl( \int_a^b \Bigl| \int_s^b D_s u_\theta\, dW_\theta \Bigr|^q ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr\}
\le 2^{q-1} \Bigl\{ M^q (b-a)^{q/2} + b_q (b-a)^{q/2-1} E\Bigl( \Bigl( \int_a^b \int_a^b |D_s u_\theta|^2\, d\theta\, ds \Bigr)^{q/2} \,\Big|\, \mathcal{F}_a \Bigr) \Bigr\}
\le 2^{q-1} (b-a)^{q/2} \Bigl\{ M^q + b_q T^{q/2-2} E\Bigl( \int_a^b \int_a^b |D_s u_\theta|^q\, d\theta\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr\}.   (4.17)
Hence, from (4.16) and using (4.17) with q = 4p(d-1) we obtain
E[|((\gamma^{a,b}_{X_b})^{-1})_{ij}|^p \mid \mathcal{F}_a] \le (b-a)^{-p} \Bigl( k_1 + k_2\, E\Bigl( \int_{[0,T]^2} |D_s u_r|^{p'}\, ds\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)^{1/2}
\times \Bigl( k_1' + k_2'\, E\Bigl( \int_{[0,T]^2} |D_s u_r|^{4p(d-1)}\, ds\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)^{1/2}
\le (b-a)^{-p} \Bigl( a_1 + a_2\, E\Bigl( \int_{[0,T]^2} |D_s u_r|^{p'}\, ds\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr) \Bigl( b_1 + b_2\, E\Bigl( \int_{[0,T]^2} |D_s u_r|^{4p(d-1)}\, ds\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr),
where for the last inequality we have used the fact that \sqrt{x} \le 1 + x.
Lemma 11. Assume u satisfies (H1)_{n+1,p} for all p \ge 2 and some fixed n \ge 0,
and (H2). Then, for all p \ge 2
\|(\gamma^{a,b}_{X_b})^{-1}\|_{n,p} \le c^4_{n,p} (b-a)^{-1}
for some constant c^4_{n,p} depending on n, p, \rho, T and K_{n+1,p'}, where p' > 4dp(n+1)^2 + 4.

Proof. For any 0 \le k \le n we can write
E\|D^k((\gamma^{a,b}_{X_b})^{-1})\|^p_{H^{\otimes k}}
\le c \sum_{i_1+\cdots+i_r=k} E\|(\gamma^{a,b}_{X_b})^{-1} D^{i_1}\gamma^{a,b}_{X_b} \cdots (\gamma^{a,b}_{X_b})^{-1} D^{i_r}\gamma^{a,b}_{X_b} (\gamma^{a,b}_{X_b})^{-1}\|^p_{H^{\otimes k}}
\le c \sum_{i_1+\cdots+i_r=k} E\bigl( \|D^{i_1}\gamma^{a,b}_{X_b}\|^p_{H^{\otimes i_1}} \cdots \|D^{i_r}\gamma^{a,b}_{X_b}\|^p_{H^{\otimes i_r}}\, |(\gamma^{a,b}_{X_b})^{-1}|^{p(1+r)} \bigr)
\le c \sum_{i_1+\cdots+i_r=k} E(\|D^{i_1}\gamma^{a,b}_{X_b}\|^{p(r+1)}_{H^{\otimes i_1}})^{1/(r+1)} \cdots E(\|D^{i_r}\gamma^{a,b}_{X_b}\|^{p(r+1)}_{H^{\otimes i_r}})^{1/(r+1)}\, E(|(\gamma^{a,b}_{X_b})^{-1}|^{p(r+1)^2})^{1/(r+1)}.   (4.18)
In order to estimate the first factors we put
E\|D^k(\gamma^{a,b}_{X_b})_{ij}\|^p_{H^{\otimes k}} \le \|(\gamma^{a,b}_{X_b})_{ij}\|^p_{k,p} = \Bigl\| \int_a^b \langle D_s X_b^i, D_s X_b^j\rangle\, ds \Bigr\|^p_{k,p} \le c'(b-a)^p,   (4.19)
where the last inequality has been obtained using Lemma 9 with x = DX_b^i and y =
DX_b^j (x and y satisfy the required hypotheses due to Lemma 6). Finally, from
Lemma 10, (4.18) and (4.19) we obtain the desired result.
Lemma 12. Fix n, m \ge 1, p \ge 2 and 0 \le a < b \le T. Suppose that u satisfies hypotheses
(H1)_{n+m+1,p'} for all p' \ge 2 and (H2). Let Z \in D^{n+m,2^m p}. Then, for any multi-index
\alpha \in \{1,\dots,d\}^m we have
\|H^{a,b}_\alpha(X_b, Z)\|_{n,p} \le c^5_{n,p} (b-a)^{-m/2} \|Z\|_{n+m,2^m p},   (4.20)
where c^5_{n,p} is a constant depending on p, T, d and \rho.

Proof. Using the continuity of the operator \delta we have
\|H^{a,b}_\alpha(X_b, Z)\|_{n,p} = \|H^{a,b}_{(\alpha_m)}(X_b, H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, Z))\|_{n,p}
= \Bigl\| \sum_{j=1}^{d} \delta\bigl( H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, Z)\, ((\gamma^{a,b}_{X_b})^{-1})_{\alpha_m j}\, DX_b^j\, 1_{[a,b]} \bigr) \Bigr\|_{n,p}
\le d^{p-1} \|H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, Z)\|_{n+1,2p} \sum_{j=1}^{d} \|((\gamma^{a,b}_{X_b})^{-1})_{\alpha_m j}\|_{n+1,4p} \|DX_b^j 1_{[a,b]}\|_{n+1,4p}
\le d^p \|H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, Z)\|_{n+1,2p} \|(\gamma^{a,b}_{X_b})^{-1}\|_{n+1,4p} \|DX_b 1_{[a,b]}\|_{n+1,4p}.   (4.21)
Using Lemma 6 it is easy to see that
\|DX_b 1_{[a,b]}\|_{n+1,4p} \le c(b-a)^{1/2}   (4.22)
and then, by Lemma 11, (4.21) and (4.22) we get
\|H^{a,b}_{(\alpha_1,\dots,\alpha_m)}(X_b, Z)\|_{n,p} \le c'(b-a)^{-1/2} \|H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, Z)\|_{n+1,2p}.
By an iteration procedure we obtain (4.20).
Now we will deduce the conditional versions of the last two results.
Lemma 13. Fix n \ge 0, p \ge 2. Assume u satisfies (H1)_{n+1,p'} for all p' \ge 2, (H2)
and (H3). Let 0 \le a < b \le T. Then there exists a random variable Z^a_{n,p} such that
\|(\gamma^{a,b}_{X_b})^{-1}\|^{\mathcal{F}_a}_{n,p} \le (b-a)^{-1} Z^a_{n,p},   (4.23)
where Z^a_{n,p} has the form
Z^a_{n,p} = c \sum_{r=1}^{n} \Bigl( 1 + \alpha^1_r\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{p'_r}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)
\times \Bigl( 1 + \alpha^2_r\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{4p(r+1)^2(d-1)}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)
\times \Bigl( 1 + \sum_{m=0}^{n+1} \alpha_{r,m}\, E\Bigl( \int_{[a,T]^{m+1}} |D_{s_1,\dots,s_m} u_s|^{p(r+1)}\, ds_1 \cdots ds_m\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)   (4.24)
for some constants p'_r such that p(r+1)^2 < (p'_r-4)/4d, and constants \alpha^1_r, \alpha^2_r, \alpha_{r,m}
depending on \rho, M, p, d, r, p'_r and T.
Proof. As in the proof of (4.18) in Lemma 11 we can write for 1 \le k \le n
E(\|D^k((\gamma^{a,b}_{X_b})^{-1})\|^p_{a,H^{\otimes k}} \mid \mathcal{F}_a)
\le c \sum_{i_1+\cdots+i_r=k} \bigl\{ E(\|D^{i_1}\gamma^{a,b}_{X_b}\|^{(r+1)p}_{a,H^{\otimes i_1}} \mid \mathcal{F}_a) \cdots E(\|D^{i_r}\gamma^{a,b}_{X_b}\|^{(r+1)p}_{a,H^{\otimes i_r}} \mid \mathcal{F}_a)
\times E(|(\gamma^{a,b}_{X_b})^{-1}|^{p(r+1)^2} \mid \mathcal{F}_a) \bigr\}^{1/(r+1)}.   (4.25)
We can obtain the following conditional version of inequality (4.19):
E(\|D^k(\gamma^{a,b}_{X_b})_{ij}\|^q_{a,H^{\otimes k}} \mid \mathcal{F}_a) = E\Bigl( \Bigl\| D^k \int_a^b \langle D_s X_b^i, D_s X_b^j\rangle\, ds \Bigr\|^q_{a,H^{\otimes k}} \,\Big|\, \mathcal{F}_a \Bigr)
= E\Bigl( \Bigl\| \sum_{m=0}^{k} \binom{k}{m} \int_a^b \langle D^m D_s X_b^i, D^{k-m} D_s X_b^j\rangle\, ds \Bigr\|^q_{a,H^{\otimes k}} \,\Big|\, \mathcal{F}_a \Bigr)
\le (k+1)^{q-1} \sum_{m=0}^{k} \binom{k}{m}^q E\Bigl( \Bigl\| \int_a^b \langle D^m D_s X_b^i, D^{k-m} D_s X_b^j\rangle\, ds \Bigr\|^q_{a,H^{\otimes k}} \,\Big|\, \mathcal{F}_a \Bigr)
\le c(b-a)^{q-1} (b-a)^{k(q/2-1)} \sum_{m=0}^{k} E\Bigl( \int_{[a,T]^{m+1}} |D_{s_1,\dots,s_{m+1}} X_b^i|^{2q}\, ds_1 \cdots ds_{m+1} \,\Big|\, \mathcal{F}_a \Bigr)^{1/2}
\times E\Bigl( \int_{[a,T]^{k-m+1}} |D_{s_1,\dots,s_{k-m+1}} X_b^j|^{2q}\, ds_1 \cdots ds_{k-m+1} \,\Big|\, \mathcal{F}_a \Bigr)^{1/2}
\le c(b-a)^q T^{(kq/2)-k-1} \sum_{m=0}^{k+1} E\Bigl( \int_{[a,T]^m} |D_{s_1,\dots,s_m} X_b|^{2q}\, ds_1 \cdots ds_m \,\Big|\, \mathcal{F}_a \Bigr).   (4.26)
Notice that we need q \ge 4. From (4.26), taking q = (r+1)p and using Remark 1 we
obtain
E(\|D^k \gamma^{a,b}_{X_b}\|^q_{a,H^{\otimes k}} \mid \mathcal{F}_a) \le (b-a)^q \sum_{m=0}^{k+1} \eta_{m,q}\, E\Bigl( \int_{[a,T]^{m+1}} |D_{s_1,\dots,s_m} u_s|^q\, ds_1 \cdots ds_m\, ds \,\Big|\, \mathcal{F}_a \Bigr),   (4.27)
where the constants \eta_{m,q} have the form \eta_{m,q} = C(\beta^2_{m-1,q} + \beta^1_{m,q}), with C depending on
k, q and T, and \beta^1_{m,q}, \beta^2_{m,q} the constants of Remark 1.
Using Remark 2 with exponent p(r+1)^2 we have that there exist constants
p'_r, a_{1,r}, a_{2,r}, b_{1,r}, b_{2,r} such that p(r+1)^2 < (p'_r-4)/4d and
E(|(\gamma^{a,b}_{X_b})^{-1}|^{p(r+1)^2} \mid \mathcal{F}_a) \le (b-a)^{-p(r+1)^2} \Bigl( a_{1,r} + a_{2,r}\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{p'_r}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)
\times \Bigl( b_{1,r} + b_{2,r}\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{q_r}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr),   (4.28)
where q_r = 4p(r+1)^2(d-1). Hence, from (4.25), (4.27) and (4.28) we obtain for any
1 \le j \le n
E(\|D^j((\gamma^{a,b}_{X_b})^{-1})\|^p_{a,H^{\otimes j}} \mid \mathcal{F}_a)
\le c(b-a)^{-p} \sum_{r=1}^{j} \Bigl( \sum_{m=0}^{j+1} \alpha_{r,m}\, E\Bigl( \int_{[a,T]^{m+1}} |D_{s_1,\dots,s_m} u_s|^{p(r+1)}\, ds_1 \cdots ds_m\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)^{r/(r+1)}
\times \Bigl( a_{1,r} + a_{2,r}\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{p'_r}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)^{1/(r+1)} \Bigl( b_{1,r} + b_{2,r}\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{4p(r+1)^2(d-1)}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)^{1/(r+1)}
\le c(b-a)^{-p} \sum_{r=1}^{n} \Bigl( 1 + a_{1,r} + a_{2,r}\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{p'_r}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)
\times \Bigl( 1 + b_{1,r} + b_{2,r}\, E\Bigl( \int_{[0,T]^2} |D_t u_s|^{4p(r+1)^2(d-1)}\, dt\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr)
\times \Bigl( 1 + \sum_{m=0}^{n+1} \alpha_{r,m}\, E\Bigl( \int_{[a,T]^{m+1}} |D_{s_1,\dots,s_m} u_s|^{p(r+1)}\, ds_1 \cdots ds_m\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr),   (4.29)
where \alpha_{r,m} = \eta_{m,p(r+1)} and for the last inequality we have used that x^\gamma \le 1 + x for all
\gamma < 1 and x \ge 0. From (4.29) and Remark 2 we obtain (4.23).
Lemma 14. Fix n \ge 1, p \ge 2. Suppose that u satisfies hypotheses (H2) and
(H1)_{n+m+1,p'} for all p' \ge 2. Then, for any multi-index \alpha \in \{1,\dots,d\}^m we have
\|H^{a,b}_\alpha(X_b, 1)\|^{\mathcal{F}_a}_{n,p} \le c(b-a)^{-m/2} Y^a_{n,p},   (4.30)
where
Y^a_{n,p} = \prod_{i=1}^{m} Z^i_{n,p} V^i_{n,p}   (4.31)
with Z^i_{n,p} = Z^a_{n+i,2^{i+1}p}, which is defined by (4.24), and V^i_{n,p} defined as
V^i_{n,p} = \kappa^i_0 + \sum_{j=1}^{n+i+1} \kappa^i_j\, E\Bigl( 1 + \int_{[a,b]^{j+1}} |D_{t_1,\dots,t_j} u_r|^{2^{i+1}p}\, dt_1 \cdots dt_j\, dr \,\Big|\, \mathcal{F}_a \Bigr)
for some constants \kappa^i_j depending on M, d, T, n and p.
Proof. In the same way as in Lemma 12 and using also Lemma 13, we can obtain
the following inequality:
\|H^{a,b}_\alpha(X_b, 1)\|^{\mathcal{F}_a}_{n,p} \le d^p \|H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, 1)\|^{\mathcal{F}_a}_{n+1,2p} \|(\gamma^{a,b}_{X_b})^{-1}\|^{\mathcal{F}_a}_{n+1,4p} \|DX_b 1_{[a,b]}\|^{\mathcal{F}_a}_{n+1,4p,H}
\le d^p (b-a)^{-1} Z^a_{n+1,4p} \|H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, 1)\|^{\mathcal{F}_a}_{n+1,2p} \|DX_b 1_{[a,b]}\|^{\mathcal{F}_a}_{n+1,4p,H},   (4.32)
where Z^a_{n+1,4p} is the random variable defined in Lemma 13. On the other hand, we
have
\|DX_b 1_{[a,b]}\|^{\mathcal{F}_a}_{n+1,4p,H} = \Bigl( E\Bigl[ \Bigl( \int_a^b |D_s X_b|^2\, ds \Bigr)^{2p} \,\Big|\, \mathcal{F}_a \Bigr] + \sum_{j=2}^{n+2} E(\|D^j X_b\|^{4p}_{a,H^{\otimes j}} \mid \mathcal{F}_a) \Bigr)^{1/4p}.
For the first term, using inequality (4.17) with the exponent 4p yields
E\Bigl[ \Bigl( \int_a^b |D_s X_b|^2\, ds \Bigr)^{2p} \,\Big|\, \mathcal{F}_a \Bigr] \le 2^{4p-1} (b-a)^{2p} \Bigl\{ M^{4p} + b_{4p} T^{2p-2}\, E\Bigl( \int_a^b \int_a^b |D_s u_\theta|^{4p}\, d\theta\, ds \,\Big|\, \mathcal{F}_a \Bigr) \Bigr\}.   (4.33)
For the other terms we make use of Remark 1. Then, for all 2 \le j \le n+2 we get
E(\|D^j X_b\|^{4p}_{a,H^{\otimes j}} \mid \mathcal{F}_a) \le E\Bigl( \Bigl( \int_{[a,b]^j} |D_{s_1,\dots,s_j} X_b|^2\, ds_1 \cdots ds_j \Bigr)^{2p} \,\Big|\, \mathcal{F}_a \Bigr)
\le (b-a)^{j(2p-1)} E\Bigl( \int_{[a,b]^j} |D_{s_1,\dots,s_j} X_b|^{4p}\, ds_1 \cdots ds_j \,\Big|\, \mathcal{F}_a \Bigr)
\le (b-a)^{2p} T^{j(2p-1)-2p} \Bigl\{ \beta^1_{j,4p}\, E\Bigl( \int_{[a,T]^j} |D_{t_1,\dots,t_{j-1}} u_{t_j}|^{4p}\, dt_1 \cdots dt_j \,\Big|\, \mathcal{F}_a \Bigr)
+ \beta^2_{j,4p}\, E\Bigl( \int_{[a,T]^{j+1}} |D_{t_1,\dots,t_j} u_r|^{4p}\, dt_1 \cdots dt_j\, dr \,\Big|\, \mathcal{F}_a \Bigr) \Bigr\}.   (4.34)
Hence, from (4.33) and (4.34) we obtain
\|DX_b 1_{[a,b]}\|^{\mathcal{F}_a}_{n+1,4p,H} \le (b-a)^{1/2} V^1_{n,p},
where
V^1_{n,p} = \kappa^1_0 + \sum_{j=1}^{n+2} \kappa^1_j\, E\Bigl( 1 + \int_{[a,b]^{j+1}} |D_{t_1,\dots,t_j} u_r|^{4p}\, dt_1 \cdots dt_j\, dr \,\Big|\, \mathcal{F}_a \Bigr)   (4.35)
for some constants \kappa^1_j depending on T, j, M and p. Then, from (4.32) we have
\|H^{a,b}_\alpha(X_b, 1)\|^{\mathcal{F}_a}_{n,p} \le d^p (b-a)^{-1/2} Z^a_{n+1,4p} V^1_{n,p} \|H^{a,b}_{(\alpha_1,\dots,\alpha_{m-1})}(X_b, 1)\|^{\mathcal{F}_a}_{n+1,2p}.
Applying recurrently the last inequality we obtain (4.30).
Remark 3. Notice that if u satisfies (H1)_{n+m+d+1,p'} for all p' \ge 1, then by the
properties of the derivative operator we have that Y^a_{k,p} \in D^{d,p'} for all p' \ge 2.
5. Existence of quadratic covariation and Itô's formula for Brownian martingales

Let u = (u^{i,j})_{1\le i,j\le d} be a matrix of adapted processes u^{i,j} = \{u_t^{i,j}, t \in [0,T]\} such
that E\int_0^T |u_s|^2\, ds < \infty. Set X_t^k = \sum_{i=1}^{d} \int_0^t u_s^{k,i}\, dW_s^i.
We will assume henceforth that u satisfies hypothesis (H1)_{n,p} for all p \ge 2 and all
0 \le n \le 2d+1. We will call that hypothesis (H1). We will also suppose henceforth that
u satisfies (H2) and (H3).
Consider a partition \pi = \{0 = t_0 < t_1 < \cdots < t_{n+1} = t\} of the interval [0,t] for some
fixed t \in [0,T] satisfying
L := \sup_{0 \le i \le n} \frac{t_{i+1}}{t_i} < \infty.   (5.1)
Set \Delta_i X^k = X^k_{t_{i+1}} - X^k_{t_i}, for 0 \le i \le n. The main results of this section are the
following estimates, which, as we have seen before, imply the existence of the quadratic
covariation and the Itô formula for the process X.
We will denote by c and c_p general constants which may change throughout this
section.
Lemma 15. There exists a constant c such that for any function f \in L^p(R^d) for some
p > 2 and G \in D^{d,2^{d+1}} we have
E(f(X_t)^2 G) \le c t^{-d/p} \|f\|_p^2 \|G\|_{d,2^{d+1}}.   (5.2)

Proof. By Corollary 2 with a = 0 and b = t, we have
E(f(X_t)^2 G) = \int_{R^d} f(x)^2\, E(H^{0,t}_{(1,\dots,d)}(X_t, G)\, 1_{\{X_t > x\}})\, dx
\le \int_{R^d} f(x)^2\, (E(H^{0,t}_{(1,\dots,d)}(X_t, G))^2)^{1/2}\, (E 1_{\{X_t > x\}})^{1/2}\, dx.
Applying Lemma 12 with a = 0, b = t, n = 0 and p = 2 yields
E(H^{0,t}_{(1,\dots,d)}(X_t, G)^2)^{1/2} \le c t^{-d/2} \|G\|_{d,2^{d+1}}.   (5.3)
On the other hand, using the exponential inequality for martingales and Hölder's
inequality we have that
E(1_{\{X_t > x\}}) \le \prod_{k=1}^{d} P(X_t^k > x_k)^{1/d} \le \prod_{k=1}^{d} e^{-(x^k)^2/2tM^2 d} = e^{-\|x\|^2/2tM^2 d},   (5.4)
where M is the constant of hypothesis (H3). Then, from (5.3) and (5.4) we obtain
E(f(X_t)^2 G) \le c t^{-d/2} \|G\|_{d,2^{d+1}} \int_{R^d} f(x)^2 e^{-\|x\|^2/2tM^2 d}\, dx
\le c t^{-d/2} \|G\|_{d,2^{d+1}} \|f\|_p^2 \Bigl( \int_{R^d} e^{-\|x\|^2 q/2tM^2 d}\, dx \Bigr)^{1/q}
\le c \Bigl( \frac{2M^2 d}{q} \Bigr)^{d/2q} t^{-d/2+d/2q} \|G\|_{d,2^{d+1}} \|f\|_p^2,   (5.5)
where q is such that (2/p) + (1/q) = 1, and as a consequence (5.2) holds.
Corollary 16. There exists a constant c such that for any function f \in L^p(R^d) with
p > d we have
E \int_0^T f(X_s)^2 |u_s^k|^2\, ds \le c \|f\|_p^2.   (5.6)

Proof. Applying Lemma 15 with G = |u_s^k|^2 yields
E \int_0^T f(X_s)^2 |u_s^k|^2\, ds \le c \|f\|_p^2 \int_0^T s^{-d/p} \||u_s^k|^2\|_{d,2^{d+1}}\, ds
\le c \|f\|_p^2 \sup_s \||u_s^k|^2\|_{d,2^{d+1}} \int_0^T s^{-d/p}\, ds,
and \int_0^T s^{-d/p}\, ds < \infty since p > d. It only remains to prove that
\sup_s \||u_s^k|^2\|_{d,2^{d+1}} < \infty. For any 1 \le j \le d we have
E\|D^j(|u_s^k|^2)\|^p_{H^{\otimes j}} = E\Bigl\| 2 \sum_{i=1}^{d} \sum_{m=0}^{j-1} \binom{j-1}{m} D^m u_s^{k,i} \otimes D^{j-m} u_s^{k,i} \Bigr\|^p_{H^{\otimes j}}
\le c \sum_{i=1}^{d} \sum_{m=0}^{j-1} E(\|D^m u_s^{k,i}\|^{2p}_{H^{\otimes m}})^{1/2} E(\|D^{j-m} u_s^{k,i}\|^{2p}_{H^{\otimes (j-m)}})^{1/2} \le c' K_{d,2p},
and that completes the proof.
Proposition 17. There exists a constant c such that for any function f \in L^p(R^d) with
p > d we have
E\Bigl| \sum_{i=0}^{n} f(X_{t_i}) \Delta_i X^k \Bigr|^2 \le c \|f\|_p^2.   (5.7)

Proof. By the isometry of the Itô stochastic integral we have
E\Bigl| \sum_{i=0}^{n} f(X_{t_i}) \Delta_i X^k \Bigr|^2 = E\Bigl( \sum_{i=0}^{n} f(X_{t_i})^2 \int_{t_i}^{t_{i+1}} |u_s^k|^2\, ds \Bigr).
Then using Lemma 15 with G = \int_{t_i}^{t_{i+1}} |u_s^k|^2\, ds yields
E\Bigl( \sum_{i=0}^{n} f(X_{t_i})^2 \int_{t_i}^{t_{i+1}} |u_s^k|^2\, ds \Bigr) \le c \|f\|_p^2 \sum_{i=0}^{n} t_i^{-d/p} \Bigl\| \int_{t_i}^{t_{i+1}} |u_s^k|^2\, ds \Bigr\|_{d,2^{d+1}}.   (5.8)
In order to estimate the last factor, we make use of Lemma 9 with x = y = u^k,
a = t_i and b = t_{i+1}, and we get
\Bigl\| \int_{t_i}^{t_{i+1}} |u_s^k|^2\, ds \Bigr\|_{d,2^{d+1}} \le c'(t_{i+1} - t_i).   (5.9)
Finally, from (5.8) and (5.9) we obtain
E\Bigl| \sum_{i=0}^{n} f(X_{t_i}) \Delta_i X^k \Bigr|^2 \le c c' \|f\|_p^2 \sum_{i=0}^{n} t_i^{-d/p} (t_{i+1} - t_i) \le c c' L^{d/p} \|f\|_p^2 \int_0^t s^{-d/p}\, ds \le c'' \|f\|_p^2,
where L is the constant appearing in condition (5.1).
Proposition 18. There exists a constant c such that for any function f \in C_K^\infty(R^d) we
have
E\Bigl| \sum_{i=0}^{n} f(X_{t_{i+1}}) \Delta_i X^k \Bigr|^2 \le c \|f\|_p^2.   (5.10)

Proof. In order to prove the proposition we will establish the following inequalities:
S_1^k := E \sum_{i=0}^{n} f(X_{t_{i+1}})^2 (\Delta_i X^k)^2 \le c_1 \|f\|_p^2,   (5.11)
S_2^k := E \sum_{i<j} f(X_{t_{i+1}}) f(X_{t_{j+1}}) \Delta_i X^k \Delta_j X^k \le c_2 \|f\|_p^2.   (5.12)

Proof of (5.11). Using Lemma 15 with G = (\Delta_i X^k)^2 and t = t_{i+1} we have
S_1^k \le c \|f\|_p^2 \sum_{i=0}^{n} t_{i+1}^{-d/p} \|(\Delta_i X^k)^2\|_{d,2^{d+1}}.   (5.13)
Using Hölder's inequality for the \|\cdot\|_{k,p}-norms and Lemma 8 with p = 2^{d+2} and n = d
we obtain
\|(\Delta_i X^k)^2\|_{d,2^{d+1}} \le \|\Delta_i X^k\|^2_{d,2^{d+2}} \le c(t_{i+1} - t_i)
and hence, from (5.13) we get
S_1^k \le c \|f\|_p^2 \sum_{i=0}^{n} (t_{i+1} - t_i)\, t_{i+1}^{-d/p} \le c \|f\|_p^2 \int_0^t s^{-d/p}\, ds \le c' \|f\|_p^2.
Proof of (5.12). Our objective is to transform the martingale increments \Delta_i X^k and
\Delta_j X^k into terms which involve only Lebesgue integrals. More precisely, if S_{ij}^k :=
E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) \Delta_i X^k \Delta_j X^k), we derive an equality of the form
S_{ij}^k = E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) C_{ij}^k),
where
\|C_{ij}^k\|_2 \le c\, \frac{(t_{i+1} - t_i)(t_{j+1} - t_j)}{\sqrt{t_{i+1}(t_{j+1} - t_{i+1})}}.   (5.14)
Using the duality relationship between the derivative operator D and the Itô stochastic
integral we can write for i < j
S_{ij}^k = E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) \Delta_i X^k \Delta_j X^k)
= E\Bigl( f(X_{t_{i+1}}) f(X_{t_{j+1}}) \Delta_i X^k \int_{t_j}^{t_{j+1}} u_t^k\, dW_t \Bigr)
= E\Bigl( \int_{t_j}^{t_{j+1}} \sum_{l=1}^{d} u_t^{k,l} D_t^{(l)}(f(X_{t_{i+1}}) f(X_{t_{j+1}}) \Delta_i X^k)\, dt \Bigr)
= \sum_{m=1}^{d} E\Bigl( f(X_{t_{i+1}}) \Delta_i X^k (\partial_m f)(X_{t_{j+1}}) \sum_{l=1}^{d} \int_{t_j}^{t_{j+1}} u_t^{k,l} D_t^{(l)} X_{t_{j+1}}^m\, dt \Bigr)
= \sum_{m=1}^{d} E(f(X_{t_{i+1}}) \Delta_i X^k (\partial_m f)(X_{t_{j+1}}) (\nabla_j^{u^k} X_{t_{j+1}}^m)),
where for any random variable F we write
\nabla_j^{u^k} F = \sum_{l=1}^{d} \int_{t_j}^{t_{j+1}} u_t^{k,l} D_t^{(l)} F\, dt.
We now apply Proposition 1 to Y = X_{t_{j+1}} and to Z = f(X_{t_{i+1}}) \Delta_i X^k \nabla_j^{u^k} X_{t_{j+1}}^m
and to the interval [a,b] = [t_{i+1}, t_{j+1}] in order to get rid of the partial derivatives
of f. Of course, new derivatives will appear from the Skorohod integral
H^{t_{i+1},t_{j+1}}_{(m)}(X_{t_{j+1}}, f(X_{t_{i+1}}) \Delta_i X^k \nabla_j^{u^k} X_{t_{j+1}}^m) and a further analysis will be necessary. Then,
Proposition 1 yields
E(f(X_{t_{i+1}}) \Delta_i X^k (\partial_m f)(X_{t_{j+1}}) (\nabla_j^{u^k} X_{t_{j+1}}^m))
= E(f(X_{t_{j+1}}) H^{t_{i+1},t_{j+1}}_{(m)}(X_{t_{j+1}}, f(X_{t_{i+1}}) \Delta_i X^k (\nabla_j^{u^k} X_{t_{j+1}}^m)))
= E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) \Delta_i X^k B_{ij}^{k,m}),
where
B_{ij}^{k,m} = H^{t_{i+1},t_{j+1}}_{(m)}(X_{t_{j+1}}, \nabla_j^{u^k} X_{t_{j+1}}^m).
Applying again the duality relationship to the increment \Delta_i X^k we obtain
E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) \Delta_i X^k B_{ij}^{k,m})
= E\Bigl( \int_{t_i}^{t_{i+1}} \sum_{l=1}^{d} u_t^{k,l} D_t^{(l)}(f(X_{t_{i+1}}) f(X_{t_{j+1}}) B_{ij}^{k,m})\, dt \Bigr)
= \sum_{n=1}^{d} E((\partial_n f)(X_{t_{i+1}}) f(X_{t_{j+1}}) B_{ij}^{k,m} (\nabla_i^{u^k} X_{t_{i+1}}^n))
+ \sum_{n=1}^{d} E(f(X_{t_{i+1}}) (\partial_n f)(X_{t_{j+1}}) B_{ij}^{k,m} (\nabla_i^{u^k} X_{t_{j+1}}^n))
+ E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) (\nabla_i^{u^k} B_{ij}^{k,m})).   (5.15)
Notice that we still have twice the derivative of the function f that must be
eliminated. In order to do this, we write
D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}})) = f(X_{t_{j+1}}) \sum_{n=1}^{d} (\partial_n f)(X_{t_{i+1}}) D_t X_{t_{i+1}}^n
+ f(X_{t_{i+1}}) \sum_{n=1}^{d} (\partial_n f)(X_{t_{j+1}}) D_t X_{t_{j+1}}^n.   (5.16)
Multiplying both members of (5.16) by D_t X_{t_{j+1}}^m and integrating on the interval
[t_{i+1}, t_{j+1}] yields
\int_{t_{i+1}}^{t_{j+1}} \langle D_t X_{t_{j+1}}^m, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt = f(X_{t_{i+1}}) \sum_{n=1}^{d} (\partial_n f)(X_{t_{j+1}}) (\gamma^{t_{i+1},t_{j+1}}_{X_{t_{j+1}}})_{mn}
and as a consequence
f(X_{t_{i+1}}) (\nabla f)(X_{t_{j+1}}) = (\gamma^{t_{i+1},t_{j+1}}_{X_{t_{j+1}}})^{-1} \int_{t_{i+1}}^{t_{j+1}} \langle D_t X_{t_{j+1}}, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt,   (5.17)
where \nabla f = (\partial_1 f, \dots, \partial_d f)'. Multiplying both members of (5.16) by D_t X_{t_{i+1}}^m
and integrating on the interval [0, t_{i+1}] yields
\int_0^{t_{i+1}} \langle D_t X_{t_{i+1}}^m, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt
= f(X_{t_{j+1}}) \sum_{n=1}^{d} (\partial_n f)(X_{t_{i+1}}) (\gamma_{X_{t_{i+1}}})_{mn} + f(X_{t_{i+1}}) \sum_{n=1}^{d} (\partial_n f)(X_{t_{j+1}}) \int_0^{t_{i+1}} \langle D_t X_{t_{i+1}}^m, D_t X_{t_{j+1}}^n\rangle\, dt
and as a consequence
(\nabla f)(X_{t_{i+1}}) f(X_{t_{j+1}}) = \gamma_{X_{t_{i+1}}}^{-1} \Bigl( \int_0^{t_{i+1}} \langle D_t X_{t_{i+1}}, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt - f(X_{t_{i+1}}) \Gamma_{ij} (\nabla f)(X_{t_{j+1}}) \Bigr),   (5.18)
where the matrix \Gamma_{ij} is defined by
(\Gamma_{ij})_{mn} = \int_0^{t_{i+1}} \langle D_t X_{t_{i+1}}^m, D_t X_{t_{j+1}}^n\rangle\, dt.
Substituting (5.17) into (5.18) we get
(\nabla f)(X_{t_{i+1}}) f(X_{t_{j+1}}) = \gamma_{X_{t_{i+1}}}^{-1} \Bigl( \int_0^{t_{i+1}} \langle D_t X_{t_{i+1}}, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt
- \Gamma_{ij} (\gamma^{t_{i+1},t_{j+1}}_{X_{t_{j+1}}})^{-1} \int_{t_{i+1}}^{t_{j+1}} \langle D_t X_{t_{j+1}}, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt \Bigr).   (5.19)
From (5.15) we have
S_{ij}^k = \sum_{m=1}^{d} \Bigl\{ E((\nabla f)'(X_{t_{i+1}}) (\nabla_i^{u^k} X_{t_{i+1}}) f(X_{t_{j+1}}) B_{ij}^{k,m})
+ E((\nabla f)'(X_{t_{j+1}}) (\nabla_i^{u^k} X_{t_{j+1}}) f(X_{t_{i+1}}) B_{ij}^{k,m})
+ E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) (\nabla_i^{u^k} B_{ij}^{k,m})) \Bigr\},   (5.20)
where \nabla_i^{u^k} X_{t_{j+1}} = (\nabla_i^{u^k} X_{t_{j+1}}^1, \dots, \nabla_i^{u^k} X_{t_{j+1}}^d) and \nabla_i^{u^k} X_{t_{i+1}} = (\nabla_i^{u^k} X_{t_{i+1}}^1, \dots, \nabla_i^{u^k} X_{t_{i+1}}^d).
Now substituting (5.19) and (5.17) into (5.20) we obtain
S_{ij}^k = \sum_{m=1}^{d} E\Bigl\{ \Bigl( \int_{t_{i+1}}^{t_{j+1}} \langle D_t X_{t_{j+1}}, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt \Bigr)' (\gamma^{t_{i+1},t_{j+1}}_{X_{t_{j+1}}})^{-1} G_{ij}^k B_{ij}^{k,m} \Bigr\}
+ \sum_{m=1}^{d} E\Bigl\{ \Bigl( \int_0^{t_{i+1}} \langle D_t X_{t_{i+1}}, D_t(f(X_{t_{i+1}}) f(X_{t_{j+1}}))\rangle\, dt \Bigr)' \gamma_{X_{t_{i+1}}}^{-1} (\nabla_i^{u^k} X_{t_{i+1}}) B_{ij}^{k,m} \Bigr\}
+ \sum_{m=1}^{d} E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) (\nabla_i^{u^k} B_{ij}^{k,m})),
where
G_{ij}^k = \nabla_i^{u^k} X_{t_{j+1}} - \Gamma_{ij}' \gamma_{X_{t_{i+1}}}^{-1} \nabla_i^{u^k} X_{t_{i+1}} = \nabla_i^{u^k}(X_{t_{j+1}} - X_{t_{i+1}}) - \Lambda_{ij} \gamma_{X_{t_{i+1}}}^{-1} \nabla_i^{u^k} X_{t_{i+1}}   (5.21)
and \Lambda_{ij} is the matrix defined as
(\Lambda_{ij})_{mn} = \int_0^{t_{i+1}} \langle D_t(X_{t_{j+1}}^m - X_{t_{i+1}}^m), D_t X_{t_{i+1}}^n\rangle\, dt.   (5.22)
Applying the duality relationship we obtain
S_{ij}^k = E(f(X_{t_{i+1}}) f(X_{t_{j+1}}) C_{ij}^k),
where
C_{ij}^k = \sum_{m,n=1}^{d} \bigl\{ H^{t_{i+1},t_{j+1}}_{(n)}(X_{t_{j+1}}, G_{ij}^{k,n} B_{ij}^{k,m}) + H^{0,t_{i+1}}_{(n)}(X_{t_{i+1}}, \nabla_i^{u^k} X_{t_{i+1}}^n B_{ij}^{k,m}) + \nabla_i^{u^k} B_{ij}^{k,m} \bigr\}.   (5.23)
Let us prove that the terms C_{ij}^k satisfy condition (5.14). Applying Lemma 12 with
n = 0, p = 2 and m = 1 yields
E|C_{ij}^k|^2 \le c_1 \sum_{m,n=1}^{d} \bigl\{ (t_{j+1} - t_{i+1})^{-1} \|B_{ij}^{k,m}\|^2_{1,8} \|G_{ij}^{k,n}\|^2_{1,8}
+ t_{i+1}^{-1} \|B_{ij}^{k,m}\|^2_{1,8} \|\nabla_i^{u^k} X_{t_{i+1}}^n\|^2_{1,8} + E|\nabla_i^{u^k} B_{ij}^{k,m}|^2 \bigr\}.   (5.24)
Then, we will make use of