
Stochastic Processes and their Applications 87 (2000) 167–197
www.elsevier.com/locate/spa

Weak approximation of killed diffusion using Euler schemes
Emmanuel Gobet
Laboratoire de Probabilités et Modèles Aléatoires, Case 7012, Université Paris VII, 2 place Jussieu,
75251 Paris Cedex 05, France
Received 12 March 1999; received in revised form 6 November 1999

Abstract
We study the weak approximation of a multidimensional diffusion $(X_t)_{0\le t\le T}$ killed as it leaves an open set $D$, when the diffusion is approximated by its continuous Euler scheme $(\tilde X_t)_{0\le t\le T}$ or by its discrete one $(\tilde X_{t_i})_{0\le i\le N}$, with discretization step $T/N$. If we set $\tau := \inf\{t > 0: X_t \notin D\}$ and $\tilde\tau_c := \inf\{t > 0: \tilde X_t \notin D\}$, we prove that the discretization error $E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)] - E_x[\mathbb{1}_{T<\tau} f(X_T)]$ can be expanded to the first order in $N^{-1}$, provided support or regularity conditions on $f$. For the discrete scheme, if we set $\tilde\tau_d := \inf\{t_i > 0: \tilde X_{t_i} \notin D\}$, the error $E_x[\mathbb{1}_{T<\tilde\tau_d} f(\tilde X_T)] - E_x[\mathbb{1}_{T<\tau} f(X_T)]$ is of order $N^{-1/2}$, under analogous assumptions on $f$. This rate of convergence is actually exact and intrinsic to the problem of discrete killing time. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Weak approximation; Killed diffusion; Euler scheme; Error's expansion; Malliavin calculus; Itô's formula; Orthogonal projection; Local time on the boundary

1. Introduction
Let $(X_t)_{t\ge 0}$ be the diffusion taking its values in $\mathbb{R}^d$ defined by

$$X_t = x + \int_0^t B(X_s)\,ds + \int_0^t \sigma(X_s)\,dW_s, \qquad (1)$$

where $(W_t)_{t\ge 0}$ is a Brownian motion in $\mathbb{R}^d$. Let $\tau := \inf\{t > 0: X_t \notin D\}$ be its first exit time from the open set $D \subset \mathbb{R}^d$. We are interested in computing $E_x[\mathbb{1}_{T<\tau} f(X_T)]$, where $T$ is a fixed time and $f$ a measurable function, using a Monte-Carlo method. In other words, we focus on the law at time $T$ of the diffusion killed when it leaves $D$. The results presented in this paper were announced without proofs in Gobet (1998b, 1999).
It is of interest to know how to evaluate such expectations, e.g. in financial mathematics. Indeed, let us consider a continuous monitored barrier option on the $d$-dimensional assets $X_t$, with characteristics $f$, $T$ and $D$: it is a contract which gives to its owner the cash flow $f(X_T)$ at time $T$ if the prices have stayed in $D$ between 0 and $T$ (the
E-mail address: gobet@ccr.jussieu.fr (E. Gobet)
0304-4149/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0304-4149(99)00109-X


option remains active) and 0 otherwise. When the market is complete, the price of this option is unique and is given by the expectation under the risk-neutral probability of the discounted cash flow at time $T$: it leads to the computation of $E_x[\mathbb{1}_{T<\tau} f(X_T)]$ (see Musiela and Rutkowski, 1998). The results we prove in this paper also enable us to approximate a continuous monitored barrier option by a discrete monitored one, and conversely (see Broadie et al., 1996).
Our approach is to evaluate $E_x[\mathbb{1}_{T<\tau} f(X_T)]$ with a Monte-Carlo algorithm. Whenever this expectation can be viewed as a solution of a parabolic partial differential equation, we might prefer a Monte-Carlo method to a deterministic algorithm issued from numerical analysis if the dimension $d$ is large ($d \ge 4$), if the operator is degenerate, or if we need to compute $E_x[\mathbb{1}_{T<\tau} f(X_T)]$ only for a few $x$ and $T$ (see the discussion in Lapeyre et al., 1998).
To evaluate the expectation of the functional of the diffusion, the simplest way to approximate the process is to use its discrete Euler scheme $(\tilde X_{t_i})_{0\le i\le N}$ with discretization step $T/N$, defined, if $t_i = iT/N$ is the $i$th discretization time, by

$$\tilde X_0 = x, \qquad \tilde X_{t_{i+1}} = \tilde X_{t_i} + B(\tilde X_{t_i})\,T/N + \sigma(\tilde X_{t_i})(W_{t_{i+1}} - W_{t_i}). \qquad (2)$$

Let $\tilde\tau_d := \inf\{t_i: \tilde X_{t_i} \notin D\}$ be its first exit time from $D$. We study in this paper the discretization error obtained by replacing $E_x[\mathbb{1}_{T<\tau} f(X_T)]$ by $E_x[\mathbb{1}_{T<\tilde\tau_d} f(\tilde X_T)]$.
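As a concrete illustration (not from the paper), the recursion (2) together with the killed Monte-Carlo estimator discussed below can be sketched in a few lines of NumPy; the function name and the vectorized layout are our own choices.

```python
import numpy as np

def mc_killed_discrete(x0, b, sigma, f, in_D, T, N, M, rng):
    """Monte-Carlo estimate of E_x[1_{T < tau_d} f(X_T)]: simulate M copies of
    the discrete Euler scheme (2) and kill a path at the first grid time t_i
    at which it is found outside D."""
    d = len(x0)
    h = T / N
    X = np.tile(np.asarray(x0, dtype=float), (M, 1))
    alive = np.ones(M, dtype=bool)          # indicator 1_{t_i < tau_d} so far
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(h), size=(M, d))
        X = X + b(X) * h + np.einsum('mij,mj->mi', sigma(X), dW)
        alive &= in_D(X)                    # kill paths detected outside D
    return np.mean(np.where(alive, f(X), 0.0))
```

For instance, a one-dimensional Brownian motion ($B \equiv 0$, $\sigma \equiv 1$) killed outside $D = (-\infty, 1)$ with $f \equiv 1$ gives an estimate of $P_x(T < \tilde\tau_d)$, up to the Monte-Carlo error of order $M^{-1/2}$.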
A more sophisticated procedure consists in interpolating the previous discrete-time process (2) into a continuous Euler scheme $(\tilde X_t)_{0\le t\le T}$ by setting

$$\text{for } t \in [t_i, t_{i+1}): \quad \tilde X_t = \tilde X_{t_i} + B(\tilde X_{t_i})(t - t_i) + \sigma(\tilde X_{t_i})(W_t - W_{t_i}). \qquad (3)$$

Note that the continuous Euler scheme is an Itô process verifying

$$\tilde X_t = x + \int_0^t B(\tilde X_{\varphi(s)})\,ds + \int_0^t \sigma(\tilde X_{\varphi(s)})\,dW_s, \qquad (4)$$

where $\varphi(t) := \sup\{t_i: t_i \le t\}$. Let $\tilde\tau_c := \inf\{t: \tilde X_t \notin D\}$ be its first exit time from $D$. We are also interested in studying the approximation of $E_x[\mathbb{1}_{T<\tau} f(X_T)]$ by $E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)]$.
1.1. Monte-Carlo simulations

From the simulation point of view, the evaluation of $E_x[\mathbb{1}_{T<\tilde\tau_d} f(\tilde X_T)]$ by a Monte-Carlo method works as follows: if $(Y^d_m)_{m\ge 1}$ is a sequence of independent copies of the random variable $Y^d := \mathbb{1}_{T<\tilde\tau_d} f(\tilde X_T)$, we approximate $E_x[\mathbb{1}_{T<\tilde\tau_d} f(\tilde X_T)]$ by $(1/M)\sum_{m=1}^M Y^d_m$, for $M$ large enough. The simulation of $Y^d$ is straightforward whatever the dimension $d$ is, because we only need realizations of $(\tilde X_{t_i})_{0\le i\le N}$, which can easily be obtained using the simulation of $N$ independent Gaussian variables for the increments $(W_{t_{i+1}} - W_{t_i})_{0\le i\le N-1}$. For the continuous Euler scheme, the simulation of $Y^c := \mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)$ requires an additional step, because the process $(\tilde X_t)_{t_i\le t\le t_{i+1}}$ may have left $D$ even if $\tilde X_{t_i} \in D$ and $\tilde X_{t_{i+1}} \in D$. We first obtain realizations of $(\tilde X_{t_i})_{0\le i\le N}$ as before. Then, conditionally on the values $(\tilde X_{t_i})_{1\le i\le N}$, $(\tilde X_t)_{t_j\le t\le t_{j+1}}$ has the law of some Brownian bridge. Using $N$ extra independent Bernoulli variables, this enables us to simulate whether $(\tilde X_t)_{0\le t\le T}$ has left $D$ between two discretization times or not. Each parameter involved in the simulation of the Bernoulli variables is related to the quantity

$$P(\forall t \in [t_i, t_{i+1}]\ \ \tilde X_t \in D \mid \tilde X_{t_i} = z_1,\ \tilde X_{t_{i+1}} = z_2) := p(z_1, z_2, T/N).$$

In the one-dimensional case, $p(z_1, z_2, \Delta)$ is expressed through the joint law of the infimum and supremum of a linear Brownian bridge and has a simple expression (see Revuz and Yor, 1991, p. 105):

1. if $D = (-\infty, b)$, we have
$$p(z_1, z_2, \Delta) = \mathbb{1}_{b>z_1}\,\mathbb{1}_{b>z_2}\left(1 - \exp\left(-2\,\frac{(b - z_1)(b - z_2)}{\Delta\,\sigma^2(z_1)}\right)\right);$$

2. if $D = (a, +\infty)$, we have
$$p(z_1, z_2, \Delta) = \mathbb{1}_{z_1>a}\,\mathbb{1}_{z_2>a}\left(1 - \exp\left(-2\,\frac{(a - z_1)(a - z_2)}{\Delta\,\sigma^2(z_1)}\right)\right);$$

3. if $D = (a, b)$, we have
$$p(z_1, z_2, \Delta) = \mathbb{1}_{b>z_1>a}\,\mathbb{1}_{b>z_2>a} \sum_{k=-\infty}^{+\infty} \left(\exp\left(-2\,\frac{k(b-a)(k(b-a)+z_2-z_1)}{\Delta\,\sigma^2(z_1)}\right) - \exp\left(-2\,\frac{(k(b-a)+z_1-b)(k(b-a)+z_2-b)}{\Delta\,\sigma^2(z_1)}\right)\right).$$

For higher dimension, in the case of a half-space, $p(z_1, z_2, \Delta)$ also has a simple expression (see Lepingle, 1993). But for more general domains, as far as we know, there are no tractable expressions for $p(z_1, z_2, \Delta)$. Nevertheless, the probability $p(z_1, z_2, \Delta)$ can be accurately approximated using an asymptotic expansion in $\Delta$ (see Baldi, 1995): this may be an appropriate way to evaluate $p(z_1, z_2, \Delta)$. So, in short, the discrete Euler scheme is very easy to implement for any dimension $d \ge 1$, whereas for the continuous Euler scheme the simulation is simple in the one-dimensional case and more delicate in higher dimension.
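To make the three one-dimensional formulas concrete, here is a minimal Python transcription (our own sketch; `sig2` stands for $\sigma^2(z_1)$ and `kmax` truncates the doubly infinite series, whose terms decay very fast). A path of the continuous scheme is then kept alive on $[t_i, t_{i+1}]$ by drawing a Bernoulli variable with parameter $p(z_1, z_2, T/N)$.

```python
import math

def p_upper(z1, z2, delta, sig2, b):
    """Probability that the Brownian bridge from z1 to z2 over a time step
    delta (variance coefficient sig2 = sigma^2(z1)) stays in D = (-inf, b)."""
    if z1 >= b or z2 >= b:
        return 0.0
    return 1.0 - math.exp(-2.0 * (b - z1) * (b - z2) / (delta * sig2))

def p_lower(z1, z2, delta, sig2, a):
    """Same for D = (a, +inf)."""
    if z1 <= a or z2 <= a:
        return 0.0
    return 1.0 - math.exp(-2.0 * (a - z1) * (a - z2) / (delta * sig2))

def p_interval(z1, z2, delta, sig2, a, b, kmax=50):
    """Same for D = (a, b), truncating the series at |k| <= kmax."""
    if not (a < z1 < b and a < z2 < b):
        return 0.0
    L, s = b - a, delta * sig2
    total = 0.0
    for k in range(-kmax, kmax + 1):
        total += math.exp(-2.0 * k * L * (k * L + z2 - z1) / s) \
               - math.exp(-2.0 * (k * L + z1 - b) * (k * L + z2 - b) / s)
    return total
```

A useful consistency check: sending the lower barrier $a$ far away in `p_interval` recovers the one-sided formula `p_upper`, since only the $k = 0$ term of the series survives.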
1.2. Convergence results

Now, our main objective is to analyze the two errors

$$E_c(f, T, x, N) := E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)] - E_x[\mathbb{1}_{T<\tau} f(X_T)] \qquad (5)$$

and

$$E_d(f, T, x, N) := E_x[\mathbb{1}_{T<\tilde\tau_d} f(\tilde X_T)] - E_x[\mathbb{1}_{T<\tau} f(X_T)] \qquad (6)$$

as functions of $N$, the number of discretization steps. We first state an easy result, which shows that both errors tend to 0 when $N$ goes to infinity under mild assumptions.


Proposition 1.1. Assume that $B$ and $\sigma$ are globally Lipschitz functions, and that $D$ is defined by $D = \{x \in \mathbb{R}^d: F(x) > 0\}$, $\partial D = \{x \in \mathbb{R}^d: F(x) = 0\}$ for some globally Lipschitz function $F$. Provided that condition (C) below is satisfied,

(C): $\quad P_x(\exists t \in [0, T]\ X_t \notin D;\ \forall t \in [0, T]\ X_t \in \bar D) = 0,$

for all functions $f \in C_b^0(\bar D; \mathbb{R})$ we have

$$\lim_{N\to+\infty} E_c(f, T, x, N) = \lim_{N\to+\infty} E_d(f, T, x, N) = 0.$$

Remark 1.1. Condition (C) rules out the pathological situation where the paths may reach $\partial D$ without leaving $\bar D$. A simple example of non-convergence in this situation is the following: take $d = 1$, $D = (-\infty, \exp(1))$, $B(y) = y$, $\sigma(y) \equiv 0$, $X_0 = 1$, $T = 1$ and $f \equiv 1$. In this deterministic situation, on the one hand, we have $\tau = 1$ (condition (C) is not fulfilled) and on the other, $\tilde X_t$ is an increasing function with $\tilde X_1 = (1 + N^{-1})^N < \exp(1)$, so that $\tilde\tau_c > 1$ and $\tilde\tau_d > 1$: thus, $E_c(f, T, x, N) = E_d(f, T, x, N) = 1$ for all $N \ge 1$.
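The example in Remark 1.1 can be checked mechanically; the sketch below (our own, with a hypothetical helper name) runs the deterministic Euler recursion and verifies that the scheme never reaches the boundary $\exp(1)$, so the error equals 1 for every $N$.

```python
import math

def error_in_remark_example(N):
    """Remark 1.1: d = 1, D = (-inf, e), B(y) = y, sigma = 0, X_0 = 1, T = 1,
    f = 1.  The exact path X_t = exp(t) exits exactly at tau = T = 1, so
    E_x[1_{T<tau} f(X_T)] = 0, while the Euler path ends at (1 + 1/N)^N < e."""
    x = 1.0
    for _ in range(N):
        x *= 1.0 + 1.0 / N                     # one Euler step of dX = X dt
    euler_value = 1.0 if x < math.e else 0.0   # 1_{T < tilde-tau}, f = 1
    exact_value = 0.0                          # since tau = T
    return euler_value - exact_value
```

For every $N$ the function returns 1: the discretization error does not vanish, exactly as claimed.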
Remark 1.2. If $D$ is of class $C^2$ with a compact boundary, the existence of such a function $F$ holds. Indeed, on a neighbourhood of $\partial D$, let $F(x)$ be the algebraic distance between $x$ and $\partial D$: this is a locally $C^2$ function, which we can extend to the whole space with the required properties (see Property 3.1 in Section 3).

Remark 1.3. Assume moreover that $D$ is of class $C^3$ with a compact boundary. Then, we note that a uniform ellipticity condition on the diffusion implies condition (C). Indeed, from the strong Markov property, we have $P_x(\exists t \in [0, T]\ X_t \notin D;\ \forall t \in [0, T]\ X_t \in \bar D) = P_x(\tau = T) + E_x(\mathbb{1}_{\tau<T}\,P_{X_\tau}[\forall t \in [0, T - \tau]\ X_t \in \bar D])$. The first term on the r.h.s. equals 0 because $\tau$ has a density w.r.t. the Lebesgue measure. The second term on the r.h.s. also equals 0, using the 0–1 law to show that $P_z(\forall t \in [0, s]\ X_t \in \bar D) = 0$ for $z \in \partial D$ and $s > 0$ (see Friedman, 1976).
Remark 1.4. For $d = 1$ and $D = (-\infty, b)$, condition (C) becomes $P_x(\sup_{t\in[0,T]} X_t = b) = 0$. Thus, condition (C′) below implies condition (C):

(C′): $\exists y \in (x, b)$ such that $\sigma(y) \ne 0$.

This can be justified using the Nualart–Vives criterion for absolute continuity w.r.t. the Lebesgue measure of the law of the supremum of some process (see Nualart, 1995, Proposition 2.1.4); we omit the details. Condition (C′) shows that, in some sense, condition (C) is weak (much weaker than a uniform ellipticity condition, e.g.).
Proof of Proposition 1.1. For the continuous Euler scheme, it is well known that $\lim_{N\to+\infty} \tilde X = X$ in probability, uniformly on $[0, T]$. This easily implies $\lim_{N\to+\infty} \mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T) = \mathbb{1}_{T<\tau} f(X_T)$ in probability, since one has $\{T < \tilde\tau_c\} = \{\inf_{t\in[0,T]} F(\tilde X_t) > 0\}$ and condition (C) is equivalent to $P(\inf_{t\in[0,T]} F(X_t) = 0) = 0$. The result for $E_c(f, T, x, N)$ follows. For $E_d(f, T, x, N)$, analogous arguments apply.

We now focus on the rate of convergence of the errors under stronger assumptions. Our main results state that under regularity assumptions on $B$, $\sigma$, $D$ and a uniform


ellipticity condition, one has
• for the continuous Euler scheme,

$$E_c(f, T, x, N) = C\,N^{-1} + o(N^{-1}),$$

provided that $f$ is a measurable function with support strictly included in $D$ (Theorem 2.1). The support condition can be weakened if $f$ is smooth enough (Theorem 2.2);
• for the discrete Euler scheme,

$$E_d(f, T, x, N) = O(N^{-1/2}),$$

for functions $f$ satisfying analogous hypotheses as before (Theorem 2.3). The rate $N^{-1/2}$ is optimal and intrinsic to the choice of a discrete killing time (Theorem 2.4).
1.3. Background results
Known results about the Euler scheme (4) concern the approximation of $E_x[f(X_T)]$: the error can be expanded in terms of powers of $N^{-1}$ (see Talay and Tubaro (1990) if $f$ is smooth, and Bally and Talay (1996a) if $f$ is only measurable, under hypoellipticity conditions). A different point of view is to study the convergence in law of the renormalized error $(\sqrt N(\tilde X^N_t - X_t))_{t\ge 0}$ (see Kurtz and Protter (1991)).

When we consider the weak approximation of killed diffusion, we know from Siegmund and Yuh (1982) that the error $E_d(f, T, x, N)$ can be expanded to the first order in $N^{-1/2}$ in the case of a Brownian motion in dimension 1, for $f$ equal to a characteristic function of an interval strictly included in $D$ (this implies in particular that $f$ vanishes on a neighbourhood of the boundary $\partial D$):

$$E_d(f, T, x, N) = C\,N^{-1/2} + o(N^{-1/2}).$$

Their proof uses random walk techniques and cannot be adapted to other situations. For a more general multidimensional diffusion, Costantini et al. (1998) prove that, for all $\epsilon > 0$,

$$|E_d(f, T, x, N)| \le C\,N^{-1/2+\epsilon},$$

provided that the domain is bounded, smooth and convex, and that the function $f \in C^{3,\alpha}(\bar D; \mathbb{R})$ (i.e. $f$ is $C^3(\bar D; \mathbb{R})$ with third derivatives satisfying $\alpha$-Hölder conditions with $\alpha \in (0, 1)$) with some vanishing conditions on $\partial D$. Our results improve theirs, since we show that
1. the convergence rate of $E_d(f, T, x, N)$ to 0 is in fact of order $N^{-1/2}$;
2. the domain need not be convex;
3. provided a support condition, the function $f$ need only be measurable.
1.4. Outline of the paper
To derive the estimates of the errors, following the approach of Bally and Talay (1996a), we transform both approximation errors ($E_c(f, T, x, N)$ and $E_d(f, T, x, N)$) using the parabolic PDE satisfied by the function $(T, x) \mapsto E_x[\mathbb{1}_{T<\tau} f(X_T)]$, so that the global errors will be decomposed into a sum of local errors: in Section 2, we first recall some standard regularity results concerning the associated PDE and then we state the main results of the paper.

Their proofs are given in Section 3: for the continuous Euler scheme, the analysis of local errors involves standard stochastic calculus. But to handle the case of measurable functions $f$, we need some crucial controls on the law of killed processes, which are given in Lemma 3.1. Their proofs use Malliavin calculus techniques and require some particular and careful treatment due to the exit time: they are postponed to Section 4. For the discrete Euler scheme, additional techniques are needed: in particular, we project the Euler scheme orthogonally on $\bar D$, to obtain a non-standard Itô formula. It involves a local time on the boundary, which we accurately estimate using an exterior cone condition on $\partial D$. These boundary estimates are stated in Lemmas 3.6 and 3.7, but their proofs are given in Section 5. Section 6 deals with some extensions.
1.5. General notation
We consider a domain $D \subset \mathbb{R}^d$, i.e. an open connected set, with a non-empty boundary $\partial D$. We assume that $X_0 = x \in D$. For $s \in \partial D$, $n(s)$ denotes the unit inner normal at $s$, when it is well defined.

For $(t, x) \in [0, T]\times\mathbb{R}^d$, we set

$$v(t, x) := E_x[\mathbb{1}_{T-t<\tau} f(X_{T-t})], \qquad (7)$$

where $\tau := \inf\{t > 0: X_t \notin D\}$ (with the convention $\tau = +\infty$ if $\forall t > 0\ X_t \in D$).

For sets $\Gamma$ and $\Gamma'$ in $\mathbb{R}^d$ and $z \in \mathbb{R}^d$, $d(z, \Gamma)$ stands for the distance between $z$ and $\Gamma$, and $d(\Gamma, \Gamma')$ for the distance between $\Gamma$ and $\Gamma'$.

For $r \ge 0$, set $V_{\partial D}(r) := \{z \in \mathbb{R}^d: d(z, \partial D) \le r\}$ and $D(r) := \{z \in \mathbb{R}^d: d(z, D) < r\}$. We also introduce the stopping time $\tilde\tau(r)$, which will permit us to localize $(\tilde X_t)_{0\le t\le T}$ near $D$:

$$\tilde\tau(r) := \inf\{t > 0: \tilde X_t \notin D(r)\}. \qquad (8)$$

We will keep the same notation $K(T)$ for all finite, non-negative and non-decreasing functions, independent of $x$, $N$ or $f$, which appear in proofs (i.e. they depend on $D$, the coefficients $B(\cdot)$, $\sigma(\cdot)$ of (1) and so on).

For smooth functions $g(t, x)$, we denote by $\partial_x^\alpha g(t, x)$ the derivative of $g$ w.r.t. $x$ according to the multi-index $\alpha$, and by $|\alpha|$ the length of $\alpha$.

If $(V_t)_{t\ge 0}$ is a process taking its values in $\mathbb{R}^d$, $(V_{i,t})_{t\ge 0}$, $1 \le i \le d$, will denote its $d$ coordinates.

2. Hypotheses and results
From now on, we assume that the following three assumptions are satisfied. The first one concerns the regularity of the coefficients of the diffusion process:

(H1) $B(\cdot)$ is a $C_b^\infty(\mathbb{R}^d; \mathbb{R}^d)$ function and $\sigma(\cdot)$ is a $C_b^\infty(\mathbb{R}^d; \mathbb{R}^d\otimes\mathbb{R}^d)$ function.

The next assumption is a uniform ellipticity condition on the diffusion:

(H2) there is $\sigma_0 > 0$ such that $\forall x \in \mathbb{R}^d\ \ \sigma(x)\sigma^*(x) \ge \sigma_0^2\,I_{\mathbb{R}^d\otimes\mathbb{R}^d}$.

Moreover, we require that $D$ is smooth enough. Let us recall

Definition 2.1 (Gilbarg and Trudinger, 1977, pp. 88–89). For $d \ge 2$, the domain $D$ is of class $C^k$ ($k \ge 1$) if for each point $s \in \partial D$, there is a ball $O = O(s)$ and a one-to-one mapping $\psi$ of $O$ onto $O' \subset \mathbb{R}^d$ such that

$$\psi(O \cap D) \subset \mathbb{R}^d_+ = \{y \in \mathbb{R}^d: y_1 > 0\},$$
$$\psi(O \cap \partial D) \subset \partial\mathbb{R}^d_+ = \{y \in \mathbb{R}^d: y_1 = 0\},$$
$$\psi \in C^k(O) \quad \text{and} \quad \psi^{-1} \in C^k(O').$$

For $d \ge 2$, we assume that

(H3) the domain $D$ is of class $C^\infty$ and $\partial D$ is compact.

For some of the next results, hypotheses (H1)–(H3) may be weakened (see Section 6).
Under assumptions (H1)–(H3), we know that the function $v(t, x)$ (defined in (7)) is related to the transition density at time $T - t$ of the killed diffusion, denoted by $q_{T-t}(x, y)$, by the relation

$$v(t, x) = \int_D q_{T-t}(x, y)\,f(y)\,dy \qquad (9)$$

for a bounded measurable function $f$. Moreover, if we fix $y$, $q_s(x, y)$ is a $C^\infty((0, T]\times\bar D; \mathbb{R})$ function in $(s, x)$, vanishing for $x \in \partial D$. It satisfies Kolmogorov's backward equation. Furthermore, for all multi-indices $\alpha$, there are a constant $c > 0$ and a function $K(T)$ such that

$$\forall (s, x, y) \in (0, T]\times\bar D\times D \qquad |\partial_x^\alpha q_s(x, y)| \le \frac{K(T)}{s^{(|\alpha|+d)/2}}\,\exp\left(-c\,\frac{\|y - x\|^2}{s}\right). \qquad (10)$$

These classical results can be found in Theorem 16.3 of Ladyzenskaja et al. (1968, p. 413) and Chapter 3 of Friedman (1964) (see also Cattiaux (1991) for hypoellipticity conditions). Thus, $v(t, x)$ is of class $C^\infty([0, T)\times\bar D; \mathbb{R})$ and satisfies a parabolic partial differential equation of second order with Cauchy and Dirichlet conditions, i.e.

$$\partial_t v + Lv = 0 \quad \text{for } (t, x) \in [0, T)\times D,$$
$$v(t, x) = 0 \quad \text{for } (t, x) \in [0, T]\times D^c, \qquad (11)$$
$$v(T, x) = f(x) \quad \text{for } x \in D,$$

where $L$ is the infinitesimal generator of the diffusion

$$Lu(x) = \sum_{i=1}^d B_i(x)\,\frac{\partial u}{\partial x_i}(x) + \frac12 \sum_{i=1}^d \sum_{j=1}^d (\sigma(x)\sigma^*(x))_{i,j}\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x).$$

Remark 2.1. Note that $v \in C^\infty([0, T)\times\bar D; \mathbb{R}) \cap C^0([0, T)\times\mathbb{R}^d; \mathbb{R})$: spatial derivatives of $v$ have jumps at the boundary. This simple fact will lead, for the analysis of the discrete Euler scheme, to technical difficulties in writing some Itô formula: to solve this, we will orthogonally project the process on $\bar D$.


We analyze the discretization errors $E_c(f, T, x, N)$ and $E_d(f, T, x, N)$ for two classes of functions. This corresponds to the following assumptions.

(H4) $f$ is a bounded measurable function, satisfying $d(\mathrm{Supp}(f), \partial D) \ge 2\epsilon > 0$.

(H5-k) ($k \in \mathbb{N}$) $f$ is a $C^{m,\alpha}(\bar D; \mathbb{R})$ function with $m \ge 2k$, $\alpha \in (0, 1)$, satisfying the following vanishing condition on $\partial D$: $\forall z \in \partial D\ \ f(z) = Lf(z) = \cdots = L^{(k)}f(z) = 0$.

We recall (see Ladyzenskaja et al., 1968, pp. 7–8) that for $(m, \alpha) \in \mathbb{N}\times(0, 1)$, $C^{m,\alpha}(\bar D; \mathbb{R})$ is the Banach space whose elements are continuous functions $u(x)$ in $\bar D$ having in $\bar D$ continuous derivatives up to order $m$ and a finite value for the quantity

$$\|u\|_D^{(m,\alpha)} = \sum_{j=0}^m \sum_{|j'|=j} \sup_{x\in D} |\partial_x^{j'} u(x)| + \sum_{|m'|=m} \sup_{x,x'\in D} \frac{|\partial_x^{m'} u(x) - \partial_x^{m'} u(x')|}{\|x - x'\|^\alpha},$$

i.e. the norm on $C^{m,\alpha}(\bar D; \mathbb{R})$ (the summation $\sum_{|j'|=j}$ is taken over all multi-indices $j'$ of length $j$).
Denote by $L_z$ the operator on $C^2$ functions defined by

$$L_z u(x) = \sum_{i=1}^d B_i(z)\,\frac{\partial u}{\partial x_i}(x) + \frac12 \sum_{i=1}^d \sum_{j=1}^d (\sigma(z)\sigma^*(z))_{i,j}\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x) \qquad (12)$$

(for $t \in [t_i, t_{i+1})$, $L_{\tilde X_{t_i}}$ is the infinitesimal generator of $\tilde X_t$). For $(t, x) \in [0, T)\times\bar D$, set

$$\theta(t, x) = \tfrac12\left(L^2 v(t, x) - 2 L_z L v(t, x) + L_z^2 v(t, x)\right)\big|_{z=x}. \qquad (13)$$
2.1. Analysis of the continuous Euler scheme

We first state the expansion result for bounded measurable functions $f$ with support strictly included in $D$.

Theorem 2.1. Assume that assumptions (H1)–(H3) are fulfilled and $f$ satisfies (H4). Then, there is a function $K(T)$ such that

$$|E_c(f, T, x, N)| \le K(T)\,\frac{\|f\|_\infty}{1\wedge\epsilon^4}\,N^{-1}. \qquad (14)$$

Moreover, we have

$$E_c(f, T, x, N) = T\int_0^T dt\,E_x[\mathbb{1}_{t<\tau}\,\theta(t, X_t)]\,N^{-1} + o(N^{-1}), \qquad (15)$$

with $|T\int_0^T dt\,E_x[\mathbb{1}_{t<\tau}\,\theta(t, X_t)]| \le K(T)\|f\|_\infty/(1\wedge\epsilon^4)$.

The support condition on $f$ can be weakened to vanishing conditions on $\partial D$ if $f$ is smooth enough. This is the statement of the following theorem.

Theorem 2.2. Assume that assumptions (H1)–(H3) are fulfilled and $f$ satisfies (H5-2). Then, there is a function $K(T)$ such that

$$|E_c(f, T, x, N)| \le K(T)\,\|f\|_D^{(m,\alpha)}\,N^{-1}. \qquad (16)$$


Moreover, we have

$$E_c(f, T, x, N) = T\int_0^T dt\,E_x[\mathbb{1}_{t<\tau}\,\theta(t, X_t)]\,N^{-1} + o(N^{-1}), \qquad (17)$$

with $|T\int_0^T dt\,E_x[\mathbb{1}_{t<\tau}\,\theta(t, X_t)]| \le K(T)\|f\|_D^{(m,\alpha)}$.

The existence of the expansion of the error enables one to reach a higher rate of convergence using linear combinations of results obtained with different step sizes (Romberg extrapolation technique: see Talay and Tubaro, 1990).
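The Romberg trick mentioned above can be stated in two lines; the sketch below (our own notation, not from the paper) cancels the leading $C\,N^{-1}$ term by combining the estimates obtained with $N$ and $2N$ steps.

```python
def romberg(estimate, N):
    """If estimate(N) = v + C/N + o(1/N), then 2*estimate(2N) - estimate(N)
    equals v + o(1/N): the first-order error term cancels."""
    return 2.0 * estimate(2 * N) - estimate(N)

# synthetic check on a model error of the exact form v + C/N
v, C = 0.25, 3.0
model = lambda N: v + C / N
```

On the model error the extrapolation recovers $v$ exactly; for the true scheme only the $o(N^{-1})$ remainder survives.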
2.2. Analysis of the discrete Euler scheme

Theorem 2.3. Assume that assumptions (H1)–(H3) are fulfilled. If $f$ satisfies (H4), there is a function $K(T)$ such that

$$|E_d(f, T, x, N)| \le K(T)\,\frac{\|f\|_\infty}{1\wedge\epsilon^4}\,\frac{1}{\sqrt N}. \qquad (18)$$

If $f$ satisfies (H5-1), there is a function $K(T)$ such that

$$|E_d(f, T, x, N)| \le K(T)\,\|f\|_D^{(m,\alpha)}\,\frac{1}{\sqrt N}. \qquad (19)$$

The rate of convergence $N^{-1/2}$ is the best we can obtain in a general situation, because we know this rate is achieved in the special case of a linear Brownian motion (see Siegmund and Yuh, 1982).
Moreover, if we set

$$\tau_d := \inf\{t_i: X_{t_i} \notin D\},$$

we have

Theorem 2.4. Assume that assumptions (H1)–(H3) are fulfilled. If $f$ satisfies (H4), there is a function $K(T)$ such that

$$|E_x[\mathbb{1}_{T<\tau} f(X_T)] - E_x[\mathbb{1}_{T<\tau_d} f(X_T)]| \le K(T)\,\frac{\|f\|_\infty}{1\wedge\epsilon^4}\,\frac{1}{\sqrt N}. \qquad (20)$$

If $f$ satisfies (H5-1), there is a function $K(T)$ such that

$$|E_x[\mathbb{1}_{T<\tau} f(X_T)] - E_x[\mathbb{1}_{T<\tau_d} f(X_T)]| \le K(T)\,\|f\|_D^{(m,\alpha)}\,\frac{1}{\sqrt N}. \qquad (21)$$

Theorem 2.4 shows that the rate $N^{-1/2}$ is intrinsic to the problem of discrete killing time: even if there is no approximation of the values of the process at discretization times, the error is still of order $N^{-1/2}$. This fact will appear more clearly in the proof of these results (see Remark 3.5): we will see that the global error $E_d(f, T, x, N)$ can be separated into two contributions, the first one coming from the approximation of the infinitesimal generator $L$ by the Euler scheme, and the second coming from the approximation of the "continuous" exit time by the discrete one.

Note that Theorems 2.3 and 2.4 deal with the case of functions $f$ vanishing on the boundary (as for Siegmund and Yuh, 1982; Costantini et al., 1998). Nevertheless, presumably, the approach we develop in the sequel may be the appropriate one to prove that the rate of convergence $N^{-1/2}$ remains true for a function smooth near $\partial D$ (without vanishing conditions on $\partial D$).

3. Proof of Theorems 2.1–2.4
To begin, we state a technical lemma, involving controls on the law of some killed processes: this result is crucial to handle the case of measurable functions $f$ in the analysis of the errors. Its proof, based on Malliavin calculus techniques, is given in Section 4.

Let $\psi \in C_b^\infty(\mathbb{R}^d; \mathbb{R})$ be a cut-off function near $\partial D$ verifying $\mathbb{1}_{V_{\partial D}(\epsilon/2)} \le 1 - \psi \le \mathbb{1}_{V_{\partial D}(\epsilon)}$ and $\|\partial_x^\alpha \psi\|_\infty \le C_{|\alpha|}/(1\wedge\epsilon^{|\alpha|})$ for all multi-indices $\alpha$ ($\epsilon > 0$ is defined by (H4)).

Lemma 3.1. Assume that (H1)–(H4) are satisfied. Then, for all multi-indices $\alpha$, there is a function $K(T)$ such that

$$\forall (s, x) \in [0, T)\times V_{\partial D}(\epsilon) \qquad |\partial_x^\alpha v(s, x)| \le K(T)\,\frac{\|f\|_\infty}{1\wedge\epsilon^{|\alpha|}}. \qquad (22)$$

Moreover, for $R \ge 0$, for all multi-indices $\alpha$ and $\alpha'$, and for all $g \in C_b^{|\alpha|}(\mathbb{R}^d; \mathbb{R})$, there is a function $K(T)$ (depending on $\|g\|_{C_b^{|\alpha|}}$) such that for $0 \le t \le T$ and $0 \le s < T$, we have

$$|E_x[\mathbb{1}_{s<\tilde\tau_c}\,g(\tilde X_t)\,\partial_x^\alpha v(s, \tilde X_s)]| \le \frac{\|f\|_\infty K(T)}{(1\wedge\epsilon^{|\alpha|})\,T^{|\alpha|/2}}, \qquad (23)$$

$$|E_x[\mathbb{1}_{s<\tau}\,g(X_t)\,\partial_x^\alpha v(s, X_s)]| \le \frac{\|f\|_\infty K(T)}{(1\wedge\epsilon^{|\alpha|})\,T^{|\alpha|/2}}, \qquad (24)$$

$$|E_x[\mathbb{1}_{s<\tilde\tau_d\wedge\tilde\tau(R)}\,g(\tilde X_t)\,\partial_x^\alpha[v\,\partial_x^{\alpha'}\psi](s, \tilde X_s)]| \le \frac{\|f\|_\infty K(T)}{(1\wedge\epsilon^{|\alpha|+|\alpha'|})\,T^{|\alpha|/2}}. \qquad (25)$$

The constant $R$ introduced in the lemma above will be defined later in the proof of the theorems (see Property 3.1).
We first address the analysis of the continuous Euler scheme, which is easier than that of the discrete one.
3.1. Continuous Euler scheme
3.1.1. Proof of Theorem 2.1
We have

$$E_c(f, T, x, N) = E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)] - E_x[v((T - T/N)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c})]$$
$$\qquad + E_x[v((T - T/N)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c})] - E_x[v(0, \tilde X_0)]$$
$$:= C_1(N) + C_2(N). \qquad (26)$$

It results from Lemma 3.2 below (whose proof will be given at the end) that $C_1(N)$ yields a negligible contribution:

$$|C_1(N)| \le K(T)\,\frac{\|f\|_\infty}{1\wedge\epsilon^4}\,N^{-3/2}. \qquad (27)$$

Lemma 3.2. Assume that (H1)–(H4) are satisfied. Then, there is a function $K(T)$ such that

$$|E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)] - E_x[v((T - T/N)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c})]| \le K(T)\,\frac{\|f\|_\infty}{1\wedge\epsilon^4}\,N^{-3/2}. \qquad (28)$$

To transform $C_2(N)$, apply Itô's formula to $v \in C^{1,2}([0, T)\times\bar D; \mathbb{R})$ between times 0 and $(T - T/N)\wedge\tilde\tau_c$. Using the notation (4) and (12), it readily follows that

$$C_2(N) = E_x\left[\int_0^{t_{N-1}\wedge\tilde\tau_c} ds\,(\partial_t v + L_z v)\big|_{z=\tilde X_{\varphi(s)}}(s, \tilde X_s)\right]$$
$$= \int_0^{t_{N-1}} ds\,E_x[\mathbb{1}_{s<\tilde\tau_c}\,(L_z v - Lv)\big|_{z=\tilde X_{\varphi(s)}}(s, \tilde X_s)],$$

where we used $\partial_t v = -Lv$ in $[0, T)\times D$. Since spatial derivatives of $v$ have jumps on $\partial D$ (see Remark 2.1), we may stop the paths at time $\tilde\tau_c$ to avoid some discontinuity problems. Using $\mathbb{1}_{s<\tilde\tau_c} = \mathbb{1}_{\varphi(s)<\tilde\tau_c} - \mathbb{1}_{\varphi(s)<\tilde\tau_c\le s}$, we obtain

$$C_2(N) = \int_0^{t_{N-1}} ds\,E_x[\mathbb{1}_{\varphi(s)<\tilde\tau_c}\,(L_z v - Lv)\big|_{z=\tilde X_{\varphi(s)}}(s\wedge\tilde\tau_c,\,\tilde X_{s\wedge\tilde\tau_c})]$$
$$\qquad - \int_0^{t_{N-1}} ds\,E_x[\mathbb{1}_{\varphi(s)<\tilde\tau_c\le s}\,(L_z v - Lv)\big|_{z=\tilde X_{\varphi(s)}}(s\wedge\tilde\tau_c,\,\tilde X_{s\wedge\tilde\tau_c})]$$
$$:= C_3(N) - C_4(N). \qquad (29)$$

When we write $L_z - L$ explicitly, we can assert that, for $g_\alpha = B_i$ or $(\sigma\sigma^*)_{i,j}$, we have

$$(L_z v - Lv)\big|_{z=\tilde X_{\varphi(s)}}(s\wedge\tilde\tau_c,\,\tilde X_{s\wedge\tilde\tau_c}) = \sum_{1\le|\alpha|\le 2} c_\alpha\,\partial_x^\alpha v(s\wedge\tilde\tau_c,\,\tilde X_{s\wedge\tilde\tau_c})\,[g_\alpha(\tilde X_{s\wedge\tilde\tau_c}) - g_\alpha(\tilde X_{\varphi(s)})].$$

On the event $\{\varphi(s) < \tilde\tau_c \le s\}$, the involved derivatives of $v$ are computed on $\partial D$: so, using the estimates (22), they are uniformly bounded. Using Lemma 3.3 below (proved later), it readily follows that

$$|C_4(N)| \le \frac{K(T)\|f\|_\infty}{1\wedge\epsilon^2}\,\sum_{i=0}^{N-2} \int_{t_i}^{t_{i+1}} ds\,E_x\left[\mathbb{1}_{t_i<\tilde\tau_c\le s}\,\max_{0\le i\le N-1}\,\sup_{u\in[t_i,t_{i+1}]}\|\tilde X_u - \tilde X_{t_i}\|_{\mathbb{R}^d}\right]$$
$$\le \frac{K(T)\|f\|_\infty}{1\wedge\epsilon^2}\,E_x\left[\left(\max_{0\le i\le N-1}\,\sup_{u\in[t_i,t_{i+1}]}\|\tilde X_u - \tilde X_{t_i}\|_{\mathbb{R}^d}\right)\sum_{i=0}^{N-2}\int_{t_i}^{t_{i+1}} ds\,\mathbb{1}_{t_i<\tilde\tau_c\le t_{i+1}}\right]$$
$$\le \frac{K(T)\|f\|_\infty}{1\wedge\epsilon^2}\,\frac{\sqrt{\log(N+1)}}{N^{3/2}}, \qquad (30)$$

where we used $\sum_{i=0}^{N-2}\mathbb{1}_{t_i<\tilde\tau_c\le t_{i+1}} \le 1$.


Lemma 3.3. Assume that (H1) is satisfied. Then, for $p \ge 1$, there is a function $K(T)$ such that

$$E_x\left[\left(\max_{0\le i\le N-1}\,\sup_{s\in[t_i,t_{i+1}]}\|\tilde X_s - \tilde X_{t_i}\|_{\mathbb{R}^d}\right)^p\right]^{1/p} \le K(T)\,N^{-1/2}\sqrt{\log(N+1)}. \qquad (31)$$
For the term $C_3(N)$, apply Itô's formula once again on the event $\{\varphi(s) < \tilde\tau_c\}$, between times $\varphi(s)$ and $s\wedge\tilde\tau_c$, to obtain

$$C_3(N) = \int_0^{t_{N-1}} ds \int_{\varphi(s)}^s dt\,E_x[\mathbb{1}_{t<\tilde\tau_c}\,(L^2 v - 2 L_z L v + L_z^2 v)\big|_{z=\tilde X_{\varphi(t)}}(t, \tilde X_t)],$$

where we used the PDE satisfied by $v$ (the term from Itô's formula corresponding to time $\varphi(s)$ equals 0 because $L_z v(s, z) = Lv(s, z)$). Applying the estimate (23), we immediately obtain

$$|C_3(N)| \le \frac{K(T)}{N}\,\frac{\|f\|_\infty}{1\wedge\epsilon^4}.$$

We complete the proof of (14) by combining this last estimate with (26), (27), (29) and (30). Now, note that to obtain (15), it is enough to prove that

$$C_3(N) = \frac{T}{N}\int_0^T ds\,E_x[\mathbb{1}_{s<\tau}\,\theta(s, X_s)] + o(N^{-1}).$$

We proceed as follows:

$$C_3(N) = \int_0^{t_{N-1}} ds \int_{\varphi(s)}^s dt\,E_x[\mathbb{1}_{t<\tilde\tau_c}\,(L^2 v - 2 L_z L v + L_z^2 v)\big|_{z=\tilde X_{\varphi(t)}}(t, \tilde X_t)] \quad \text{(Step 1)}$$
$$\approx \int_0^{t_{N-1}} ds \int_{\varphi(s)}^s dt\,2\,E_x[\mathbb{1}_{\varphi(t)<\tilde\tau_c}\,\theta(\varphi(t), \tilde X_{\varphi(t)})] = \frac{T}{N}\sum_{i=0}^{N-2}\int_{t_i}^{t_{i+1}} ds\,E_x[\mathbb{1}_{t_i<\tilde\tau_c}\,\theta(t_i, \tilde X_{t_i})] \quad \text{(Step 2)}$$
$$\approx \frac{T}{N}\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}} ds\,E_x[\mathbb{1}_{s<\tilde\tau_c}\,\theta(s, \tilde X_s)] := \frac{T}{N}\int_0^T ds\,E_x[\mathbb{1}_{s<\tilde\tau_c}\,\theta(s, \tilde X_s)] \quad \text{(Step 3)}$$
$$\approx \frac{T}{N}\int_0^T ds\,E_x[\mathbb{1}_{s<\tau}\,\theta(s, X_s)] \quad \text{(Step 4)},$$

where the symbol $\approx$ means that the remainder term is an $o(N^{-1})$ (recall that the function $\theta(t, x)$ is defined by (13)).

To obtain Step 2 from Step 1, proceed as for the analysis of $C_2(N)$ by applying Itô's formula between $\varphi(t)$ and $t\wedge\tilde\tau_c$, using estimates (22) and (23). Very similar arguments apply to the passage from Step 2 to Step 3. The last step consists in proving that

$$\int_0^T ds\,E_x[\mathbb{1}_{s<\tilde\tau_c}\,\theta(s, \tilde X_s)] - \int_0^T ds\,E_x[\mathbb{1}_{s<\tau}\,\theta(s, X_s)] = o(1).$$

This directly follows from the Lebesgue dominated convergence theorem by noting that, on the one hand, each integrand is bounded, using the estimates (23) and (24) from Lemma 3.1. On the other hand, for $s \in [0, T)$, we have $\lim_{N\to+\infty} E_x[\mathbb{1}_{s<\tilde\tau_c}\,\theta(s, \tilde X_s)] = E_x[\mathbb{1}_{s<\tau}\,\theta(s, X_s)]$, applying Proposition 1.1 with $f(z) = \theta(s, z)$. This completes the proof of Theorem 2.1.
It remains to prove Lemmas 3.2 and 3.3.
Proof of Lemma 3.2. Using the properties (11), note that

$$E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)] - E_x[v((T - T/N)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c})]$$
$$= E_x[v(T\wedge\tilde\tau_c,\,\tilde X_{T\wedge\tilde\tau_c})] - E_x[v((T - T/N)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c})].$$

The reason why we treat this term apart from $C_2(N)$ in (26) is only technical. Since $v \in C^{1,2}([0, T)\times\bar D; \mathbb{R})$, we may apply Itô's formula between $(T - T/N)\wedge\tilde\tau_c$ and $(T - \delta)\wedge\tilde\tau_c$ and take the limit when $\delta \to 0$: this last step is difficult to prove because $v(t, x)$ may not be continuous at $t = T$ if $f$ is only a measurable function. To overcome this difficulty, we approximate $f$ by continuous functions, using a density argument.

1. Assume that the function $f$ is continuous and satisfies (H4). Then, we have $\lim_{t\to T,\,y\to x} v(t, y) = v(T, x) = f(x)$ for $x \in D$. Fix $\delta \in (0, T/N)$. The term to estimate can be rewritten as

$$E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)] - E_x[v((T - T/N)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c})]$$
$$= E_x[v(T\wedge\tilde\tau_c,\,\tilde X_{T\wedge\tilde\tau_c})] - E_x[v((T - \delta)\wedge\tilde\tau_c,\,\tilde X_{(T-\delta)\wedge\tilde\tau_c})]$$
$$\quad + E_x[v((T - \delta)\wedge\tilde\tau_c,\,\tilde X_{(T-\delta)\wedge\tilde\tau_c})] - E_x[v((T - T/N)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c})]$$
$$:= E_1(\delta, N) + E_2(\delta, N). \qquad (32)$$

A computation very similar to the one made to estimate $C_2(N)$ from (26) leads to

$$|E_2(\delta, N)| \le K(T)\,\frac{\|f\|_\infty}{1\wedge\epsilon^4}\,N^{-3/2}, \qquad (33)$$

uniformly in $\delta$. Using the continuity of $v(t, x)$ and of $\tilde X_{t\wedge\tilde\tau_c}$, we conclude by the Lebesgue dominated convergence theorem that $\lim_{\delta\to 0} E_1(\delta, N) = 0$. Combining this fact with (32) and (33), the required estimate (28) is proved for bounded continuous functions $f$ with $d(\mathrm{Supp}(f), \partial D) \ge 2\epsilon$.

2. Assume now that $f$ is only measurable and satisfies (H4). Denote by $\tilde\mu_1$ and $\tilde\mu_2$ the two measures defined by $E_x[\mathbb{1}_{T<\tilde\tau_c} f(\tilde X_T)] := \int f\,d\tilde\mu_1$ and

$$E_x\left[v\left(\left(T - \frac TN\right)\wedge\tilde\tau_c,\,\tilde X_{(T-T/N)\wedge\tilde\tau_c}\right)\right] = E_x[\mathbb{1}_{T-T/N<\tilde\tau_c}\,E_{\tilde X_{T-T/N}}[\mathbb{1}_{T/N<\tau} f(X_{T/N})]] := \int f\,d\tilde\mu_2.$$

By a density argument, $f$ can be approximated in $L^1(\tilde\mu_1 + \tilde\mu_2)$ by a sequence of continuous functions denoted by $(f_p)_{p\ge 0}$; moreover, without loss of generality, assume that each function $f_p$ satisfies $\|f_p\|_\infty \le \|f\|_\infty$ and $d(\mathrm{Supp}(f_p), \partial D) \ge 2\epsilon$, so that the result for continuous functions applies, uniformly in $p$. This completes the proof.

Proof of Lemma 3.3. In fact, we prove that Lemma 3.3 holds for any Itô process $(Y_t)_{t\ge 0}$, defined by $dY_t = b_t\,dt + \sigma_t\,dW_t$, with adapted and uniformly bounded coefficients. Without loss of generality, we can assume that $Y_t \in \mathbb{R}$, $(W_t)_{t\ge 0}$ is a linear Brownian motion and $b_t \equiv 0$. Since $\sup_{s\in[t_i,t_{i+1}]}|Y_s - Y_{t_i}| \le \sup_{s\in[t_i,t_{i+1}]}\int_{t_i}^s \sigma_u\,dW_u - \inf_{s\in[t_i,t_{i+1}]}\int_{t_i}^s \sigma_u\,dW_u$, it is sufficient to prove the estimate for $\sup_{s\in[t_i,t_{i+1}]}\int_{t_i}^s \sigma_u\,dW_u$ (the other one will follow by replacing $\sigma$ by $-\sigma$). The Bernstein exponential inequality for martingales yields

$$P\left[\sup_{s\in[t_{N-1},t_N]}\int_{t_{N-1}}^s \sigma_u\,dW_u > z \,\Big|\, \mathcal{F}_{t_{N-1}}\right] \le \exp\left(-N\,\frac{z^2}{2\|\sigma^2\|_\infty T}\right).$$

Hence,

$$P_x\left(\max_{0\le i\le N-1}\,\sup_{s\in[t_i,t_{i+1}]}\int_{t_i}^s \sigma_u\,dW_u \le z\right) \ge P_x\left(\max_{0\le i\le N-2}\,\sup_{s\in[t_i,t_{i+1}]}\int_{t_i}^s \sigma_u\,dW_u \le z\right)\left(1 - \exp\left(-N\,\frac{z^2}{2\|\sigma^2\|_\infty T}\right)\right)$$
$$\ge \left(1 - \exp\left(-N\,\frac{z^2}{2\|\sigma^2\|_\infty T}\right)\right)^N,$$

where we have iterated the conditioning. It follows that

$$E_x\left[\left(\max_{0\le i\le N-1}\,\sup_{s\in[t_i,t_{i+1}]}\int_{t_i}^s \sigma_u\,dW_u\right)^p\right] \le \int_{\mathbb{R}_+} dz\,p z^{p-1}\left[1 - \left(1 - \exp\left(-N\,\frac{z^2}{2\|\sigma^2\|_\infty T}\right)\right)^N\right].$$

Cut the integral at the point $\epsilon(N) = 2\|\sigma\|_\infty\sqrt T\,N^{-1/2}\sqrt{\log(N+1)}$. The first term, corresponding to the integral between 0 and $\epsilon(N)$, is obviously bounded by $\epsilon^p(N)$. Using that $1 - (1 - u)^N \le Nu$ for $u \in [0, 1]$, the second one can easily be bounded by $K(T)N^{-p/2}\exp(-N\epsilon^2(N)/(4\|\sigma^2\|_\infty T)) = K(T)N^{-p/2}(N+1)^{-1}$. This leads to the required estimate and completes the proof.
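The scaling in Lemma 3.3 can also be observed numerically; the sketch below (our own illustration, for a standard one-dimensional Brownian motion on $[0, 1]$, i.e. $\sigma \equiv 1$ and $b \equiv 0$) approximates the maximal increment over the grid and compares it with the rate $N^{-1/2}\sqrt{\log(N+1)}$.

```python
import numpy as np

def max_increment(N, substeps, rng):
    """Approximate max_i sup_{s in [t_i, t_{i+1}]} |W_s - W_{t_i}| for a
    standard Brownian motion on [0, 1] with t_i = i/N, each interval
    being refined into `substeps` points."""
    dt = 1.0 / (N * substeps)
    dW = rng.normal(0.0, np.sqrt(dt), size=(N, substeps))
    inner = np.cumsum(dW, axis=1)      # W_s - W_{t_i} inside each interval
    return np.abs(inner).max()

rng = np.random.default_rng(42)
N = 128
mean_max = np.mean([max_increment(N, 64, rng) for _ in range(200)])
rate = N ** -0.5 * np.sqrt(np.log(N + 1.0))   # right-hand side rate of (31), p = 1
```

The ratio `mean_max / rate` stays of order one as $N$ grows, in agreement with (31).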
3.1.2. Proof of Theorem 2.2
We mimic the arguments of Theorem 2.1: in that case, the spatial derivatives of $v$ up to order four are uniformly bounded, with Hölder conditions for the fourth ones (see Lemma 3.4 below). This is enough to obtain the expected results: we omit the details. The estimates (22)–(24) used under assumption (H4) have to be replaced by those given by

Lemma 3.4. Under (H1)–(H3) and (H5-2), $v$ is at least a $C^{2,4}([0, T]\times\bar D; \mathbb{R})$ function, and there is a function $K(T)$ such that, for all multi-indices $\gamma$ of length $|\gamma| \le 4$, we have

$$\forall (t, x) \in [0, T]\times D \qquad |\partial_x^\gamma v(t, x)| \le K(T)\,\|f\|_D^{(m,\alpha)}. \qquad (34)$$

Moreover, for all multi-indices $\gamma$ of length $|\gamma| = 4$, we have

$$\sup_{(t,x),(t',x')\in[0,T]\times D}\left(\frac{|\partial_x^\gamma v(t, x) - \partial_x^\gamma v(t, x')|}{\|x - x'\|^\alpha} + \frac{|\partial_x^\gamma v(t, x) - \partial_x^\gamma v(t', x)|}{|t - t'|^{\alpha/2}}\right) \le K(T)\,\|f\|_D^{(m,\alpha)}. \qquad (35)$$

This is immediately derived from classical results for linear equations of parabolic type: we refer to Ladyzenskaja et al. (1968, Theorem 5.2, p. 320) for a fuller statement.
3.2. Discrete Euler scheme
Because the case d = 1 masks some problems, we focus on the case $d\ge 2$ in the following.
First, for the convenience of stochastic calculus for continuous-time processes, we consider, in the sequel, the continuous Euler scheme (4) killed at the discrete time $\tilde\tau_d := \inf\{t_i:\ \tilde X_{t_i}\notin D\}$: of course, this new point of view does not change $E_d(f,T,x,N)$.
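As a concrete sketch of the quantity being approximated, here is a minimal Monte-Carlo implementation of an Euler scheme killed only at the grid times (Python; the drift $B$, diffusion coefficient $\sigma$, domain $D$, function $f$ and all numerical parameters are illustrative placeholders, not taken from the paper; we use the scalar case for brevity, although the analysis below focuses on $d\ge 2$):

```python
import numpy as np

def euler_killed_discrete(x0, B, sigma, in_D, f, T=1.0, N=50,
                          n_paths=100000, seed=0):
    """Monte-Carlo estimate of E_x[1_{T < tau_d} f(X_T)], killing the Euler
    scheme at the first grid time t_i at which it leaves D (scalar case)."""
    rng = np.random.default_rng(seed)
    h = T / N
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)       # paths with discrete exit time > t_i
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(h), size=n_paths)
        x = x + B(x) * h + sigma(x) * dW       # one Euler step
        alive &= in_D(x)                       # discrete killing: grid times only
    return float(np.mean(np.where(alive, f(x), 0.0)))

# Illustration: Brownian motion started at x = 1, D = (0, +inf), f = 1,
# i.e. a survival probability; with continuous killing the exact value is
# 2*Phi(1) - 1 ~ 0.6827, and discrete killing overestimates it (the N^{-1/2}
# effect studied in this section).
est = euler_killed_discrete(1.0, B=lambda x: 0.0 * x,
                            sigma=lambda x: 1.0 + 0.0 * x,
                            in_D=lambda x: x > 0,
                            f=lambda x: 1.0 + 0.0 * x)
```

The bias of `est` above the continuous-killing value illustrates the $N^{-1/2}$ rate of Theorem 2.3: paths can leave and re-enter $D$ between two grid times without being killed.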
The first stage of the proof of Theorems 2.3 and 2.4 (and analogously to the analysis of $E_c(f,T,x,N)$) is to note that, from properties (11), we have
$$E_d(f,T,x,N) = E_x[v(T\wedge\tilde\tau_d,\ \tilde X_{T\wedge\tilde\tau_d}) - v(0,\tilde X_0)]. \qquad(36)$$
Then, we would like to apply Itô's formula to make $v(T\wedge\tilde\tau_d,\ \tilde X_{T\wedge\tilde\tau_d}) - v(0,\tilde X_0)$ explicit. Unfortunately, the situation is not classical at all, because the spatial derivatives of $v$ are discontinuous at the boundary (see Remark 2.1) and the process $(\tilde X_{t\wedge\tilde\tau_d})_{t\ge 0}$ may cross $\partial D$. Intuitively, if such a decomposition exists, it should involve a local time on the boundary (note that for the analysis of $E_c(f,T,x,N)$, $(\tilde X_{t\wedge\tilde\tau_c})_{t\ge 0}$ has been stopped just before crossing $\partial D$, so that we only needed the classical Itô formula).
To solve this problem, our approach is the following. Consider $Z_t := \mathrm{Proj}_{\bar D}(\tilde X_t)$, the orthogonal projection of $\tilde X_t$ on $\bar D$: this process takes its values in $\bar D$. As $v$ vanishes outside $D$, we have $v(t,\tilde X_t) = v(t,Z_t)$. If $(Z_t)_{t\ge 0}$ remains a continuous semimartingale, then the classical Itô formula can be applied to $v(t,Z_t)$ because $v\in C^{1,2}([0,T)\times\bar D,\mathbb R)$. So, the main task is to show that $Z_t$ is still a continuous semimartingale and to obtain a tractable decomposition for it. In the case of a half-space $D=\{z\in\mathbb R^d:\ z_1>0\}$, this fact is clear because $Z_t = ((\tilde X_{1,t})^+,\ \tilde X_{2,t},\ldots,\tilde X_{d,t})^*$ and we conclude using Tanaka's formula. For a general domain, by an appropriate mapping, we can transform $D$ locally near $\partial D$ into a half-space and thus apply the arguments of the first case. Actually, the orthogonal projection is not uniquely defined on the whole space (except if $D$ is convex), but only near $\bar D$: so, we will use localization arguments, owing to the stopping time $\tilde\tau(R)$ (introduced in (8)).
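For the half-space case just mentioned, the projection is simply a positive part in the first coordinate; a small numerical sketch (Python, illustrative names only):

```python
import numpy as np

def proj_half_space(x):
    """Orthogonal projection onto the closure of D = {z in R^d : z_1 > 0}:
    Z = ((x_1)^+, x_2, ..., x_d), i.e. only the first coordinate is clipped."""
    z = np.array(x, dtype=float, copy=True)
    z[..., 0] = np.maximum(z[..., 0], 0.0)
    return z

pts = np.array([[-0.3, 2.0],    # outside D: projected onto the boundary z_1 = 0
                [ 0.5, -1.0]])  # inside D: left unchanged
proj = proj_half_space(pts)
# distance property: ||x - Proj(x)|| equals d(x, closure of D) = max(-x_1, 0)
dist = np.linalg.norm(pts - proj, axis=1)
```

Applied to a semimartingale path, the clipped first coordinate $( \tilde X_{1,t})^+$ is exactly where Tanaka's formula produces the local-time term.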
We now bring together in Property 3.1 below a few basic facts from differential geometry about the functions “distance to the boundary” and “orthogonal projection on $\bar D$” (for the proofs, see e.g. Appendix, Gilbarg and Trudinger, 1977, pp. 381–384).
Property 3.1. For a domain $D$ of class $C^3$ with compact boundary $\partial D$, there is a constant $R > 0$ such that the following three properties hold.


1. Local diffeomorphism (which locally maps the boundary into a half-space). For all $s\in\partial D$, there are two open bounded sets $U^s$ and $V^s$, and a $C^2$-diffeomorphism $F^s$ (with inverse $G^s = (F^s)^{-1}$) from $U^s$ ($s\in U^s$) into $(-2R,2R)\times V^s$, such that
$$F^s:\ U^s\subset\mathbb R^d \to (-2R,2R)\times V^s \subset \mathbb R\times\mathbb R^{d-1},\qquad x\mapsto (z_1,\bar z) := (z_1,z_2,\ldots,z_d)\ \text{ such that }\ x = g^s(\bar z) + z_1\, n(g^s(\bar z)),$$
where $g^s$ is a parametrization of $\partial D$ in a neighbourhood of $s$.
2. Distance to $\partial D$. Let $s\in\partial D$. On $U^s$, the function $F_1^s(\cdot)$ is the algebraic distance to $\partial D$ (thus it does not depend on $s$ and we denote it by $F_1$), i.e. $|F_1(x)| = d(x,\partial D)$ and $F_1(x) > 0$ (resp. $F_1(x) < 0$) if $x\in D\cap U^s$ (resp. $x\in \bar D^c\cap U^s$). It is a $C^3$ function on $\bigcup_{s\in\partial D} U^s = V_{\partial D}(2R)$, which we extend into a $C_b^3(\mathbb R^d,\mathbb R)$ function, with the conditions $F_1(\cdot) > 0$ on $D$ and $F_1(\cdot) < 0$ on $\bar D^c$. Note that $\partial D = \{x\in\mathbb R^d:\ F_1(x) = 0\}$.
3. Orthogonal projection on $\bar D$. Let $s\in\partial D$. For $x\in U^s$, the orthogonal projection on $\bar D$ of $x$ is uniquely defined by
$$\mathrm{Proj}_{\bar D}(x) = G^s([F_1(x)]^+,\ F_2^s(x),\ldots,F_d^s(x)). \qquad(37)$$

Since $\partial D$ is compact, there exists a finite number of points $(s_i)_{1\le i\le k}$ in $\partial D$ (we associate to them $G^i$, $F^i$, $U^i$ and $V^i$ respectively) such that $V_{\partial D}(3R/2)\subset\bigcup_{1\le i\le k} U^i$. Consider an open set $U^0$ with $d(\partial D, U^0) > 0$, such that $D(3R/2)\subset\bigcup_{0\le i\le k} U^i$. Now, we construct a partition of unity subordinate to the cover $(U^i)_{0\le i\le k}$, i.e. non-negative $C_b^\infty$ functions $(\phi^i)_{0\le i\le k}$ verifying $\mathrm{Supp}(\phi^i)\subset U^i$ and $\sum_{i=0}^k \phi^i \equiv 1$ on $D(R)$. So far, the functions $F^i$ (resp. $G^i$) have been well defined only on $U^i$ (resp. $(-2R,2R)\times V^i$): we extend them into smooth functions on $\mathbb R^d$.
We can now state
Proposition 3.1. Consider a domain $D$ of class $C^3$ with compact boundary (with the constant $R > 0$ defined in Property 3.1). Let $(Y_t)_{t\ge 0}$ be a continuous semimartingale taking its values in $D(R)$ (with $Y_0\in D$). Then, the orthogonal projection of $Y_t$ on $\bar D$, denoted by $\mathrm{Proj}_{\bar D}(Y_t)$, defines a continuous semimartingale, whose decomposition is
$$d(\mathrm{Proj}_{\bar D}(Y_t)) = \mathbf 1_{Y_t\in D}\, dY_t + \mathbf 1_{Y_t\notin D}\, dY_t^{\partial D} + \tfrac12\, n(Y_t)\, dL_t^0(F_1(Y)),$$
where
• $Y_t^{\partial D}$ is a continuous semimartingale with $Y_0^{\partial D} = 0$, with decomposition
$$dY_t^{\partial D} := \sum_{i=1}^k \phi^i(Y_t)\, d(G^i(0,\ F_2^i(Y_t),\ldots,F_d^i(Y_t)));$$
• $L_t^0(F_1(Y))$ is the one-dimensional local time of the continuous semimartingale $F_1(Y)$ at time $t$ and level 0.
Proof. According to Property 3.1 and using the partition of unity above, we have, for $x\in D(R)$: $\mathrm{Proj}_{\bar D}(x) = \phi^0(x)x + \sum_{i=1}^k \phi^i(x)\, G^i([F_1(x)]^+,\ F_2^i(x),\ldots,F_d^i(x))$. Since the $F^i$ are $C^2$ functions, $(F_j^i(Y_t))_{t\ge 0}$ ($2\le j\le d$) and $([F_1(Y_t)]^+)_{t\ge 0}$ remain continuous semimartingales, owing to Itô's formula for the first ones and Tanaka's formula for the latter:


because the $G^i$ are $C^2$ functions, we deduce that $\mathrm{Proj}_{\bar D}(Y_t)$ is a continuous semimartingale. To obtain its decomposition, we proceed coordinatewise. Denote $Z_t := \mathrm{Proj}_{\bar D}(Y_t)$ and fix $j\in\{1,\ldots,d\}$. First, we have
$$dZ_{j,t} = \phi^0(Y_t)\,dY_{j,t} + \sum_{i=1}^k \phi^i(Y_t)\, d(G_j^i([F_1(Y_t)]^+,\ F_2^i(Y_t),\ldots,F_d^i(Y_t))) + \sum_{i=0}^k Z_{j,t}\, d(\phi^i(Y_t)) + \sum_{i=0}^k d\langle \phi^i(Y_\cdot),\ Z_{j,\cdot}\rangle_t$$
$$= \phi^0(Y_t)\,dY_{j,t} + \sum_{i=1}^k \phi^i(Y_t)\, d(G_j^i([F_1(Y_t)]^+,\ F_2^i(Y_t),\ldots,F_d^i(Y_t))),$$

since $\sum_{i=0}^k \phi^i \equiv 1$ on $D(R)$. Using $d([F_1(Y_t)]^+) = \mathbf 1_{F_1(Y_t)>0}\, d(F_1(Y_t)) + \tfrac12\, dL_t^0(F_1(Y))$ (see Revuz and Yor, 1991), a straightforward computation with Itô's formula for $C^2$ functions leads to
$$dZ_{j,t} = \sum_{i=1}^k \phi^i(Y_t)\,\frac{\partial G_j^i}{\partial z_1}(0,\ F_2^i(Y_t),\ldots,F_d^i(Y_t))\,\tfrac12\, dL_t^0(F_1(Y))$$
$$\quad + \mathbf 1_{Y_t\in D}\,\phi^0(Y_t)\,dY_{j,t} + \mathbf 1_{Y_t\in D}\sum_{i=1}^k \phi^i(Y_t)\Big(\sum_{l=1}^d \frac{\partial G_j^i}{\partial z_l}(F^i(Y_t))\, d(F_l^i(Y_t)) + \frac12\sum_{l=1}^d\sum_{m=1}^d \frac{\partial^2 G_j^i}{\partial z_l\,\partial z_m}(F^i(Y_t))\, d\langle F_l^i(Y_\cdot),\ F_m^i(Y_\cdot)\rangle_t\Big)$$
$$\quad + \mathbf 1_{Y_t\notin D}\sum_{i=1}^k \phi^i(Y_t)\Big(\sum_{l=2}^d \frac{\partial G_j^i}{\partial z_l}(0,\ F_2^i(Y_t),\ldots,F_d^i(Y_t))\, d(F_l^i(Y_t)) + \frac12\sum_{l=2}^d\sum_{m=2}^d \frac{\partial^2 G_j^i}{\partial z_l\,\partial z_m}(0,\ F_2^i(Y_t),\ldots,F_d^i(Y_t))\, d\langle F_l^i(Y_\cdot),\ F_m^i(Y_\cdot)\rangle_t\Big),$$
where we used that $\{F_1(Y_t)\le 0\} = \{Y_t\notin D\}$ and $\phi^0(Y_t) = \phi^0(Y_t)\,\mathbf 1_{Y_t\in D}$.
The terms involving $dL_t^0(F_1(Y))$ can be identified with $\tfrac12 n_j(Y_t)$ using Property 3.1. For the terms corresponding to $\mathbf 1_{Y_t\in D}$, we obtain $dY_{j,t}$, combining the simple fact that $Y_{j,t} = \sum_{i=0}^k \phi^i(Y_t)Y_{j,t} = \phi^0(Y_t)Y_{j,t} + \sum_{i=1}^k \phi^i(Y_t)\, G_j^i(F_1(Y_t),\ F_2^i(Y_t),\ldots,F_d^i(Y_t))$ and a computation as before. The terms with $\mathbf 1_{Y_t\notin D}$ can be rewritten vectorially as $\sum_{i=1}^k \phi^i(Y_t)\, d(G^i(0,\ F_2^i(Y_t),\ldots,F_d^i(Y_t))) := dY_t^{\partial D}$. This completes the proof.
Remark 3.1. The restriction to some set $D(R)$ is necessary to have $\mathrm{Proj}_{\bar D}(\cdot)$ well defined. If $D$ is convex, $R = +\infty$.
Remark 3.2. Another way to proceed is to show that the function $z\mapsto \mathrm{Proj}_{\bar D}(z)$ is locally the difference of convex functions: it is well known that such functions preserve continuous semimartingales (see e.g. Bouleau, 1984). Rather than its semimartingale character, what is of interest for our objective is to obtain a nice decomposition of $\mathrm{Proj}_{\bar D}(Y_t)$. From this point of view, the approach we took is simpler and has the advantage that we are able to interpret and control quite easily the terms of the decomposition.
We are now able to make (36) explicit.
Corollary 3.1. Consider a domain $D$ of class $C^3$ with compact boundary (with the constant $R > 0$ defined in Property 3.1). Let $(Y_t)_{t\ge 0}$ be a continuous semimartingale taking its values in $D(R)$ (with $Y_0\in D$). Let $u(t,x)\in C^0(\mathbb R_+\times\mathbb R^d,\mathbb R)\cap C^{1,2}(\mathbb R_+\times\bar D,\mathbb R)$. We assume that, for $t\ge 0$, the support of $u(t,\cdot)$ is included in $\bar D$.
Then, $(u(t,Y_t))_{t\ge 0}$ is a continuous semimartingale, with decomposition
$$d(u(t,Y_t)) = \mathbf 1_{Y_t\in D}\,\frac{\partial u}{\partial t}(t,Y_t)\,dt + \frac12\sum_{l=1}^d \frac{\partial u}{\partial x_l}(t,Y_t)\, n_l(Y_t)\, dL_t^0(F_1(Y))$$
$$\quad + \mathbf 1_{Y_t\in D}\Big(\sum_{l=1}^d \frac{\partial u}{\partial x_l}(t,Y_t)\, dY_{l,t} + \frac12\sum_{l=1}^d\sum_{m=1}^d \frac{\partial^2 u}{\partial x_l\,\partial x_m}(t,Y_t)\, d\langle Y_{l,\cdot},\ Y_{m,\cdot}\rangle_t\Big)$$
$$\quad + \mathbf 1_{Y_t\notin D}\Big(\sum_{l=1}^d \frac{\partial u}{\partial x_l}(t,\mathrm{Proj}_{\bar D}(Y_t))\, dY_{l,t}^{\partial D} + \frac12\sum_{l=1}^d\sum_{m=1}^d \frac{\partial^2 u}{\partial x_l\,\partial x_m}(t,\mathrm{Proj}_{\bar D}(Y_t))\, d\langle Y_{l,\cdot}^{\partial D},\ Y_{m,\cdot}^{\partial D}\rangle_t\Big).$$

Proof. Since $u(t,\cdot)$ vanishes outside $D$ for all $t\ge 0$, we have $u(t,Y_t) - u(0,Y_0) = u(t,\mathrm{Proj}_{\bar D}(Y_t)) - u(0,\mathrm{Proj}_{\bar D}(Y_0))$. Now, combine Proposition 3.1 and the classical Itô formula to complete the proof.
Remark 3.3. If d = 1, Corollary 3.1 reduces to the Itô–Tanaka formula (see Revuz and Yor, 1991).
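In this one-dimensional case, the formula can be checked numerically. The sketch below (not part of the paper; all parameters are illustrative) verifies Tanaka's formula $W_T^+ = W_0^+ + \int_0^T \mathbf 1_{W_s>0}\,dW_s + \tfrac12 L_T^0$ for a Brownian motion, approximating the local time by the occupation time of $(-\varepsilon,\varepsilon)$ divided by $2\varepsilon$:

```python
import numpy as np

# Monte-Carlo check of Tanaka's formula, the d = 1 prototype of Corollary 3.1.
rng = np.random.default_rng(1)
n_paths, n_steps, T, eps = 2000, 2000, 1.0, 0.05
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])      # left grid points W_{t_i}
ito = np.sum((W_left > 0) * dW, axis=1)                      # Euler sum for int 1_{W>0} dW
loc = np.sum(np.abs(W_left) < eps, axis=1) * dt / (2 * eps)  # occupation-time estimate of L_T^0
residual = float(np.mean(np.maximum(W[:, -1], 0.0) - ito - 0.5 * loc))
```

The mean residual is small and shrinks as the step size and $\varepsilon$ decrease, with $\varepsilon$ kept larger than $\sqrt{dt}$ so that the occupation-time estimator resolves the local time.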
Before proving Theorems 2.3 and 2.4, we state three technical lemmas; the last two will be proved in Section 5.
Lemma 3.5. Under (H1)–(H3) and (H5-1), $v$ is at least a $C^{1,2}([0,T]\times\bar D,\mathbb R)$ function, and there is a function $K(T)$ such that, for all multi-index $\alpha$ of length $|\alpha|\le 2$, we have
$$\forall (s,x)\in[0,T]\times\bar D\qquad |\partial_x^\alpha v(s,x)|\le K(T)\|f\|_D^{(m,\eta)}. \qquad(38)$$
Proof. Apply the same arguments as for Lemma 3.4.
Lemma 3.6. Under (H1)–(H3), there is a function $K(T)$ such that, for $s\in(0,T]$ and $x\in D$, we have
$$P_x(\tilde X_{\varphi(s)}\in D,\ \tilde X_s\notin D)\le \frac{K(T)}{\sqrt N\,\sqrt s} \qquad\text{and}\qquad P_x(X_{\varphi(s)}\in D,\ X_s\notin D)\le \frac{K(T)}{\sqrt N\,\sqrt s}.$$


Lemma 3.7. Under (H1)–(H3), there is a function $K(T)$ such that, for $x\in D$, we have
$$E_x[L_{T\wedge\tilde\tau_d}^0(F_1(\tilde X))]\le \frac{K(T)}{\sqrt N} \qquad\text{and}\qquad E_x[L_{T\wedge\tau_d}^0(F_1(X))]\le \frac{K(T)}{\sqrt N}.$$

We are now able to prove Theorem 2.3.
1. Suppose first that $f$ is continuous on $D$ and satisfies (H4). Fix $\varepsilon > 0$. Considering (11), we easily obtain
$$E_d(f,T,x,N) = E_x[\mathbf 1_{\tilde\tau(R)<T<\tilde\tau_d}\, f(\tilde X_T)] + E_x[v(T\wedge\tilde\tau_d\wedge\tilde\tau(R),\ \tilde X_{T\wedge\tilde\tau_d\wedge\tilde\tau(R)}) - v((T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R),\ \tilde X_{(T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R)})]$$
$$\quad + E_x[v((T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R),\ \tilde X_{(T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R)}) - v(0,\tilde X_0)]$$
$$:= E_1(N) + E_2(\varepsilon,N) + E_3(\varepsilon,N).$$

Observe that $E_1(N)$ is exponentially small and is bounded by $K(T)N^{-1/2}\|f\|_\infty$. For this, use the classical upper bound for large-deviations probabilities (see Lemma 4.1 in Section 4):
$$\forall x\in\mathbb R^d\ \ \forall\eta\ge 0\qquad P_x\Big(\sup_{t\in[S,S']}\|\tilde X_t - \tilde X_S\|\ge\eta\Big)\le K(T)\exp\Big(-c\,\frac{\eta^2}{\Delta}\Big),$$
for $S$ and $S'$ two stopping times, bounded by $T$, such that $0\le S' - S\le\Delta$.
From the continuity of $v$ at $t = T$ (because $f$ is continuous) and the continuity of $\tilde X_{\cdot\wedge\tilde\tau_d\wedge\tilde\tau(R)}$, we prove that $\lim_{\varepsilon\to 0} E_2(\varepsilon,N) = 0$, using the Lebesgue dominated convergence theorem.
To complete the proof, it now suffices to show that
$$|E_3(\varepsilon,N)|\le K(T)\,\frac{\|f\|_\infty}{1\wedge\varepsilon^4}\,\frac{1}{\sqrt N}, \qquad(39)$$
uniformly in $\varepsilon$. For this, we apply Itô's formula from Corollary 3.1 with $u = v$, $Y_t = \tilde X_{t\wedge\tilde\tau(R)}$, between times $0$ and $(T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R)$. If we introduce the operator $L^z$ (see (12)) and take into account that $\partial_t v + Lv = 0$ in $[0,T)\times\bar D$, we easily obtain
$$E_3(\varepsilon,N) = E_x\Big[\int_0^{(T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R)} dt\ \mathbf 1_{\tilde X_t\in D}\,(L^z v - Lv)|_{z=\tilde X_{\varphi(t)}}(t,\tilde X_t)\Big]$$
$$\quad + E_x\Big[\int_0^{(T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R)} \frac12\sum_{l=1}^d \frac{\partial v}{\partial x_l}(t,\tilde X_t)\, n_l(\tilde X_t)\, dL_t^0(F_1(\tilde X))\Big]$$
$$\quad + E_x\Big[\int_0^{(T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R)} \Big(\sum_{l=1}^d \frac{\partial v}{\partial x_l}(t,\mathrm{Proj}_{\bar D}(\tilde X_t))\,\mathbf 1_{\tilde X_t\notin D}\, d\tilde X_{l,t}^{\partial D} + \frac12\sum_{l=1}^d\sum_{m=1}^d \frac{\partial^2 v}{\partial x_l\,\partial x_m}(t,\mathrm{Proj}_{\bar D}(\tilde X_t))\,\mathbf 1_{\tilde X_t\notin D}\, d\langle \tilde X_{l,\cdot}^{\partial D},\ \tilde X_{m,\cdot}^{\partial D}\rangle_t\Big)\Big]$$
$$:= E_4(\varepsilon,N) + E_5(\varepsilon,N) + E_6(\varepsilon,N). \qquad(40)$$


Since $dL_t^0(F_1(\tilde X))$ is a non-negative measure, using inequality (22) and Lemma 3.7, we deduce
$$|E_5(\varepsilon,N)|\le K(T)\,\frac{\|f\|_\infty}{1\wedge\varepsilon}\, E_x[L_{T\wedge\tilde\tau_d}^0(F_1(\tilde X))]\le \frac{K(T)}{\sqrt N}\,\frac{\|f\|_\infty}{1\wedge\varepsilon}, \qquad(41)$$
uniformly in $\varepsilon$. Since $\tilde X$ is an Itô process with adapted and bounded coefficients, $\tilde X^{\partial D}$ has the same property (see Proposition 3.1). Hence, by using inequality (22) and Lemma 3.6, we have
$$|E_6(\varepsilon,N)|\le K(T)\,\frac{\|f\|_\infty}{1\wedge\varepsilon^2}\, E_x\Big[\int_0^{(T-\varepsilon)\wedge\tilde\tau_d\wedge\tilde\tau(R)} \mathbf 1_{\tilde X_t\notin D}\, dt\Big] \le K(T)\,\frac{\|f\|_\infty}{1\wedge\varepsilon^2}\, E_x\Big[\int_0^T \mathbf 1_{\tilde X_{\varphi(t)}\in D}\,\mathbf 1_{\tilde X_t\notin D}\, dt\Big] \le \frac{K(T)}{\sqrt N}\,\frac{\|f\|_\infty}{1\wedge\varepsilon^2}, \qquad(42)$$
uniformly in $\varepsilon$. It remains to control $E_4(\varepsilon,N)$. If $\psi$ is the cutting function introduced at the beginning of Section 3, we have
$$E_4(\varepsilon,N) = E_x\Big[\int_0^{T-\varepsilon} dt\ \mathbf 1_{\tilde X_t\in D}\,\mathbf 1_{t<\tilde\tau_d\wedge\tilde\tau(R)}\,(L^z - L)|_{z=\tilde X_{\varphi(t)}}[(1-\psi)v](t,\tilde X_t)\Big] + E_x\Big[\int_0^{T-\varepsilon} dt\ \mathbf 1_{\tilde X_t\in D}\,\mathbf 1_{t<\tilde\tau_d\wedge\tilde\tau(R)}\,(L^z - L)|_{z=\tilde X_{\varphi(t)}}[\psi v](t,\tilde X_t)\Big]$$
$$:= E_7(\varepsilon,N) + E_8(\varepsilon,N). \qquad(43)$$

When we make $L^z - L$ explicit, we can assert that, for $g = B_i$ or $(\sigma\sigma^*)_{i,j}$, we have
$$(L^z - L)|_{z=\tilde X_{\varphi(t)}}[(1-\psi)v](t,\tilde X_t) = \sum_{1\le|\alpha|\le 2} C_\alpha\, \partial_x^\alpha[(1-\psi)v](t,\tilde X_t)\,[g(\tilde X_t) - g(\tilde X_{\varphi(t)})].$$
Since $(1-\psi)$ has support included in $V_{\partial D}(\varepsilon)$, the derivatives of $[(1-\psi)v]$ are controlled by (22), whereas the increments of $\tilde X$ are estimated by $\big\|\,\|\tilde X_t - \tilde X_{\varphi(t)}\|_{\mathbb R^d}\big\|_{L^p}\le K(T)N^{-1/2}$. It readily follows that
$$|E_7(\varepsilon,N)|\le \frac{K(T)}{\sqrt N}\,\frac{\|f\|_\infty}{1\wedge\varepsilon^2}, \qquad(44)$$
uniformly in $\varepsilon$.

To control $E_8(\varepsilon,N)$, we need to transform its expression. Since $\mathrm{Supp}(\psi v)\subset D$, we have
$$\mathbf 1_{\tilde X_t\in D}\,\mathbf 1_{t<\tilde\tau_d\wedge\tilde\tau(R)}\,(L^z - L)|_{z=\tilde X_{\varphi(t)}}[\psi v](t,\tilde X_t) = \mathbf 1_{\varphi(t)<\tilde\tau_d\wedge\tilde\tau(R)}\,(L^z - L)|_{z=\tilde X_{\varphi(t)}}[\psi v](t\wedge\tilde\tau_d\wedge\tilde\tau(R),\ \tilde X_{t\wedge\tilde\tau_d\wedge\tilde\tau(R)}).$$


Hence, $E_8(\varepsilon,N)$ can be rewritten as
$$E_8(\varepsilon,N) = \int_0^{T-\varepsilon} dt\, E_x[\mathbf 1_{\varphi(t)<\tilde\tau_d\wedge\tilde\tau(R)}\,(L^z - L)|_{z=\tilde X_{\varphi(t)}}[\psi v](t\wedge\tilde\tau_d\wedge\tilde\tau(R),\ \tilde X_{t\wedge\tilde\tau_d\wedge\tilde\tau(R)})]$$
$$= \int_0^{T-\varepsilon} dt\, E_x\Big[\mathbf 1_{\varphi(t)<\tilde\tau_d\wedge\tilde\tau(R)} \int_{\varphi(t)}^{t\wedge\tilde\tau_d\wedge\tilde\tau(R)} ds\, [\partial_t + L^z](L^z[\psi v] - L[\psi v])|_{z=\tilde X_{\varphi(t)}}(s,\tilde X_s)\Big],$$
by an application of Itô's formula to $\tilde X$, between times $\varphi(t)$ and $t\wedge\tilde\tau_d\wedge\tilde\tau(R)$, to the function $(L^z[\psi v] - L[\psi v])(s,x)\in C^{1,2}([0,T)\times\mathbb R^d,\mathbb R)$: note that the term corresponding to $\varphi(t)$ vanishes because $(L^z u)(z) = (Lu)(z)$. Since the coefficients of $L^z$ and $L$ do not depend on $t$, we obtain
$$E_8(\varepsilon,N) = \int_0^{T-\varepsilon} dt \int_{\varphi(t)}^t ds\ E_x[\mathbf 1_{s<\tilde\tau_d\wedge\tilde\tau(R)}\,(L[\psi Lv] - L^z[L(\psi v) + \psi Lv] + (L^z)^2[\psi v])|_{z=\tilde X_{\varphi(t)}}(s,\tilde X_s)],$$
exploiting again $\partial_t v + Lv = 0$ in $[0,T)\times\bar D$. Verify that
$$(L[\psi Lv] - L^z[L(\psi v) + \psi Lv] + (L^z)^2[\psi v])(s,x) = \sum_{|\alpha|\le 4,\ |\alpha|+|\alpha'|\le 4,\ y=x\ \text{or}\ y=z} g_{\alpha,\alpha'}(y)\, \partial_x^\alpha[v\,\partial_x^{\alpha'}\psi](s,x),$$
where the functions $g_{\alpha,\alpha'}(y)$ depend only on $B_i$, $(\sigma\sigma^*)_{i,j}$ and their derivatives up to order $4-|\alpha|$. From inequality (25) of Lemma 3.1, we deduce that
4 − | |. From inequality (25) of Lemma 3.1, we deduce that
Ex [5s¡˜d ∧(R)
˜ (−L[ Lv] + Lz [L( v) +
6
hence

kfk∞ K(T )
;
1 ∧ 4 T 2

Lv] − L2z [ v])|z=X˜’(t) (s; X˜s )]

kfk∞ K(T )
;
(45)
1 ∧ 4 N
uniformly in . Combining estimations (41), (42), (44), (45) with (40) and (43), we
prove that estimate (39) holds: this completes the proof if f is continuous in D.
2. Suppose now that f is only measurableRand satis es (H4). Denote by R and ˜ the
˜ By
two measures de ned by Ex [5T ¡ f(XT )] := f d and Ex [5T ¡˜d f(X˜T )] := f d .
˜ by a sequence of continuous
a density argument, f can be approximated in L1 ( + )
functions denoted by (fp )p¿0 : moreover, we can impose that each function fp satis es
kfp k∞ 6kfk∞ and d(Supp(fp ; @D))¿2, so that the result for continuous function
applies, uniformly in p. This nishes the proof.
3. Suppose that f satis es (H5-1). In that case, the rst and second spatial derivatives
of v are uniformly bounded (see Lemma 3.5). This enables us to proceed analogously
to the case 1 (in this simpler situation, E4 (; N ) can be directly bounded with the same
arguments as for E7 (; N )).
|E8 (; N )|6


For the proof of Theorem 2.4, the same reasoning applies. Note that $E_4(\varepsilon,N) = 0$, because there is no approximation of the infinitesimal generator of the diffusion.
Remark 3.4. Actually, a