
arXiv:math/0601035v2 [math.PR] 31 Dec 2006

G–Expectation, G–Brownian Motion and Related Stochastic Calculus of Itô Type

Shige PENG∗
Institute of Mathematics, Fudan University
Institute of Mathematics, Shandong University
250100, Jinan, China
peng@sdu.edu.cn
1st version: arXiv:math.PR/0601035 v1 3 Jan 2006
Abstract. We introduce a notion of nonlinear expectation, the G–expectation, generated by a nonlinear heat equation with a given infinitesimal generator G. We first discuss the notion of the G–standard normal distribution. With this nonlinear distribution we can introduce our G–expectation, under which the canonical process is a G–Brownian motion. We then establish the related stochastic calculus, especially stochastic integrals of Itô's type with respect to our G–Brownian motion, and derive the related Itô's formula. We also obtain the existence and uniqueness of solutions of stochastic differential equations under our G–expectation. Compared with our previous framework of g–expectations, the theory of G–expectation is intrinsic in the sense that it is not based on a given (linear) probability space.


Keywords: g–expectation, G–expectation, G–normal distribution, BSDE, SDE, nonlinear probability theory, nonlinear expectation, Brownian motion, Itô's stochastic calculus, Itô's integral, Itô's formula, Gaussian process, quadratic variation process

MSC 2000 Classification Numbers: 60H10, 60H05, 60H30, 60J60, 60J65,
60A05, 60E05, 60G05, 60G51, 35K55, 35K15, 49L25

∗ The author thanks the partial support of the Natural Science Foundation of China, grant No. 10131040. He thanks the anonymous referee for constructive suggestions, as well as Juan Li for corrections of typos. Special thanks go to the organizers of the memorable Abel Symposium 2005 for their warm hospitality and excellent work (see also this paper in: http://abelsymposium.no/2005/preprints).

1 Introduction


In 1933 Andrei Kolmogorov published his Foundations of Probability Theory (Grundbegriffe der Wahrscheinlichkeitsrechnung), which set out the axiomatic basis of modern probability theory. The whole theory is built on the measure theory created by Émile Borel and Henri Lebesgue and profoundly developed by Radon and Fréchet. The triple $(\Omega, \mathcal{F}, P)$, i.e., a measurable space $(\Omega, \mathcal{F})$ equipped with a probability measure $P$, has become a standard notion which appears in most papers of probability and mathematical finance. The second important notion, which in fact plays a role equivalent to that of the probability measure itself, is the notion of expectation. The expectation $E[X]$ of an $\mathcal{F}$–measurable random variable $X$ is defined as the integral $\int_\Omega X\,dP$. A very original idea of Kolmogorov's Grundbegriffe is to use the Radon–Nikodym theorem to introduce the conditional probability and the related conditional expectation under a given σ–algebra $\mathcal{G} \subset \mathcal{F}$. It is hard to imagine the present state of the art of probability theory, especially of stochastic processes, e.g., martingale theory, without such a notion of conditional expectation. A given flow of information $(\mathcal{F}_t)_{t\ge 0}$ is ingeniously and consistently combined with the related conditional expectations $E[X|\mathcal{F}_t]$, $t \ge 0$. Itô's calculus, that is, Itô's integration, Itô's formula and Itô's equation since 1942 [24], is, I think, the most beautiful discovery on this ground.
A very interesting problem is to develop a nonlinear expectation $E[\cdot]$ under which we still have such a notion of conditional expectation. A notion of g–expectation was introduced by Peng, 1997 ([35] and [36]), in which the conditional expectation $E_g[X|\mathcal{F}_t]$, $t \ge 0$, is the solution of a backward stochastic differential equation (BSDE), within the classical framework of Itô's calculus, with $X$ as its given terminal condition and with a given real function $g$ as the generator of the BSDE, driven by a Brownian motion defined on a given probability space $(\Omega, \mathcal{F}, P)$. It is completely and perfectly characterized by the function $g$. The above conditional expectation is characterized by the following well-known condition:
$E_g[E_g[X|\mathcal{F}_t]I_A] = E_g[XI_A], \quad \forall A \in \mathcal{F}_t.$
Since then many results have been obtained in this direction (see, among others, [4], [5], [6], [7], [11], [12], [8], [9], [25], [26], [37], [41], [42], [44], [46], [27]).
In [40] (see also [39]), we have constructed a kind of filtration–consistent nonlinear expectation through the so–called nonlinear Markov chain. Compared with the framework of g–expectation, the theory of G–expectation is intrinsic, in a sense similar to that of "intrinsic geometry": it is not based on a classical probability space given a priori.
In this paper, we concentrate on a concrete case of the above situation and introduce a notion of G–expectation which is generated by a very simple one-dimensional fully nonlinear heat equation, called the G–heat equation, whose coefficient has only one parameter more than the classical heat equation considered since Bachelier 1900 and Einstein 1905 to describe Brownian motion. But this slight generalization changes the whole picture. Firstly, a random variable $X$ with "G–normal distribution" is defined via this heat equation. With this single nonlinear distribution we manage to introduce our G–expectation, under which the canonical process is a G–Brownian motion.
We then establish the related stochastic calculus, especially stochastic integrals of Itô's type with respect to our G–Brownian motion. A new type of Itô's formula is obtained. We also establish the existence and uniqueness of solutions of stochastic differential equations under our G–stochastic calculus.
In this paper we concentrate on 1–dimensional G–Brownian motion. But our method of [40] can be applied to the multi–dimensional G–normal distribution, G–Brownian motion and the related stochastic calculus. This will be given in [43].
Recently a new type of second order BSDE was proposed to give a probabilistic approach to fully nonlinear second order PDEs, see [10]. In finance, uncertain volatility models in which the PDE of Black–Scholes type is modified to a fully nonlinear one have been studied, see [3] and [29]. A point of view of nonlinear expectation and conditional expectation was proposed in [39] and [40]. When I presented the results of this paper at the Workshop on Risk Measures in Evry, July 2006, I met Laurent Denis and learned of his interesting work, joint with Martini, on volatility model uncertainty [16]. See also our forthcoming paper [17] for the pathwise analysis of G–Brownian motion.
As indicated in Remark 3, the nonlinear expectations discussed in this paper are equivalent to the notion of coherent risk measures. This, together with the related conditional expectations $E[\cdot|\mathcal{F}_t]$, $t \ge 0$, yields a dynamic risk measure: the G–risk measure.
This paper is organized as follows: in Section 2, we recall the framework established in [40] and adapt it to our objective. In Section 3 we introduce the 1–dimensional standard G–normal distribution and discuss its main properties. In Section 4 we introduce 1–dimensional G–Brownian motion, the corresponding G–expectation and their main properties. We then establish the stochastic integral of Itô's type with respect to our G–Brownian motion and the corresponding Itô's formula in Section 5, and the existence and uniqueness theorem for SDEs driven by G–Brownian motion in Section 6.

2 Nonlinear expectation: a general framework

We briefly recall the notion of nonlinear expectation introduced in [40]. Following Daniell in his famous integration theory (see Daniell 1918 [14]), we begin with a vector lattice. Let Ω be a given set and let $\mathcal{H}$ be a vector lattice of real functions defined on Ω containing 1, namely, $\mathcal{H}$ is a linear space such that $1 \in \mathcal{H}$ and $X \in \mathcal{H}$ implies $|X| \in \mathcal{H}$. $\mathcal{H}$ is a space of random variables. We assume that the functions in $\mathcal{H}$ are all bounded. Notice that
$a \wedge b = \min\{a, b\} = \frac{1}{2}(a + b - |a - b|), \quad a \vee b = -[(-a)\wedge(-b)].$
Thus $X, Y \in \mathcal{H}$ implies that $X \wedge Y$, $X \vee Y$, $X^+ = X \vee 0$ and $X^- = (-X)^+$ are all in $\mathcal{H}$.

Definition 1 A nonlinear expectation E is a functional $E : \mathcal{H} \to \mathbb{R}$ satisfying the following properties:
(a) Monotonicity: if $X, Y \in \mathcal{H}$ and $X \ge Y$, then $E[X] \ge E[Y]$.
(b) Preservation of constants: $E[c] = c$.
In this paper we are interested in the expectations which also satisfy:
(c) Sub-additivity (or self–dominated property): $E[X] - E[Y] \le E[X - Y]$, $\forall X, Y \in \mathcal{H}$.
(d) Positive homogeneity: $E[\lambda X] = \lambda E[X]$, $\forall \lambda \ge 0$, $X \in \mathcal{H}$.
(e) Constant translatability: $E[X + c] = E[X] + c$.
Remark 2 The above condition (d) has an equivalent form: $E[\lambda X] = \lambda^+ E[X] + \lambda^- E[-X]$. This form will be very convenient for the conditional expectations studied in this paper (see (vi) of Proposition 16).
Remark 3 We recall that the notion of expectations satisfying (c)–(e) above was systematically introduced by Artzner, Delbaen, Eber and Heath [1], [2], in the case where Ω is a finite set, and by Delbaen [15] in the general situation, with the notation of risk measure: $\rho(X) = E[-X]$. See also Huber [23] for an even earlier study of this notion E (called the upper expectation $E^*$ in Ch. 10 of [23]) in the case of a finite set Ω. See Rosazza Gianin [46] or Peng [38], and El Karoui and Barrieu [18], [19], for dynamic risk measures using g–expectations. Super-hedging and super-pricing (see [20] and [21]) are also closely related to this formulation.
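To make the sublinear structure of (c)–(e) concrete, here is a minimal numerical illustration (not from the paper; the finite Ω and the set of probability vectors are arbitrary choices of mine): a sublinear expectation represented as a supremum of linear expectations, together with the associated coherent risk measure $\rho(X) = E[-X]$ of Remark 3.

```python
import numpy as np

# Sublinear expectation on a finite Omega with |Omega| = 3, represented as a
# supremum of linear expectations over a set P of probability vectors.
# The probability vectors below are arbitrary choices for this illustration.
P = np.array([[0.2, 0.3, 0.5],
              [0.4, 0.4, 0.2],
              [0.25, 0.25, 0.5]])

def E(X):
    """Sublinear expectation E[X] = sup_{p in P} sum_i p_i X_i."""
    return float(np.max(P @ np.asarray(X, dtype=float)))

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 0.5, -1.0])

# (a) monotonicity: X >= X - [0, 0.5, 0] implies E[X] >= E[X - [0, 0.5, 0]]
assert E(X) >= E(X - np.array([0.0, 0.5, 0.0]))
# (b) preservation of constants
assert abs(E(np.full(3, 7.0)) - 7.0) < 1e-12
# (c) sub-additivity: E[X] - E[Y] <= E[X - Y]
assert E(X) - E(Y) <= E(X - Y) + 1e-12
# (d) positive homogeneity and (e) constant translatability
assert abs(E(2.5 * X) - 2.5 * E(X)) < 1e-12
assert abs(E(X + 4.0) - (E(X) + 4.0)) < 1e-12
# The associated coherent risk measure of Remark 3: rho(X) = E[-X]
print("E[X] =", E(X), " rho(X) =", E(-X))
```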
Remark 4 We observe that $\mathcal{H}_0 = \{X \in \mathcal{H} : E[|X|] = 0\}$ is a linear subspace of $\mathcal{H}$. Taking $\mathcal{H}_0$ as our null space, we introduce the quotient space $\mathcal{H}/\mathcal{H}_0$. Observe that, for every $\{X\} \in \mathcal{H}/\mathcal{H}_0$ with a representative $X \in \mathcal{H}$, we can define an expectation $E[\{X\}] := E[X]$ which still satisfies (a)–(e) of Definition 1. Following [40], we set $\|X\| := E[|X|]$, $X \in \mathcal{H}/\mathcal{H}_0$. It is easy to check that $\mathcal{H}/\mathcal{H}_0$ is a normed space under $\|\cdot\|$. We then extend $\mathcal{H}/\mathcal{H}_0$ to its completion $[\mathcal{H}]$ under this norm; $([\mathcal{H}], \|\cdot\|)$ is a Banach space. The nonlinear expectation $E[\cdot]$ can also be continuously extended from $\mathcal{H}/\mathcal{H}_0$ to $[\mathcal{H}]$, and it still satisfies (a)–(e).
For each $X \in \mathcal{H}$, the mappings
$X \mapsto X^+ \quad \text{and} \quad X \mapsto X^-, \quad \text{from } \mathcal{H} \text{ to } \mathcal{H},$
satisfy
$|X^+ - Y^+| \le |X - Y| \quad \text{and} \quad |X^- - Y^-| = |(-X)^+ - (-Y)^+| \le |X - Y|.$
Thus they are both contraction mappings under $\|\cdot\|$ and can be continuously extended to the Banach space $([\mathcal{H}], \|\cdot\|)$.
We define the partial order "≥" in this Banach space.

Definition 5 An element X in $([\mathcal{H}], \|\cdot\|)$ is said to be nonnegative, or $X \ge 0$, $0 \le X$, if $X = X^+$. We also write $X \ge Y$, or $Y \le X$, if $X - Y \ge 0$.
It is easy to check that $X \ge Y$ and $Y \ge X$ imply $X = Y$ in $([\mathcal{H}], \|\cdot\|)$.
The nonlinear expectation $E[\cdot]$ can be continuously extended to $([\mathcal{H}], \|\cdot\|)$, on which (a)–(e) still hold.

3 G–normal distributions

For a given positive integer n, we denote by $lip(\mathbb{R}^n)$ the space of all bounded Lipschitz real functions on $\mathbb{R}^n$. In this section $\mathbb{R}$ is considered as Ω and $lip(\mathbb{R})$ as $\mathcal{H}$.
In the classical linear situation, a random variable $X(x) = x$ with standard normal distribution, i.e., $X \sim N(0,1)$, is characterized by
$E[\varphi(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-x^2/2}\varphi(x)\,dx, \quad \forall \varphi \in lip(\mathbb{R}).$
It is known since Bachelier 1900 and Einstein 1905 that $E[\varphi(X)] = u(1, 0)$, where $u = u(t,x)$ is the solution of the heat equation
$\partial_t u = \frac{1}{2}\partial_{xx}^2 u$   (1)
with Cauchy condition $u(0,x) = \varphi(x)$.
In this paper we set $G(a) = \frac{1}{2}(a^+ - \sigma_0^2 a^-)$, $a \in \mathbb{R}$, where $\sigma_0 \in [0,1]$ is fixed.
Definition 6 A real valued random variable X with the standard G–normal distribution is characterized by its G–expectation defined by
$E[\varphi(X)] = P_1^G(\varphi) := u(1, 0), \quad \varphi \in lip(\mathbb{R}),$
where $u = u(t,x)$ is a bounded continuous function on $[0,\infty)\times\mathbb{R}$ which is the (unique) viscosity solution of the following nonlinear parabolic partial differential equation (PDE):
$\partial_t u - G(\partial_{xx}^2 u) = 0, \quad u(0,x) = \varphi(x).$   (2)
In case no confusion is caused, we often call the functional $P_1^G(\cdot)$ the standard G–normal distribution. When $\sigma_0 = 1$, the above PDE becomes the standard heat equation (1) and thus this G–distribution is just the classical normal distribution $N(0,1)$:
$P_1^G(\varphi) = P_1(\varphi) := \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-x^2/2}\varphi(x)\,dx.$

Remark 7 The function G can be written as $G(a) = \frac{1}{2}\sup_{\sigma_0 \le \sigma \le 1}\sigma^2 a$, thus the nonlinear heat equation (2) is a special kind of Hamilton–Jacobi–Bellman equation. The existence and uniqueness of (2) in the sense of viscosity solutions can be found in, for example, [13], [22], [34], [47], and in [28] for the $C^{1,2}$-solution when $\sigma_0 > 0$ (see also [32] for elliptic cases). Readers who are unfamiliar with the notion of viscosity solutions of PDEs can, throughout this paper, simply consider the case $\sigma_0 > 0$, in which the solution u becomes a classical smooth function.
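As a concrete illustration of Definition 6 and Remark 7, the following minimal finite-difference sketch (an illustration only, not part of the paper; the explicit scheme, the truncated domain and the test functions are my own assumptions) computes $P_1^G(\varphi) = u(1,0)$ by time-stepping the G-heat equation and compares the result with the closed-form values given later in Proposition 11 for a convex and a concave φ.

```python
import numpy as np

# Explicit finite-difference scheme for du/dt = G(d^2u/dx^2),
# G(a) = (a^+ - sigma0^2 a^-)/2, on a truncated domain [-L, L].
def g_expectation(phi, sigma0=0.5, t=1.0, L=6.0, nx=601):
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx ** 2                 # explicit-scheme stability: dt <= dx^2
    n_steps = int(np.ceil(t / dt))
    dt = t / n_steps
    u = phi(x).astype(float)
    for _ in range(n_steps):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        G = 0.5 * (np.maximum(uxx, 0.0) - sigma0 ** 2 * np.maximum(-uxx, 0.0))
        u = u + dt * G                 # boundary values stay frozen
    return u[nx // 2]                  # u(t, 0) = P_t^G(phi)(0)

sigma0 = 0.5
phi_convex = lambda x: np.abs(x)       # convex test function
phi_concave = lambda x: -np.abs(x)     # concave test function
# Proposition 11: for convex phi the value is the N(0,1) expectation,
# for concave phi the N(0, sigma0^2) expectation.
print(g_expectation(phi_convex, sigma0), np.sqrt(2 / np.pi))             # ~0.798
print(g_expectation(phi_concave, sigma0), -sigma0 * np.sqrt(2 / np.pi))  # ~-0.399
```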
Remark 8 It is known that $u(t,\cdot) \in lip(\mathbb{R})$ (see e.g. [47] Ch. 4, Prop. 3.1 or [34] Lemma 3.1 for the Lipschitz continuity of $u(t,\cdot)$, or Lemma 5.5 and Proposition 5.6 in [39] for a more general conclusion). The boundedness follows simply from the comparison theorem (or maximum principle) for this PDE. It is also easy to check that, for a given $\psi \in lip(\mathbb{R}^2)$, $P_1^G(\psi(x,\cdot))$ is still a bounded Lipschitz function in x.
In general situations we have, from the comparison theorem for PDEs,
$P_1^G(\varphi) \ge P_1(\varphi), \quad \forall \varphi \in lip(\mathbb{R}).$   (3)

The corresponding normal distribution with mean $x \in \mathbb{R}$ and variance $t > 0$ is $P_1^G(\varphi(x + \sqrt{t}\times\cdot))$. Just as in the classical situation, we have
Lemma 9 For each $\varphi \in lip(\mathbb{R})$, the function
$u(t,x) = P_1^G(\varphi(x + \sqrt{t}\times\cdot)), \quad (t,x) \in [0,\infty)\times\mathbb{R},$   (4)
is the solution of the nonlinear heat equation (2) with the initial condition $u(0,\cdot) = \varphi(\cdot)$.
Proof. Let $u \in C([0,\infty)\times\mathbb{R})$ be the viscosity solution of (2) with $u(0,\cdot) = \varphi(\cdot) \in lip(\mathbb{R})$. For a fixed $(\bar{t},\bar{x}) \in (0,\infty)\times\mathbb{R}$, we denote $\bar{u}(t,x) = u(t\bar{t}, x\sqrt{\bar{t}} + \bar{x})$. Then $\bar{u}$ is the viscosity solution of (2) with the initial condition $\bar{u}(0,x) = \varphi(x\sqrt{\bar{t}} + \bar{x})$. Indeed, let $\psi$ be a $C^{1,2}$ function on $(0,\infty)\times\mathbb{R}$ such that $\psi \ge \bar{u}$ (resp. $\psi \le \bar{u}$) and $\psi(\tau,\xi) = \bar{u}(\tau,\xi)$ for a fixed $(\tau,\xi) \in (0,\infty)\times\mathbb{R}$. We have $\psi(\frac{t}{\bar{t}}, \frac{x-\bar{x}}{\sqrt{\bar{t}}}) \ge u(t,x)$ for all $(t,x)$, and
$\psi\Big(\frac{t}{\bar{t}}, \frac{x-\bar{x}}{\sqrt{\bar{t}}}\Big) = u(t,x) \quad \text{at } (t,x) = (\tau\bar{t}, \xi\sqrt{\bar{t}} + \bar{x}).$
Since u is the viscosity solution of (2), at the point $(t,x) = (\tau\bar{t}, \xi\sqrt{\bar{t}} + \bar{x})$ we have
$\frac{\partial \psi(\frac{t}{\bar{t}}, \frac{x-\bar{x}}{\sqrt{\bar{t}}})}{\partial t} - G\Big(\frac{\partial^2 \psi(\frac{t}{\bar{t}}, \frac{x-\bar{x}}{\sqrt{\bar{t}}})}{\partial x^2}\Big) \le 0 \ \ (\text{resp. } \ge 0).$
But since G is positively homogeneous, i.e., $G(\lambda a) = \lambda G(a)$ for $\lambda \ge 0$, we thus derive
$\Big(\frac{\partial \psi(t,x)}{\partial t} - G\Big(\frac{\partial^2 \psi(t,x)}{\partial x^2}\Big)\Big)\Big|_{(t,x)=(\tau,\xi)} \le 0 \ \ (\text{resp. } \ge 0).$
This implies that $\bar{u}$ is a viscosity subsolution (resp. supersolution) of (2). According to the definition of $P_1^G(\cdot)$ we obtain (4).

Definition 10 We denote
$P_t^G(\varphi)(x) = P_1^G(\varphi(x + \sqrt{t}\times\cdot)) = u(t,x), \quad (t,x) \in [0,\infty)\times\mathbb{R}.$   (5)
From the above lemma, for each $\varphi \in lip(\mathbb{R})$, we have the following Kolmogorov–Chapman chain rule:
$P_t^G(P_s^G(\varphi))(x) = P_{t+s}^G(\varphi)(x), \quad s,t \in [0,\infty),\ x \in \mathbb{R}.$   (6)
Nonlinear semigroups of this type were studied in Nisio 1976 [30], [31].
Proposition 11 For each $t > 0$, the G–normal distribution $P_t^G$ is a nonlinear expectation on $\mathcal{H} = lip(\mathbb{R})$, with $\Omega = \mathbb{R}$, satisfying (a)–(e) of Definition 1. The corresponding completion space $[\mathcal{H}] = [lip(\mathbb{R})]_t$ under the norm $\|\varphi\|_t := P_t^G(|\varphi|)(0)$ contains $\varphi(x) = x^n$, $n = 1, 2, \cdots$, as well as $x^n\psi$, $\psi \in lip(\mathbb{R})$, as special elements. Relation (5) still holds. We also have the following properties:
(1) Central symmetry: $P_t^G(\varphi(\cdot)) = P_t^G(\varphi(-\cdot))$;
(2) For each convex $\varphi \in [lip(\mathbb{R})]$ we have
$P_t^G(\varphi)(0) = \frac{1}{\sqrt{2\pi t}}\int_{-\infty}^{\infty}\varphi(x)\exp\Big(-\frac{x^2}{2t}\Big)dx;$
for each concave φ we have, for $\sigma_0 > 0$,
$P_t^G(\varphi)(0) = \frac{1}{\sqrt{2\pi t}\,\sigma_0}\int_{-\infty}^{\infty}\varphi(x)\exp\Big(-\frac{x^2}{2t\sigma_0^2}\Big)dx,$
and $P_t^G(\varphi)(0) = \varphi(0)$ for $\sigma_0 = 0$. In particular, we have
$P_t^G((x)_{x\in\mathbb{R}}) = 0, \quad P_t^G((x^{2n+1})_{x\in\mathbb{R}}) = P_t^G((-x^{2n+1})_{x\in\mathbb{R}}), \ n = 1, 2, \cdots,$
$P_t^G((x^2)_{x\in\mathbb{R}}) = t, \quad P_t^G((-x^2)_{x\in\mathbb{R}}) = -\sigma_0^2 t.$

Remark 12 Corresponding to the above four expressions, a random variable X with the G–normal distribution $P_t^G$ satisfies
$E[X] = 0, \quad E[X^{2n+1}] = E[-X^{2n+1}], \quad E[X^2] = t, \quad E[-X^2] = -\sigma_0^2 t.$
See the next section for a detailed study.

4 1–dimensional G–Brownian motion under G–expectation

In the rest of this paper, we denote by $\Omega = C_0(\mathbb{R}^+)$ the space of all real-valued continuous paths $(\omega_t)_{t\in\mathbb{R}^+}$ with $\omega_0 = 0$, equipped with the distance
$\rho(\omega^1, \omega^2) := \sum_{i=1}^{\infty} 2^{-i}\Big[\Big(\max_{t\in[0,i]}|\omega_t^1 - \omega_t^2|\Big)\wedge 1\Big].$

We set, for each $t \in [0,\infty)$,
$\mathcal{W}_t := \{\omega_{\cdot\wedge t} : \omega \in \Omega\},$
$\mathcal{F}_t := \mathcal{B}_t(\mathcal{W}) = \mathcal{B}(\mathcal{W}_t),$
$\mathcal{F}_{t+} := \mathcal{B}_{t+}(\mathcal{W}) = \bigcap_{s>t}\mathcal{B}_s(\mathcal{W}),$
$\mathcal{F} := \bigvee_{s>0}\mathcal{F}_s.$
$(\Omega, \mathcal{F})$ is the canonical space equipped with the natural filtration, and $\omega = (\omega_t)_{t\ge 0}$ is the corresponding canonical process.
For each fixed $T \ge 0$, we consider the following space of random variables:
$L^0_{ip}(\mathcal{F}_T) := \{X(\omega) = \varphi(\omega_{t_1},\cdots,\omega_{t_m}) :\ m \ge 1,\ t_1,\cdots,t_m \in [0,T],\ \varphi \in lip(\mathbb{R}^m)\}.$
It is clear that $L^0_{ip}(\mathcal{F}_t) \subseteq L^0_{ip}(\mathcal{F}_T)$ for $t \le T$. We also denote
$L^0_{ip}(\mathcal{F}) := \bigcup_{n=1}^{\infty} L^0_{ip}(\mathcal{F}_n).$
Remark 13 It is clear that $lip(\mathbb{R}^m)$, and hence $L^0_{ip}(\mathcal{F}_T)$ and $L^0_{ip}(\mathcal{F})$, are vector lattices. Moreover, since $\varphi, \psi \in lip(\mathbb{R}^m)$ implies $\varphi\cdot\psi \in lip(\mathbb{R}^m)$, we have that $X, Y \in L^0_{ip}(\mathcal{F}_T)$ implies $X\cdot Y \in L^0_{ip}(\mathcal{F}_T)$.
We will consider the canonical space and set $B_t(\omega) = \omega_t$, $t \in [0,\infty)$, for $\omega \in \Omega$.
Definition 14 The canonical process B is called a G–Brownian motion under a nonlinear expectation E defined on $L^0_{ip}(\mathcal{F})$ if for each $T > 0$, $m = 1, 2, \cdots$, and for each $\varphi \in lip(\mathbb{R}^m)$, $0 \le t_1 < \cdots < t_m \le T$, we have
$E[\varphi(B_{t_1}, B_{t_2} - B_{t_1}, \cdots, B_{t_m} - B_{t_{m-1}})] = \varphi_m,$
where $\varphi_m \in \mathbb{R}$ is obtained via the following procedure:
$\varphi_1(x_1, \cdots, x_{m-1}) = P^G_{t_m - t_{m-1}}(\varphi(x_1, \cdots, x_{m-1}, \cdot));$
$\varphi_2(x_1, \cdots, x_{m-2}) = P^G_{t_{m-1} - t_{m-2}}(\varphi_1(x_1, \cdots, x_{m-2}, \cdot));$
$\vdots$
$\varphi_{m-1}(x_1) = P^G_{t_2 - t_1}(\varphi_{m-2}(x_1, \cdot));$
$\varphi_m = P^G_{t_1}(\varphi_{m-1}(\cdot)).$
The related conditional expectation of $X = \varphi(B_{t_1}, B_{t_2} - B_{t_1}, \cdots, B_{t_m} - B_{t_{m-1}})$ under $\mathcal{F}_{t_j}$ is defined by
$E[X|\mathcal{F}_{t_j}] = E[\varphi(B_{t_1}, B_{t_2} - B_{t_1}, \cdots, B_{t_m} - B_{t_{m-1}})|\mathcal{F}_{t_j}] = \varphi_{m-j}(B_{t_1}, \cdots, B_{t_j} - B_{t_{j-1}}).$   (7)
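The backward procedure of Definition 14 can be carried out numerically. The sketch below (an illustration under my own assumptions: a grid-based G-heat solver and a test functional depending only on $B_{t_2}$) computes $E[\varphi(B_{t_1}, B_{t_2} - B_{t_1})]$ for $\varphi(x,y) = (x+y)^2$, for which the exact value is $E[B_{t_2}^2] = t_2$.

```python
import numpy as np

def g_heat(u0, t, x, sigma0):
    """Explicit finite-difference solve of du/dt = G(u_xx), u(0,.) = u0; returns u(t,.)."""
    dx = x[1] - x[0]
    dt = 0.4 * dx ** 2                      # explicit-scheme stability: dt <= dx^2
    n = max(1, int(np.ceil(t / dt)))
    dt = t / n
    u = np.array(u0, dtype=float)
    for _ in range(n):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        u = u + dt * 0.5 * (np.maximum(uxx, 0) - sigma0 ** 2 * np.maximum(-uxx, 0))
    return u

sigma0, t1, t2 = 0.5, 0.3, 1.0
x = np.linspace(-8.0, 8.0, 801)
psi = lambda z: z ** 2          # phi(x1, y) = psi(x1 + y), i.e. X = B_{t2}^2

# Step 1: phi_1(x1) = P^G_{t2-t1}(phi(x1, .)); since phi(x1, y) = psi(x1 + y),
# this equals u(t2 - t1, x1) where u solves the G-heat equation with u(0,.) = psi.
# (A general phi would need one PDE run per grid point of the frozen variables.)
phi1 = g_heat(psi(x), t2 - t1, x, sigma0)
# Step 2: phi_2 = P^G_{t1}(phi_1) = v(t1, 0) where v solves the PDE with v(0,.) = phi_1.
phi2 = g_heat(phi1, t1, x, sigma0)[len(x) // 2]
print(phi2, "exact value t2 =", t2)         # approximately 1.0
```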

It is proved in [40] that $E[\cdot]$ consistently defines a nonlinear expectation on the vector lattice $L^0_{ip}(\mathcal{F}_T)$, as well as on $L^0_{ip}(\mathcal{F})$, satisfying (a)–(e) in Definition 1. It follows that $\|X\| := E[|X|]$, $X \in L^0_{ip}(\mathcal{F}_T)$ (resp. $L^0_{ip}(\mathcal{F})$), forms a norm and that $L^0_{ip}(\mathcal{F}_T)$ (resp. $L^0_{ip}(\mathcal{F})$) can be continuously extended to a Banach space, denoted by $L^1_G(\mathcal{F}_T)$ (resp. $L^1_G(\mathcal{F})$). For each $0 \le t \le T < \infty$, we have $L^1_G(\mathcal{F}_t) \subseteq L^1_G(\mathcal{F}_T) \subset L^1_G(\mathcal{F})$. It is easy to check that, in $L^1_G(\mathcal{F}_T)$ (resp. $L^1_G(\mathcal{F})$), $E[\cdot]$ still satisfies (a)–(e) of Definition 1.
Definition 15 The expectation $E[\cdot] : L^1_G(\mathcal{F}) \to \mathbb{R}$ introduced through the above procedure is called the G–expectation. The corresponding canonical process B is called a G–Brownian motion under $E[\cdot]$.
For a given $p > 1$, we also denote $L^p_G(\mathcal{F}) = \{X \in L^1_G(\mathcal{F}) : |X|^p \in L^1_G(\mathcal{F})\}$. $L^p_G(\mathcal{F})$ is also a Banach space under the norm $\|X\|_p := (E[|X|^p])^{1/p}$. We have (see the Appendix)
$\|X + Y\|_p \le \|X\|_p + \|Y\|_p$
and, for each $X \in L^p_G(\mathcal{F})$, $Y \in L^q_G(\mathcal{F})$ with $\frac{1}{p} + \frac{1}{q} = 1$,
$\|XY\| = E[|XY|] \le \|X\|_p\|Y\|_q.$
In particular, we have $\|X\|_p \le \|X\|_{p'}$ if $p \le p'$.
We now consider the conditional expectation introduced in (7). For each fixed $t = t_j \le T$, the conditional expectation $E[\cdot|\mathcal{F}_t] : L^0_{ip}(\mathcal{F}_T) \to L^0_{ip}(\mathcal{F}_t)$ is a continuous mapping under $\|\cdot\|$, since $E[E[X|\mathcal{F}_t]] = E[X]$, $X \in L^0_{ip}(\mathcal{F}_T)$, and
$E[E[X|\mathcal{F}_t] - E[Y|\mathcal{F}_t]] \le E[X - Y],$
so that
$\|E[X|\mathcal{F}_t] - E[Y|\mathcal{F}_t]\| \le \|X - Y\|.$
It follows that $E[\cdot|\mathcal{F}_t]$ can also be extended as a continuous mapping $L^1_G(\mathcal{F}_T) \to L^1_G(\mathcal{F}_t)$. If the above T is not fixed, then we can obtain $E[\cdot|\mathcal{F}_t] : L^1_G(\mathcal{F}) \to L^1_G(\mathcal{F}_t)$.
Proposition 16 We list the properties of $E[\cdot|\mathcal{F}_t]$ that hold in $L^0_{ip}(\mathcal{F}_T)$ and still hold for $X, Y \in L^1_G(\mathcal{F})$:
(i) $E[X|\mathcal{F}_t] = X$, for $X \in L^1_G(\mathcal{F}_t)$, $t \le T$.
(ii) If $X \ge Y$, then $E[X|\mathcal{F}_t] \ge E[Y|\mathcal{F}_t]$.
(iii) $E[X|\mathcal{F}_t] - E[Y|\mathcal{F}_t] \le E[X - Y|\mathcal{F}_t]$.
(iv) $E[E[X|\mathcal{F}_t]|\mathcal{F}_s] = E[X|\mathcal{F}_{t\wedge s}]$, $E[E[X|\mathcal{F}_t]] = E[X]$.
(v) $E[X + \eta|\mathcal{F}_t] = E[X|\mathcal{F}_t] + \eta$, $\eta \in L^1_G(\mathcal{F}_t)$.
(vi) $E[\eta X|\mathcal{F}_t] = \eta^+ E[X|\mathcal{F}_t] + \eta^- E[-X|\mathcal{F}_t]$, for each bounded $\eta \in L^1_G(\mathcal{F}_t)$.
(vii) For each $X \in L^1_G(\mathcal{F}_T^t)$, $E[X|\mathcal{F}_t] = E[X]$,
where $L^1_G(\mathcal{F}_T^t)$ is the extension, under $\|\cdot\|$, of $L^0_{ip}(\mathcal{F}_T^t)$, which consists of random variables of the form $\varphi(B_{t_1} - B_t, B_{t_2} - B_{t_1}, \cdots, B_{t_m} - B_{t_{m-1}})$, $m = 1, 2, \cdots$, $\varphi \in lip(\mathbb{R}^m)$, $t_1, \cdots, t_m \in [t, T]$. Condition (vi) corresponds to the positive homogeneity property, see Remark 2.

Definition 17 An $X \in L^1_G(\mathcal{F})$ is said to be independent of $\mathcal{F}_t$ under the G–expectation E, for some given $t \in [0,\infty)$, if for each real function Φ suitably defined on $\mathbb{R}$ such that $\Phi(X) \in L^1_G(\mathcal{F})$ we have
$E[\Phi(X)|\mathcal{F}_t] = E[\Phi(X)].$
Remark 18 It is clear that all elements in $L^1_G(\mathcal{F})$ are independent of $\mathcal{F}_0$. Just as in the classical situation, the increment process $(B_{t+s} - B_s)_{t\ge 0}$ of the G-Brownian motion is independent of $\mathcal{F}_s$. In fact it is a new G–Brownian motion since, just as in the classical situation, the increments of B are identically distributed.
Example 19 For each $0 \le s \le t$, we have $E[B_t - B_s|\mathcal{F}_s] = 0$ and, for $n = 1, 2, \cdots$,
$E[|B_t - B_s|^n|\mathcal{F}_s] = E[|B_{t-s}|^n] = \frac{1}{\sqrt{2\pi(t-s)}}\int_{-\infty}^{\infty}|x|^n \exp\Big(-\frac{x^2}{2(t-s)}\Big)dx.$
But we have
$E[-|B_t - B_s|^n|\mathcal{F}_s] = E[-|B_{t-s}|^n] = -\sigma_0^n E[|B_{t-s}|^n].$
Exactly as in the classical case, we have
$E[(B_t - B_s)^2|\mathcal{F}_s] = t - s, \quad E[(B_t - B_s)^4|\mathcal{F}_s] = 3(t-s)^2,$
$E[(B_t - B_s)^6|\mathcal{F}_s] = 15(t-s)^3, \quad E[(B_t - B_s)^8|\mathcal{F}_s] = 105(t-s)^4,$
$E[|B_t - B_s||\mathcal{F}_s] = \sqrt{\frac{2(t-s)}{\pi}}, \quad E[|B_t - B_s|^3|\mathcal{F}_s] = \frac{2\sqrt{2}(t-s)^{3/2}}{\sqrt{\pi}}, \quad E[|B_t - B_s|^5|\mathcal{F}_s] = \frac{8\sqrt{2}(t-s)^{5/2}}{\sqrt{\pi}}.$

Example 20 For each $n = 1, 2, \cdots$, $0 \le s \le t < T$ and $X \in L^1_G(\mathcal{F}_s)$, since $E[B_{T-t}^{2n-1}] = E[-B_{T-t}^{2n-1}]$, we have, by (vi) of Proposition 16,
$E[X(B_T - B_t)^{2n-1}] = E\big[X^+ E[(B_T - B_t)^{2n-1}|\mathcal{F}_t] + X^- E[-(B_T - B_t)^{2n-1}|\mathcal{F}_t]\big] = E[|X|]\cdot E[B_{T-t}^{2n-1}],$
$E[X(B_T - B_t)|\mathcal{F}_s] = E[-X(B_T - B_t)|\mathcal{F}_s] = 0.$
We also have
$E[X(B_T - B_t)^2|\mathcal{F}_t] = X^+(T-t) - \sigma_0^2 X^-(T-t).$

Remark 21 It is clear that we can define a linear expectation $\mathbb{E}[\cdot]$ on $L^0_{ip}(\mathcal{F})$ in the same way as in Definition 14, with the standard normal distribution $P_1(\cdot)$ in the place of $P_1^G(\cdot)$. Since $P_1(\cdot)$ is dominated by $P_1^G(\cdot)$ in the sense $P_1(\varphi) - P_1(\psi) \le P_1^G(\varphi - \psi)$, $\mathbb{E}[\cdot]$ can be continuously extended to $L^1_G(\mathcal{F})$. $\mathbb{E}[\cdot]$ is a linear expectation under which $(B_t)_{t\ge 0}$ behaves as a classical Brownian motion. We have
$\mathbb{E}[X] \le E[X], \quad \forall X \in L^1_G(\mathcal{F}).$   (8)
In particular, $E[B_{T-t}^{2n-1}] = E[-B_{T-t}^{2n-1}] \ge \mathbb{E}[-B_{T-t}^{2n-1}] = 0$. This kind of extension under a domination relation was discussed in detail in [40].

The following property is very useful.
Proposition 22 Let $X, Y \in L^1_G(\mathcal{F})$ be such that $E[Y] = -E[-Y]$ (thus $E[Y] = \mathbb{E}[Y]$). Then we have
$E[X + Y] = E[X] + E[Y].$
In particular, if $E[Y] = E[-Y] = 0$, then $E[X + Y] = E[X]$.
Proof. It is simply because we have $E[X + Y] \le E[X] + E[Y]$ and
$E[X + Y] \ge E[X] - E[-Y] = E[X] + E[Y].$
Example 23 We have
$E[B_t^2 - B_s^2|\mathcal{F}_s] = E[(B_t - B_s + B_s)^2 - B_s^2|\mathcal{F}_s] = E[(B_t - B_s)^2 + 2(B_t - B_s)B_s|\mathcal{F}_s] = t - s,$
since $2(B_t - B_s)B_s$ satisfies the condition for Y in Proposition 22, and
$E[(B_t^2 - B_s^2)^2|\mathcal{F}_s] = E[\{(B_t - B_s + B_s)^2 - B_s^2\}^2|\mathcal{F}_s] = E[\{(B_t - B_s)^2 + 2(B_t - B_s)B_s\}^2|\mathcal{F}_s]$
$= E[(B_t - B_s)^4 + 4(B_t - B_s)^3 B_s + 4(B_t - B_s)^2 B_s^2|\mathcal{F}_s]$
$\le E[(B_t - B_s)^4] + 4E[|B_t - B_s|^3]|B_s| + 4(t-s)B_s^2$
$\le 3(t-s)^2 + 8(t-s)^{3/2}|B_s| + 4(t-s)B_s^2.$

5 Itô's integral of G–Brownian motion

5.1 Bochner's integral

Definition 24 For $T \in \mathbb{R}_+$, a partition $\pi_T$ of $[0,T]$ is a finite ordered subset $\pi_T = \{t_0, t_1, \cdots, t_N\}$ such that $0 = t_0 < t_1 < \cdots < t_N = T$. We denote
$\mu(\pi_T) = \max\{|t_{i+1} - t_i| : i = 0, 1, \cdots, N-1\}.$
We use $\pi_T^N = \{t_0^N < t_1^N < \cdots < t_N^N\}$ to denote a sequence of partitions of $[0,T]$ such that $\lim_{N\to\infty}\mu(\pi_T^N) = 0$.
Let $p \ge 1$ be fixed. We consider the following type of simple processes: for a given partition $\{t_0, \cdots, t_N\} = \pi_T$ of $[0,T]$, we set
$\eta_t(\omega) = \sum_{j=0}^{N-1}\xi_j(\omega)I_{[t_j, t_{j+1})}(t),$
where $\xi_j \in L^p_G(\mathcal{F}_{t_j})$, $j = 0, 1, \cdots, N-1$, are given. The collection of these processes is denoted by $M^{p,0}_G(0,T)$.

Definition 25 For an $\eta \in M^{1,0}_G(0,T)$ with $\eta_t = \sum_{j=0}^{N-1}\xi_j(\omega)I_{[t_j,t_{j+1})}(t)$, the related Bochner integral is
$\int_0^T \eta_t(\omega)dt = \sum_{j=0}^{N-1}\xi_j(\omega)(t_{j+1} - t_j).$

Remark 26 We set, for each $\eta \in M^{1,0}_G(0,T)$,
$\tilde{E}_T[\eta] := \frac{1}{T}\int_0^T E[\eta_t]dt = \frac{1}{T}\sum_{j=0}^{N-1}E[\xi_j(\omega)](t_{j+1} - t_j).$
It is easy to check that $\tilde{E}_T : M^{1,0}_G(0,T) \to \mathbb{R}$ forms a nonlinear expectation satisfying (a)–(e) of Definition 1. By Remark 4, we can introduce a natural norm $\|\eta\|_T = \tilde{E}_T[|\eta|] = \frac{1}{T}\int_0^T E[|\eta_t|]dt$. Under this norm $M^{1,0}_G(0,T)$ can be continuously extended to $M^1_G(0,T)$, which is a Banach space.
p
Definition 27 For each p ≥ 1, we will denote by MG
(0, T ) the completion of
p,0
MG
(0, T ) under the norm

1
(
T

Z

T

0

kηtp k dt)1/p

1/p
N
−1
X
1
E[|ξj (ω)|p ](tj+1 − tj ) .
=
T j=0


We observe that,
E[|

Z

T
0

ηt (ω)dt|] ≤

N
−1
X
j=0

kξj (ω)k (tj+1 − tj ) =

Z

0

T

E[|ηt |]dt.

We then have
Proposition 28 The linear mapping $\int_0^T \eta_t(\omega)dt : M^{1,0}_G(0,T) \to L^1_G(\mathcal{F}_T)$ is continuous and thus can be continuously extended to $M^1_G(0,T) \to L^1_G(\mathcal{F}_T)$. We still denote this extended mapping by $\int_0^T \eta_t(\omega)dt$, $\eta \in M^1_G(0,T)$. We have
$E\Big[\Big|\int_0^T \eta_t(\omega)dt\Big|\Big] \le \int_0^T E[|\eta_t|]dt, \quad \forall \eta \in M^1_G(0,T).$   (9)
Since $M^1_G(0,T) \supset M^p_G(0,T)$ for $p \ge 1$, this definition also holds for $\eta \in M^p_G(0,T)$.

5.2 Itô's integral of G–Brownian motion

Definition 29 For each $\eta \in M^{2,0}_G(0,T)$ of the form $\eta_t(\omega) = \sum_{j=0}^{N-1}\xi_j(\omega)I_{[t_j,t_{j+1})}(t)$, we define
$I(\eta) = \int_0^T \eta(s)dB_s := \sum_{j=0}^{N-1}\xi_j(B_{t_{j+1}} - B_{t_j}).$

Lemma 30 The mapping $I : M^{2,0}_G(0,T) \to L^2_G(\mathcal{F}_T)$ is a linear continuous mapping and thus can be continuously extended to $I : M^2_G(0,T) \to L^2_G(\mathcal{F}_T)$. In fact we have
$E\Big[\int_0^T \eta(s)dB_s\Big] = 0,$   (10)
$E\Big[\Big(\int_0^T \eta(s)dB_s\Big)^2\Big] \le \int_0^T E[(\eta(t))^2]dt.$   (11)

Definition 31 We define, for a fixed $\eta \in M^2_G(0,T)$, the stochastic integral
$\int_0^T \eta(s)dB_s := I(\eta).$
It is clear that (10) and (11) still hold for $\eta \in M^2_G(0,T)$.

Proof of Lemma 30. From Example 20, for each j,
$E[\xi_j(B_{t_{j+1}} - B_{t_j})|\mathcal{F}_{t_j}] = 0.$
We have
$E\Big[\int_0^T \eta(s)dB_s\Big] = E\Big[\int_0^{t_{N-1}} \eta(s)dB_s + \xi_{N-1}(B_{t_N} - B_{t_{N-1}})\Big]$
$= E\Big[\int_0^{t_{N-1}} \eta(s)dB_s + E[\xi_{N-1}(B_{t_N} - B_{t_{N-1}})|\mathcal{F}_{t_{N-1}}]\Big] = E\Big[\int_0^{t_{N-1}} \eta(s)dB_s\Big].$
We can then repeat this procedure to obtain (10). We now prove (11):
$E\Big[\Big(\int_0^T \eta(s)dB_s\Big)^2\Big] = E\Big[\Big(\int_0^{t_{N-1}} \eta(s)dB_s + \xi_{N-1}(B_{t_N} - B_{t_{N-1}})\Big)^2\Big]$
$= E\Big[\Big(\int_0^{t_{N-1}} \eta(s)dB_s\Big)^2 + E\Big[2\Big(\int_0^{t_{N-1}} \eta(s)dB_s\Big)\xi_{N-1}(B_{t_N} - B_{t_{N-1}}) + \xi_{N-1}^2(B_{t_N} - B_{t_{N-1}})^2\Big|\mathcal{F}_{t_{N-1}}\Big]\Big]$
$= E\Big[\Big(\int_0^{t_{N-1}} \eta(s)dB_s\Big)^2 + \xi_{N-1}^2(t_N - t_{N-1})\Big].$
Thus $E[(\int_0^{t_N} \eta(s)dB_s)^2] \le E[(\int_0^{t_{N-1}} \eta(s)dB_s)^2] + E[\xi_{N-1}^2](t_N - t_{N-1})$. We then repeat this procedure to deduce
$E\Big[\Big(\int_0^T \eta(s)dB_s\Big)^2\Big] \le \sum_{j=0}^{N-1}E[(\xi_j)^2](t_{j+1} - t_j) = \int_0^T E[(\eta(t))^2]dt.$

We list some main properties of the Itô integral of G–Brownian motion. We denote, for some $0 \le s \le t \le T$,
$\int_s^t \eta_u dB_u := \int_0^T I_{[s,t]}(u)\eta_u dB_u.$
We have
Proposition 32 Let $\eta, \theta \in M^2_G(0,T)$ and let $0 \le s \le r \le t \le T$. Then in $L^1_G(\mathcal{F}_T)$ we have
(i) $\int_s^t \eta_u dB_u = \int_s^r \eta_u dB_u + \int_r^t \eta_u dB_u$,
(ii) $\int_s^t (\alpha\eta_u + \theta_u)dB_u = \alpha\int_s^t \eta_u dB_u + \int_s^t \theta_u dB_u$, if α is bounded and in $L^1_G(\mathcal{F}_s)$,
(iii) $E[X + \int_r^T \eta_u dB_u|\mathcal{F}_s] = E[X|\mathcal{F}_s]$, $\forall X \in L^1_G(\mathcal{F})$.
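Since a G-Brownian motion is not generated by a single probability measure, one cannot directly sample "the" path; a common heuristic (an assumption of mine here, in the spirit of the volatility-uncertainty interpretation of Remark 7 and of [16]) is to look at one scenario $dB_t = \sigma_t dW_t$ with $\sigma_t \in [\sigma_0, 1]$. Along such a path the integral of a simple process from Definition 29 is an explicit sum, and the additivity of Proposition 32 (i) is immediate:

```python
import numpy as np

# One volatility scenario behind G(a) = (1/2) sup_{sigma0 <= sigma <= 1} sigma^2 a:
# an ordinary Ito process dB_t = sigma_t dW_t with sigma_t in [sigma0, 1].
# This is an illustration only, not the paper's construction of G-Brownian motion.
rng = np.random.default_rng(1)
sigma0, T, N = 0.5, 1.0, 2000
dt = T / N
t = np.linspace(0.0, T, N + 1)
sigma = sigma0 + (1.0 - sigma0) * np.sin(6.0 * t[:-1]) ** 2   # arbitrary sigma_t in [sigma0, 1]
dB = sigma * np.sqrt(dt) * rng.standard_normal(N)
B = np.concatenate(([0.0], np.cumsum(dB)))

def ito_integral(xi, a, b):
    """sum_j xi_j (B_{t_{j+1}} - B_{t_j}) over the partition points in [a, b)."""
    j = (t[:-1] >= a) & (t[:-1] < b)
    return np.sum(xi[j] * dB[j])

xi = B[:-1]                          # adapted simple integrand: xi_j = B_{t_j}
lhs = ito_integral(xi, 0.0, T)
rhs = ito_integral(xi, 0.0, 0.4) + ito_integral(xi, 0.4, T)
print(lhs, rhs)                      # Proposition 32 (i): equal up to floating-point rounding
```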

5.3 Quadratic variation process of G–Brownian motion

We now study a very interesting process of the G-Brownian motion. Let $\pi_t^N$, $N = 1, 2, \cdots$, be a sequence of partitions of $[0,t]$. We consider
$B_t^2 = \sum_{j=0}^{N-1}[B_{t_{j+1}^N}^2 - B_{t_j^N}^2] = \sum_{j=0}^{N-1}2B_{t_j^N}(B_{t_{j+1}^N} - B_{t_j^N}) + \sum_{j=0}^{N-1}(B_{t_{j+1}^N} - B_{t_j^N})^2.$
As $\mu(\pi_t^N) \to 0$, the first term on the right side tends to $2\int_0^t B_s dB_s$. The second term must therefore converge. We denote its limit by $\langle B\rangle_t$, i.e.,
$\langle B\rangle_t = \lim_{\mu(\pi_t^N)\to 0}\sum_{j=0}^{N-1}(B_{t_{j+1}^N} - B_{t_j^N})^2 = B_t^2 - 2\int_0^t B_s dB_s.$   (12)
By the above construction, $\langle B\rangle_t$, $t \ge 0$, is an increasing process with $\langle B\rangle_0 = 0$. We call it the quadratic variation process of the G–Brownian motion B. It perfectly characterizes the part of uncertainty, or ambiguity, of the G–Brownian motion. It is important to keep in mind that $\langle B\rangle_t$ is not a deterministic process unless $\sigma_0 = 1$, i.e., unless B is a classical Brownian motion. In fact we have
Lemma 33 For each $0 \le s \le t < \infty$,
$E[\langle B\rangle_t - \langle B\rangle_s|\mathcal{F}_s] = t - s,$   (13)
$E[-(\langle B\rangle_t - \langle B\rangle_s)|\mathcal{F}_s] = -\sigma_0^2(t - s).$   (14)

Proof. By the definition of $\langle B\rangle$ and Proposition 32-(iii),
$E[\langle B\rangle_t - \langle B\rangle_s|\mathcal{F}_s] = E\Big[B_t^2 - B_s^2 - 2\int_s^t B_u dB_u\Big|\mathcal{F}_s\Big] = E[B_t^2 - B_s^2|\mathcal{F}_s] = t - s.$
The last step can be checked as in Example 23. We then have (13). (14) can be proved analogously, using $E[-(B_t^2 - B_s^2)|\mathcal{F}_s] = -\sigma_0^2(t - s)$.
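Under the same single-scenario heuristic as before (an illustrative assumption, not the paper's construction), the sums appearing in (12) can be computed along a simulated path; they converge to $\int_0^t \sigma_s^2 ds$, which lies between $\sigma_0^2 t$ and $t$, exactly the range suggested by the bounds (13) and (14).

```python
import numpy as np

# One scenario dB_t = sigma_t dW_t with sigma_t in [sigma0, 1]; along such a path
# the sums in (12) approximate <B>_t = int_0^t sigma_s^2 ds.
rng = np.random.default_rng(2)
sigma0, T, N = 0.5, 1.0, 200_000
dt = T / N
s = np.linspace(0.0, T, N + 1)[:-1]
sigma = sigma0 + (1.0 - sigma0) * np.sin(5.0 * s) ** 2
dB = sigma * np.sqrt(dt) * rng.standard_normal(N)
B = np.concatenate(([0.0], np.cumsum(dB)))

qv = np.sum(dB ** 2)                      # sum of (B_{t_{j+1}} - B_{t_j})^2, cf. (12)
ito = np.sum(B[:-1] * dB)                 # int_0^T B dB (simple-process sum)
print(qv, np.sum(sigma ** 2) * dt)        # <B>_T versus int_0^T sigma_s^2 ds
print(sigma0 ** 2 * T, "<=", qv, "<=", T) # sigma0^2 T <= <B>_T <= T (approximately)
print(B[-1] ** 2 - 2.0 * ito, qv)         # identity (12): <B>_T = B_T^2 - 2 int B dB
```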
To define the integration of a process $\eta \in M^1_G(0,T)$ with respect to $d\langle B\rangle$, we first define a mapping
$Q_{0,T}(\eta) = \int_0^T \eta(s)d\langle B\rangle_s := \sum_{j=0}^{N-1}\xi_j(\langle B\rangle_{t_{j+1}} - \langle B\rangle_{t_j}) : M^{1,0}_G(0,T) \to L^1(\mathcal{F}_T).$
Lemma 34 For each $\eta \in M^{1,0}_G(0,T)$,
$E[|Q_{0,T}(\eta)|] \le \int_0^T E[|\eta_s|]ds.$   (15)
Thus $Q_{0,T} : M^{1,0}_G(0,T) \to L^1(\mathcal{F}_T)$ is a continuous linear mapping. Consequently, $Q_{0,T}$ can be uniquely extended to $M^1_G(0,T)$. We still denote this mapping by
$\int_0^T \eta(s)d\langle B\rangle_s = Q_{0,T}(\eta), \quad \eta \in M^1_G(0,T).$
We still have
$E\Big[\Big|\int_0^T \eta(s)d\langle B\rangle_s\Big|\Big] \le \int_0^T E[|\eta_s|]ds, \quad \forall \eta \in M^1_G(0,T).$   (16)

Proof. By applying Lemma 33, (15) can be checked as follows:
$E\Big[\Big|\sum_{j=0}^{N-1}\xi_j(\langle B\rangle_{t_{j+1}} - \langle B\rangle_{t_j})\Big|\Big] \le \sum_{j=0}^{N-1}E\big[|\xi_j|\cdot E[\langle B\rangle_{t_{j+1}} - \langle B\rangle_{t_j}|\mathcal{F}_{t_j}]\big] = \sum_{j=0}^{N-1}E[|\xi_j|](t_{j+1} - t_j) = \int_0^T E[|\eta_s|]ds.$
A very interesting point of the quadratic variation process $\langle B\rangle$ is that, just like the G–Brownian motion B itself, the increment $\langle B\rangle_{t+s} - \langle B\rangle_s$ is independent of $\mathcal{F}_s$ and identically distributed with $\langle B\rangle_t$. In fact we have

Lemma 35 For each fixed $s \ge 0$, $(\langle B\rangle_{s+t} - \langle B\rangle_s)_{t\ge 0}$ is independent of $\mathcal{F}_s$. It is the quadratic variation process of the Brownian motion $B_t^s = B_{s+t} - B_s$, $t \ge 0$, i.e., $\langle B\rangle_{s+t} - \langle B\rangle_s = \langle B^s\rangle_t$. We have
$E[\langle B^s\rangle_t^2|\mathcal{F}_s] = E[\langle B\rangle_t^2] = t^2$   (17)
as well as
$E[\langle B^s\rangle_t^3|\mathcal{F}_s] = E[\langle B\rangle_t^3] = t^3, \quad E[\langle B^s\rangle_t^4|\mathcal{F}_s] = E[\langle B\rangle_t^4] = t^4.$

Proof. The independence is simply from
$\langle B\rangle_{s+t} - \langle B\rangle_s = B_{t+s}^2 - 2\int_0^{s+t}B_r dB_r - \Big[B_s^2 - 2\int_0^s B_r dB_r\Big]$
$= (B_{t+s} - B_s)^2 - 2\int_s^{s+t}(B_r - B_s)d(B_r - B_s) = \langle B^s\rangle_t.$
We set $\varphi(t) := E[\langle B\rangle_t^2]$. Then
$\varphi(t) = E\Big[\Big\{B_t^2 - 2\int_0^t B_u dB_u\Big\}^2\Big] \le 2E[B_t^4] + 8E\Big[\Big(\int_0^t B_u dB_u\Big)^2\Big] \le 6t^2 + 8\int_0^t E[B_u^2]du = 10t^2.$
This also implies $E[(\langle B\rangle_{t+s} - \langle B\rangle_s)^2] = E[\langle B^s\rangle_t^2] = \varphi(t) \le 10t^2$. Thus
$\varphi(t+s) = E[\{\langle B\rangle_s + (\langle B\rangle_{s+t} - \langle B\rangle_s)\}^2]$
$\le E[\langle B\rangle_s^2] + E[\langle B^s\rangle_t^2] + 2E[\langle B\rangle_s\langle B^s\rangle_t]$
$= \varphi(s) + \varphi(t) + 2E[\langle B\rangle_s E[\langle B^s\rangle_t]] = \varphi(s) + \varphi(t) + 2st.$
We set $\delta_N = t/N$, $t_k^N = kt/N = k\delta_N$ for a positive integer N. By the above inequalities,
$\varphi(t_N^N) \le \varphi(t_{N-1}^N) + \varphi(\delta_N) + 2t_{N-1}^N\delta_N \le \varphi(t_{N-2}^N) + 2\varphi(\delta_N) + 2(t_{N-1}^N + t_{N-2}^N)\delta_N \le \cdots.$
We then have
$\varphi(t) \le N\varphi(\delta_N) + 2\sum_{k=0}^{N-1}t_k^N\delta_N \le 10\frac{t^2}{N} + 2\sum_{k=0}^{N-1}t_k^N\delta_N.$
Letting $N \to \infty$ we have $\varphi(t) \le 2\int_0^t s\,ds = t^2$. Thus $E[\langle B\rangle_t^2] \le t^2$. This, together with $E[\langle B\rangle_t^2] \ge \mathbb{E}[\langle B\rangle_t^2] = t^2$, implies (17).
Proposition 36 Let $0 \le s \le t$ and $\xi \in L^1_G(\mathcal{F}_s)$. Then, for $X \in L^1_G(\mathcal{F})$,
$E[X + \xi(B_t^2 - B_s^2)] = E[X + \xi(B_t - B_s)^2] = E[X + \xi(\langle B\rangle_t - \langle B\rangle_s)].$
Proof. By (12) and Proposition 22, we have
$E[X + \xi(B_t^2 - B_s^2)] = E\Big[X + \xi\Big(\langle B\rangle_t - \langle B\rangle_s + 2\int_s^t B_u dB_u\Big)\Big] = E[X + \xi(\langle B\rangle_t - \langle B\rangle_s)].$
We also have
$E[X + \xi(B_t^2 - B_s^2)] = E[X + \xi\{(B_t - B_s)^2 + 2(B_t - B_s)B_s\}] = E[X + \xi(B_t - B_s)^2].$

We have the following isometry.
Proposition 37 Let $\eta \in M^2_G(0,T)$. We have
$E\Big[\Big(\int_0^T \eta(s)dB_s\Big)^2\Big] = E\Big[\int_0^T \eta^2(s)d\langle B\rangle_s\Big].$   (18)
Proof. We first consider $\eta \in M^{2,0}_G(0,T)$ of the form
$\eta_t(\omega) = \sum_{j=0}^{N-1}\xi_j(\omega)I_{[t_j,t_{j+1})}(t)$
and thus $\int_0^T \eta(s)dB_s := \sum_{j=0}^{N-1}\xi_j(B_{t_{j+1}} - B_{t_j})$. By Proposition 22 we have
$E[X + 2\xi_j(B_{t_{j+1}} - B_{t_j})\xi_i(B_{t_{i+1}} - B_{t_i})] = E[X], \quad \text{for } X \in L^1_G(\mathcal{F}),\ i \ne j.$
Thus
$E\Big[\Big(\int_0^T \eta(s)dB_s\Big)^2\Big] = E\Big[\Big(\sum_{j=0}^{N-1}\xi_j(B_{t_{j+1}} - B_{t_j})\Big)^2\Big] = E\Big[\sum_{j=0}^{N-1}\xi_j^2(B_{t_{j+1}} - B_{t_j})^2\Big].$
This, together with Proposition 36, implies that
$E\Big[\Big(\int_0^T \eta(s)dB_s\Big)^2\Big] = E\Big[\sum_{j=0}^{N-1}\xi_j^2(\langle B\rangle_{t_{j+1}} - \langle B\rangle_{t_j})\Big] = E\Big[\int_0^T \eta^2(s)d\langle B\rangle_s\Big].$
Thus (18) holds for $\eta \in M^{2,0}_G(0,T)$. We can then continuously extend the above equality to the case $\eta \in M^2_G(0,T)$ and prove (18).
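As a sanity check of (18), one can verify the classical special case numerically: under a single constant-volatility scenario (an assumption for illustration only), $d\langle B\rangle_t = \sigma^2 dt$ and the classical Itô isometry gives exactly the identity asserted by (18).

```python
import numpy as np

# Monte-Carlo check of the isometry in the constant-volatility scenario
# dB_t = sigma dW_t, where d<B>_t = sigma^2 dt.  This illustrates (18) in the
# classical case; it is not a proof of the G-version.
rng = np.random.default_rng(3)
sigma, T, N, M = 0.7, 1.0, 200, 20_000        # M independent paths
dt = T / N
dB = sigma * np.sqrt(dt) * rng.standard_normal((M, N))
B = np.concatenate([np.zeros((M, 1)), np.cumsum(dB, axis=1)], axis=1)

eta = B[:, :-1]                                # adapted integrand eta_t = B_t
lhs = np.mean(np.sum(eta * dB, axis=1) ** 2)               # E[(int eta dB)^2]
rhs = np.mean(np.sum(eta ** 2 * sigma ** 2 * dt, axis=1))  # E[int eta^2 d<B>]
print(lhs, rhs)        # close up to Monte-Carlo and discretization error
```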

5.4 Itô's formula for G–Brownian motion

We have the corresponding Itô formula for $\Phi(X_t)$, where X is a "G–Itô process". For simplicity, we only treat the case where the function Φ is sufficiently regular. We first consider a simple situation.
Let $\Phi \in C^2(\mathbb{R}^n)$ be bounded with bounded derivatives and such that $\{\partial^2_{x^\mu x^\nu}\Phi\}_{\mu,\nu=1}^n$ are uniformly Lipschitz. Let $s \in [0,T]$ be fixed and let $X = (X^1, \cdots, X^n)^T$ be an n–dimensional process on $[s,T]$ of the form
$X_t^\nu = X_s^\nu + \alpha^\nu(t - s) + \eta^\nu(\langle B\rangle_t - \langle B\rangle_s) + \beta^\nu(B_t - B_s),$
where, for $\nu = 1, \cdots, n$, $\alpha^\nu$, $\eta^\nu$ and $\beta^\nu$ are bounded elements of $L^2_G(\mathcal{F}_s)$ and $X_s = (X_s^1, \cdots, X_s^n)^T$ is a given $\mathbb{R}^n$–valued random vector in $L^2_G(\mathcal{F}_s)$. Then we have
$\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu}\Phi(X_u)\beta^\nu dB_u + \int_s^t \partial_{x^\nu}\Phi(X_u)\alpha^\nu du + \int_s^t \Big[\partial_{x^\nu}\Phi(X_u)\eta^\nu + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_u)\beta^\mu\beta^\nu\Big]d\langle B\rangle_u.$   (19)
Here we use the Einstein convention, i.e., each term with repeated indices μ and/or ν implies summation over them.
Proof. For each positive integer N we set $\delta = (t-s)/N$ and take the partition
$\pi_{[s,t]}^N = \{t_0^N, t_1^N, \cdots, t_N^N\} = \{s, s+\delta, \cdots, s+N\delta = t\}.$

We have
$\Phi(X_t) = \Phi(X_s) + \sum_{k=0}^{N-1}[\Phi(X_{t_{k+1}^N}) - \Phi(X_{t_k^N})]$
$= \Phi(X_s) + \sum_{k=0}^{N-1}\Big\{\partial_{x^\mu}\Phi(X_{t_k^N})(X^\mu_{t_{k+1}^N} - X^\mu_{t_k^N}) + \frac{1}{2}\big[\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})(X^\mu_{t_{k+1}^N} - X^\mu_{t_k^N})(X^\nu_{t_{k+1}^N} - X^\nu_{t_k^N}) + \eta_k^N\big]\Big\},$   (20)
where
$\eta_k^N = \big[\partial^2_{x^\mu x^\nu}\Phi\big(X_{t_k^N} + \theta_k(X_{t_{k+1}^N} - X_{t_k^N})\big) - \partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})\big](X^\mu_{t_{k+1}^N} - X^\mu_{t_k^N})(X^\nu_{t_{k+1}^N} - X^\nu_{t_k^N})$
with $\theta_k \in [0,1]$. We have
$E[|\eta_k^N|] \le cE[|X_{t_{k+1}^N} - X_{t_k^N}|^3] \le C[\delta^3 + \delta^{3/2}],$
where c is the Lipschitz constant of $\{\partial^2_{x^\mu x^\nu}\Phi\}_{\mu,\nu=1}^n$. Thus $\sum_k E[|\eta_k^N|] \to 0$. The remaining terms in the summation on the right side of (20) are $\xi_t^N + \zeta_t^N$, with
$\xi_t^N = \sum_{k=0}^{N-1}\Big\{\partial_{x^\mu}\Phi(X_{t_k^N})\big[\alpha^\mu(t_{k+1}^N - t_k^N) + \eta^\mu(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N}) + \beta^\mu(B_{t_{k+1}^N} - B_{t_k^N})\big] + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})\beta^\mu\beta^\nu(B_{t_{k+1}^N} - B_{t_k^N})^2\Big\}$
and
$\zeta_t^N = \sum_{k=0}^{N-1}\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})\Big\{\frac{1}{2}\big[\alpha^\mu(t_{k+1}^N - t_k^N) + \eta^\mu(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})\big]\big[\alpha^\nu(t_{k+1}^N - t_k^N) + \eta^\nu(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})\big]$
$\qquad\qquad + \beta^\nu\big[\alpha^\mu(t_{k+1}^N - t_k^N) + \eta^\mu(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})\big](B_{t_{k+1}^N} - B_{t_k^N})\Big\}.$
We observe that, for each $u \in [t_k^N, t_{k+1}^N)$,
$E\Big[\Big|\partial_{x^\mu}\Phi(X_u) - \sum_{k=0}^{N-1}\partial_{x^\mu}\Phi(X_{t_k^N})I_{[t_k^N,t_{k+1}^N)}(u)\Big|^2\Big] = E[|\partial_{x^\mu}\Phi(X_u) - \partial_{x^\mu}\Phi(X_{t_k^N})|^2] \le c^2E[|X_u - X_{t_k^N}|^2] \le C[\delta + \delta^2].$
Thus $\sum_{k=0}^{N-1}\partial_{x^\mu}\Phi(X_{t_k^N})I_{[t_k^N,t_{k+1}^N)}(\cdot)$ tends to $\partial_{x^\mu}\Phi(X_\cdot)$ in $M^2_G(0,T)$. Similarly,
$\sum_{k=0}^{N-1}\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})I_{[t_k^N,t_{k+1}^N)}(\cdot) \to \partial^2_{x^\mu x^\nu}\Phi(X_\cdot) \quad \text{in } M^2_G(0,T).$
Let $N \to \infty$; by the definitions of the integrals with respect to $dt$, $dB_t$ and $d\langle B\rangle_t$, the limit of $\xi_t^N$ in $L^2_G(\mathcal{F}_t)$ is just the right-hand side of (19). By the estimates of the next remark, we also have $\zeta_t^N \to 0$ in $L^1_G(\mathcal{F}_t)$. We have thus proved (19).
Remark 38 We have the following estimates: for $\psi^N \in M^{1,0}_G(0,T)$ such that $\psi_t^N = \sum_{k=0}^{N-1}\xi_{t_k}^N I_{[t_k^N,t_{k+1}^N)}(t)$, and $\pi_T^N = \{0 \le t_0^N, \cdots, t_N^N = T\}$ with $\lim_{N\to\infty}\mu(\pi_T^N) = 0$ and $\sum_{k=0}^{N-1}E[|\xi_{t_k}^N|](t_{k+1}^N - t_k^N) \le C$ for all $N = 1, 2, \cdots$, we have
$E\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N(t_{k+1}^N - t_k^N)^2\Big|\Big] \to 0,$
and, thanks to Lemma 35,
$E\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})^2\Big|\Big] \le \sum_{k=0}^{N-1}E\big[|\xi_k^N|\cdot E[(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})^2|\mathcal{F}_{t_k^N}]\big] = \sum_{k=0}^{N-1}E[|\xi_k^N|](t_{k+1}^N - t_k^N)^2 \to 0,$
as well as
$E\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})(B_{t_{k+1}^N} - B_{t_k^N})\Big|\Big] \le \sum_{k=0}^{N-1}E[|\xi_k^N|]E\big[(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})|B_{t_{k+1}^N} - B_{t_k^N}|\big]$
$\le \sum_{k=0}^{N-1}E[|\xi_k^N|]E[(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})^2]^{1/2}E[|B_{t_{k+1}^N} - B_{t_k^N}|^2]^{1/2} = \sum_{k=0}^{N-1}E[|\xi_k^N|](t_{k+1}^N - t_k^N)^{3/2} \to 0.$
We also have
$E\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N(\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N})(t_{k+1}^N - t_k^N)\Big|\Big] \le \sum_{k=0}^{N-1}E\big[|\xi_k^N|(t_{k+1}^N - t_k^N)\cdot E[\langle B\rangle_{t_{k+1}^N} - \langle B\rangle_{t_k^N}|\mathcal{F}_{t_k^N}]\big] = \sum_{k=0}^{N-1}E[|\xi_k^N|](t_{k+1}^N - t_k^N)^2 \to 0$
and
$E\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N(t_{k+1}^N - t_k^N)(B_{t_{k+1}^N} - B_{t_k^N})\Big|\Big] \le \sum_{k=0}^{N-1}E[|\xi_k^N|](t_{k+1}^N - t_k^N)E[|B_{t_{k+1}^N} - B_{t_k^N}|] = \sqrt{\frac{2}{\pi}}\sum_{k=0}^{N-1}E[|\xi_k^N|](t_{k+1}^N - t_k^N)^{3/2} \to 0.$

We now consider a more general form of Itô's formula. Consider
$X_t^\nu = X_0^\nu + \int_0^t \alpha_s^\nu ds + \int_0^t \eta_s^\nu d\langle B\rangle_s + \int_0^t \beta_s^\nu dB_s.$
Proposition 39 Let $\alpha^\nu$, $\beta^\nu$ and $\eta^\nu$, $\nu = 1, \cdots, n$, be bounded processes in $M^2_G(0,T)$. Then for each $t \ge 0$ we have, in $L^2_G(\mathcal{F}_t)$,
$\Phi(X_t) - \Phi(X_s) = \int_s^t \partial_{x^\nu}\Phi(X_u)\beta_u^\nu dB_u + \int_s^t \partial_{x^\nu}\Phi(X_u)\alpha_u^\nu du + \int_s^t \Big[\partial_{x^\nu}\Phi(X_u)\eta_u^\nu + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_u)\beta_u^\mu\beta_u^\nu\Big]d\langle B\rangle_u.$   (21)

Proof. We first consider the case where α, η and β are step processes of the form
$\eta_t(\omega) = \sum_{k=0}^{N-1}\xi_k(\omega)I_{[t_k,t_{k+1})}(t).$
From (19), applied successively on each subinterval where the step processes are constant, it is clear that (21) holds true. Now let
$X_t^{\nu,N} = X_0^\nu + \int_0^t \alpha_s^{\nu,N}ds + \int_0^t \eta_s^{\nu,N}d\langle B\rangle_s + \int_0^t \beta_s^{\nu,N}dB_s,$
where $\alpha^N$, $\eta^N$ and $\beta^N$ are uniformly bounded step processes that converge to α, η and β in $M^2_G(0,T)$ as $N \to \infty$. Again by (19) we have
$\Phi(X_t^N) - \Phi(X_0) = \int_s^t \partial_{x^\nu}\Phi(X_u^N)\beta_u^{\nu,N}dB_u + \int_s^t \partial_{x^\nu}\Phi(X_u^N)\alpha_u^{\nu,N}du + \int_s^t \Big[\partial_{x^\nu}\Phi(X_u^N)\eta_u^{\nu,N} + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_u^N)\beta_u^{\mu,N}\beta_u^{\nu,N}\Big]d\langle B\rangle_u.$   (22)
Since
$E[|X_t^{\nu,N} - X_t^\nu|^2] \le 3E\Big[\Big|\int_0^t(\alpha_s^{\nu,N} - \alpha_s^\nu)ds\Big|^2\Big] + 3E\Big[\Big|\int_0^t(\eta_s^{\nu,N} - \eta_s^\nu)d\langle B\rangle_s\Big|^2\Big] + 3E\Big[\Big|\int_0^t(\beta_s^{\nu,N} - \beta_s^\nu)dB_s\Big|^2\Big]$
$\le 3T\int_0^T E[(\alpha_s^{\nu,N} - \alpha_s^\nu)^2]ds + 3T\int_0^T E[|\eta_s^{\nu,N} - \eta_s^\nu|^2]ds + 3\int_0^T E[(\beta_s^{\nu,N} - \beta_s^\nu)^2]ds,$
we then can prove that, in $M^2_G(0,T)$,
$\partial_{x^\nu}\Phi(X_\cdot^N)\eta_\cdot^{\nu,N} + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_\cdot^N)\beta_\cdot^{\mu,N}\beta_\cdot^{\nu,N} \to \partial_{x^\nu}\Phi(X_\cdot)\eta_\cdot^\nu + \frac{1}{2}\partial^2_{x^\mu x^\nu}\Phi(X_\cdot)\beta_\cdot^\mu\beta_\cdot^\nu,$
$\partial_{x^\nu}\Phi(X_\cdot^N)\alpha_\cdot^{\nu,N} \to \partial_{x^\nu}\Phi(X_\cdot)\alpha_\cdot^\nu,$
$\partial_{x^\nu}\Phi(X_\cdot^N)\beta_\cdot^{\nu,N} \to \partial_{x^\nu}\Phi(X_\cdot)\beta_\cdot^\nu.$
We can then pass to the limit on both sides of (22) and obtain (21).
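A quick pathwise check of the Itô formula (21) in the simplest case $\Phi(x) = x^2$, $X = B$ (so $\alpha = \eta = 0$, $\beta = 1$) can be done along a simulated scenario path (the same illustrative single-scenario assumption as in the earlier sketches):

```python
import numpy as np

# Pathwise check of (21) for Phi(x) = x^2 and X = B:
#   B_t^2 - B_0^2 = int_0^t 2 B_u dB_u + int_0^t 1 d<B>_u.
rng = np.random.default_rng(4)
sigma0, T, N = 0.5, 1.0, 100_000
dt = T / N
s = np.linspace(0.0, T, N + 1)[:-1]
sigma = sigma0 + (1.0 - sigma0) * np.cos(3.0 * s) ** 2     # sigma_t in [sigma0, 1]
dB = sigma * np.sqrt(dt) * rng.standard_normal(N)
B = np.concatenate(([0.0], np.cumsum(dB)))
d_qv = dB ** 2                                             # increments of <B>

lhs = B[-1] ** 2 - B[0] ** 2
rhs = np.sum(2.0 * B[:-1] * dB) + np.sum(1.0 * d_qv)
print(lhs, rhs)        # the two sides agree along the discretized path
```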

6 Stochastic differential equations

We consider the following SDE defined on $M^2_G(0,T;\mathbb{R}^n)$:
$X_t = X_0 + \int_0^t b(X_s)ds + \int_0^t h(X_s)d\langle B\rangle_s + \int_0^t \sigma(X_s)dB_s, \quad t \in [0,T],$   (23)
where the initial condition $X_0 \in \mathbb{R}^n$ is given and $b, h, \sigma : \mathbb{R}^n \to \mathbb{R}^n$ are given Lipschitz functions, i.e., $|\varphi(x) - \varphi(x')| \le K|x - x'|$ for all $x, x' \in \mathbb{R}^n$, for $\varphi = b, h$ and σ. Here the horizon $[0,T]$ can be arbitrarily large. The solution is a process $X \in M^2_G(0,T;\mathbb{R}^n)$ satisfying the above SDE. We first introduce the following mapping on a fixed interval $[0,T]$:
$\Lambda_\cdot(Y) : Y \in M^2_G(0,T;\mathbb{R}^n) \mapsto M^2_G(0,T;\mathbb{R}^n),$
defined by
$\Lambda_t(Y) = X_0 + \int_0^t b(Y_s)ds + \int_0^t h(Y_s)d\langle B\rangle_s + \int_0^t \sigma(Y_s)dB_s, \quad t \in [0,T].$

We immediately have
Lemma 40 For each $Y, Y' \in M^2_G(0,T;\mathbb{R}^n)$, we have the following estimate:
$E[|\Lambda_t(Y) - \Lambda_t(Y')|^2] \le C\int_0^t E[|Y_s - Y_s'|^2]ds, \quad t \in [0,T],$
where $C = 3K^2$.
Proof. This is a direct consequence of the inequalities (9), (11) and (16).
We now prove that SDE (23) has a unique solution. Multiplying both sides of the above inequality by $e^{-2Ct}$ and integrating over $[0,T]$, it follows that
$\int_0^T E[|\Lambda_t(Y) - \Lambda_t(Y')|^2]e^{-2Ct}dt \le C\int_0^T e^{-2Ct}\int_0^t E[|Y_s - Y_s'|^2]ds\,dt$
$= C\int_0^T \int_s^T e^{-2Ct}dt\, E[|Y_s - Y_s'|^2]ds = (2C)^{-1}C\int_0^T (e^{-2Cs} - e^{-2CT})E[|Y_s - Y_s'|^2]ds.$
We then have
$\int_0^T E[|\Lambda_t(Y) - \Lambda_t(Y')|^2]e^{-2Ct}dt \le \frac{1}{2}\int_0^T E[|Y_t - Y_t'|^2]e^{-2Ct}dt.$
We observe that the following two norms are equivalent on $M^2_G(0,T;\mathbb{R}^n)$:
$\int_0^T E[|Y_t|^2]dt \sim \int_0^T E[|Y_t|^2]e^{-2Ct}dt.$
From this estimate we obtain that Λ is a contraction mapping. Consequently, we have
Theorem 41 There exists a unique solution $X \in M^2_G(0,T;\mathbb{R}^n)$ of the stochastic differential equation (23).


7 Appendix

For $r > 0$ and $1 < p, q < \infty$ with $\frac{1}{p} + \frac{1}{q} = 1$, we have
$|a + b|^r \le \max\{1, 2^{r-1}\}(|a|^r + |b|^r), \quad \forall a, b \in \mathbb{R},$   (24)
$|ab| \le \frac{|a|^p}{p} + \frac{|b|^q}{q}.$   (25)

Proposition 42 We have
$E[|X + Y|^r] \le C_r(E[|X|^r] + E[|Y|^r]),$   (26)
$E[|XY|] \le E[|X|^p]^{1/p}\cdot E[|Y|^q]^{1/q},$   (27)
$E[|X + Y|^p]^{1/p} \le E[|X|^p]^{1/p} + E[|Y|^p]^{1/p}.$   (28)
In particular, for $1 \le p < p'$, we have $E[|X|^p]^{1/p} \le E[|X|^{p'}]^{1/p'}$.
Proof. (26) follows from (24). We set
$\xi = \frac{X}{E[|X|^p]^{1/p}}, \quad \eta = \frac{Y}{E[|Y|^q]^{1/q}}.$
By (25) we have
$E[|\xi\eta|] \le E\Big[\frac{|\xi|^p}{p} + \frac{|\eta|^q}{q}\Big] \le E\Big[\frac{|\xi|^p}{p}\Big] + E\Big[\frac{|\eta|^q}{q}\Big] = \frac{1}{p} + \frac{1}{q} = 1.$
Thus (27) follows. We now prove (28):
$E[|X + Y|^p] = E[|X + Y|\cdot|X + Y|^{p-1}]$
$\le E[|X|\cdot|X + Y|^{p-1}] + E[|Y|\cdot|X + Y|^{p-1}]$
$\le E[|X|^p]^{1/p}\cdot E[|X + Y|^{(p-1)q}]^{1/q} + E[|Y|^p]^{1/p}\cdot E[|X + Y|^{(p-1)q}]^{1/q}.$
We observe that $(p-1)q = p$. Thus we have (28).

References
[1] Artzner, Ph., F. Delbaen, J.-M. Eber, and D. Heath (1997), Thinking Coherently, RISK 10, November, 68–71.
[2] Artzner, Ph., F. Delbaen, J.-M. Eber, and D. Heath (1999), Coherent Measures of Risk, Mathematical Finance 9.


[3] Avellaneda M., Levy, A. and Paras A. (1995). Pricing and hedging derivative securities in markets with uncertain volatilities. Appl. Math. Finance
2, 73–88.
[4] Briand, Ph., Coquet, F., Hu, Y., Mémin, J. and Peng, S. (2000) A converse comparison theorem for BSDEs and related properties of g-expectations, Electron. Comm. Probab. 5.
[5] Chen, Z. (1998) A property of backward stochastic differential equations, C. R. Acad. Sci. Paris Sér. I Math. 326(4), 483–488.
[6] Chen, Z. and Epstein, L. (2002), Ambiguity, Risk and Asset Returns in
Continuous Time, Econometrica, 70(4), 1403–1443.
[7] Chen, Z., Kulperger, R. and Jiang, L. (2003) Jensen's inequality for g-expectation: part 1, C. R. Acad. Sci. Paris, Ser. I 337, 725–730.
[8] Chen, Z. and Peng, S. (1998) A Nonlinear Doob-Meyer type Decomposition
and its Application. SUT Journal of Mathematics (Japan), 34(2), 197–208.
[9] Chen, Z. and Peng, S. (2000) A general downcrossing inequality for g-martingales, Statist. Probab. Lett. 46(2), 169–175.
[10] Cheridito, P., Soner, H.M., Touzi, N. and Victoir, N., Second order backward stochastic differential equations and fully non-linear parabolic PDEs,
Preprint (pdf-file available in arXiv:math.PR/0509295 v1 14 Sep 2005).
[11] Coquet, F., Hu, Y., Mémin, J. and Peng, S. (2001) A general converse comparison theorem for backward stochastic differential equations, C. R. Acad. Sci. Paris, t. 333, Série I, 577–581.
[12] Coquet, F., Hu, Y., Memin J. and Peng, S. (2002), Filtration–consistent
nonlinear expectations and related g–expectations, Probab. Theory Relat.
Fields, 123, 1–27.
[13] Crandall, M., Ishii, H. and Lions, P.-L. (1992) User's guide to viscosity solutions of second order partial differential equations, Bulletin of the American Mathematical Society, 27(1), 1–67.
[14] Daniell, P.J. (1918) A general form of integral. Annals of Mathematics, 19,
279–294.
[15] Delbaen, F. (2002), Coherent Risk Measures (Lectures given at the Cattedra Galileiana at the Scuola Normale di Pisa, March 2000), Published by
the Scuola Normale di Pisa.
[16] Denis, L. and Martini, C. (2006) A theoretical framework for the pricing
of contingent claims in the presence of model uncertainty, The Annals of
Applied Probability, Vol. 16, No. 2, 827–852.


[17] Denis, L. and Peng, S. Working paper on: Pathwise Analysis of G-Brownian
Motions and G-Expectations.
[18] Barrieu, P. and El Karoui, N. (2004) Pricing, Hedging and Optimally Designing Derivatives via Minimization of Risk Measures, Preprint, to appear
in Contemporary Mathematics.
[19] Barrieu, P. and El Karoui, N. (2005) Pricing, Hedging and Optimally Designing Derivatives via Minimization of Risk Measures, Preprint.
[20] El Karoui, N., Quenez, M.C. (1995) Dynamic Programming and Pricing
of Contingent Claims in Incomplete Market. SIAM J.of Control and Optimization, 33(1).
[21] El Karoui, N., Peng, S., Quenez, M.C. (1997) Backward stochastic differential equation in finance, Mathematical Finance 7(1): 1–71.
[22] Fleming, W.H. and Soner, H.M. (1992) Controlled Markov Processes and Viscosity Solutions. Springer–Verlag, New York.
[23] Huber, P.J. (1981) Robust Statistics, John Wiley & Sons.
[24] Itô, Kiyosi (1942) Differential Equations Determining a Markoff Process, in Kiyosi Itô: Selected Papers, Eds. D.W. Stroock and S.R.S. Varadhan, Springer, 1987. Translated from the original Japanese first published in Japan, Pan-Japan Math. Coll. No. 1077.
[25] Jiang, L. (2004) Some results on the uniqueness of generators of backward
stochastic differential equations, C. R. Acad. Sci. Paris, Ser. I 338 575–
580.
[26] Jiang L. and Chen, Z. (2004) A result on the probability measures dominated by g-expectation, Acta Mathematicae Applicatae Sinica, English Series 20(3) 507–512
[27] Klöppel, S. and Schweizer, M. (2005) Dynamic Utility Indifference Valuation via Convex Risk Measures, Working Paper (http://www.nccr-nrisk.unizh.ch/media/pdf/wp/WP209-1.pdf).
[28] Krylov, N.V. (1980) Controlled Diffusion Processes. Springer–Verlag, New
York.
[29] Lyons, T. (1995). Uncertain volatility and the risk free synthesis of derivatives. Applied Mathematical Finance 2, 117–133.
[30] Nisio, M. (1976) On a nonlinear semigroup attached to optimal stochastic
control. Publ. RIMS, Kyoto Univ., 13: 513–537.
[31] Nisio, M. (1976) On stochastic optimal controls and envelope of Markovian
semi–groups. Proc. of int. Symp. Kyoto, 297–325.


[32] Øksendal, B. (1998) Stochastic Differential Equations, Fifth Edition, Springer.

[33] Pardoux, E., Peng, S. (1990) Adapted solution of a backward stochastic
differential equation. Systems and Control Letters, 14(1): 55–61.
[34] Peng, S. (1992) A generalized dynamic programming principle and
Hamilton-Jacobi-Bellman equation. Stochastics and Stochastic Reports,
38(2): 119–134.
[35] Peng, S. (1997) Backward SDE and related g–expectation, in Backward
Stochastic Differential Equations, Pitman Research Notes in Math. Series,
No.364, El Karoui Mazliak edit. 141–159.
[36] Peng, S. (1997) BSDE and Stochastic Optimizations, Topics in Stochastic
Analysis, Yan, J., Peng, S., Fang, S., Wu, L.M. Ch.2, (Chinese vers.),
Science Publication, Beijing.
[37] Peng, S. (1999) Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob–Meyer's type, Prob. Theory Rel. Fields 113(4), 473–499.
[38] Peng, S. (2004) Nonlinear expectation, nonlinear evaluations and risk measures, in K. Back T. R. Bielecki, C. Hipp, S. Peng, W. Schachermayer,
Stochastic Methods in Finance Lectures, C.I.M.E.-E.M.S. Summer School
held in Bressanone/Brixen, Italy 2003, (Edit. M. Frittelli and W. Runggaldier) 143–217, LNM 1856, Springer-Verlag.
[39] Peng, S. (2004) Filtration Consistent Nonlinear Expectations and Evaluations of Contingent Claims, Acta Mathematicae Applicatae Sinica, English
Series 20(2), 1–24.
[40] Peng, S. (2005) Nonlinear expectations and nonlinear Markov chains, Chin.
Ann. Math. 26B(2) ,159–184.
[41] Peng, S. (2004) Dynamical evaluations, C. R. Acad. Sci. Paris, Ser.I 339
585–589.
[42] Peng, S. (2005), Dynamically consistent nonlinear evaluations and expectations, in arXiv:math.PR/0501415 v1 24 Jan 2005.
[43] Peng, S. (2006) Multi-dimensional G–Brownian motion and related
stochastic calculus under G–expectation, Preprint, (pdf-file available in
arXiv:math.PR/0601699 v1 28 Jan 2006).
[44] Peng, S. and Xu, M. (2003) Numerical calculations to solve BSDE, preprint.
[45] Peng, S. and Xu, M. (2005) gΓ –expectations and the Related Nonlinear
Doob-Meyer Decomposition Theorem


[46] Rosazza Gianin, E. (2002) Some examples of risk measures via g–expectations, preprint, to appear in Insurance: Mathematics and Economics.
[47] Yong, J., Zhou, X. (1999) Stochastic Controls: Hamiltonian Systems and
HJB Equations. Springer–Verlag.
