Generalized Normal Mean Variance Mixture

INTERNATIONAL CENTRE FOR ECONOMIC RESEARCH

WORKING PAPER SERIES

E. Luciano, P. Semeraro

GENERALIZED NORMAL MEAN VARIANCE MIXTURE
AND SUBORDINATED BROWNIAN MOTION

Working Paper No. 42/2007
October 2007

APPLIED MATHEMATICS AND QUANTITATIVE METHODS
WORKING PAPER SERIES

Generalized normal mean variance mixture and
subordinated Brownian motion.
Elisa Luciano1 and Patrizia Semeraro2
October 18, 2007

1 Dipartimento di Statistica e Matematica Applicata D. De Castro, Università degli Studi di Torino, ICER and Collegio Carlo Alberto.
2 Dipartimento di Statistica e Matematica Applicata D. De Castro, Università degli Studi di Torino.

Abstract
Normal mean variance mixtures are extensively applied in finance. Under conditions
for infinite divisibility they generate subordinated Brownian motions, used to represent
stock returns. The standard generalization to the multivariate setting of the normal mean
variance mixture does not allow for independence and can incorporate only limited dependence. In this paper we propose a multivariate definition of normal mean variance
mixture, named generalized normal mean variance mixture, which includes both independence and high dependence. We give conditions for infinite divisibility and prove
that the multivariate Lévy process defined from it is a subordinated Brownian motion.
We analyze both the distribution and the related process.
In the second part of the paper we use the construction to introduce a multivariate
generalized hyperbolic distribution (and process) with generalized hyperbolic margins.
We conclude with a numerical example to show the ease of calibration and the flexibility
of the model in describing dependence.

Introduction

Normal mean variance distributions have been extensively studied both from a statistical
and a financial perspective. For example, Kelker [11] studied the infinite divisibility of
such distributions, and Barndorff-Nielsen et al. [5] focused on the n-dimensional case. In
particular, the generalized hyperbolic distribution, defined by Barndorff-Nielsen [1], has
attracted great interest: to mention a few contributions, Barndorff-Nielsen and Halgreen [3] proved
that it is infinitely divisible, Eberlein and Keller [9] studied its relevance in financial
applications.
Under infinite divisibility normal mean variance distributions generate subordinated
Lévy processes. The interest of subordinated Lévy processes in finance in turn is due to
the idea that calendar time may not be appropriate to represent financial market time. A
complete treatment of real subordination can be found in Sato [14]. Real subordinators
are used to time change Lévy processes and introduce a stochastic clock (see Geman,
Madan and Yor [10]). One of the main processes used to model stock returns is the
variance gamma process, introduced in Madan and Seneta [13]. It is a Brownian motion
time changed by a gamma subordinator, which can be defined through a normal mean
variance mixture distribution. Another process widely used in financial applications,
the Normal inverse Gaussian introduced by Barndorff-Nielsen [2], is a Brownian motion
time-changed by an inverse Gaussian subordinator.
Both normal mean variance distributions and time-changed Lévy processes have been
extended to the multivariate setting. As concerns the distributions, one of the most

important normal mean variance distributions extended to the multivariate case is the
generalized hyperbolic: see Blaseid [7]. As concerns the processes, Madan and Seneta
themselves proposed a multivariate version of the symmetric variance gamma process.
The main drawback of the traditional definition of multivariate mean variance mixtures is that it does not capture independence and allows only limited dependence.
Schmidt [15] proposed a multivariate generalized hyperbolic distribution able to model
independence, even if it does not have generalized hyperbolic margins. Also the associated Lévy process cannot model independence. Semeraro [17] and Luciano and Semeraro
[12] generalized the multivariate variance gamma and other processes in order to capture
independence.
In this paper we propose a multivariate definition of normal mean variance mixture,
named generalized normal mean variance mixture, which includes both independence
and high dependence. We give conditions for infinite divisibility and prove that the
multivariate Lévy process defined from it is a subordinated Brownian motion.
In the second part of the paper we use the construction to introduce a multivariate
generalized hyperbolic distribution (and process) with generalized hyperbolic margins.
We conclude with a numerical example to show the ease of calibration and the flexibility
of the model in describing dependence.
The paper is organized as follows. Section 1 fixes the notation. Section 2 defines
the generalized normal mean variance distribution and discusses its properties, in particular infinite divisibility. It specifies a family of mixing distributions that makes it possible
to move from independence to perfect correlation. In Section 3 we prove that the Lévy
process whose law at time one is a generalized normal mean variance distribution is
a subordinated process, and we analyze in which cases it can be interpreted as a time-changed Brownian motion. In Sections 4 and 5 we apply the constructions of Sections
2 and 3 to generate, respectively, a multivariate version of the generalized hyperbolic
distribution and process. We characterize them through their characteristic function
and we provide a method to determine the Lévy triplet of the process. In Section 6 we
analyze the dependence structure of the model, focusing on linear correlation. The
relevance of linear correlation is due to the fact that, for fixed margins, it identifies the
joint distribution of the mixing vector and hence of the whole mixture. Section 7
provides a method to calibrate the model on data and discusses a numerical example.

1 Notations

With capital bold letters X we denote R^n-valued random variables X = (X_1, ..., X_n)^T.
ψ_X and Ψ_X represent respectively the characteristic function and the characteristic
exponent of X. L(X) stands for the law of X and X =_d Y means that X and Y
have the same distribution. If no confusion arises we denote by X the Lévy process
{X(t), t > 0}, so that L(X(1)) = L(X).
Let M_n be the set of n × n matrices; X also stands for an element of M_n. For a vector X we set
√X = (√X_1, ..., √X_n)^T, where T stands for the transpose, and for a matrix X ∈ M_n we set √X = (√X_{ij})_{i,j=1,...,n}.
Given a vector X, diag(X) stands for the diagonal matrix with X_1, ..., X_n on the main diagonal.
We will use the notion of R^n_+-parameter processes, as in Barndorff-Nielsen et al. [6]. The
process X = {(X_1(s), ..., X_n(s))^T, s ∈ R^n_+} is an R^n_+-parameter process if the following
hold:
1. for any m ≥ 3 and for any choice of s_1 ≼ ... ≼ s_m (in the componentwise partial ordering of R^n_+), the increments X(s_j) − X(s_{j−1}),
j = 2, ..., m, are independent;
2. for any s_1 ≼ s_2 and s_3 ≼ s_4 satisfying s_2 − s_1 = s_4 − s_3, X(s_2) − X(s_1) =_d
X(s_4) − X(s_3) (increments are stationary);
3. X(0) = 0 almost surely;
4. X(s) is almost surely right continuous, with left limits in s, in the partial ordering
of R^n_+.

2 Normal and generalized normal mean-variance mixture

In this section we recall the definition of Normal mean-variance mixture (M nmv) in the
multivariate setting and provide a generalization. Properties and examples of the former
class of distributions are in Barndorff-Nielsen et al. [5].
Definition 2.1. A random vector Y has a multivariate normal mean variance distribution (shortly Y ∈ Mnmv) if
Y =_d µ_0 + µG + √G QW,    (2.1)
where µ_0, µ ∈ R^n, Q ∈ M_n and QQ^T = Σ is positive-definite, G is a positive random
variable independent of W, and W ∼ N(0, I_n).
For simplicity, from now on we assume µ_0 = 0. The Mnmv distributions are closely
related to type G distributions¹ on R^n.
The infinite divisibility of this class is discussed for example in [11]. A sufficient
condition for i.d. is that the mixing distribution G is i.d. itself (see Barndorff-Nielsen et
al. [5]). Barndorff-Nielsen et al. [6] proved that under the condition of infinite divisibility
(i.d.) M nmv are the distributions at time one of subordinated Lévy processes. In
particular they are time-changed Lévy motions. These processes play a central role in
financial applications to represent returns of stock prices. For this reason our interest
is in i.d. Mnmv distributions and the Lévy processes related to them. From now on, if
not specified otherwise, the mixing distribution is assumed to be i.d.
The main disadvantage of normal mean variance mixtures is that they do not
allow for independence. Moreover their dependence structure is uniquely determined by
the marginal distributions and therefore is limited. If F_G is the distribution of G, the
marginal distributions of Y are:
F_j(x_j) = ∫_{R_+} Φ((x_j − µ_j z)/(σ_j √z)) dF_G(z).
Obviously, given the F_j(x_j), the joint distribution F is also uniquely determined:
F(x) = ∫_{R_+} ∏_j Φ((x_j − µ_j z)/(σ_j √z)) dF_G(z).    (2.2)
¹ Y is of type G if there exist a standard Gaussian random vector X on R^n and a non-negative i.d.
random variable T, independent of X, such that Y =_d T^{1/2} X.
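As a concrete illustration of the marginal law above, the mixture cdf can be evaluated by numerical integration once a mixing law is chosen. The sketch below assumes a gamma mixing distribution (an arbitrary choice, made only to have a closed-form density) and cross-checks the quadrature against a Monte Carlo sample; it is illustrative only.

# Illustrative sketch: marginal cdf of a normal mean variance mixture,
# F_j(x) = int_0^inf Phi((x - mu*z)/(sigma*sqrt(z))) dF_G(z),
# with a gamma mixing law G (an assumption made only for this example).
import numpy as np
from scipy import integrate, stats

def mnmv_marginal_cdf(x, mu, sigma, shape, scale):
    integrand = lambda z: stats.norm.cdf((x - mu * z) / (sigma * np.sqrt(z))) \
                          * stats.gamma.pdf(z, a=shape, scale=scale)
    value, _ = integrate.quad(integrand, 0.0, np.inf)
    return value

# Monte Carlo cross-check of the same marginal probability
rng = np.random.default_rng(0)
G = rng.gamma(shape=2.0, scale=0.5, size=200_000)
Y = 0.3 * G + 1.0 * np.sqrt(G) * rng.standard_normal(G.size)
print(mnmv_marginal_cdf(0.5, mu=0.3, sigma=1.0, shape=2.0, scale=0.5),
      np.mean(Y <= 0.5))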

We now generalize the previous construction using a multivariate mixing distribution
in order to generate independence and to obtain a more flexible dependence structure:
in the second case we look for a class of distributions that for fixed margins allows for
different dependence levels. We also prove that under suitable conditions the distribution
introduced is infinitely divisible and we relate it to subordinated Lévy processes.
Definition 2.2. A random vector Y has a generalized normal mean variance mixture
distribution (shortly Y ∈ Gnmv) if
Y = AGµ + Q√G W,    (2.3)
where W ∼ N(0, I_n), A, Q ∈ M_n, QQ^T is positive-definite, G = diag(G), and G is a
positive random vector independent of W.
Theorem 2.1. If the mixing distribution G is infinitely divisible, the vector Y defined
in Definition 2.2 has characteristic function
ψ_Y(z) = exp(Ψ_G(iµz^T A − ½ Q_z)),    (2.4)
where Q_z := ((Σ_l z_l q_{l1})², ..., (Σ_l z_l q_{ln})²)^T is the vector of diagonal entries of (Q^T z)(z^T Q)
and Q^j = (q_{1j}, ..., q_{nj})^T denotes the j-th column of Q; moreover Y is infinitely divisible.
Proof. To prove the assertion we compute the characteristic function of Y. Let
Y^j = (a_{1j} µ_j g_j + q_{1j} √g_j W_j, ..., a_{nj} µ_j g_j + q_{nj} √g_j W_j)^T,   j = 1, ..., n,
where g ∈ R^n_+. The random vectors Y^j are independent and L(Y | G = g) = L(Σ_{j=1}^n Y^j).
Consider
E[exp{i⟨Y, z⟩} | G = g] = E[exp{i Σ_{j=1}^n ⟨Y^j, z⟩}] = ∏_{j=1}^n ψ_{Y^j}(z),    (2.5)
where
ψ_{Y^j}(z) = E[exp{i⟨Y^j, z⟩}]
= E[exp{i Σ_l z_l a_{lj} g_j µ_j + i Σ_l z_l q_{lj} √g_j W_j}]
= exp{i Σ_l z_l a_{lj} g_j µ_j} ψ_{√g_j W_j}(Σ_l z_l q_{lj})
= exp{i Σ_l z_l a_{lj} g_j µ_j} exp{−½ g_j (Σ_l z_l q_{lj})²}.    (2.6)
The characteristic function of Y becomes
ψ_Y(z) = E[∏_{j=1}^n exp{i Σ_l z_l a_{lj} G_j µ_j} exp{−½ G_j (Σ_l z_l q_{lj})²}]
= E[exp Σ_{j=1}^n G_j (i Σ_l z_l a_{lj} µ_j − ½ (Σ_l z_l q_{lj})²)]
= E[exp⟨G, iµz^T A − ½ Q_z⟩] = exp(Ψ_G(iµz^T A − ½ Q_z)).    (2.7)

From the previous equation and the infinite divisibility of G it easily follows that
also Y is infinitely divisible.
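Formula (2.4) can also be checked numerically. The sketch below compares a Monte Carlo estimate of E[exp(i⟨z, Y⟩)] with the right-hand side of (2.4) for a mixing vector with independent gamma components, for which exp(Ψ_G(w)) = ∏_j (1 − w_j/r_j)^{−k_j}; all parameter values are arbitrary and the snippet is only a sanity check, not part of the model.

# Monte Carlo check of the characteristic function (2.4) when the mixing
# vector G has independent gamma components (an illustrative assumption):
# for G_j ~ Gamma(k_j, rate r_j), E[exp(w G_j)] = (1 - w/r_j)^(-k_j), Re(w) <= 0.
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.uniform(0.0, 1.0, (n, n))
Q = np.array([[1.0, 0.2, 0.0],
              [0.1, 0.9, 0.3],
              [0.0, 0.2, 1.1]])
mu = np.array([0.1, -0.2, 0.3])
k = np.array([2.0, 1.5, 3.0])        # gamma shapes
r = np.array([1.0, 2.0, 1.5])        # gamma rates

z = np.array([0.4, -0.7, 0.25])
w = 1j * mu * (A.T @ z) - 0.5 * (Q.T @ z) ** 2    # argument of Psi_G, componentwise

theoretical = np.prod((1.0 - w / r) ** (-k))      # exp(Psi_G(w)) for independent gammas

m = 400_000
G = rng.gamma(shape=k, scale=1.0 / r, size=(m, n))
W = rng.standard_normal((m, n))
Y = (G * mu) @ A.T + (np.sqrt(G) * W) @ Q.T       # Y = A G mu + Q sqrt(G) W, row-wise
empirical = np.mean(np.exp(1j * Y @ z))

print(theoretical, empirical)                     # the two values should be close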

The model introduced covers a wide range of dependence and allows us to model independence. Moreover, if we assume that G has comonotone components, Definitions 2.1
and 2.2 are equivalent. Our generalization is also connected with the generalization of type G distributions proposed in
Barndorff-Nielsen et al. [6].
We point out a useful closedness property that also characterizes the marginal distributions of Y.
Proposition 2.1. Let Y ∈ Gnmv. The following hold:
1. if µ_i = 0, i = 1, ..., n, i.e. in the symmetric case, the marginal distributions of Y
are normal mean variance distributions with mixing variable a linear combination
of the components of G;
2. if A = Q* := (q_{ij}²)_{ij} and µ_j = 1, j = 1, ..., n, then the marginal distributions of Y
are normal mean variance distributions with mixing variable a linear combination
of the components of G.
Proof. Since rank(Q*) is maximal, define G* = Q*G and A* = AQ*^{−1}; then AGµ =
A(Q*^{−1}G*)µ = A*G*µ.
1. Since Y_j = Σ_i q_{ij} √G_i W_i and the W_i are i.i.d. N(0, 1), from the scaling property of
Brownian motion it follows that L(Σ_i q_{ij} √g_i W_i) = L((Σ_i q_{ij}² g_i)^{1/2} W), where W is
a standard Brownian motion. Hence
L(Y_j | G = g) = L(Σ_i q_{ij} √G_i W_i | G = g) = L(Σ_i q_{ij} √g_i W_i)
= L((Σ_i q_{ij}² g_i)^{1/2} W) = L((Σ_i q_{ij}² G_i)^{1/2} W | G = g).    (2.8)
Since G = g iff G* = Q*g, then L((Σ_i q_{ij}² G_i)^{1/2} W | G = g) = L((Σ_i q_{ij}² G_i)^{1/2} W | G* = Q*g)
= L((Σ_i q_{ij}² G_i)^{1/2} W | G*_j = g*_j) = L(√(g*_j) W | G*_j = g*_j), where g*_j = Σ_i q_{ij}² g_i.
Therefore L(Y_j) = L(√(G*_j) W) and the assertion is proved.
2. If A = Q*, then Σ_i a_{ij} G_i µ_i = Σ_i q*_{ij} G_i = G*_j and L(Y_j) = L(G*_j + √(G*_j) W).
Obviously, if Y ∈ Mnmv, i.e. the mixing distribution is univariate, then its marginal
distributions are normal mean variance mixtures.
The previous proposition is useful especially if the subordinator has α-stable margins.
In this case any linear combination of the margins is itself α-stable. Therefore under the
conditions of Proposition 2.1 the marginal distributions have α-stable mixing distributions.
For completeness we also state the following result, whose proof is analogous to the
previous one.
Proposition 2.2. The following hold:
1. if µ_j = 0, j = 1, ..., n, i.e. in the symmetric case, then X = Σ_{j=1}^n Y_j has a normal
mean variance distribution;
2. if A = Q and µ_j = 1 for all j, then X = Σ_{j=1}^n Y_j has a normal mean variance
distribution.
A case of interest is A = Q:
Y = QGµ + Q√G W,    (2.9)
where W ∼ N(0, I_n), Q ∈ M_n and QQ^T is positive-definite. Observe that for each
realization g of G we can find µ* so that A(diag(g))µ = Q(diag(g))µ*, but this linear
transformation depends on g. Therefore the condition A ≠ Q is a true generalization.
Y is a linear transformation of the vector
Gµ + √G W;    (2.10)
thus if G has independent components Y is a linear transformation of independent
normal mean variance mixtures. Its components are
Y_i = Σ_{j=1}^n q_{ij} [µ_j G_j + √(G_j) W_j].    (2.11)

We will mainly investigate the particular case A = Q diagonal. The reason is that
it can be easily applied and calibrated on data, as we will show in Section 7.1. For this
reason we formally define the class of independent generalized mean variance distributions,
IGnmv:
Definition 2.3. A random vector Y has an independent generalized mean variance distribution (Y ∈ IGnmv) if Y ∈ Gnmv with Q = A diagonal.
Let Y ∈ IGnmv and Q = diag(σ_j); we have
Y^T = (√(G_1) σ_1 W_1 + µ_1 σ_1² G_1, ..., √(G_n) σ_n W_n + µ_n σ_n² G_n).    (2.12)
Since Y ∈ IGnmv has independent components whenever G does, the model allows us to
capture independence.
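A minimal simulation sketch of (2.12): with independent mixing components (gamma here, purely for illustration), the simulated components of Y are uncorrelated, in line with the independence claim; parameter values are arbitrary.

# Sketch of the IGnmv case (2.12): Q = A = diag(sigma_j), independent mixing
# components (gamma, chosen only for illustration) give independent margins.
import numpy as np

rng = np.random.default_rng(2)
m, sigma, mu = 500_000, np.array([1.0, 0.8]), np.array([0.2, -0.1])
G = rng.gamma(shape=[2.0, 3.0], scale=[0.5, 0.3], size=(m, 2))
W = rng.standard_normal((m, 2))
Y = np.sqrt(G) * sigma * W + mu * sigma**2 * G     # componentwise, as in (2.12)
print(np.corrcoef(Y.T)[0, 1])                      # close to zero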

2.1 Additive-effects mixing distributions

Recall that our aim in the Gnmv definition was twofold: on one side we wanted to capture
independence, as we have done; on the other side we wanted the possibility of
fixing the marginal distributions and moving the dependence. This can be done in different
ways. We now specify the structure of G in order to
• generate infinitely divisible distributions with given margins,
• model different levels of dependence for fixed univariate marginal distributions,
• introduce multivariate Lévy models of interest for financial applications.
We adopt the models with random-additive-effects proposed in Barndorff-Nielsen et al.
[5] to generate multivariate distributions.
Definition 2.4. Let G be
G = (X_1 + γ_1 Z, ..., X_n + γ_n Z),    (2.13)
where γ_1, ..., γ_n are positive real parameters and X_i, i = 1, ..., n, and Z are independent random variables.
Assume that the margins of G have distributions closed under convolution and under
scale transformations. Then one can fix the marginal distributions G_i, and consequently
the margins of Y, and move the dependence structure of G from independence to perfect
correlation.
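A concrete instance of Definition 2.4, assuming gamma margins (which are closed under convolution and scaling): with X_i ∼ Γ(A_i − a, B_i), Z ∼ Γ(a, b) and γ_i = b/B_i (rate parametrization), the margin G_i ∼ Γ(A_i, B_i) is the same for every a in [0, min_i A_i], while the correlation between the components grows with a. The snippet below only illustrates this mechanism.

# Additive-effects mixing vector (Definition 2.4) with gamma margins, an
# illustrative instance: G_i = X_i + gamma_i * Z with
#   X_i ~ Gamma(A_i - a, rate B_i),  Z ~ Gamma(a, rate b),  gamma_i = b / B_i,
# so that G_i ~ Gamma(A_i, rate B_i) for every a: margins fixed, dependence moves.
import numpy as np

rng = np.random.default_rng(3)
A_shape = np.array([2.0, 3.0])      # target marginal shapes of G_i
B_rate  = np.array([1.5, 2.0])      # target marginal rates of G_i
b = 1.0
gam = b / B_rate                    # gamma_i = b / B_i keeps the marginal rate at B_i
m = 300_000

for a in (0.05, 1.0, 1.95):         # a in (0, min A_i): larger a = larger common component
    X = rng.gamma(shape=A_shape - a, scale=1.0 / B_rate, size=(m, 2))
    Z = rng.gamma(shape=a, scale=1.0 / b, size=m)
    G = X + gam * Z[:, None]
    print(a, G.mean(axis=0), np.corrcoef(G.T)[0, 1])   # means stay ~ A_i/B_i, correlation grows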
Since the characteristic function of G is
ψ_G(z) = ∏_{j=1}^n ψ_j(z_j) · ψ_Z(Σ_{j=1}^n γ_j z_j),    (2.14)
the characteristic function of Y, introduced in (2.4), becomes:

ψ_Y(z) = exp(Ψ_G(iµz^T A − ½ Q_z))
= ∏_{j=1}^n exp(Ψ_j(i Σ_l a_{lj} z_l µ_j − ½ (Σ_l z_l q_{lj})²))
· exp(Ψ_Z(Σ_{j=1}^n γ_j [i Σ_l a_{lj} z_l µ_j − ½ (Σ_l z_l q_{lj})²])).    (2.15)
From the expression of ψ_Y we infer that Y is the convolution of two generalized normal mean
variance distributions, which we denote by Y^X and Y^Z. Moreover, if Y ∈ IGnmv, its
characteristic function reduces to
ψ_Y(z) = exp(Ψ_G(log ψ_{B_1}(z_1), ..., log ψ_{B_n}(z_n)))
= ∏_{j=1}^n exp(Ψ_{X_j}(log ψ_{B_j}(z_j))) · exp(Ψ_Z(Σ_{j=1}^n γ_j log ψ_{B_j}(z_j))),    (2.16)
where, as is well known, ∏_{j=1}^n exp(Ψ_{X_j}(log ψ_{B_j}(z_j))) is the characteristic function
of a random vector with independent normal mean variance mixture components and
exp(Ψ_Z(Σ_{j=1}^n γ_j log ψ_{B_j}(z_j))) is the characteristic function of an Mnmv distribution. We
have therefore proved
Lemma 2.1. Let Y ∈ IGnmv and assume G as in Definition 2.4. Then Y =_d Y^X + Y^Z,
where Y^X has independent one-dimensional normal mean variance margins and Y^Z is a
multivariate normal mean variance mixture. Y^X and Y^Z are independent.
The previous lemma can be generalized as follows.
Proposition 2.3. If Y ∈ Gnmv and G = Σ_j X^j, where the X^j, j = 1, ..., n, are independent, then Y = Σ_j Y^j, where the Y^j ∈ Gnmv are independent with mixing distributions
L(X^j).
Proof. By assumption Ψ_G(z) = Σ_j Ψ_{X^j}(z); therefore
ψ_Y(z) = exp(Ψ_G(iµz^T A − ½ Q_z))
= exp(Σ_j Ψ_{X^j}(iµz^T A − ½ Q_z)) = ∏_j ψ_{Y^j}(z),    (2.17)
where for each j = 1, ..., n, ψ_{Y^j}(z) = exp(Ψ_{X^j}(iµz^T A − ½ Q_z)) is the characteristic
function of a Gnmv distribution.

3 Lévy motions

In this section we examine in general the Lévy processes that arise from Y ∈ Gnmv when
infinite divisibility holds. From now on we assume that the vector G in Definition 2.4
is infinitely divisible. A sufficient condition is that X_i, i = 1, ..., n, and Z are infinitely divisible.
In this case, by Theorem 2.1, Y ∈ Gnmv is infinitely divisible.
Definition 3.1. The Lévy motion Y = {Y(t), t ≥ 0} is the process, unique in law, such
that L(Y(1)) = L(Y), where Y ∈ Gnmv and G is infinitely divisible.
With an abuse of notation we also call the Lévy process Gnmv, if no confusion arises.
The following proposition shows that Y is a subordinated Lévy process. Subordinated
Lévy processes are widely used to represent asset returns. A complete treatment of
subordinated processes is in Barndorff-Nielsen et al. [6]. We need some preliminary
notation.
Define the Brownian motion
B(s) := (Σ_{j=1}^n α_{1j} s_j + Σ_{j=1}^n β_{1j} W_j(s_j), ..., Σ_{j=1}^n α_{nj} s_j + Σ_{j=1}^n β_{nj} W_j(s_j))^T,    (3.1)
where α_{ij}, β_{ij} ∈ R, i, j = 1, ..., n.
Lemma 3.1. B(s) is an R^n_+-parameter process.
Proof. The proof is straightforward. Define Z^j(t) := (α_{1j} t + β_{1j} W_j(t), ..., α_{nj} t + β_{nj} W_j(t))^T.
The Z^j are independent Lévy processes on R^n and B(s) = Σ_{j=1}^n Z^j(s_j). The assertion is now a
direct consequence of Example 4.4 in Barndorff-Nielsen et al. [6].
The following holds:
Proposition 3.1. A random vector Y is in Gnmv if and only if Y =_d Y(1), where
{Y(t)} is a Lévy process obtained by subordination of an R^n_+-parameter Brownian motion
B(s).
Proof. The proof is similar to that of Proposition 6.4 in Barndorff-Nielsen et al. [6].
Let B(s) be an R^n_+-parameter Brownian motion defined as in (3.1) with α_{ij} = a_{ij} µ_j and
β_{ij} = q_{ij}. Let Y(t) be the subordination of B(s) by a multivariate subordinator G(t)
and let G := G(1). Using the scaling property of Brownian motion, for every bounded
measurable function f we have
E[f(Y(1))] = E[E[f(B(s))]|_{G(1)=s}]
= E[E[f(Σ_j a_{1j} µ_j s_j + Σ_j q_{1j} W_j(s_j), ..., Σ_j a_{nj} µ_j s_j + Σ_j q_{nj} W_j(s_j))]|_{G=s}]
= E[E[f(Σ_j a_{1j} µ_j s_j + Σ_j q_{1j} √s_j W_j(1), ..., Σ_j a_{nj} µ_j s_j + Σ_j q_{nj} √s_j W_j(1))]|_{G=s}]
= E[f(Σ_j a_{1j} µ_j G_j + Σ_j q_{1j} √G_j W_j(1), ..., Σ_j a_{nj} µ_j G_j + Σ_j q_{nj} √G_j W_j(1))].    (3.2)
Thus
Y(1) =_d (Σ_j a_{1j} µ_j G_j + Σ_j q_{1j} √G_j W_j(1), ..., Σ_j a_{nj} µ_j G_j + Σ_j q_{nj} √G_j W_j(1))^T,    (3.3)
and Y(1) is a generalized normal mean variance mixture. On the other hand, let Y ∈
Gnmv with mixing distribution G. Define G(t) as the subordinator such that G(1) =_d G
and define the process Y by Y(t) = B(G(t)). An argument similar to the previous one
shows that Y(1) =_d Y.
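For illustration, the subordinated Brownian motion of Proposition 3.1 can be simulated on a discrete time grid from the increments of G(t), exactly as in (3.3). The sketch below assumes a subordinator with independent gamma components; the numerical values are arbitrary.

# Path simulation of the subordinated Brownian motion Y(t) = B(G(t)) of
# Proposition 3.1, with alpha_ij = a_ij mu_j and beta_ij = q_ij.  The
# subordinator G is taken with independent gamma components, an assumption
# made only to have something concrete to simulate.
import numpy as np

rng = np.random.default_rng(4)
n, steps, dt = 2, 1_000, 1.0 / 250
A = np.array([[1.0, 0.0], [0.3, 1.0]])
Q = np.array([[0.2, 0.05], [0.0, 0.25]])
mu = np.array([0.05, -0.02])
k = np.array([3.0, 2.0])            # gamma shape per unit time
r = np.array([3.0, 2.0])            # gamma rate (so E[G_j(t)] = t here)

dG = rng.gamma(shape=k * dt, scale=1.0 / r, size=(steps, n))     # subordinator increments
N = rng.standard_normal((steps, n))
dY = (dG * mu) @ A.T + (np.sqrt(dG) * N) @ Q.T                    # increments of Y, as in (3.3)
Y = np.cumsum(dY, axis=0)                                         # one path of Y(t)
print(Y[-1])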
As the previous proof shows, the subordinator G is the Lévy process {G(t) : t > 0}
such that L(G(1)) = L(G). If in addition G is the subordinator defined from the
distribution in Definition 2.4, it can be written as
G(t) =_d (X_1(t) + γ_1 Z(t), ..., X_n(t) + γ_n Z(t))^T
for each t > 0.
The Lévy triplet of Y is derived from those of G and of the Brownian motion,
as stated in Theorem 4.7 in Barndorff-Nielsen et al. [6]. It is easy to verify that the subcase
with a common subordinator always has normal mean variance marginal distributions.
In general this property does not hold. Sufficient conditions are given in the following
proposition, whose proof is a consequence of Proposition 2.1.
Proposition 3.2. In the symmetric case, or if A = Q* and µ_i = 1, i = 1, ..., n, the
marginal processes are time-changed Brownian motions. The change of time is a subordinator G*_j whose distribution at time 1 is L(G*_j) = L(Σ_i q_{ji}² G_i).
Easily, L(G*_j(t)) = L(Σ_i q_{ji}² G_i(t)). If the subordinator G has α-stable margins, so does
G*. This is a useful tool for multivariate models of returns. Assume that the
single returns are represented by time-changed Brownian motions and that the time change
is stable. The previous construction allows us to define a multivariate model with
stable correlated time changes and correlated Brownian motions so that the marginal
processes, i.e. the single returns, have an α-stable time change. On the other hand, in
general the subordinators of the marginal processes are unknown. As an example we
consider the multivariate α-VG model, whose subordinator has gamma margins.

Example 1 (The VG case). Consider the multivariate α-VG model introduced by Semeraro
[17], in the symmetric case. Let Y(t) = QW(G(t)) be a multivariate VG process where G(t)
has gamma margins. Then L(Y_j(t)) = L(W(Σ_i q_{ji}² G_i(t))). If G_i(1) ∼ Γ(a_i, b_i), then
q_{ji}² G_i(t) ∼ Γ(a_i, b_i/q_{ji}²). The only way to sum them up and obtain again a gamma process is
to impose b_i = q_{ji}² for all j = 1, ..., n; but since q_{ji} ≠ q_{li} for at least one i (rank Q maximal),
the previous assumption does not make sense. Therefore Y(t) = QW(G(t)) has time-changed
margins, but the change of time is no longer gamma distributed and Y_j is no
longer a VG process.

As shown by the previous example, in general the marginal distributions of G* are not
known (obviously, if the subordinator is univariate and its distribution
is closed under scale transformations, they are). Moreover, with respect to the subordinator G, the process Y cannot in general be interpreted as a time-changed Brownian
motion. In fact, if Y ∈ Gnmv and G is the subordinator given by Proposition 3.1,
L(√G QW(t)) = L((Σ_{j=1}^n q_{j1} W_j(G_j(t)), ..., Σ_{j=1}^n q_{jn} W_j(G_j(t)))^T).    (3.4)
Each component of Y depends on more than one margin of G, in that the conditional
law of Y_l given G = s, i.e. Σ_{j=1}^n q_{jl} W_j(s_j), depends on the whole multi-parameter s.
Therefore it is itself a multi-parameter process and not a usual stochastic process. This
is the case of the GH distributions that we will investigate in the following sections.
To interpret the subordinator as a stochastic clock we have to consider two subcases.
• Unique subordinator: suppose G = (G, ..., G). From (3.4) it easily follows that
Y ∈ Mnmv. Y can be interpreted as a time-changed Lévy motion.
• Multivariate subordinator, independent Brownian motions: in this case Y ∈
IGnmv and the law of Y is
L(√G W(t)) = L(W(G(t))).    (3.5)
Also in this case Y can be interpreted as a time-changed Lévy process: each
component of the Brownian motion has its own stochastic clock.
The characterization of this process in terms of its Lévy triplet (γ_Y, Σ_Y, ν_Y) can
be obtained through Theorem 3.3 in Barndorff-Nielsen et al. [6]. The Lévy triplet
is
γ_Y = ∫_{R^n_+} ν_T(ds) ∫_{|x|≤1} x ρ_s(dx),
Σ_Y = 0,    (3.6)
ν_Y(B) = ∫_{R^n_+} ρ_s(B) ν_T(ds),
where ρ_s = L(W(s)), s ∈ R^n_+, x = (x_1, ..., x_n)^T, B is a Borel set of R^n \ {0} and ν_T is the
Lévy measure of T. Observe that the process Y is a pure jump process. Y has finite
activity/variation if and only if its margins do.

In the following sections we apply the second construction with a multivariate subordinator of random-additive type to get a GIG subordinator (mixing distribution in the static
case), which is not closed under convolution. Moreover, we will present an "intermediate" model with respect to the previous ones, whose advantage is that it both has a
subordinator with GIG margins and allows us to capture high correlation. A key role in
our construction is played by the following proposition, which corresponds to Proposition
2.3 for distributions:
Proposition 3.3. Let Y ∈ Gnmv. Let X^j, j = 1, ..., n, be independent non-negative
infinitely divisible random vectors and G = Σ_{j=1}^n X^j; then
Y(t) =_d Σ_{j=1}^n (X^j µ^T + √(X^j) QW)(t);    (3.7)
moreover, the processes X^j µ^T + √(X^j) QW, j = 1, ..., n, are independent.


Proof. Let Y^j := X^j µ^T + √(X^j) QW, and let Y^j(t) be the Lévy process such that
L(Y^j(1)) = L(Y^j) for j = 1, ..., n. Since Y(t) is a Lévy process, its characteristic
function is
ψ_{Y(t)}(z) = (ψ_Y(z))^t
= (exp(Ψ_G(iµz^T A − ½ Q_z)))^t
= (exp(Σ_j Ψ_{X^j}(iµz^T A − ½ Q_z)))^t = (∏_j ψ_{Y^j}(z))^t = ∏_j (ψ_{Y^j}(z))^t,    (3.8)
where (ψ_{Y^j}(z))^t is the characteristic function of Y^j(t). The thesis follows.

4 The Multivariate GH distribution

The second part of the paper is devoted to the generalized hyperbolic case. We have
proved that Gnmv distributions are the distributions at time one of subordinated Lévy processes. Taking
this into account, in this section we introduce a multivariate generalized hyperbolic
distribution in order to investigate the associated process. The peculiarity of these
models is that both the distribution and the process are generalizations of the VG models.
The multivariate generalized hyperbolic distribution (MGH) is defined in the literature as a normal mean-variance distribution with a GIG-distributed mixing variable: see
Barndorff-Nielsen et al. [4] and Barndorff-Nielsen et al. [5]. In order to generalize it,
we need to introduce a multivariate mixing distribution with GIG margins. The GIG
distribution is not closed under convolution². However, under a proper choice of the parameters, the convolution of a gamma and a GIG distribution is itself GIG distributed.
In this way we can construct a mixing vector G that allows us to cover the whole range
of dependence. In the construction, we separate the case in which the marginal processes
are GH (Section 4.1) from the others, which correspond to the Gnmv model (Section
4.2).

4.1 IGnmv model

The goal of this section is to introduce a multivariate GH distribution such that:
1. it has generalized hyperbolic margins;
2. it allows us to model both independence and high dependence;
3. it allows us to model different dependencies for fixed marginal distributions.
Let λ > 0, b ≥ 0, γ_i > 0, and let δ_i and b_i := b/γ_i be nonnegative and not simultaneously
zero. Let X_i be GIG(−λ, δ_i, b/γ_i), let V_i be Γ(λ − a, b²/(2γ_i²)) with 0 ≤ a ≤ λ, and let Z ∼ Γ(a, b²/2);
then X_i + V_i + γ_i² Z is GIG with parameters (λ, δ_i, b/γ_i) (see Barndorff-Nielsen et al. [5]).
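This convolution property can be verified numerically. The sketch below samples X_i, V_i and Z as above (using scipy's geninvgauss parametrization for the GIG law) and compares the first two moments of X_i + V_i + γ_i²Z with those of a direct GIG(λ, δ_i, b/γ_i) sample; the parameter values are arbitrary.

# Monte Carlo check of the convolution property used for the mixing vector:
# GIG(-lam, delta, c) + Gamma(lam - a, c^2/2) + gamma_i^2 * Gamma(a, b^2/2)
# should be GIG(lam, delta, c) with c = b / gamma_i (Gamma(shape, rate)
# parametrization).  All numerical values are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
lam, a, b = 1.0, 0.4, 1.0
delta, gamma_i = 0.8, 0.5
c = b / gamma_i
m = 400_000

# scipy's geninvgauss(p, b_) with scale s has density ~ x^(p-1) exp(-b_*(x/s + s/x)/2),
# so GIG(p, delta, c) corresponds to p = p, b_ = delta*c, scale = delta/c.
X = stats.geninvgauss.rvs(-lam, delta * c, scale=delta / c, size=m, random_state=rng)
V = rng.gamma(shape=lam - a, scale=2.0 / c**2, size=m)
Z = rng.gamma(shape=a, scale=2.0 / b**2, size=m)
S = X + V + gamma_i**2 * Z

target = stats.geninvgauss.rvs(lam, delta * c, scale=delta / c, size=m, random_state=rng)
print(S.mean(), target.mean())          # the two samples should have matching moments
print(S.var(), target.var())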
Definition 4.1.³ Let T be
T = (X_1 + V_1 + γ_1² Z, ..., X_n + V_n + γ_n² Z),    (4.1)
where γ_1, ..., γ_n are positive real parameters.
Since the X_j, V_j, j = 1, ..., n, and Z have i.d. distributions, so does T.
The characteristic function of T is
ψ_T(z) = ∏_{j=1}^n ψ_{X_j}(z_j) ψ_{V_j}(z_j) · ψ_Z(Σ_{j=1}^n γ_j² z_j).    (4.2)

² This means that if we choose both the independent and the common components of G, respectively
X = (X_1, ..., X_n) and Z = (γ_1 Z, ..., γ_n Z), with GIG distributions, the margins of G are no longer
guaranteed to be GIG distributed.
³ A natural definition for the mixing distribution would be the following. Let λ > 0, b ≥ 0, γ_i > 0, and let
δ_i and b_i := b/γ_i be nonnegative and not simultaneously zero. Let X_i be GIG(−λ, δ_i, b/γ_i) and Z ∼ Γ(λ, b²/2);
then X_i + γ_i² Z is GIG with parameters (λ, δ_i, b/γ_i). The vector G is infinitely divisible because
X_i and Z are; moreover G has GIG margins. The drawback of this construction is that in order to
change the dependence we have to move λ. This leads to some limitations for the model, because λ is a
marginal parameter.

We now define a multivariate distribution whose margins are GH distributed by
means of the previous mixing vector T.
Definition 4.2. Let X_i be GIG(−λ, δ_i, b/γ_i), V_i be Γ(λ − a, b²/(2γ_i²)) and Z ∼ Γ(a, b²/2). We
say that Y has a T-independent generalized hyperbolic distribution (shortly Y ∈ TIGH)
if Y ∈ IGnmv (i.e., A = Q diagonal) and the mixing distribution has the law of T.
The following proposition is a consequence of our construction.
Proposition 4.1. Y is infinitely divisible and has GH margins with parameters α_j, β_j, δ_j, λ,
where
β_j = µ_j,    √(α_j² − β_j²) = b/γ_j.    (4.3)
The following constraints on the parameters are assumed:
δ_j ≥ 0, |β_j| < α_j if λ > 0,
δ_j > 0, |β_j| < α_j if λ = 0.    (4.4)
Observe that we do not allow λ < 0 because V_j + Z, j = 1, ..., n, are gamma distributed, and their first parameter is λ. The components Y_i are univariate normal
mean-variance mixtures with GIG mixing variable.
The characteristic function of Y becomes
ψ_Y(z) = ∏_{j=1}^n [ ((α_j² − β_j²)/(α_j² − (β_j + iz_j)²))^{−λ/2} · K_λ(δ_j √(α_j² − (β_j + iz_j)²)) / K_λ(δ_j √(α_j² − β_j²))
· (1 − (−½ z_j² + iβ_j z_j)/((α_j² − β_j²)/2))^{−(λ−a)} ]
· (1 − Σ_{j=1}^n (−½ z_j² + iβ_j z_j) γ_j² / (b²/2))^{−a}.    (4.5)
From the expression of ψ_Y we infer that Y is the convolution of a vector with independent
GH margins, Y^X, and a multivariate α-VG random vector, Y^Z.
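For later use in calibration, (4.5) can be evaluated directly. The sketch below is a transcription of the formula as reconstructed above (principal branches of the complex square root and power are assumed to suffice for the arguments of interest), with arbitrary parameter values.

# Direct transcription of the characteristic function (4.5) as written above,
# using scipy's modified Bessel function of the third kind.  Illustrative only.
import numpy as np
from scipy.special import kv

def psi_tigh(z, alpha, beta, delta, lam, a, b):
    z = np.asarray(z, dtype=complex)
    gamma2 = b**2 / (alpha**2 - beta**2)            # gamma_j^2, from (4.3)
    w = 1j * beta * z - 0.5 * z**2                  # exponents of the Brownian parts
    arg0 = delta * np.sqrt(alpha**2 - beta**2)
    arg1 = delta * np.sqrt(alpha**2 - (beta + 1j * z)**2)
    gig_part = ((alpha**2 - beta**2) / (alpha**2 - (beta + 1j * z)**2)) ** (-lam / 2) \
               * kv(lam, arg1) / kv(lam, arg0)
    gamma_part = (1.0 - w / ((alpha**2 - beta**2) / 2.0)) ** (-(lam - a))
    common = (1.0 - np.sum(w * gamma2) / (b**2 / 2.0)) ** (-a)
    return np.prod(gig_part * gamma_part) * common

# example evaluation with arbitrary parameters
print(psi_tigh(z=[0.5, -0.3], alpha=np.array([10.0, 8.0]), beta=np.array([1.0, 0.5]),
               delta=np.array([0.5, 0.7]), lam=1.0, a=0.6, b=1.0))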
With this choice for the mixing distribution we can change the level of dependence
by moving a. Letting a → 0, for fixed marginal distributions, we get independence. On
the other hand, we are not able to capture perfect correlation for the subordinator only
through a: we should also let the X_j, j = 1, ..., n, degenerate. This limit case corresponds
to a gamma mixing distribution and generates a VG distribution. Therefore as subclasses
of this family we find both the α-VG distribution and the distribution with independent
GH margins.

4.2 Gnmv model

In this section we introduce a multivariate GH distribution by means of the generalized normal mean
variance mixture. Our aim is to provide a model which generalizes the one presented in
the previous section and also allows us to reach high correlation. The mixing distribution is L(T).
Definition 4.3. Y has a T-generalized hyperbolic distribution (shortly Y ∈ TGH) if
Y ∈ Gnmv, where G is given in (4.1) and X_i, V_i, i = 1, ..., n, and Z are as in Definition 4.2.
The vector Y has an infinitely divisible distribution; moreover, if Q = A, then Y = QY*,
where Y* ∈ TGH has independent components. Its characteristic function is
ψ_Y(z) = ∏_{j=1}^n [ ((α_j² − β_j²)/(α_j² − (β_j + i Σ_l z_l q_{lj})²))^{−λ/2} · K_λ(δ_j √(α_j² − (β_j + i Σ_l z_l q_{lj})²)) / K_λ(δ_j √(α_j² − β_j²))
· (1 − (−½ (Σ_l z_l q_{lj})² + iβ_j Σ_l z_l q_{lj})/((α_j² − β_j²)/2))^{−(λ−a)} ]
· (1 − Σ_{j=1}^n (−½ (Σ_l z_l q_{lj})² + iβ_j Σ_l z_l q_{lj}) γ_j² / (b²/2))^{−a}.    (4.6)

Observe that the TGH family, under the condition Q = A, contains the affine generalized hyperbolic family proposed and studied by Schmidt [15], when Z → 0. As we noticed
at the beginning of this section, our model does not capture the MGH with a common
GIG mixing distribution, since the common component of the subordinator is gamma
distributed. If the independent part degenerates we indeed find a VG distribution with
a common mixing law and correlated Brownian motions.

5 Multivariate GH Lévy motions

In this section we investigate the Lévy motion defined by the TGH distribution.
Definition 5.1. A Lévy process Y is said to be T-multivariate generalized hyperbolic
(Y ∈ T−GH) if L(Y(1)) = L(Y), where Y ∈ TGH.
By Proposition 3.1, Y ∈ T−GH is a subordinated Brownian motion with subordinator T defined by L(T(1)) = L(T). We focus on the case Y ∈ IGnmv. In general Y
has neither GH margins nor time-changed ones. On the other hand, if Y ∈ IGnmv it can
be interpreted as a time-changed Brownian motion and it has GH marginal processes.
For this reason we mainly investigate this case.

5.1 IGnmv processes

Definition 5.2. A Lévy process Y is said to be T-independent multivariate generalized
hyperbolic (Y ∈ TI−GH) if L(Y(1)) = L(Y), where Y ∈ TIGH.
The characteristic function of Y(1) has been given explicitly in the previous section,
see (4.5).
By Proposition 4.1 the process Y has GH(α_j, β_j, δ_j, λ) marginal distributions. It
is a time-changed Brownian motion and the change of time is a GIG process; in fact,
as discussed in general, L(T(1)) = L(T). Since the GIG distributions are not closed
under convolution, if T_i(1) is GIG the distribution of T_i(t) is no longer guaranteed to be
GIG. Therefore the law of the time change at time t is not known. Proposition 2.3
applies to Y: as a result Y is the sum of two independent multivariate processes. From
its characteristic function it can be argued that the addends of Y are a process with
independent GH margins and a time-changed Brownian motion:
Proposition 5.1. Let Y ∈ TI−GH. Then Y = Y^X + Y^Z, where Y^X has independent
GH margins and Y^Z, the VG component, is the α-VG process. It has both a common
and an idiosyncratic time change.
The dependence structure will be analyzed using linear correlation. It is possible,
as we will show in the application, that the data exhibit high correlation, so that we may
need to add correlation in the Brownian motions. In this case we could
consider the Gnmv mixture, but we would no longer have GH margins. For this reason
we end this section by proposing an intermediate model, derived from Proposition 5.1,
that allows us to add correlation while keeping GH margins.
Let Y ∈ TI−GH. It has GH margins and, applying Proposition 5.1, Y =_d Y^X +
Y^V + Y^{γZ}, where Y^X, Y^V and Y^{γZ} are independent time-changed Brownian motions
with subordinators respectively X(t), V(t) and γ²Z(t). The processes X(t), V(t)
and γ²Z(t) are defined by the vectors X, V, γ²Z in Definition 4.1. In particular,
γ²Z has comonotone marginal distributions Γ(a, b²/(2γ_i²)). Thus Y^{γZ} is
VG(µ_i, 1, a, b²/(2γ_i²)), i = 1, ..., n. This decomposition allows us to add correlation in the model leaving the
marginal processes fixed in law.
Definition 5.3. We name QT−GH the process Ỹ defined by
Ỹ = Y^X + Y^V + Ỹ^Z,
where Ỹ^Z = QY^Z(Z(t)) and Y^Z is a multivariate VG process with a common subordinator
Z(t) ∼ Γ(a, b²/2) and margins VG(µ_i, 1, a, b²/2).
Since Ỹ^Z ∈ Mnmv, i.e. it has a common subordinator, L(Ỹ_j^Z(t)) = L(Σ_i q_{ji} µ_j Z(t) +
W_j^Z((Σ_i q_{ji})² Z(t))), j = 1, ..., n, where W_j^Z is a standard Brownian motion for each j =
1, ..., n (we are investigating the marginal laws and not their dependence relationship).

Proposition 5.2. The process Ỹ has GH(α_j, β_j, δ_j, λ) margins.
Proof. Since the gamma distribution is closed under scale transformations, L((Σ_i q_{ji})² Z) =
Γ(a, b²/(2(Σ_i q_{ji})²)). Therefore under the condition
(Σ_i q_{ji})² = γ_j²,    (5.1)
we have L((Σ_i q_{ji})² Z) = Γ(a, b²/(2γ_j²)).
Since (Σ_i q_{ji}) µ_j = γ_j µ_j, then Ỹ^Z ∼ VG(µ_i, 1, a, b²/(2γ_i²)), i = 1, ..., n.
Since the processes Y^X, Y^V and Ỹ^Z are independent, the law of Ỹ is the convolution
of their laws, and its marginal distributions are the convolutions of the marginal ones
of Y^X, Y^V and Ỹ^Z. The jth margin of Ỹ is Ỹ_j = Y_j^X + Y_j^V + Ỹ_j^Z. Thus L(Ỹ_j) =
L(µ_j T_j + W(T_j)), where W ∼ N(0, 1); consequently it has a GH distribution.
The process Ỹ depends on the marginal parameters (β_j = µ_j, α_j, λ, j = 1, ..., n), on
the parameter a, involved in the correlation between the subordinators' margins, and
also on the correlation matrix Q. We underline that, since √(α_j² − β_j²) = b/γ_j is fixed once
the marginal distributions are, by moving b we change γ_j and the matrix Q. This fact
makes b relevant for correlation, as we will see in the sequel.

5.2 Gnmv processes

For completeness we also briefly describe a more general case, namely the Gnmv case,
even though it does not have GH margins. We only consider the subcase A = Q, since it
can be related to the case with GH margins through a linear transformation.
Let Y^IT ∈ TI−GH, and consider Y^T = QY^IT, where QQ^T is symmetric and positive
definite. The marginal processes in general are not time-changed Brownian motions,
and also under the condition of Proposition 3.2 their subordinators do not have a GIG
distribution, since the GIG family is not closed under convolution.
Since by Proposition 5.1 Y^IT = Y^R + Y^Z, where Y^R has independent components
and Y^Z is a VG with a common time change, we can decompose Y^T as Y^T = QY^IT =
QY^R + QY^Z. Therefore the distribution of Y^T is the convolution of a linear
transformation of a distribution with independent margins and a time-changed Brownian
motion. Even though QY^R does not have GH marginal distributions, we can provide its
characteristic function; at time one it is given by equation (4.2).
If both V_j, j = 1, ..., n, and Z degenerate, we get Y^T = QY^X, which is a linear
transformation of a vector with independent GH margins.

6 Dependence

We first analyze the linear dependence of the IGnmv model, then that of the general
case. In the asymmetric case, linear dependence allows us to fully characterize the
parameters of the model given the marginal ones. It is not exhaustive in describing the
dependence structure of Y: we will discuss this point in the second part of this section
for Y ∈ TI−GH. In any case, it always allows us to fully characterize the parameters of
the subordinator G, given the marginal ones.

6.1 Linear dependence

Let Y ∈ TI−GH. We start from the correlation matrix ρ_T = (ρ_T(l,j)) of the subordinator.
Since
Cov(T_l, T_j) = γ_l² γ_j² V(Z)  and  V(T_j) = V(X_j) + V(V_j) + γ_j⁴ V(Z),    (6.1)
we have
ρ_T(l,j) = γ_l² γ_j² V(Z) / √(V(T_l) V(T_j)) = γ_l² γ_j² 4a / (b⁴ √(V(T_l) V(T_j))),
where the expressions for V(T_j) are given in (B.3) in the Appendix. Since L(T_j) =
GIG(λ, δ_j, b/γ_j), given the marginal parameters, the joint distribution of G is uniquely
determined by the parameter a; in turn, a is uniquely determined by ρ_T.
Assume now that the marginal parameters are fixed and such that the marginal
distributions do not degenerate. Since the margins are independent iff a = 0 (iff ρ_T = 0),
imposing a = 0 we can capture independence whatever the marginal distributions are.
The same is not true for perfect correlation: a necessary condition for
ρ_T = 1 is that X_j degenerates for each j. In this case the subordinator degenerates to a
univariate gamma random variable and we get the VG model.
Since Y is a subordinated process, the variance of Y_j at time one is:
V[Y_j] = E[V[Y_j | T_j]] + V[E[Y_j | T_j]] = E[T_j] + β_j² V[T_j].
The lj-covariance of the process at time one is:
cov[Y_l, Y_j] = β_l β_j cov[T_l, T_j] = β_l β_j γ_l² γ_j² V(Z).
Therefore the linear correlation coefficients are
ρ_Y(l,j) = β_l β_j γ_l² γ_j² V(Z) / √(V(Y_l) V(Y_j)) = β_l β_j γ_l² γ_j² 4a / (b⁴ √(V(Y_l) V(Y_j))),    (6.2)
where the expressions for the marginal variances (B.8) are given in the Appendix.
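The quantities entering (6.2) are fully explicit. The sketch below computes ρ_Y(l, j) from the GIG mean and variance (B.2)-(B.3); the marginal parameters are borrowed from the hyperbolic example of Section 7.2 (λ = 1), while the value of a is arbitrary.

# Theoretical correlation of the TI-GH model from (6.1)-(6.2), using the GIG
# mean (B.2) and variance (B.3) for T_j ~ GIG(lambda, delta_j, b/gamma_j).
import numpy as np
from scipy.special import kv

def gig_mean_var(lam, a_, b_):
    # GIG(lam, a_, b_) mean and variance as in (B.2)-(B.3)
    w = a_ * b_
    mean = (a_ / b_) * kv(lam + 1, w) / kv(lam, w)
    var = (a_ / b_) ** 2 * (kv(lam + 2, w) / kv(lam, w) - (kv(lam + 1, w) / kv(lam, w)) ** 2)
    return mean, var

lam, a, b = 1.0, 0.8, 1.0                     # a is arbitrary here
beta = np.array([1.355, 4.7184])              # margins taken from Section 7.2 (BASF, BMW)
alpha = np.array([108.82, 89.72])
delta = np.array([0.0014, 0.0009])
gam = b / np.sqrt(alpha**2 - beta**2)         # from (4.3)/(7.1)

ET, VT = zip(*[gig_mean_var(lam, d, b / g) for d, g in zip(delta, gam)])
ET, VT = np.array(ET), np.array(VT)
VY = ET + beta**2 * VT                        # V[Y_j] = E[T_j] + beta_j^2 V[T_j]
cov = beta[0] * beta[1] * gam[0]**2 * gam[1]**2 * 4 * a / b**4
print(cov / np.sqrt(VY[0] * VY[1]))           # rho_Y(1, 2) as in (6.2)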
Observe that the linear correlation coefficient is zero if β is zero, i.e. in the symmetric
case, for every value of a. Therefore in the symmetric case the linear correlation coefficient
does not uniquely determine the joint distribution of Y for given values of the marginal
parameters. In the asymmetric case, however, which is more interesting for financial
applications, it does. In the latter case, in order to calibrate the parameter a we can use
an estimate of the correlation coefficient. Since the subcase with a common subordinator
leads to the VG process, to reach high correlation leaving the GH marginal distributions
fixed we also investigate the QT−GH correlation coefficients.
Let Ỹ ∈ QT−GH; its linear correlation coefficients are
ρ_Ỹ(i,j) = (Σ_h q_{ih} q_{jh} V(Y_h^Z) + Σ_{k≠l} q_{il} q_{jk} cov(Y_j^Z, Y_i^Z)) / √(V(Y_i) V(Y_j)),
where Y_i^Z ∼ VG(µ_i, 1, a, b²/2) with a gamma subordinator whose parameters are (a, b²/2),
V[Y_i^Z] = 2a/b² + µ_i² 4a/b⁴,   i = 1, ..., n,    (6.3)
and
cov[Y_i^Z, Y_j^Z] = µ_i (Σ_l q_{il}) µ_j (Σ_l q_{jl}) 4a/b⁴ = µ_i γ_i µ_j γ_j 4a/b⁴.    (6.4)

In Section 7.2 we will provide an example in which the TI−GH model allows only low
correlation, while the QT−GH process Ỹ allows us to capture both independence and
very high correlation.

7 A financial application: the hyperbolic case

Define a price process to be the exponential of the process Y:
S(t) = S(0) exp(Y(t)),   t ≥ 0.
Let the process Y represent the stocks' returns under the historical measure⁴.
In this section we first discuss a calibration procedure that can be developed for the
TI-GH and QT-GH models. We then provide a simple numerical example in which the
marginal parameters are calibrated on stock market data, and the remaining parameters
are selected in order to discuss the dependence flexibility of the model.
⁴ In this paper we only work with the historical measure; we do not discuss any choice of a risk-neutral
equivalent measure.

7.1 Calibration procedure

The parameters involved in the TI-GH model are:
• the marginal parameters of the returns: α_j, β_j, δ_j, λ;
• the parameters of the subordinator, involved in the dependence structure of the
model: γ_j, a, b.
The relationship between the marginal parameters and the dependence parameters is:
b/γ_j = √(α_j² − β_j²).    (7.1)
The calibration procedure we apply is divided into two steps: first calibrate the
marginal parameters on the returns; then calibrate the remaining ones through correlation.
Once the marginal parameters are fixed we only have to find the common parameters
a, b, since the γ_j are determined by (7.1). In order to calibrate a we look for the
value which minimizes the distance between the historical and the theoretical correlation. The
correlation coefficients depend on b only through the ratios b/γ_j: therefore for this kind of
analysis we can fix b = 1.
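A minimal sketch of this second step: with b = 1 and the marginal parameters fixed, a can be found by a one-dimensional search. The helper theoretical_rho below is a placeholder assumed to return the matrix of ρ_Y(l, j) in (6.2) as a function of a (for instance along the lines of the snippet in Section 6.1).

# Sketch of the second calibration step for the TI-GH model: given the
# marginal parameters (and b = 1), choose a in [0, lambda] minimizing the
# distance between sample and theoretical correlations.  theoretical_rho(a)
# is a hypothetical placeholder returning the matrix of rho_Y(l, j) from (6.2).
import numpy as np
from scipy.optimize import minimize_scalar

def calibrate_a(sample_corr, theoretical_rho, lam):
    # squared Frobenius distance between correlation matrices, as a function of a
    def loss(a):
        diff = theoretical_rho(a) - sample_corr
        return np.sum(diff ** 2)
    res = minimize_scalar(loss, bounds=(0.0, lam), method="bounded")
    return res.x

# usage: a_hat = calibrate_a(sample_corr, theoretical_rho, lam=1.0)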
The parameters involved in the QT-GH model are:
• the marginal parameters of the returns: α_j, β_j, δ_j, λ;
• the parameters of the subordinator, involved in the dependence structure of the
model: γ_j, a, b;
• the correlation matrix Q.
The relationships between the marginal parameters and the dependence ones are (7.1)
and (5.1), namely (Σ_i q_{ji})² = γ_j².
In this case we use the correlation matrix to calibrate the parameters a, b, Q; the γ_j
are a consequence of (7.1). The usefulness of b in this generalization is clear from (5.1):
since the marginal parameters are fixed, if b changes, then the γ_j, and consequently
the correlation matrix Q, change as well. Therefore we can look for the parameters a, b and the
entries of Q that minimize the distance between the sample and the theoretical correlation
matrix under the constraints (7.1) and (5.1).

7.2 Dependence span

In this section we investigate an application of the models TI−GH and QT−GH
discussed above. Our aim is to discuss the flexibility of the model with respect to
the dependence structure, or its dependence span. For this reason we do not use the
historical correlation to minimize the distance with the theoretical one, but we look for
the maximal correlation allowed by the model. In particular we choose a set of data
for which the marginal parameters allow only very low dependence in the TI-GH model, and
high dependence in the QT-GH model. The example shows that, once the marginal
parameters are fixed, it is very easy to consider the simpler model first and, if needed,
the more general one.
Step 1: marginal parameters.
The marginal parameters can be calibrated using the same procedure as in the univariate case, stock by stock. Since the numerical analysis is not the main topic of the
paper, we consider the marginal parameters calibrated in Eberlein and Keller [9] for the
hyperbolic distribution (GH with λ = 1). We consider three firms belonging to their
sample, namely BASF, BMW, Daimler Benz.
The estimated parameters for BASF, BMW and Daimler Benz are given in the
following table:
         α_j       β_j       δ_j
BASF     108.82    1.355     0.0014
BMW       89.72    4.7184    0.0009
DA-BE     88.19    4.1713    0.0019
Step 2: correlation.
Assume first the TI−GH model.
As explained above we can choose b = 1; the γ_j, j = 1, ..., n, follow from (7.1). For our
three names we get respectively γ_1 = 0.0919, γ_2 = 0.1116 and γ_3 = 0.1135. The remaining
parameter to be calibrated is a ∈ [0, 1]. The maximal correlation allowed by the model
corresponds to a = 1, as can be easily argued from the constraints on the parameters.
The theoretical correlation matrix for a = 1 follows:
ρ        BASF      BMW
BMW      0.0013
DA-BE    0.0012    0.0049
It is evident from this table that, given the estimated marginal parameters, the model
allows only for very low correlations. Economic intuition suggests that their correlation
could be higher. Therefore we take the marginal parameters as fixed and calibrate the
QT-GH model for correlation.
We show that the QT-GH process is able to capture also high correlation. Condition
(5.1) provides restrictions on the choice of the matrix Q. The restrictions can be relaxed if
we have the possibility to change the values of γ_j. This is possible because the marginal
parameters and the correlations, as observed above, depend on γ_j only through the
ratios b/γ_j. We are therefore allowed to change γ_j and b simultaneously so as to keep b/γ_j
unchanged while increasing γ_j.
We provide an example of a choice of (a, b, Q) that gives rise to significant correlations,
with the same marginal parameters as above.


Let a = 1 (maximal), b = 10 and
Q = ( 0.01063  0.01063  0.07063
      0.01720  0.01720  0.07720
      0.01784  0.01784  0.07784 ).
With this choice for the dependence parameters we get the following correlations:
ρ_Ỹ      BASF     BMW
BMW      0.747
DA-BE    0.735    0.684
Therefore the QT−GH model allows us to capture independence but also to reach high
correlation, even in a case where the TI−GH model does not.

A Appendix

Derivation of equation (4.5).

ψ_T(w) = ∏_j ψ_{X_j}(w_j) ψ_{V_j}(w_j) · ψ_Z(Σ_j γ_j² w_j)
= ∏_j [ (1/K_{λ̃_j}(δ_j b_j)) (1 − 2w_j/b_j²)^{λ/2} K_{λ̃_j}(δ_j b_j √(1 − 2w_j b_j^{−2})) ]
· ∏_j (1 − w_j/(b_j²/2))^{−(λ−a)} · (1 − Σ_{j=1}^n w_j γ_j²/(b²/2))^{−a},    (A.1)
where b_j = b/γ_j and λ̃_j = −λ. Therefore
Ψ_G(w) = Σ_j [ log( (1/K_{λ̃_j}(δ_j b_j)) (1 − 2w_j/b_j²)^{λ/2} K_{λ̃_j}(δ_j b_j √(1 − 2w_j b_j^{−2})) )
+ log( (1 − w_j/(b_j²/2))^{−(λ−a)} ) ] + log( (1 − Σ_{j=1}^n w_j γ_j²/(b²/2))^{−a} ).    (A.2)
Since log(ψ_{W_j}(z_j)) = −½ z_j² + iµ_j z_j, we have
Ψ_G(log ψ_W(z)) = Σ_j [ log( (1/K_{λ̃_j}(δ_j b_j)) (1 + (z_j² − 2iµ_j z_j)/b_j²)^{λ/2} K_{λ̃_j}(δ_j b_j √(1 + (z_j² − 2iµ_j z_j)/b_j²)) )
+ log( (1 − (−½ z_j² + iµ_j z_j)/(b_j²/2))^{−(λ−a)} ) ] + log( (1 − Σ_{j=1}^n (−½ z_j² + iµ_j z_j) γ_j²/(b²/2))^{−a} ).    (A.3)
Thus
ψ_Y(z) = ∏_j [ (1/K_λ(δ_j b_j)) (1 + (z_j² − 2iµ_j z_j)/b_j²)^{λ/2} K_λ(δ_j b_j √(1 + (z_j² − 2iµ_j z_j)/b_j²))
· (1 − (−½ z_j² + iµ_j z_j)/(b_j²/2))^{−(λ−a)} ] · (1 − Σ_{j=1}^n (−½ z_j² + iµ_j z_j) γ_j²/(b²/2))^{−a}.    (A.4)
Assume b_j = √(α_j² − β_j²) and µ_j = β_j for all j; we then obtain
ψ_Y(z) = ∏_j [ (1 + (z_j² − 2iβ_j z_j)/(α_j² − β_j²))^{λ/2} · K_λ(δ_j √(α_j² − (β_j + iz_j)²)) / K_λ(δ_j √(α_j² − β_j²))
· (1 − (−½ z_j² + iβ_j z_j)/((α_j² − β_j²)/2))^{−(λ−a)} ] · (1 − Σ_{j=1}^n (−½ z_j² + iβ_j z_j) γ_j²/(b²/2))^{−a},    (A.5)
which is (4.5).

B Appendix

B.1 Generalized Inverse Gaussian distribution

Let λ ∈ R and a, b ∈ R_+, not both zero. A generalized inverse Gaussian distribution is a
three-parameter distribution defined on the positive half line (shortly GIG(λ, a, b)). It
is an infinitely divisible distribution and it generates a subordinator. Its characteristic
function is
ψ_GIG(u) = (1/K_λ(ab)) (1 − 2iu/b²)^{−λ/2} K_λ(ab √(1 − 2iu b^{−2})),    (B.1)
where K_λ(x) denotes the modified Bessel function of the third kind with index λ.
For a complete description see for example Schoutens [16]. We recall here the GIG
mean,
(a/b) K_{λ+1}(ab) / K_λ(ab),    (B.2)
and variance
a² b^{−2} K_λ^{−2}(ab) ( K_{λ+2}(ab) K_λ(ab) − K_{λ+1}²(ab) ).    (B.3)

B.2 Generalized Hyperbolic distribution

Let λ ∈ R, δ ∈ R_+, α ∈ R_+, β ∈ R with
δ ≥ 0, |β| < α if λ > 0,
δ > 0, |β| < α if λ = 0,
δ > 0, |β| ≤ α if λ < 0.    (B.4)
The generalized hyperbolic distribution (shortly GH) was introduced in the literature
by Barndorff-Nielsen [1] through its characteristic function:
ψ_GH(u) = ((α² − β²)/(α² − (β + iu)²))^{λ/2} K_λ(δ √(α² − (β + iu)²)) / K_λ(δ √(α² − β²)).    (B.5)
The GH distribution can be defined as a normal mean variance mixture with mixing
distribution GIG. If G ∼ GIG(λ, a, b) (a positive distribution), W is standard normal and
they are independent, then √G W + µG has a GH distribution with parameters α, β, δ, λ,
where:
a = δ,   µ = β,   b = √(α² − β²).    (B.6)
The GH distribution mean is
βδ (α² − β²)^{−1/2} K_{λ+1}(δ√(α² − β²)) K_λ^{−1}(δ√(α² − β²));    (B.7)
its variance is
δ² ( K_{λ+1}(δ√(α² − β²)) / (δ√(α² − β²) K_λ(δ√(α² − β²)))
+ (β²/(α² − β²)) ( K_{λ+2}(δ√(α² − β²))/K_λ(δ√(α² − β²)) − K_{λ+1}²(δ√(α² − β²))/K_λ²(δ√(α² − β²)) ) ).    (B.8)