
ACTA UNIVERSITATIS APULENSIS

No 14/2007

A MIXED MONTE CARLO AND QUASI-MONTE CARLO
SEQUENCE FOR MULTIDIMENSIONAL INTEGRAL
ESTIMATION
Alin V. Roşca
Abstract. In this paper, we propose a method for estimating an s-dimensional integral I. We define a new hybrid sequence that we call the H-mixed sequence. We obtain a probabilistic bound for the H-discrepancy of this sequence. We define a new estimator for a multidimensional integral using the H-mixed sequence. We prove a central limit theorem for this estimator. We show that by using our estimator, we obtain asymptotically a smaller variance than by using the crude Monte Carlo method. We also compare our method with the Monte Carlo and Quasi-Monte Carlo methods on a numerical example.

2000 Mathematics Subject Classification: 68U20, 65C05, 11K36, 11K45, 11K38.

Keywords and phrases: Monte Carlo integration, Quasi-Monte Carlo integration, H-discrepancy, H-distributed low-discrepancy sequences, H-mixed sequences.
1. Introduction

We consider the problem of estimating integrals of the form

I = \int_{[0,1]^s} f(x)\, dH(x),    (1)

where f : [0,1]^s \to \mathbb{R} is the function we want to integrate and H : \mathbb{R}^s \to [0,1] is a distribution function on [0,1]^s. In the continuous case, the integral I can be rewritten as

I = \int_{[0,1]^s} f(x) h(x)\, dx,

where h is the density function corresponding to the distribution function H.
In the Monte Carlo (MC) method (see [11]), the integral I is estimated by sums of the form

\hat{I}_N = \frac{1}{N} \sum_{k=1}^{N} f(x_k),

where x_k = (x_k^{(1)}, \ldots, x_k^{(s)}), k \ge 1, are independent identically distributed random points on [0,1]^s, with the common density function h.
In the Quasi-Monte Carlo (QMC) method (see [11]), the integral I is approximated by sums of the form \frac{1}{N} \sum_{k=1}^{N} f(x_k), where (x_k)_{k \ge 1} is an H-distributed low-discrepancy sequence on [0,1]^s.
In [7], [8] and [9], Okten considered integrals of the form

I_1 = \int_{[0,1]^s} f(x)\, dx,

and proposed a method for estimating the integral I_1, using a so-called mixed sequence on [0,1]^s, which combines pseudorandom and low-discrepancy vectors. Each element of the mixed sequence (x_k)_{k \ge 1} is obtained by concatenating two vectors q_k and X_k, i.e., x_k = (q_k, X_k), k \ge 1, where (q_k)_{k \ge 1} is a d-dimensional low-discrepancy sequence and X_k, k \ge 1, are independent uniformly distributed random vectors on [0,1]^{s-d}.
In this paper, we extend the results obtained by Okten to the case when the integral is of the form (1). First, we recall some basic notions and definitions that will be used in this paper. Next, we define a new hybrid sequence that we call the H-mixed sequence and obtain probabilistic bounds for the H-discrepancy of this sequence. We then define a new estimator for the integral I, using our H-mixed sequence, and prove a central limit theorem for this estimator. In the last section, we consider a numerical example, in which we compare our estimator with the ones obtained by using the MC and QMC methods.
2. A probabilistic bound for the H-discrepancy of the H-mixed sequence

We start this section by recalling some useful notions (see [10]).

Definition 1. (H-discrepancy) Consider an s-dimensional continuous distribution on [0,1]^s, with distribution function H. Let \lambda_H be the probability measure induced by H. Let P = (x_k)_{k \ge 1} be a sequence of points in [0,1]^s. The H-discrepancy of the first N terms of the sequence P is defined as

D_{N,H}(x_1, \ldots, x_N) = \sup_{J \subseteq [0,1]^s} \left| \frac{1}{N} A_N(J, P) - \lambda_H(J) \right|,

where the supremum is calculated over all subintervals J = \prod_{i=1}^{s} [a_i, b_i] \subseteq [0,1]^s, and A_N(J, P) counts the number of elements of the sequence P falling into the interval J, i.e.,

A_N(J, P) = \sum_{k=1}^{N} 1_J(x_k),

where 1_J is the characteristic function of J. The sequence P is called H-distributed if D_{N,H}(x_1, \ldots, x_N) \to 0 as N \to \infty. The H-distributed sequence P is said to be a low-discrepancy sequence if

D_{N,H}(x_1, \ldots, x_N) = O\left( (\log N)^s / N \right)

for all N \ge 2.
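
As an illustration of Definition 1 (added here, not part of the original paper), the following Python sketch gives a crude lower bound on the H-discrepancy by replacing the supremum over all subintervals J with a maximum over randomly drawn boxes. It assumes a distribution with independent, identical marginals, so that \lambda_H(J) factors as a product of marginal increments; the function name and interface are illustrative only.

    import numpy as np

    def h_discrepancy_lower_bound(points, H_marginal, n_boxes=20000, seed=0):
        """Crude estimate of D_{N,H}: maximize |A_N(J,P)/N - lambda_H(J)| over random boxes.

        points:     (N, s) array, the first N terms of the sequence P in [0,1]^s
        H_marginal: vectorized marginal CDF H_l (assumed identical for all coordinates)
        """
        rng = np.random.default_rng(seed)
        N, s = points.shape
        best = 0.0
        for _ in range(n_boxes):
            a, b = np.sort(rng.random((2, s)), axis=0)                    # random box J = prod [a_i, b_i]
            emp = np.all((points >= a) & (points <= b), axis=1).mean()    # A_N(J, P) / N
            lam = np.prod(H_marginal(b) - H_marginal(a))                  # lambda_H(J)
            best = max(best, abs(emp - lam))
        return best   # a lower bound on the true supremum
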
Definition 2. Consider an s-dimensional continuous distribution on [0,1]^s, with density function h and distribution function H. For a point u = (u^{(1)}, \ldots, u^{(s)}) \in [0,1]^s, the marginal density functions h_l, l = 1, \ldots, s, are defined by

h_l(u^{(l)}) = \int_{[0,1]^{s-1}} h(t^{(1)}, \ldots, t^{(l-1)}, u^{(l)}, t^{(l+1)}, \ldots, t^{(s)})\, dt^{(1)} \ldots dt^{(l-1)} dt^{(l+1)} \ldots dt^{(s)},

and the marginal distribution functions H_l, l = 1, \ldots, s, are defined by

H_l(u^{(l)}) = \int_0^{u^{(l)}} h_l(t)\, dt.

In this paper, we consider s-dimensional continuous distributions on [0,1]^s, with independent marginals, i.e.,

H(u) = \prod_{l=1}^{s} H_l(u^{(l)}), \quad \forall u = (u^{(1)}, \ldots, u^{(s)}) \in [0,1]^s.

This can be expressed, using the marginal density functions, as follows:

h(u) = \prod_{l=1}^{s} h_l(u^{(l)}), \quad \forall u = (u^{(1)}, \ldots, u^{(s)}) \in [0,1]^s.


Consider an integer 0 < d < s. Using the marginal density functions, we construct the following density functions on [0,1]^d and [0,1]^{s-d}, respectively:

h_q(u) = \prod_{l=1}^{d} h_l(u^{(l)}), \quad \forall u = (u^{(1)}, \ldots, u^{(d)}) \in [0,1]^d,

and

h_X(u) = \prod_{l=d+1}^{s} h_l(u^{(l)}), \quad \forall u = (u^{(d+1)}, \ldots, u^{(s)}) \in [0,1]^{s-d}.

The corresponding distribution functions are

H_q(u) = \int_0^{u^{(1)}} \ldots \int_0^{u^{(d)}} h_q(t^{(1)}, \ldots, t^{(d)})\, dt^{(1)} \ldots dt^{(d)},    (2)

where u = (u^{(1)}, \ldots, u^{(d)}) \in [0,1]^d, and

H_X(u) = \int_0^{u^{(d+1)}} \ldots \int_0^{u^{(s)}} h_X(t^{(d+1)}, \ldots, t^{(s)})\, dt^{(d+1)} \ldots dt^{(s)},    (3)

where u = (u^{(d+1)}, \ldots, u^{(s)}) \in [0,1]^{s-d}.
In the following definition, we introduce the new notion of an H-mixed sequence.

Definition 3. (H-mixed sequence) Consider an s-dimensional continuous distribution on [0,1]^s, with distribution function H and independent marginals H_l, l = 1, \ldots, s. Let H_q and H_X be the distribution functions defined in (2) and (3), respectively. Let (q_k)_{k \ge 1} be an H_q-distributed low-discrepancy sequence on [0,1]^d, with q_k = (q_k^{(1)}, \ldots, q_k^{(d)}), and let X_k, k \ge 1, be independent and identically distributed random vectors on [0,1]^{s-d}, with distribution function H_X, where X_k = (X_k^{(d+1)}, \ldots, X_k^{(s)}). A sequence (m_k)_{k \ge 1}, with the general term given by

m_k = (q_k, X_k), \quad k \ge 1,    (4)

is called an H-mixed sequence on [0,1]^s.
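
The following is a minimal sketch (not from the paper) of how an H-mixed sequence can be generated in practice, assuming independent marginals with known inverses H_l^{-1}: the deterministic part q_k is obtained by mapping a low-discrepancy sequence through H_1^{-1}, \ldots, H_d^{-1}, and the random part X_k by the inversion method applied to pseudorandom uniforms. SciPy's Halton generator is used here only as one convenient choice of low-discrepancy sequence.

    import numpy as np
    from scipy.stats import qmc

    def h_mixed_sequence(N, s, d, inv_marginals, seed=None):
        """Return an (N, s) array whose k-th row is m_k = (q_k, X_k) of Definition 3.

        inv_marginals: list of s vectorized inverse CDFs H_1^{-1}, ..., H_s^{-1}
        """
        rng = np.random.default_rng(seed)
        u_det = qmc.Halton(d=d, scramble=False).random(N)        # low-discrepancy part, shape (N, d)
        u_rnd = rng.random((N, s - d))                            # pseudorandom part, shape (N, s-d)
        q = np.column_stack([inv_marginals[l](u_det[:, l]) for l in range(d)])
        X = np.column_stack([inv_marginals[d + j](u_rnd[:, j]) for j in range(s - d)])
        return np.hstack([q, X])

    # Example with the marginals H_l(x) = x^2 used in Section 4, so H_l^{-1}(u) = sqrt(u):
    # m = h_mixed_sequence(N=1000, s=5, d=2, inv_marginals=[np.sqrt] * 5)
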
Remark 4. For an interval J = \prod_{l=1}^{s} [a_l, b_l] \subseteq [0,1]^s, we define the subintervals J' = \prod_{l=1}^{d} [a_l, b_l] \subseteq [0,1]^d and J'' = \prod_{l=d+1}^{s} [a_l, b_l] \subseteq [0,1]^{s-d} (i.e., J = J' \times J'').
Let (m_k)_{k \ge 1} be an H-mixed sequence on [0,1]^s, with the general term given by (4). Based on Definitions 1 and 3, the H-discrepancy of the set of points (m_1, \ldots, m_N) can be expressed as

D_{N,H}(m_1, \ldots, m_N) = \sup_{J \subseteq [0,1]^s} \left| \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k) - \lambda_H(J) \right|
 = \sup_{J \subseteq [0,1]^s} \left| \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k) - \int_J dH(u) \right|
 = \sup_{J \subseteq [0,1]^s} \left| \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k) - \prod_{l=1}^{s} [H_l(b_l) - H_l(a_l)] \right|,

and the H_q-discrepancy of the set of points (q_1, \ldots, q_N) is given by

D_{N,H_q}(q_1, \ldots, q_N) = \sup_{J' \subseteq [0,1]^d} \left| \frac{1}{N} \sum_{k=1}^{N} 1_{J'}(q_k) - \lambda_{H_q}(J') \right|
 = \sup_{J' \subseteq [0,1]^d} \left| \frac{1}{N} \sum_{k=1}^{N} 1_{J'}(q_k) - \prod_{l=1}^{d} [H_l(b_l) - H_l(a_l)] \right|.

We consider the random variable 1_J(m_k), which takes the two values 1 and 0, with probabilities deduced as follows:

P(1_J(m_k) = 1) = 1_{J'}(q_k) P(X_k \in J'')
 = 1_{J'}(q_k) P(X_k^{(d+1)} \in [a_{d+1}, b_{d+1}], \ldots, X_k^{(s)} \in [a_s, b_s])
 = 1_{J'}(q_k) P(X_k^{(d+1)} \in [a_{d+1}, b_{d+1}]) \cdot \ldots \cdot P(X_k^{(s)} \in [a_s, b_s])
 = 1_{J'}(q_k) \prod_{l=d+1}^{s} [H_l(b_l) - H_l(a_l)]
 = 1_{J'}(q_k)\, p,

where the product \prod_{l=d+1}^{s} [H_l(b_l) - H_l(a_l)] is denoted by p. Thus, P(1_J(m_k) = 0) = 1 - 1_{J'}(q_k)\, p. Hence, the distribution of the random variable 1_J(m_k) is

1_J(m_k) : \begin{pmatrix} 1 & 0 \\ 1_{J'}(q_k) \cdot p & 1 - 1_{J'}(q_k) \cdot p \end{pmatrix}, \quad k \ge 1.    (5)
Lemma 5. The random variable 1_J(m_k) has the expectation and the variance given by

E(1_J(m_k)) = 1_{J'}(q_k)\, p,    (6)

Var(1_J(m_k)) = 1_{J'}(q_k)\, p(1 - p).    (7)

Furthermore,

Cov(1_J(m_i), 1_J(m_j)) = 0 \quad \text{for } i, j \ge 1,\ i \ne j.    (8)

Proof. We have

E(1_J(m_k)) = 1 \cdot 1_{J'}(q_k)\, p + 0 \cdot (1 - 1_{J'}(q_k) \cdot p) = 1_{J'}(q_k)\, p.

For the variance, we obtain

Var(1_J(m_k)) = E(1_J(m_k)^2) - (E(1_J(m_k)))^2 = 1_{J'}(q_k)\, p - 1_{J'}(q_k)\, p^2 = 1_{J'}(q_k)\, p(1 - p).

The distribution of the product 1_J(m_i) \cdot 1_J(m_j) is

1_J(m_i) \cdot 1_J(m_j) : \begin{pmatrix} 1 & 0 \\ 1_{J'}(q_i) 1_{J'}(q_j) \cdot p^2 & 1 - 1_{J'}(q_i) 1_{J'}(q_j) \cdot p^2 \end{pmatrix}.

Hence, we get

E(1_J(m_i) \cdot 1_J(m_j)) = p^2 \cdot 1_{J'}(q_i) 1_{J'}(q_j).

From this, the covariance is

Cov(1_J(m_i), 1_J(m_j)) = p^2 \cdot 1_{J'}(q_i) 1_{J'}(q_j) - 1_{J'}(q_i)\, p \cdot 1_{J'}(q_j)\, p = 0.
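
The following small simulation (an illustration added here, not part of the author's argument) checks relations (6) and (7) for a concrete configuration with s = 2, d = 1, marginals H_l(x) = x^2 and a fixed box J = J' \times J''; the point q_k and the box are chosen arbitrarily. Relation (8) holds because the vectors X_k, k \ge 1, are independent.

    import numpy as np

    rng = np.random.default_rng(0)
    a1, b1 = 0.1, 0.7                    # J'  = [a1, b1], deterministic coordinate
    a2, b2 = 0.2, 0.9                    # J'' = [a2, b2], random coordinate
    H = lambda x: x ** 2                 # common marginal distribution function
    qk = 0.5                             # one point of the H_q-distributed part
    p = H(b2) - H(a2)                    # p = prod_{l=d+1}^{s} [H_l(b_l) - H_l(a_l)]

    Xk = np.sqrt(rng.random(200_000))    # inversion: H^{-1}(u) = sqrt(u), so Xk ~ H_X
    ind = ((a1 <= qk <= b1) & (a2 <= Xk) & (Xk <= b2)).astype(float)   # samples of 1_J(m_k)

    print(ind.mean(), (a1 <= qk <= b1) * p)            # empirical mean vs. E(1_J(m_k)) in (6)
    print(ind.var(),  (a1 <= qk <= b1) * p * (1 - p))  # empirical variance vs. Var(1_J(m_k)) in (7)
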
Corollary 6. Let 1_J(m_k) be the random variable defined in (5). Then, we have

E\left( \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k) \right) = \frac{p}{N} \sum_{k=1}^{N} 1_{J'}(q_k),    (9)

Var\left( \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k) \right) = \frac{p(1-p)}{N^2} \sum_{k=1}^{N} 1_{J'}(q_k),    (10)

Var\left( \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k) - \prod_{l=1}^{s} [H_l(b_l) - H_l(a_l)] \right) \le \frac{1}{4N} \left[ D_{N,H_q}(q_k) + 1 \right].    (11)

Proof. The first two relations follow immediately from Lemma 5. For the last relation, we have

Var\left( \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k) - \prod_{l=1}^{s} [H_l(b_l) - H_l(a_l)] \right) = \frac{p(1-p)}{N^2} \sum_{k=1}^{N} 1_{J'}(q_k)
 = \frac{p(1-p)}{N} \sum_{k=1}^{N} \frac{1_{J'}(q_k)}{N}
 \le \frac{1}{4N} \left[ D_{N,H_q}(q_k) + \prod_{l=1}^{d} [H_l(b_l) - H_l(a_l)] \right]
 \le \frac{1}{4N} \left[ D_{N,H_q}(q_k) + 1 \right],

where we used that p(1-p) \le 1/4 and that, by the definition of the H_q-discrepancy, \frac{1}{N} \sum_{k=1}^{N} 1_{J'}(q_k) \le D_{N,H_q}(q_k) + \lambda_{H_q}(J') \le D_{N,H_q}(q_k) + 1.

We introduce the following notations:

Z = \frac{1}{N} \sum_{k=1}^{N} 1_J(m_k),    (12)

and

u(J) = Z - \lambda_H(J),    (13)

where

\lambda_H(J) = \prod_{l=1}^{s} [H_l(b_l) - H_l(a_l)].    (14)

We need the following lemma.
Lemma 7. If P(|u(J)| < a) \ge p, then P(\sup_{J \subseteq [0,1]^s} |u(J)| < a) \ge p.

Proof. We have |u(J)| < a, \forall J \subseteq [0,1]^s. Hence, we get

\sup_{J \subseteq [0,1]^s} |u(J)| < a.

For the corresponding events, we have the implication

\{|u(J)| < a\} \subseteq \{\sup_{J \subseteq [0,1]^s} |u(J)| < a\}.

Finally, using a well-known inequality from probability theory (the monotonicity of probability with respect to the inclusion of events), we obtain

p \le P(|u(J)| < a) \le P(\sup_{J \subseteq [0,1]^s} |u(J)| < a).

Thus, we get the desired inequality

P(\sup_{J \subseteq [0,1]^s} |u(J)| < a) \ge p.
Our main result, which gives a probabilistic error bound for the H-mixed
sequences, is presented in the next theorem.
Theorem 8. If (m_k)_{k \ge 1} = (q_k, X_k)_{k \ge 1} is an H-mixed sequence, then \forall \varepsilon > 0 we have

P(D_{N,H}(m_k) \le \varepsilon + D_{N,H_q}(q_k)) \ge 1 - \frac{1}{\varepsilon^2} \frac{1}{4N} \left[ D_{N,H_q}(q_k) + 1 \right].    (15)
Proof. We apply Chebyshev's inequality to the random variable u(J), defined in (13), and obtain

P(|u(J) - E(u(J))| < \varepsilon) \ge 1 - \frac{Var(u(J))}{\varepsilon^2},    (16)

which is equivalent to

P(-\varepsilon + E(u(J)) < u(J) < \varepsilon + E(u(J))) \ge 1 - \frac{Var(u(J))}{\varepsilon^2} \ge 1 - \frac{1}{\varepsilon^2} \frac{1}{4N} \left[ D_{N,H_q}(q_k) + 1 \right],    (17)

where the last inequality follows from relation (11). We have the following chain of implications:

-\varepsilon + E(u(J)) < u(J) < \varepsilon + E(u(J)) \Rightarrow
|u(J)| \le \max(|-\varepsilon + E(u(J))|, |\varepsilon + E(u(J))|) \Rightarrow
|u(J)| \le \sup_{J \subseteq [0,1]^s} \max(|-\varepsilon + E(u(J))|, |\varepsilon + E(u(J))|) \Rightarrow
|u(J)| \le \max(\varepsilon + D_{N,H_q}(q_k), \varepsilon + D_{N,H_q}(q_k)) \Rightarrow
|u(J)| \le \varepsilon + D_{N,H_q}(q_k).    (18)

Here we used that, for every J, |E(u(J))| = p \left| \frac{1}{N} \sum_{k=1}^{N} 1_{J'}(q_k) - \lambda_{H_q}(J') \right| \le D_{N,H_q}(q_k), since p \le 1.
From relations (17) and (18), we obtain

P(|u(J)| \le \varepsilon + D_{N,H_q}(q_k)) \ge 1 - \frac{1}{\varepsilon^2} \frac{1}{4N} \left[ D_{N,H_q}(q_k) + 1 \right].    (19)

We apply Lemma 7 and get

P\left( \sup_{J \subseteq [0,1]^s} |u(J)| \le \varepsilon + D_{N,H_q}(q_k) \right) \ge 1 - \frac{1}{\varepsilon^2} \frac{1}{4N} \left[ D_{N,H_q}(q_k) + 1 \right].    (20)

Hence, we obtain the inequality that we had to prove:

P(D_{N,H}(m_k) \le \varepsilon + D_{N,H_q}(q_k)) \ge 1 - \frac{1}{\varepsilon^2} \frac{1}{4N} \left[ D_{N,H_q}(q_k) + 1 \right].    (21)

Remark 9. We know that if (q_k)_{k \ge 1} is an H_q-distributed low-discrepancy sequence on [0,1]^d, then

D_{N,H_q}(q_k) = O\left( \frac{(\log N)^d}{N} \right).

Hence

D_{N,H_q}(q_k) \le c_d \frac{(\log N)^d}{N}.

If we consider \varepsilon = \frac{1}{\sqrt[4]{N}} in Theorem 8 then, as N \to \infty, we have

\frac{D_{N,H_q}(q_k) + 1}{4\sqrt{N}} \le \frac{c_d (\log N)^d / N + 1}{4\sqrt{N}} \longrightarrow 0.

In conclusion, when N \to \infty, we have

D_{N,H}(m_k) \longrightarrow 0,    (22)

with probability

p_1 = 1 - \frac{D_{N,H_q}(q_k) + 1}{4\sqrt{N}} \longrightarrow 1.    (23)

3. An estimator for a multidimensional integral using H-mixed sequences

In order to estimate integrals of the form (1), we introduce the following estimator, which extends the one given by Okten (see [9]).

Definition 10. Let (m_k)_{k \ge 1} = (q_k, X_k)_{k \ge 1} be an s-dimensional H-mixed sequence, introduced by us in Definition 3, with q_k = (q_k^{(1)}, \ldots, q_k^{(d)}) and X_k = (X_k^{(d+1)}, \ldots, X_k^{(s)}). We define the following estimator for the integral I:

\theta_m = \frac{1}{N} \sum_{k=1}^{N} f(m_k).    (24)

We consider the independent random variables:

Y_k = f(m_k) = f(q_k^{(1)}, \ldots, q_k^{(d)}, X_k^{(d+1)}, \ldots, X_k^{(s)}), \quad k \ge 1.    (25)

We denote the expectation of Y_k by

E(Y_k) = \mu_k,    (26)

and the variance of Y_k by

Var(Y_k) = \sigma_k^2.    (27)

We assume that

0 < \sigma_k^2 < \infty,    (28)

and we denote

0 < \sigma_{(N)}^2 = \sigma_1^2 + \ldots + \sigma_N^2 < \infty.    (29)

Remark 11. The estimator \theta_m, defined in relation (24), is a biased estimator of the integral I; its convergence to I is only asymptotic:

E(\theta_m) \to I, \quad \text{as } N \to \infty.
Proposition 12. We assume that f is bounded on [0,1]^s and that the functions

f_1(x^{(1)}, \ldots, x^{(d)}) = \int_{[0,1]^{s-d}} (f(x^{(1)}, \ldots, x^{(s)}))^2 \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)},

f_2(x^{(1)}, \ldots, x^{(d)}) = \left[ \int_{[0,1]^{s-d}} f(x^{(1)}, \ldots, x^{(s)}) \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)} \right]^2,

are Riemann integrable. Then, we have

\frac{\sigma_{(N)}^2}{N} \longrightarrow L, \quad \text{as } N \longrightarrow \infty,

where

L = \int_{[0,1]^s} (f(x^{(1)}, \ldots, x^{(s)}))^2 \prod_{l=1}^{s} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(s)}
 - \int_{[0,1]^d} \left[ \int_{[0,1]^{s-d}} f(x^{(1)}, \ldots, x^{(s)}) \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)} \right]^2 \prod_{l=1}^{d} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(d)}.

Proof. From the fact that f_1 is Riemann integrable and (q_k)_{k \ge 1} is H_q-distributed, it follows that

\frac{1}{N} \sum_{k=1}^{N} f_1(q_k^{(1)}, \ldots, q_k^{(d)}) \longrightarrow \int_{[0,1]^d} f_1(x^{(1)}, \ldots, x^{(d)}) \prod_{l=1}^{d} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(d)}
 = \int_{[0,1]^d} \left[ \int_{[0,1]^{s-d}} (f(x^{(1)}, \ldots, x^{(s)}))^2 \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)} \right] \prod_{l=1}^{d} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(d)}.

As f_2 is Riemann integrable, we obtain

\frac{1}{N} \sum_{k=1}^{N} f_2(q_k^{(1)}, \ldots, q_k^{(d)}) \longrightarrow \int_{[0,1]^d} f_2(x^{(1)}, \ldots, x^{(d)}) \prod_{l=1}^{d} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(d)}
 = \int_{[0,1]^d} \left[ \int_{[0,1]^{s-d}} f(x^{(1)}, \ldots, x^{(s)}) \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)} \right]^2 \prod_{l=1}^{d} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(d)}.

We also know that

Var(Y_k) = \sigma_k^2 = Var(f(m_k)) = Var(f(q_k^{(1)}, \ldots, q_k^{(d)}, X_k^{(d+1)}, \ldots, X_k^{(s)}))
 = \int_{[0,1]^{s-d}} (f(q_k^{(1)}, \ldots, q_k^{(d)}, x^{(d+1)}, \ldots, x^{(s)}))^2 \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)}
 - \left[ \int_{[0,1]^{s-d}} f(q_k^{(1)}, \ldots, q_k^{(d)}, x^{(d+1)}, \ldots, x^{(s)}) \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)} \right]^2
 = f_1(q_k^{(1)}, \ldots, q_k^{(d)}) - f_2(q_k^{(1)}, \ldots, q_k^{(d)}).

Hence, we get

\frac{\sigma_{(N)}^2}{N} = \frac{\sigma_1^2 + \ldots + \sigma_N^2}{N} = \frac{1}{N} \sum_{k=1}^{N} f_1(q_k^{(1)}, \ldots, q_k^{(d)}) - \frac{1}{N} \sum_{k=1}^{N} f_2(q_k^{(1)}, \ldots, q_k^{(d)}) \longrightarrow L, \quad \text{as } N \longrightarrow \infty.

Now, we formulate and prove the main result of this section.

Theorem 13. Under the same hypotheses as in Proposition 12 and, in addition, assuming that L \ne 0, we have:

a)

Y_{(N)} = \frac{\sum_{k=1}^{N} Y_k - \sum_{k=1}^{N} \mu_k}{\sigma_{(N)}} \longrightarrow Y, \quad \text{as } N \to \infty,    (30)

where the random variable Y has the standard normal distribution (the convergence being in distribution).

b) If we denote the crude Monte Carlo estimator for the integral (1) by \theta_{MC}, then

Var(\theta_m) \le Var(\theta_{MC}),    (31)

meaning that, by using our estimator, we obtain asymptotically a smaller variance than by using the classical Monte Carlo method.

Proof. a) As f is bounded, it follows that the random variables

Y_k = f(m_k), \quad k \ge 1,    (32)

are also bounded.
From the facts that L \ne 0, (\sigma_{(N)}^2)_{N \ge 1} is a strictly increasing sequence, and

\lim_{N \to \infty} \frac{\sigma_{(N)}^2}{N} = L,

it follows that

\lim_{N \to \infty} \sigma_{(N)}^2 = \infty.    (33)

Applying Corollary 5, page 267, from Ciucu [3] to the sequence of independent, non-identically distributed random variables (Y_k)_{k \ge 1}, which are bounded and verify relation (33), it follows that

Y_{(N)} = \frac{\sum_{k=1}^{N} Y_k - \sum_{k=1}^{N} \mu_k}{\sigma_{(N)}} \longrightarrow Y \in N(0,1) \quad \text{when } N \to \infty,

meaning that the sequence of independent, non-identically distributed random variables (Y_k)_{k \ge 1} satisfies the central limit theorem.

b) We know that

Var(\theta_{MC}) = \frac{Var(f(X))}{N},    (34)

and

Var(\theta_m) = \frac{\sigma_{(N)}^2}{N^2}.    (35)

Hence, relation (31), which we have to prove, reduces (asymptotically, by Proposition 12) to

L = \lim_{N \to \infty} \frac{\sigma_{(N)}^2}{N} \le Var(f(X)).    (36)

We apply the Cauchy-Buniakovski inequality [E(X)]^2 \le E(X^2) (see [2]) and we obtain

\int_{[0,1]^d} \left[ \int_{[0,1]^{s-d}} f(x^{(1)}, \ldots, x^{(s)}) \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)} \right]^2 \prod_{l=1}^{d} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(d)}
 \ge \left[ \int_{[0,1]^d} \left( \int_{[0,1]^{s-d}} f(x^{(1)}, \ldots, x^{(s)}) \prod_{l=d+1}^{s} h_l(x^{(l)})\, dx^{(d+1)} \cdots dx^{(s)} \right) \prod_{l=1}^{d} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(d)} \right]^2
 = \left[ \int_{[0,1]^s} f(x^{(1)}, \ldots, x^{(s)}) \prod_{l=1}^{s} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(s)} \right]^2.

Multiplying the above inequality by (-1) and adding \int_{[0,1]^s} (f(x^{(1)}, \ldots, x^{(s)}))^2 \prod_{l=1}^{s} h_l(x^{(l)})\, dx^{(1)} \cdots dx^{(s)} to both sides, we obtain

L \le Var(f(X)).

So, the inequality (31) is proved.

4. Numerical example

In this section, we compare our method with the MC and QMC methods on a numerical example. We want to estimate the following s-dimensional integral (s \ge 1):

I = \int_{[0,1]^s} 18\, x^{(1)} x^{(2)} \cdot \ldots \cdot x^{(s-1)} (x^{(s)})^2 e^{x^{(1)} x^{(s)}}\, dH(x),    (37)

where we consider the distribution function

H(x^{(1)}, x^{(2)}, \ldots, x^{(s)}) = (x^{(1)})^2 (x^{(2)})^2 \ldots (x^{(s)})^2

on [0,1]^s. The corresponding density function is h(x^{(1)}, x^{(2)}, \ldots, x^{(s)}) = 2^s x^{(1)} x^{(2)} \ldots x^{(s)}. The marginal distribution functions are

H_1(x^{(1)}) = (x^{(1)})^2, \quad H_2(x^{(2)}) = (x^{(2)})^2, \quad \ldots, \quad H_s(x^{(s)}) = (x^{(s)})^2,

and their inverses are H_i^{-1}(x^{(i)}) = \sqrt{x^{(i)}}, i = 1, \ldots, s.
The exact value of the integral I is

I = \frac{18 (3e - 8)\, 2^s}{3^{s-2}}.
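
For s \ge 2, this value can be checked by a direct computation (a short verification added here, not in the original text): since dH(x) = h(x)\,dx = 2^s x^{(1)} \cdots x^{(s)}\, dx and the coordinates separate,

I = 18 \cdot 2^s \left( \int_0^1 t^2\, dt \right)^{s-2} \int_0^1 \int_0^1 u^2 v^3 e^{uv}\, du\, dv = \frac{18 \cdot 2^s}{3^{s-2}} (3e - 8),

because \int_0^1 \int_0^1 u^2 v^3 e^{uv}\, du\, dv = \int_0^1 \left[ e^v (v^2 - 2v + 2) - 2 \right] dv = 3e - 8.
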
In the following, we compare numerically our estimator \theta_m with the estimators obtained using the MC and QMC methods. As a measure of comparison, we will use the relative errors produced by these three methods.
The MC estimate is defined as follows:

\theta_{MC} = \frac{1}{N} \sum_{k=1}^{N} f(x_k^{(1)}, \ldots, x_k^{(s)}),    (38)

where x_k = (x_k^{(1)}, \ldots, x_k^{(s)}), k \ge 1, are independent identically distributed random points on [0,1]^s, with the common distribution function H.
In order to generate such a point x_k, we proceed as follows. We first generate a random point \omega_k = (\omega_k^{(1)}, \ldots, \omega_k^{(s)}), where \omega_k^{(i)} is a point uniformly distributed on [0,1], for each i = 1, \ldots, s. Then, for each component \omega_k^{(i)}, i = 1, \ldots, s, we apply the inversion method and obtain that H_i^{-1}(\omega_k^{(i)}) is a point with the distribution function H_i. As the s-dimensional distribution with the distribution function H has independent marginals, it follows that x_k = (H_1^{-1}(\omega_k^{(1)}), \ldots, H_s^{-1}(\omega_k^{(s)})) is a point on [0,1]^s, with the distribution function H.
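
A minimal Python sketch (our illustration, not the author's code) of the MC estimate (38): uniform points are transformed by the inverse marginals H_i^{-1}(w) = \sqrt{w}, and f is the integrand of (37).

    import numpy as np

    def f(x):
        """Integrand of (37); x is an (N, s) array of points in [0,1]^s."""
        return 18.0 * np.prod(x[:, :-1], axis=1) * x[:, -1] ** 2 * np.exp(x[:, 0] * x[:, -1])

    def mc_estimate(N, s, seed=None):
        rng = np.random.default_rng(seed)
        omega = rng.random((N, s))          # uniform random points on [0,1]^s
        x = np.sqrt(omega)                  # inversion: H_i^{-1}(w) = sqrt(w), so x has distribution H
        return f(x).mean()

    exact = lambda s: 18.0 * (3.0 * np.e - 8.0) * 2.0 ** s / 3.0 ** (s - 2)
    # relative error, e.g. for s = 10 and N = 2000:
    # abs(mc_estimate(2000, 10, seed=1) - exact(10)) / exact(10)
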
The QMC estimate is defined as follows:

\theta_{QMC} = \frac{1}{N} \sum_{k=1}^{N} f(x_k^{(1)}, \ldots, x_k^{(s)}),    (39)

where (x_k)_{k \ge 1} = (x_k^{(1)}, \ldots, x_k^{(s)})_{k \ge 1} is an H-distributed low-discrepancy sequence on [0,1]^s.
In order to generate such a sequence (x_k)_{k \ge 1}, we first consider a low-discrepancy sequence \omega = (\omega_k^{(1)}, \ldots, \omega_k^{(s)})_{k \ge 1} on [0,1]^s. Then, by applying the inverses of the marginals on each dimension, we obtain that (x_k)_{k \ge 1} = (H_1^{-1}(\omega_k^{(1)}), \ldots, H_s^{-1}(\omega_k^{(s)}))_{k \ge 1} is an H-distributed low-discrepancy sequence on [0,1]^s (see [10]).
During our experiments, we employed as low-discrepancy sequences on [0,1]^s the Halton sequences (see [6]).
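
Continuing the sketch above (and again assuming SciPy's Halton generator as one possible low-discrepancy sequence), the QMC estimate (39) only replaces the pseudorandom uniforms with Halton points before applying the inverse marginals:

    from scipy.stats import qmc   # reuses numpy (np) and f from the MC sketch above

    def qmc_estimate(N, s):
        omega = qmc.Halton(d=s, scramble=False).random(N)   # Halton points on [0,1]^s
        x = np.sqrt(omega)                                   # H_i^{-1}(w) = sqrt(w)
        return f(x).mean()
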
The estimate proposed by us earlier is:

\theta_m = \frac{1}{N} \sum_{k=1}^{N} f(q_k^{(1)}, \ldots, q_k^{(d)}, X_k^{(d+1)}, \ldots, X_k^{(s)}),    (40)

where (q_k, X_k)_{k \ge 1} is an s-dimensional H-mixed sequence on [0,1]^s.
In order to obtain such an H-mixed sequence, we first construct the H_q-distributed low-discrepancy sequence (q_k)_{k \ge 1} on [0,1]^d (the distribution function H_q was defined in (2)) and next, the independent and identically distributed random points X_k, k \ge 1, on [0,1]^{s-d}, with the common distribution function H_X (the distribution function H_X was defined in (3)). Finally, we concatenate q_k and X_k for each k \ge 1.
In our experiments, we used as low-discrepancy sequences on [0,1]^d for the H-mixed sequences the Halton sequences (see [6]).
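
A corresponding sketch of the H-mixed estimate (40), reusing f, np and qmc from the snippets above: the first d coordinates are Halton points mapped through the inverse marginals, and the remaining s - d coordinates are generated by inversion from pseudorandom uniforms; averaging over several independent runs, as in the experiments reported below, reduces the variability of the random part.

    def mixed_estimate(N, s, d, seed=None):
        rng = np.random.default_rng(seed)
        q = np.sqrt(qmc.Halton(d=d, scramble=False).random(N))   # H_q-distributed deterministic part
        X = np.sqrt(rng.random((N, s - d)))                      # i.i.d. part with distribution H_X
        return f(np.hstack([q, X])).mean()

    # e.g. np.mean([mixed_estimate(2000, 10, 3, seed=r) for r in range(10)]) versus exact(10)
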


Figure 1: Simulation results for s = 10 and d = 3.
In our tests, we have considered the following dimensions of the integral I: s = 8, 10, 11, 12, 14, 15. We present the numerical results for a number of 51 samples, having sizes from N = 1000 to N = 2000, with a step size of 20. We have also varied the dimension of the deterministic part of the H-mixed sequence from d = 3 to d = 8, in order to determine an "optimal" dimension for the deterministic part of the H-mixed sequence.
The MC and mixed estimates are the mean values obtained in 10 independent runs, while the QMC estimate is the result of a single run. In our graphs, the relative error produced by each of the three methods is plotted against the sample size N.
We present the numerical results for s = 10 and d = 3 in Figure 1, and for s = 15 and d = 5 in Figure 2. We observe that the H-mixed sequences outperform their low-discrepancy versions, giving much better results.


Figure 2: Simulation results for s = 15 and d = 5.
Also, the H-mixed sequences produced a better overall error reduction than the pseudorandom sequences, in both situations we present here. We also remark that, in order to achieve these improvements using H-mixed sequences, the dimension of the deterministic part should be around one third of the problem dimension. Considering a higher dimension of the deterministic component leads to worse results compared with the MC method. However, the relative errors are still much smaller than the ones produced by using low-discrepancy sequences.
Next, we draw the following conclusions from all the tests we performed, considering the parameters (s, d) presented above:
1. the relative errors for all three estimates are very small, even for small sample sizes;
2. the behavior of the H-mixed sequence is superior to that of the low-discrepancy sequence, regardless of the dimension of the problem;
3. the performance of the H-mixed sequence is better than that of the pseudorandom sequence, for most of the sample sizes;
4. to achieve a good performance of our H-mixed estimator, we observe from our tests that the dimension of the deterministic part should be around one third of the dimension of the problem. However, this is not a general rule and depends on the test problem or practical situation where the H-mixed sequence is applied.
In conclusion, by properly choosing the dimension d of the deterministic part in the s-dimensional H-mixed sequence, we can achieve considerable improvements in error reduction, compared with the MC and QMC estimates, even for high dimensions and moderate sample sizes.

References

[1] P. Blaga, M. Radulescu, Probability Calculus, Babes-Bolyai University Press, Cluj-Napoca, 1987 (in Romanian).
[2] P. Blaga, Probability Theory and Mathematical Statistics, Vol. II, Babes-Bolyai University Press, Cluj-Napoca, 1994 (in Romanian).
[3] G. Ciucu, C. Tudor, Probabilities and Stochastic Processes, Vol. I, Romanian Academy Press, Bucharest, 1978 (in Romanian).
[4] I. Deak, Random Number Generators and Simulation, Akademiai Kiado, Budapest, 1990.
[5] L. Devroye, Non-Uniform Random Variate Generation, Springer-Verlag, New York, 1986.
[6] J. H. Halton, On the efficiency of certain quasi-random sequences of points in evaluating multidimensional integrals, Numer. Math., 2 (1960), 84-90.
[7] G. Okten, A Probabilistic Result on the Discrepancy of a Hybrid-Monte Carlo Sequence and Applications, Monte Carlo Methods and Applications, Vol. 2, No. 4 (1996), 255-270.
[8] G. Okten, Contributions to the Theory of Monte Carlo and Quasi-Monte Carlo Methods, Ph.D. Dissertation, Claremont Graduate School, California, 1997.
[9] G. Okten, B. Tuffin, V. Burago, A central limit theorem and improved error bounds for a hybrid-Monte Carlo sequence with applications in computational finance, Journal of Complexity, Vol. 22, No. 4 (2006), 435-458.
[10] N. Roşca, Generation of Non-Uniform Low-Discrepancy Sequences in the Multidimensional Case, Rev. Anal. Numér. Théor. Approx., Tome 35, No. 2 (2006), 207-219.
[11] D. D. Stancu, Gh. Coman, P. Blaga, Numerical Analysis and Approximation Theory, Vol. II, Presa Universitara Clujeana, Cluj-Napoca, 2002 (in Romanian).
Author:
Alin V. Roşca,
Babeş-Bolyai University of Cluj-Napoca,
Romania.
email: arosca@econ.ubbcluj.ro
