
ACTA UNIVERSITATIS APULENSIS
No 10/2005
Proceedings of the International Conference on Theory and Application of
Mathematics and Informatics ICTAMI 2005 - Alba Iulia, Romania

ON THE INVARIANCE PROPERTY OF THE FISHER
INFORMATION FOR A TRUNCATED LOGNORMAL
DISTRIBUTION (I)

Ion Mihoc, Cristina I. Fătu
Abstract. The Fisher information is well known in estimation theory. The objective of this paper is to give some definitions and properties for the truncated lognormal distributions. Then, in the next paper ([2]), we shall determine some invariance properties of Fisher's information in the case of these distributions.
Keywords: Fisher information, lognormal distribution, truncated distribution, invariance property
2000 Mathematics Subject Classification: 62B10, 62B05.
1. Normal and lognormal distributions
Let X be a normally distributed random variable with density function

f(x; mx, σx²) = (1/(√(2π) σx)) exp{−(1/2) ((x − mx)/σx)²},  x ∈ R,  (1)

where the parameters mx and σx² have their usual significance, namely:

mx = E(X),  σx² = Var(X),  mx ∈ R, σx > 0.

Definition 1.1. If X is normally distributed with mean mx and variance σx², then the random variable

Y = e^X,  or X = ln Y, Y > 0,  (2)

is said to be lognormally distributed. The lognormal density function is given by

g(y; mx, σx²) = (1/(√(2π) σx y)) exp{−(1/2) ((ln y − mx)/σx)²},  y > 0,  (3)

where


E(Y) = my = exp{mx + σx²/2},  Var(Y) = σy² = exp{2mx + σx²}(exp{σx²} − 1).  (4)

Remark 1.1. From the relations (4), we obtain

mx = ln( my² / √(my² + σy²) ),  σx² = ln( (my² + σy²) / my² ).  (5)
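As a quick numerical illustration, relations (4) and (5) are mutual inverses: converting (mx, σx²) to (my, σy²) and back must recover the starting values. The following Python sketch assumes nothing beyond the standard library; the parameter values 0.5 and 0.8 are arbitrary.

```python
import math

def lognormal_moments(m_x, s_x):
    """Relations (4): mean and variance of Y = exp(X), X ~ N(m_x, s_x^2)."""
    m_y = math.exp(m_x + s_x**2 / 2)
    var_y = math.exp(2 * m_x + s_x**2) * (math.exp(s_x**2) - 1)
    return m_y, var_y

def normal_params(m_y, var_y):
    """Relations (5): recover (m_x, s_x^2) from the lognormal moments."""
    m_x = math.log(m_y**2 / math.sqrt(m_y**2 + var_y))
    s2_x = math.log((m_y**2 + var_y) / m_y**2)
    return m_x, s2_x

# round trip: (m_x, s_x^2) = (0.5, 0.64) -> (m_y, var_y) -> back again
m_y, var_y = lognormal_moments(0.5, 0.8)
m_x, s2_x = normal_params(m_y, var_y)
print(round(m_x, 10), round(s2_x, 10))  # 0.5 0.64
```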

2. The truncated lognormal distributions

Definition 2.1. We say that the random variable Y has a lognormal distribution truncated to the left at Y = a, a ∈ R, a > 0, and to the right at Y = b, b ∈ R, a < b, denoted by Ya↔b, if its probability density function, denoted by ga↔b(y; mx, σx²), is of the form

ga↔b(y; mx, σx²) = k(a, b) (1/(√(2π) σx y)) exp{−(1/2) ((ln y − mx)/σx)²},  if 0 < a ≤ y ≤ b,
ga↔b(y; mx, σx²) = 0,  if 0 < y < a or y > b,  (6)

where the constant

k(a, b) = 1 / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)]  (7)

results from the conditions

1°  ga↔b(y; mx, σx²) ≥ 0 if y ∈ (0, ∞),

2°  ∫₀^{+∞} ga↔b(y; mx, σx²) dy = 1.  (8)
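The definition (6)-(7) and condition 2° of (8) can be checked numerically. The following Python sketch (standard library only; the truncation limits and parameters are arbitrary illustrative choices) evaluates ga↔b and verifies that it integrates to 1 over [a, b] with a midpoint rule:

```python
import math

def Phi(z):
    # standard normal distribution function, relation (9)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def g_trunc(y, m_x, s_x, a, b):
    """Truncated lognormal density (6), normalized by the constant (7)."""
    if not (0 < a <= y <= b):
        return 0.0
    k = 1 / (Phi((math.log(b) - m_x) / s_x) - Phi((math.log(a) - m_x) / s_x))
    z = (math.log(y) - m_x) / s_x
    return k * math.exp(-0.5 * z * z) / (math.sqrt(2 * math.pi) * s_x * y)

# condition 2 of (8): the density integrates to 1 over [a, b] (midpoint rule)
a, b, m_x, s_x, n = 0.5, 4.0, 0.2, 0.7, 20_000
h = (b - a) / n
total = sum(g_trunc(a + (i + 0.5) * h, m_x, s_x, a, b) for i in range(n)) * h
print(round(total, 6))  # 1.0
```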

Remark 2.1. The function

Φ(z) = (1/√(2π)) ∫_{−∞}^{z} exp{−t²/2} dt,  (9)

for which we have the following relations

Φ(−∞) = 0,  Φ(0) = 1/2,  Φ(+∞) = 1,  Φ(−z) = 1 − Φ(z),  (10)

is the normal distribution function corresponding to the standard normal random variable

Z = (X − mx)/σx,  E(Z) = 0, Var(Z) = 1,  (11)

which has the probability density function

f(z; 0, 1) = f(z) = (1/√(2π)) exp{−z²/2},  z ∈ (−∞, +∞).  (12)

Remark 2.2. Such a truncated probability distribution can be regarded as a conditional probability distribution, in the sense that if Y has an unrestricted lognormal distribution with the probability density function g(y; mx, σx²), then ga↔b(y; mx, σx²), as defined above, is the probability density function which governs the behavior of Y subject to the condition that Y is known to lie in [a, b]. Also, in such a case, we say that the random variable Ya↔b has a bilateral truncated lognormal distribution with the limits a and b.
Remark 2.3. It is easy to see that from the relations (6), (7) and (10) it follows that

lim_{a→0} ga↔b(y; mx, σx²) = [1/Φ((ln b − mx)/σx)] · g(y; mx, σx²),  if 0 < y ≤ b,
lim_{a→0} ga↔b(y; mx, σx²) = 0,  if y > b,  (13)

lim_{b→+∞} ga↔b(y; mx, σx²) = [1/(1 − Φ((ln a − mx)/σx))] · g(y; mx, σx²),  if y ≥ a,
lim_{b→+∞} ga↔b(y; mx, σx²) = 0,  if 0 < y < a,  (14)

and

lim_{a→0, b→+∞} ga↔b(y; mx, σx²) = (1/(√(2π) σx y)) exp{−(1/2) ((ln y − mx)/σx)²},  if y > 0,  (15)

where g→b(y; mx, σx²) is the probability density function in the case when Y→b has a lognormal distribution truncated to the right at Y = b; ga←(y; mx, σx²) is the probability density function in the case when Ya← has a lognormal distribution truncated to the left at Y = a; and g(y; mx, σx²) is the probability density function in the case when Y has an ordinary lognormal distribution.
Theorem 2.1. If Ya↔b is a random variable which follows a bilateral truncated lognormal distribution, then its expected value E(Ya↔b) and second moment E(Y²a↔b) have the following expressions:

E(Ya↔b) = exp{mx + σx²/2} · [Φ((ln b − mx)/σx − σx) − Φ((ln a − mx)/σx − σx)] / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)],  (16)

E(Y²a↔b) = exp{2mx + 2σx²} · [Φ((ln b − mx)/σx − 2σx) − Φ((ln a − mx)/σx − 2σx)] / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)].  (17)

Proof. Making use of the definition of E(Ya↔b), we obtain

E(Ya↔b) = (k(a, b)/(√(2π) σx)) ∫_a^b exp{−(1/2) ((ln y − mx)/σx)²} dy.  (18)

By making the change of variable

t = (ln y − mx)/σx,  (19)

we obtain

y = exp{mx + σx t},  dy = σx exp{mx + σx t} dt,  (20)

and (18) can be rewritten as follows:

E(Ya↔b) = (k(a, b) exp{mx}/√(2π)) ∫_{(ln a − mx)/σx}^{(ln b − mx)/σx} exp{−(1/2)(t² − 2σx t)} dt =  (21)

= (k(a, b) exp{mx + σx²/2}/√(2π)) ∫_{(ln a − mx)/σx}^{(ln b − mx)/σx} exp{−(1/2)(t − σx)²} dt =  (22)

= (k(a, b) exp{mx + σx²/2}/√(2π)) ∫_{(ln a − mx)/σx − σx}^{(ln b − mx)/σx − σx} exp{−(1/2) w²} dw,  (23)

where we have used a further change of variable, namely w = t − σx.
From (23) we obtain precisely the form (16) for the expected value of Ya↔b.
Using an analogous method we can obtain the expression for the second moment E(Y²a↔b).
Thus, we have

E(Y²a↔b) = (k(a, b)/(√(2π) σx)) ∫_a^b y exp{−(1/2) ((ln y − mx)/σx)²} dy =

= (k(a, b) exp{2mx + 2σx²}/√(2π)) ∫_{(ln a − mx)/σx}^{(ln b − mx)/σx} exp{−(1/2)(t − 2σx)²} dt =

= (k(a, b) exp{2mx + 2σx²}/√(2π)) ∫_{(ln a − mx)/σx − 2σx}^{(ln b − mx)/σx − 2σx} exp{−(1/2) v²} dv =

= exp{2mx + 2σx²} · [Φ((ln b − mx)/σx − 2σx) − Φ((ln a − mx)/σx − 2σx)] / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)],

where we have used the change of variable (19) and then the change of variable v = t − 2σx.
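Formulas (16) and (17) follow one pattern: the r-th moment multiplies exp{r mx + r²σx²/2} by a ratio of Φ-differences shifted by r·σx. The sketch below (Python, standard library only; the generalization to arbitrary r and the parameter values are my own illustrative choices, not from the paper) implements that pattern and compares the r = 1 case, i.e. (16), with direct numerical integration of y·ga↔b(y):

```python
import math

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def trunc_moment(r, m_x, s_x, a, b):
    """r-th moment of Y_{a<->b}, following the pattern of (16) and (17)."""
    alpha = (math.log(a) - m_x) / s_x
    beta = (math.log(b) - m_x) / s_x
    num = Phi(beta - r * s_x) - Phi(alpha - r * s_x)
    den = Phi(beta) - Phi(alpha)
    return math.exp(r * m_x + r**2 * s_x**2 / 2) * num / den

# check (16) against direct numerical integration of y * g_{a<->b}(y)
m_x, s_x, a, b = 0.3, 0.6, 0.8, 5.0
alpha, beta = (math.log(a) - m_x) / s_x, (math.log(b) - m_x) / s_x
k = 1 / (Phi(beta) - Phi(alpha))

def g(y):
    # truncated lognormal density (6)-(7) on [a, b]
    z = (math.log(y) - m_x) / s_x
    return k * math.exp(-0.5 * z * z) / (math.sqrt(2 * math.pi) * s_x * y)

n = 20_000
h = (b - a) / n
numeric = 0.0
for i in range(n):
    y = a + (i + 0.5) * h
    numeric += y * g(y) * h
print(abs(numeric - trunc_moment(1, m_x, s_x, a, b)) < 1e-6)  # True
```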
Corollary 2.1. If Ya↔b is a random variable with a lognormal distribution truncated to the left at Y = a and to the right at Y = b, then

Var(Ya↔b) = E(Y²a↔b) − [E(Ya↔b)]² =

= exp{2mx + 2σx²} · [Φ((ln b − mx)/σx − 2σx) − Φ((ln a − mx)/σx − 2σx)] / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)] −

− exp{2mx + σx²} · [Φ((ln b − mx)/σx − σx) − Φ((ln a − mx)/σx − σx)]² / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)]².

Corollary 2.2. For the unilateral truncated lognormal random variables Ya← and Y→b, respectively for the lognormal random variable Y, we have the following expected values and variances:

E(Ya←) = lim_{b→+∞} E(Ya↔b) = exp{mx + σx²/2} · [1 − Φ((ln a − mx)/σx − σx)] / [1 − Φ((ln a − mx)/σx)],  (24)

Var(Ya←) = lim_{b→+∞} Var(Ya↔b) =
= exp{2mx + 2σx²} · [1 − Φ((ln a − mx)/σx − 2σx)] / [1 − Φ((ln a − mx)/σx)] − exp{2mx + σx²} · [1 − Φ((ln a − mx)/σx − σx)]² / [1 − Φ((ln a − mx)/σx)]²,

E(Y→b) = lim_{a→0} E(Ya↔b) = exp{mx + σx²/2} · Φ((ln b − mx)/σx − σx) / Φ((ln b − mx)/σx),

Var(Y→b) = lim_{a→0} Var(Ya↔b) =
= exp{2mx + 2σx²} · Φ((ln b − mx)/σx − 2σx) / Φ((ln b − mx)/σx) − exp{2mx + σx²} · Φ²((ln b − mx)/σx − σx) / Φ²((ln b − mx)/σx),

respectively

E(Y) = my = lim_{a→0, b→+∞} E(Ya↔b) = exp{mx + σx²/2},

Var(Y) = σy² = lim_{a→0, b→+∞} Var(Ya↔b) = exp{2mx + σx²}(exp{σx²} − 1).
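These limiting statements can be illustrated numerically: pushing the truncation limits far out, the truncated mean and variance from Theorem 2.1 and Corollary 2.1 approach the untruncated lognormal moments of (4). A Python sketch, standard library only, with arbitrarily chosen parameters:

```python
import math

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def trunc_mean_var(m_x, s_x, a, b):
    """E(Y_{a<->b}) and Var(Y_{a<->b}) from Theorem 2.1 and Corollary 2.1."""
    alpha = (math.log(a) - m_x) / s_x
    beta = (math.log(b) - m_x) / s_x
    dphi = Phi(beta) - Phi(alpha)
    e1 = math.exp(m_x + s_x**2 / 2) * (Phi(beta - s_x) - Phi(alpha - s_x)) / dphi
    e2 = (math.exp(2 * m_x + 2 * s_x**2)
          * (Phi(beta - 2 * s_x) - Phi(alpha - 2 * s_x)) / dphi)
    return e1, e2 - e1**2

# with the limits pushed far out, the truncated moments should match the
# untruncated lognormal mean and variance of relations (4)
m_x, s_x = 0.1, 0.5
mean, var = trunc_mean_var(m_x, s_x, 1e-8, 1e8)
m_y = math.exp(m_x + s_x**2 / 2)
var_y = math.exp(2 * m_x + s_x**2) * (math.exp(s_x**2) - 1)
print(abs(mean - m_y) < 1e-9, abs(var - var_y) < 1e-9)  # True True
```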

3. Fisher's information measures for a truncated lognormal distribution.
Case: mx an unknown parameter, σx² a known parameter
Theorem 3.1. If the random variable Ya↔b has a bilateral truncated lognormal distribution, that is, its probability density function is of the form

ga↔b(y; mx, σx²) = k(a, b) (1/(√(2π) σx y)) exp{−(1/2) ((ln y − mx)/σx)²},  if 0 < a ≤ y ≤ b,
ga↔b(y; mx, σx²) = 0,  if 0 < y < a or y > b,  (25)

where

k(a, b) = 1 / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)],  (26)

then the Fisher information measure about the unknown parameter mx has the following form:

IYa↔b(mx) = 1/σx² − [f(ln b; mx, σx²) − f(ln a; mx, σx²)]² / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)]² −

− [(ln b − mx) f(ln b; mx, σx²) − (ln a − mx) f(ln a; mx, σx²)] / (σx² [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)]),  (27)

where

f(ln a; mx, σx²) = (1/(√(2π) σx)) exp{−(1/2) ((ln a − mx)/σx)²} ∈ R+,

f(ln b; mx, σx²) = (1/(√(2π) σx)) exp{−(1/2) ((ln b − mx)/σx)²} ∈ R+.
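Formula (27) admits a simple sanity check: as a → 0 and b → +∞ the truncation disappears and the information should tend to 1/σx², the Fisher information about mx without truncation. A Python sketch (standard library only; the parameter values are arbitrary):

```python
import math

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def f_at(u, m_x, s_x):
    # f(u; m_x, s_x^2) as defined after (27), with u = ln a or u = ln b
    z = (u - m_x) / s_x
    return math.exp(-0.5 * z * z) / (math.sqrt(2 * math.pi) * s_x)

def fisher_info(m_x, s_x, a, b):
    """Fisher information (27) about m_x for Y_{a<->b}."""
    la, lb = math.log(a), math.log(b)
    dphi = Phi((lb - m_x) / s_x) - Phi((la - m_x) / s_x)
    fa, fb = f_at(la, m_x, s_x), f_at(lb, m_x, s_x)
    return (1 / s_x**2
            - ((fb - fa) / dphi) ** 2
            - ((lb - m_x) * fb - (la - m_x) * fa) / (s_x**2 * dphi))

# with negligible truncation the information tends to 1/s_x^2
m_x, s_x = 0.2, 0.7
print(abs(fisher_info(m_x, s_x, 1e-9, 1e9) - 1 / s_x**2) < 1e-9)  # True
```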

Proof. For such a continuous random variable Ya↔b, the Fisher information measure about the unknown parameter mx has the form

IYa↔b(mx) = E[(∂ ln ga↔b(Y; mx, σx²)/∂mx)²] = ∫_a^b (∂ ln ga↔b(y; mx, σx²)/∂mx)² ga↔b(y; mx, σx²) dy =

= −∫_a^b (∂² ln ga↔b(y; mx, σx²)/∂mx²) ga↔b(y; mx, σx²) dy = −E[∂² ln ga↔b(Y; mx, σx²)/∂mx²].  (28)

Using (25), we obtain

ln ga↔b(y; mx, σx²) = −ln(√(2π) σx) + ln k(a, b) − ln y − (1/2) ((ln y − mx)/σx)²,

respectively

∂ ln ga↔b(y; mx, σx²)/∂mx = ∂ ln k(a, b)/∂mx + (ln y − mx)/σx² = (1/k(a, b)) ∂k(a, b)/∂mx + (ln y − mx)/σx².  (29)
Because

∂k(a, b)/∂mx = ∂/∂mx [1 / (Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx))] =
= −[∂Φ((ln b − mx)/σx)/∂mx − ∂Φ((ln a − mx)/σx)/∂mx] / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)]²,

and

∂Φ((ln a − mx)/σx)/∂mx = ∂/∂mx [(1/√(2π)) ∫_{−∞}^{(ln a − mx)/σx} exp{−t²/2} dt] =
= −(1/(√(2π) σx)) exp{−(1/2) ((ln a − mx)/σx)²} = −f(ln a; mx, σx²),  (30)

∂Φ((ln b − mx)/σx)/∂mx = −f(ln b; mx, σx²),  (31)

we obtain that

∂k(a, b)/∂mx = [f(ln b; mx, σx²) − f(ln a; mx, σx²)] / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)]²,  (32)

and for the relation (29) we obtain the final form

∂ ln ga↔b(y; mx, σx²)/∂mx = [f(ln b; mx, σx²) − f(ln a; mx, σx²)] / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)] + (ln y − mx)/σx².
Then, from this last relation, we obtain

∂² ln ga↔b(y; mx, σx²)/∂mx² = −1/σx² + (k(a, b)/σx²) [(ln b − mx) f(ln b; mx, σx²) − (ln a − mx) f(ln a; mx, σx²)] +

+ [f(ln b; mx, σx²) − f(ln a; mx, σx²)]² / [Φ((ln b − mx)/σx) − Φ((ln a − mx)/σx)]²,

when we have in view the relations (30), (31), as well as the following relations:

∂f(ln a; mx, σx²)/∂mx = ((ln a − mx)/σx²) f(ln a; mx, σx²),

∂f(ln b; mx, σx²)/∂mx = ((ln b − mx)/σx²) f(ln b; mx, σx²).
Because ∂² ln ga↔b(y; mx, σx²)/∂mx² does not depend on the variable y, from (28) we obtain precisely the Fisher information (27), if we have in view that ga↔b(y; mx, σx²) is a probability density function. This completes the proof of the theorem.
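The identity (28) used in the proof, namely that the expectation of the squared score equals the negative expected second derivative, can also be checked numerically. The sketch below (Python, standard library only; the finite-difference check and the parameter values are my own illustrative construction, not from the paper) computes E[(∂ ln ga↔b/∂mx)²] with a central finite difference and a midpoint rule, and compares it with the closed form (27):

```python
import math

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def log_g(y, m_x, s_x, a, b):
    # log-density of Y_{a<->b}, relations (25)-(26)
    k = 1 / (Phi((math.log(b) - m_x) / s_x) - Phi((math.log(a) - m_x) / s_x))
    z = (math.log(y) - m_x) / s_x
    return math.log(k) - math.log(math.sqrt(2 * math.pi) * s_x * y) - 0.5 * z * z

def info_numeric(m_x, s_x, a, b, eps=1e-5, n=20_000):
    """E[(d ln g / d m_x)^2], relation (28), via central differences."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        y = a + (i + 0.5) * h
        score = (log_g(y, m_x + eps, s_x, a, b)
                 - log_g(y, m_x - eps, s_x, a, b)) / (2 * eps)
        total += score**2 * math.exp(log_g(y, m_x, s_x, a, b)) * h
    return total

def info_closed(m_x, s_x, a, b):
    """Closed form (27)."""
    la, lb = math.log(a), math.log(b)
    dphi = Phi((lb - m_x) / s_x) - Phi((la - m_x) / s_x)
    f = lambda u: math.exp(-0.5 * ((u - m_x) / s_x) ** 2) / (math.sqrt(2 * math.pi) * s_x)
    return (1 / s_x**2 - ((f(lb) - f(la)) / dphi) ** 2
            - ((lb - m_x) * f(lb) - (la - m_x) * f(la)) / (s_x**2 * dphi))

m_x, s_x, a, b = 0.3, 0.6, 0.8, 5.0
print(abs(info_numeric(m_x, s_x, a, b) - info_closed(m_x, s_x, a, b)) < 1e-4)  # True
```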

References
[1] Mihoc, I., Fătu, C.I., Fisher's Information Measures and Truncated Normal Distributions (II), Seminarul de teoria celei mai bune aproximări, convexitate și optimizare, 1960-2000, "Omagiu memoriei academicianului Tiberiu Popoviciu la 25 de ani de la moarte", 26-29 octombrie 2000, pp. 153-155.
[2] Mihoc, I., Fătu, C.I., On the invariance property of the Fisher information for a truncated lognormal distribution (II) (to appear).

379

I. Mihoc , C.I. F˘atu - On the invariance property of the Fisher ...
[3] Mihoc, I., Fătu, C.I., Calculul probabilităților și statistică matematică, Casa de Editură Transilvania Pres, Cluj-Napoca, 2003.
[4] Schervish, M.J., Theory of Statistics, Springer-Verlag, New York, Inc., 1995.
Ion Mihoc
Faculty of Economics
Christian University "Dimitrie Cantemir", Cluj-Napoca, Romania
Teodor Mihali 56, 3400 Cluj-Napoca, Romania
email: imihoc@cantemir.cluj.astral.ro

Cristina Fătu
Faculty of Economics
Christian University "Dimitrie Cantemir", Cluj-Napoca, Romania
Teodor Mihali 56, 3400 Cluj-Napoca, Romania
email: cfatu@cantemir.cluj.astral.ro
