Table 4.3: The expected values with standard errors and MSE of T_{n;k,p,t} and U_{n;k,p,t} for normal inverse-Gaussian with 1000 replications for given target value μ_j^{k+2} = 1 and k ∈ {2,4,6,8}.

k  Target  n      E(T)      Std(T)     E(U)     Std(U)    MSE(T)       MSE(U)
2  1       3      2.0068    4.9227     0.9135   0.8235       25.2469   0.6856
           10     1.4249    2.8513     1.0316   0.4388        8.3103   0.1935
           20     1.5936    1.8951     1.1340   0.3718        3.9439   0.1562
           30     1.3677    1.0155     1.1641   0.2668        1.1664   0.0981
           60     1.0846    0.5341     1.1104   0.1856        0.2924   0.0466
           100    1.0819    0.5166     1.1102   0.1675        0.2735   0.0402
           300    1.0006    0.2570     1.0843   0.0919        0.0660   0.0156
           500    1.0356    0.1890     1.1374   0.0727        0.0370   0.0242
           1000   1.0156    0.1219     1.0116   0.0670        0.0151   0.0115
4  1       5      9.3836   30.0947     1.3196   1.1323      975.9726   1.3843
           10     4.6547   13.8643     1.2837   0.8153      205.5754   0.7452
           20     2.7487    5.1845     1.2963   0.6189       29.9373   0.4709
           30     1.4822    2.1166     1.1854   0.4572        4.7125   0.2434
           60     1.3095    1.1051     1.2560   0.3054        1.3170   0.1588
           100    1.1673    0.8467     1.2264   0.2671        0.7449   0.1226
           300    1.0849    0.4296     1.2542   0.1520        0.1918   0.0877
           500    1.0350    0.2839     1.0762   0.0914        0.0818   0.0416
           1000   1.0107    0.2080     1.0102   0.1137        0.0434   0.0337
6  1       7     20.4865  113.4633     0.9423   0.9984    13253.6508   1.0001
           10    12.1032   55.7841     1.0596   0.8610     3235.1488   0.7449
           20     3.4498   10.3056     1.0054   0.5933      112.2060   0.3520
           30     2.1422    3.2262     1.0246   0.4970       11.7130   0.2476
           60     1.8236    2.6064     1.0587   0.3744        7.4717   0.1436
           100    1.2468    1.1599     1.0129   0.2643        1.4062   0.1170
           300    1.0781    0.4953     1.0568   0.1596        0.2514   0.0929
           500    1.0815    0.4065     1.0230   0.1110        0.1719   0.0922
           1000   1.0207    0.2816     1.0204   0.0775        0.0798   0.0760
8  1       10    27.9651  106.4417     1.1645   1.2832    12056.9414   1.6737
           20    10.2639   47.3683     1.2127   0.9227     2329.5787   0.8674
           30     5.8903   14.2638     1.2634   0.8024      227.3707   0.7133
           60     1.8667    3.2137     1.1402   0.4894       11.0792   0.2504
           100    1.5251    1.8103     1.1340   0.3734        3.5530   0.1591
           300    1.2059    0.8122     1.1398   0.2275        0.7021   0.1571
           500    1.1817    0.6075     1.1210   0.3032        0.4021   0.1200
           1000   1.0325    0.3189     1.1125   0.0564        0.1027   0.0910
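As a sanity check on Table 4.3, each reported MSE should agree with the usual bias-variance decomposition MSE = Std^2 + (E − target)^2, up to rounding, assuming the tabulated Std is the empirical standard deviation of the estimator. A minimal check in Python, with the k = 2, n = 3 row hard-coded:

```python
# Bias-variance check: MSE ≈ Std^2 + (E - target)^2, target μ_j^{k+2} = 1
target = 1.0

# k = 2, n = 3 row of Table 4.3: (E, Std, reported MSE)
rows = {
    "T": (2.0068, 4.9227, 25.2469),
    "U": (0.9135, 0.8235, 0.6856),
}

for name, (mean, std, mse_reported) in rows.items():
    mse_decomposed = std**2 + (mean - target) ** 2
    print(f"{name}: reported {mse_reported:.4f}, decomposed {mse_decomposed:.4f}")
    assert abs(mse_decomposed - mse_reported) < 1e-2  # agree up to rounding
```

Both rows reproduce the reported MSE to within rounding error, which confirms the table's columns are internally consistent.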
4.2.3. Normal Poisson
Again, by fixing j = 1, we set several sample sizes n ranging from 5 to 1000 and generated 1000 samples for each n. However, for the normal-Poisson case, in order to see the effect of the proportion of zero values within X_j, we also consider small mean values for the Poisson component, because P(X_j = 0) = exp(−μ_j); we therefore set μ_j = 0.5, 1 and 5. We also used Theorem 4.1.3 for calculating the Bayesian estimator in this simulation; we assume that the parameters of
[Figure 4.3 here: bar graphs of mean square error versus sample size n ∈ {10, 20, 30, 60, 100, 300, 500, 1000} for the ML and UMVU estimators; panels (a) k = 2, (b) k = 4, (c) k = 6, (d) k = 8.]
Figure 4.3: Bargraphs of the mean square errors of T_{n;k,p,t} and U_{n;k,p,t} for normal inverse-Gaussian with μ_j = 1 and k ∈ {2,4,6,8}.
the prior distribution depend on the sample mean of the Poisson component, X̄_j, and the space dimension k. The results are presented in Tables 4.4-4.9.
From the expected values and the standard errors in Tables 4.4, 4.6 and 4.8, we can observe the performance of the ML (T_{n;k,p,t}), UMVU (U_{n;k,p,t}) and Bayesian (B_{n;k,p,t,α,β}) estimators of the generalized variance. The values of all estimators converge; however, U_{n;k,p,t}, which is the unbiased estimator, always approximates the target μ_j^k more accurately than T_{n;k,p,t} and B_{n;k,p,t,α,β} for small sample sizes n ≤ 30. Notice that U_{n;k,p,t} can be calculated only if nX̄_j ≥ k; for this reason U_{n;k,p,t} is not available for some n when μ_j = 0.5. In this case, for the other n where U_{n;k,p,t} is available, we can observe that B_{n;k,p,t,X̄_j,k} is closer to U_{n;k,p,t} than T_{n;k,p,t}. Thus, if μ_j is small and U_{n;k,p,t} is not available, we can get a good estimation
Table 4.4: The expected values with empirical standard errors of T_{n;k,p,t}, U_{n;k,p,t} and B_{n;k,p,t,α,β} for normal-Poisson from 1000 replications for given target value μ_j^k = 0.5^k with k ∈ {2,4,6,8}, α = X̄_j and β = k.
(T = T_{n;k,p,t}, U = U_{n;k,p,t}, B = B_{n;k,p,t,X̄_j,k})

k  Target      n      T        Std(T)   U        Std(U)   B        Std(B)
2  0.25        3      0.3930   0.5426   -        -        0.2515   0.3473
               10     0.2868   0.2421   0.2378   0.2212   0.2410   0.2034
               20     0.2652   0.1660   0.2407   0.1583   0.2416   0.1513
               30     0.2642   0.1374   0.2476   0.1332   0.2480   0.1290
               60     0.2598   0.0903   0.2514   0.0888   0.2515   0.0874
               100    0.2534   0.0712   0.2484   0.0705   0.2485   0.0698
               300    0.2495   0.0418   0.2478   0.0417   0.2478   0.0415
               500    0.2491   0.0313   0.2482   0.0313   0.2482   0.0312
               1000   0.2495   0.0221   0.2490   0.0221   0.2490   0.0221
4  0.0625      5      0.2999   0.8462   -        -        0.0592   0.1672
               10     0.1696   0.3115   0.0689   0.1750   0.0646   0.1187
               20     0.1089   0.1541   0.0658   0.1097   0.0638   0.0903
               30     0.0886   0.0894   0.0617   0.0689   0.0613   0.0618
               60     0.0774   0.0559   0.0642   0.0487   0.0639   0.0461
               100    0.0704   0.0403   0.0627   0.0370   0.0627   0.0358
               300    0.0643   0.0207   0.0618   0.0201   0.0618   0.0199
               500    0.0635   0.0158   0.0620   0.0156   0.0620   0.0155
               1000   0.0631   0.0115   0.0624   0.0114   0.0624   0.0113
6  0.015625    7      0.2792   1.2521   -        -        0.0152   0.0680
               10     0.1212   0.3918   0.0165   0.0858   0.0128   0.0414
               20     0.0427   0.0883   0.0124   0.0345   0.0119   0.0245
               30     0.0356   0.0539   0.0151   0.0271   0.0145   0.0220
               60     0.0236   0.0281   0.0149   0.0196   0.0147   0.0175
               100    0.0211   0.0183   0.0159   0.0145   0.0158   0.0137
               300    0.0173   0.0089   0.0157   0.0082   0.0157   0.0081
               500    0.0166   0.0068   0.0157   0.0064   0.0157   0.0064
               1000   0.0164   0.0044   0.0159   0.0043   0.0159   0.0043
8  0.00390625  10     0.0891   0.4110   -        -        0.0017   0.0080
               20     0.0384   0.1409   0.0054   0.0288   0.0038   0.0141
               30     0.0171   0.0383   0.0037   0.0107   0.0033   0.0075
               60     0.0081   0.0119   0.0035   0.0058   0.0034   0.0050
               100    0.0063   0.0082   0.0038   0.0053   0.0037   0.0048
               300    0.0045   0.0031   0.0038   0.0027   0.0038   0.0026
               500    0.0045   0.0024   0.0040   0.0022   0.0040   0.0021
               1000   0.0041   0.0015   0.0039   0.0014   0.0039   0.0014
by using B_{n;k,p,t,X̄_j,k}. For μ_j = 1 and μ_j = 5, the Bayesian estimator with prior distribution Gamma(X̄_j, k) produces estimates closer to the UMVU values than the ML method does. We could improve this Bayesian estimator further by using other parameter values for the prior distribution. From the MSEs in Tables 4.5, 4.7 and 4.9 we can conclude that all estimators are consistent.
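The consistency of the estimators can be illustrated with a small Monte Carlo sketch. For illustration only, we use the plug-in form X̄_j^k for the target μ_j^k (by invariance of maximum likelihood; the exact estimators T, U and B are defined earlier in the chapter), with μ_j = 0.5 and k = 2 as in Table 4.4:

```python
import numpy as np

# Monte Carlo sketch: the plug-in estimator mean(X)^k of mu^k has
# shrinking MSE as the sample size n grows (consistency).
rng = np.random.default_rng(1)
mu, k, reps = 0.5, 2, 1000
target = mu**k  # 0.25, matching the k = 2 target of Table 4.4

mse = {}
for n in (10, 100, 1000):
    # reps replications of: draw n Poisson(mu) values, estimate mu^k
    ests = np.array([rng.poisson(mu, n).mean() ** k for _ in range(reps)])
    mse[n] = float(np.mean((ests - target) ** 2))
    print(f"n={n:4d}  MSE={mse[n]:.5f}")

assert mse[1000] < mse[10]  # MSE decreases as n grows
```

The simulated MSEs shrink toward zero with n, matching the qualitative pattern of the ML columns in the tables above.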
In this simulation, the proportion of zero values in the samples increases