[Figure 4.1: four bargraph panels, (a) k = 2, (b) k = 4, (c) k = 6, (d) k = 8; x-axis: Sample Size (n = 10, 20, 30, 60, 100, 300, 500, 1000); y-axis: Mean Square Error; bars compare ML and UMVU.]

Figure 4.1: Bargraphs of the mean square errors of T_{n;k,p,t} and U_{n;k,p,t} for normal-gamma with µ_j = 1 and k ∈ {2, 4, 6, 8}.
There are more important performance characterizations for an estimator than unbiasedness alone. The MSE is perhaps the most important of them, since it captures both the bias and the variance of the estimator. For this reason, we compare the quality of the estimators using their MSE values. The results show that as n increases the MSEs of the estimates of the two methods become similar, and both produce almost the same result for n = 1000. The MSE values in the tables are presented graphically in Figure 4.1 and Figure 4.2, respectively. The figures clearly show that the performance of all estimators becomes more similar as the sample size increases. For small sample sizes, U_{n;k,p,t} always has smaller MSE, so in this situation U_{n;k,p,t} is preferable to T_{n;k,p,t}. In these figures we can also observe that the difference between U_{n;k,p,t} and T_{n;k,p,t} for small sample sizes grows as the dimension increases.
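As a reminder of how such MSE values are obtained from simulation output, the MSE of an estimator decomposes into squared bias plus variance, and both terms can be estimated directly from the replicated estimates. A minimal sketch, where the helper `empirical_mse` and the toy biased estimator are hypothetical and only illustrate the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_mse(estimates, target):
    """Empirical MSE of replicated estimates against a target value.

    Uses the decomposition MSE = (E[T] - target)^2 + Var(T),
    i.e. squared bias plus variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - target
    return bias**2 + estimates.var()

# Illustration: a deliberately biased estimator of a mean (bias = 0.1),
# each replicate based on a sample of size 30, over 1000 replications.
target = 1.0
reps = [rng.normal(loc=target, size=30).mean() + 0.1 for _ in range(1000)]
mse = empirical_mse(reps, target)
# mse is close to 0.1**2 (squared bias) + 1/30 (variance of the sample mean)
```

Equivalently, the same quantity is the mean of the squared deviations from the target, `np.mean((estimates - target)**2)`; for an unbiased estimator the squared-bias term vanishes and the MSE reduces to the variance.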
Table 4.2: The expected values with empirical standard errors and MSE of T_{n;k,p,t} and U_{n;k,p,t} for normal-gamma with 1000 replications for given target value µ_j^{k+1} = 5^{k+1} with k ∈ {2, 4, 6, 8}. (Columns abbreviate T = T_{n;k,p,t}, U = U_{n;k,p,t}.)

k   Target    n     E T       Std T     E U       Std U     MSE T     MSE U
2   125       3     1.46E+02  1.17E+02  6.58E+01  5.27E+01  1.41E+04  6.28E+03
              10    1.32E+02  5.68E+01  9.97E+01  4.30E+01  3.27E+03  2.49E+03
              20    1.27E+02  3.88E+01  1.10E+02  3.36E+01  1.51E+03  1.34E+03
              30    1.27E+02  3.20E+01  1.16E+02  2.91E+01  1.03E+03  9.33E+02
              60    1.25E+02  2.27E+01  1.19E+02  2.16E+01  5.16E+02  5.07E+02
              100   1.26E+02  1.67E+01  1.22E+02  1.62E+01  2.79E+02  2.73E+02
              300   1.25E+02  9.82E+00  1.24E+02  9.72E+00  9.65E+01  9.56E+01
              500   1.25E+02  7.64E+00  1.25E+02  7.59E+00  5.84E+01  5.78E+01
              1000  1.25E+02  5.32E+00  1.25E+02  5.30E+00  2.83E+01  2.83E+01
4   3125      5     4.55E+03  5.13E+03  9.41E+02  1.06E+03  2.84E+07  5.89E+06
              10    3.83E+03  2.90E+03  1.60E+03  1.21E+03  8.89E+06  3.79E+06
              20    3.45E+03  1.77E+03  2.17E+03  1.11E+03  3.25E+06  2.16E+06
              30    3.39E+03  1.37E+03  2.47E+03  9.95E+02  1.94E+06  1.42E+06
              60    3.20E+03  9.56E+02  2.72E+03  8.12E+02  9.20E+05  8.23E+05
              100   3.14E+03  6.98E+02  2.84E+03  6.32E+02  4.87E+05  4.78E+05
              300   3.15E+03  4.17E+02  3.05E+03  4.03E+02  1.74E+05  1.69E+05
              500   3.14E+03  3.18E+02  3.07E+03  3.12E+02  1.02E+05  9.99E+04
              1000  3.14E+03  2.18E+02  3.11E+03  2.16E+02  4.79E+04  4.69E+04
6   78125     7     1.51E+05  2.20E+05  1.44E+04  2.10E+04  5.38E+10  4.51E+09
              10    1.15E+05  1.34E+05  2.00E+04  2.33E+04  1.94E+10  3.93E+09
              20    9.86E+04  7.79E+04  3.81E+04  3.01E+04  6.49E+09  2.51E+09
              30    8.83E+04  5.09E+04  4.59E+04  2.64E+04  2.69E+09  1.74E+09
              60    8.55E+04  3.70E+04  6.10E+04  2.64E+04  1.43E+09  9.93E+08
              100   8.13E+04  2.44E+04  6.62E+04  1.98E+04  6.04E+08  5.35E+08
              300   7.87E+04  1.40E+04  7.34E+04  1.31E+04  1.97E+08  1.94E+08
              500   7.83E+04  1.11E+04  7.51E+04  1.06E+04  1.23E+08  1.22E+08
              1000  7.88E+04  7.92E+03  7.71E+04  7.75E+03  6.31E+07  6.11E+07
8   1953125   10    3.52E+06  6.52E+06  1.99E+05  3.69E+05  4.49E+13  3.21E+12
              20    2.79E+06  3.42E+06  5.70E+05  6.98E+05  1.24E+13  2.40E+12
              30    2.44E+06  2.01E+06  8.13E+05  6.68E+05  4.27E+12  1.75E+12
              60    2.15E+06  1.16E+06  1.21E+06  6.52E+05  1.37E+12  9.78E+11
              100   2.17E+06  9.34E+05  1.53E+06  6.58E+05  9.20E+11  6.11E+11
              300   2.01E+06  4.79E+05  1.79E+06  4.25E+05  2.33E+11  2.08E+11
              500   1.98E+06  3.52E+05  1.84E+06  3.28E+05  1.25E+11  1.20E+11
              1000  1.96E+06  2.55E+05  1.89E+06  2.46E+05  6.53E+10  6.44E+10
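Each row of Table 4.2 summarizes 1000 replicated estimates by their empirical mean, standard error, and MSE. A minimal sketch of how such a summary row can be produced; the `summary_row` helper is hypothetical, and the ML variance of a univariate normal sample stands in as a placeholder for the generalized variance estimators T_{n;k,p,t} and U_{n;k,p,t}:

```python
import numpy as np

rng = np.random.default_rng(42)

def summary_row(estimator, sampler, n, target, replications=1000):
    """Return (mean, std, mse) of `estimator` over Monte Carlo replications."""
    est = np.array([estimator(sampler(n)) for _ in range(replications)])
    mean = est.mean()
    std = est.std(ddof=1)                   # empirical standard error
    mse = (mean - target) ** 2 + est.var()  # squared bias + variance
    return mean, std, mse

# Placeholder estimator: ML variance of an N(0, 4) sample (target sigma^2 = 4).
sampler = lambda n: rng.normal(0.0, 2.0, size=n)
ml_var = lambda x: x.var()  # divisor n, the ML estimator

for n in (10, 100, 1000):
    mean, std, mse = summary_row(ml_var, sampler, n, target=4.0)
    print(f"n={n:5d}  E={mean:.3f}  Std={std:.3f}  MSE={mse:.3f}")
```

As in the tables, the MSE shrinks and the mean approaches the target as n grows; only the multivariate estimators and the normal-gamma sampler of the actual study are replaced here by simple stand-ins.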
4.2.2. Normal inverse-Gaussian
We generated the normal inverse-Gaussian model in the same way as the normal-gamma model. Table 4.3 shows the expected values of the generalized variance estimates with their standard errors in parentheses and the mean square error values of both the ML and UMVU methods for the normal inverse-Gaussian case. By setting µ_j = 1 and using equation 3.11, we
[Figure 4.2: four bargraph panels, (a) k = 2, (b) k = 4, (c) k = 6, (d) k = 8; x-axis: Sample Size (n = 10, 20, 30, 60, 100, 300, 500, 1000); y-axis: Mean Square Error; bars compare ML and UMVU.]

Figure 4.2: Bargraphs of the mean square errors of T_{n;k,p,t} and U_{n;k,p,t} for normal-gamma with µ_j = 5 and k ∈ {2, 4, 6, 8}.
have the generalized variance of the distribution: µ_j^{k+2} = 1. Similar to the results of generalized variance estimation for normal-gamma, the results for normal inverse-Gaussian show that the UMVU method produced better estimates than the ML method for small sample sizes. As the sample size n increases, the expected values of the estimates of the two methods become closer to the target value, and both produce almost the same result for n = 1000. The MSE values in Table 4.3 are presented as bargraphs in Figure 4.3.
Figure 4.3 clearly displays the behavior of the MSE of T_{n;k,p,t} and U_{n;k,p,t}. For small sample sizes n ≤ 30, U_{n;k,p,t} is preferable to T_{n;k,p,t}, and the difference between the two methods for n ≤ 30 grows as k increases.
Table 4.3: The expected values with standard errors and MSE of T_{n;k,p,t} and U_{n;k,p,t} for normal inverse-Gaussian with 1000 replications for given target value µ_j^{k+2} = 1 and k ∈ {2, 4, 6, 8}. (Columns abbreviate T = T_{n;k,p,t}, U = U_{n;k,p,t}.)

k   Target  n     E T       Std T     E U     Std U   MSE T       MSE U
2   1       3     2.0068    4.9227    0.9135  0.8235  25.2469     0.6856
            10    1.4249    2.8513    1.0316  0.4388  8.3103      0.1935
            20    1.5936    1.8951    1.1340  0.3718  3.9439      0.1562
            30    1.3677    1.0155    1.1641  0.2668  1.1664      0.0981
            60    1.0846    0.5341    1.1104  0.1856  0.2924      0.0466
            100   1.0819    0.5166    1.1102  0.1675  0.2735      0.0402
            300   1.0006    0.2570    1.0843  0.0919  0.0660      0.0156
            500   1.0356    0.1890    1.1374  0.0727  0.0370      0.0242
            1000  1.0156    0.1219    1.0116  0.0670  0.0151      0.0115
4   1       5     9.3836    30.0947   1.3196  1.1323  975.9726    1.3843
            10    4.6547    13.8643   1.2837  0.8153  205.5754    0.7452
            20    2.7487    5.1845    1.2963  0.6189  29.9373     0.4709
            30    1.4822    2.1166    1.1854  0.4572  4.7125      0.2434
            60    1.3095    1.1051    1.2560  0.3054  1.3170      0.1588
            100   1.1673    0.8467    1.2264  0.2671  0.7449      0.1226
            300   1.0849    0.4296    1.2542  0.1520  0.1918      0.0877
            500   1.0350    0.2839    1.0762  0.0914  0.0818      0.0416
            1000  1.0107    0.2080    1.0102  0.1137  0.0434      0.0337
6   1       7     20.4865   113.4633  0.9423  0.9984  13253.6508  1.0001
            10    12.1032   55.7841   1.0596  0.8610  3235.1488   0.7449
            20    3.4498    10.3056   1.0054  0.5933  112.2060    0.3520
            30    2.1422    3.2262    1.0246  0.4970  11.7130     0.2476
            60    1.8236    2.6064    1.0587  0.3744  7.4717      0.1436
            100   1.2468    1.1599    1.0129  0.2643  1.4062      0.1170
            300   1.0781    0.4953    1.0568  0.1596  0.2514      0.0929
            500   1.0815    0.4065    1.0230  0.1110  0.1719      0.0922
            1000  1.0207    0.2816    1.0204  0.0775  0.0798      0.0760
8   1       10    27.9651   106.4417  1.1645  1.2832  12056.9414  1.6737
            20    10.2639   47.3683   1.2127  0.9227  2329.5787   0.8674
            30    5.8903    14.2638   1.2634  0.8024  227.3707    0.7133
            60    1.8667    3.2137    1.1402  0.4894  11.0792     0.2504
            100   1.5251    1.8103    1.1340  0.3734  3.5530      0.1591
            300   1.2059    0.8122    1.1398  0.2275  0.7021      0.1571
            500   1.1817    0.6075    1.1210  0.3032  0.4021      0.1200
            1000  1.0325    0.3189    1.1125  0.0564  0.1027      0.0910
4.2.3. Normal Poisson
Again, fixing j = 1, we set several sample sizes n varying from 5 to 1000 and generated 1000 samples for each n. However, for the normal Poisson case, to see the effect of the proportion of zero values within X_j, we also consider small mean values for the Poisson component, because P(X_j = 0) = exp(−µ_j); we therefore set µ_j = 0.5, 1 and 5. We also used Theorem 4.1.3 for calculating the Bayesian estimator in this simulation; we assume that the parameters of