
In particular, $C_{n,k,p,t}(x)$ is

$$
C_{n,k,p,t}(x)=
\begin{cases}
n^{-k}\,x_j(x_j-1)(x_j-2)\cdots(x_j-k+1),\ x_j\geq k, & \text{for normal Poisson}\\[4pt]
t^k\,\Gamma(nt)\,[\Gamma(nt+k+1)]^{-1}\,x_j^{k+1}, & \text{for normal gamma}\\[4pt]
t^k\,2^{-(1+k)/2}\,\big[\Gamma\!\big(\tfrac{1+k}{2}\big)\big]^{-1}\,x_j^{3/2}\exp\!\Big\{\tfrac{(nt)^2}{2x_j}\Big\}\times\displaystyle\int_0^{x_j} y_j^{(k-1)/2}(x_j-y_j)^{-3/2}\exp\!\Big\{-y_j-\tfrac{(nt)^2}{2(x_j-y_j)}\Big\}\,dy_j, & \text{for normal inverse-Gaussian.}
\end{cases}
$$

Proof. From (4.3) we write

$$C_{n,k,p,t}(x)=\frac{\nu_{n;k;p,t}(dx)}{\nu_{p,nt}(dx)}.$$

Following Kokonendji and Pommeret (2007, Theorem 1) and using (3.13) we have

$$K_{\nu_{n;k;p,t}}(\theta)=nK_{\nu_{p,t}}(\theta)+\log\det K''_{\nu_{p,t}}(\theta)=K_{\nu_{p,nt}}(\theta)+K_{\rho(\nu_{p,t})}(\theta)$$

for all $\theta\in\Theta(\nu_{p,1})$. Then it immediately follows that $\nu_{n;k;p,t}=\nu_{p,nt}\ast\rho(\nu_{p,t})$ is the convolution product of $\nu_{p,nt}$ by $\rho(\nu_{p,t})$. The expression of $C_{n,k,p,t}(x)$ is then established by considering each group of the NST models with respect to the different values of $p\geq 1$ and using (3.13).

Indeed, for $p=1$ and fixing $j=1$ we have $\rho(\nu_{1,t})=t^k\big(\delta_{e_1}\bigotimes_{j=2}^{k}\xi_{1}\big)^{\ast k}$ and

$$
C_{n,k,1,t}(x)=t^k\,\frac{\big[\nu_{1,nt}\ast\big(\delta_{e_1}\bigotimes_{j=2}^{k}\xi_{1}\big)^{\ast k}\big](dx)}{\nu_{1,nt}(dx)}
=t^k\,\frac{\xi_{1,nt}(x_1-k)}{\xi_{1,nt}(x_1)}\prod_{j=2}^{k}\int_{\mathbb{R}}\frac{\xi_{x_1-k}(x_j-y_j)\,\xi_{k}(y_j)}{\xi_{x_1}(x_j)}\,dy_j
$$
$$
=t^k\,\frac{(nt)^{x_1-k}\exp(-nt)/(x_1-k)!}{(nt)^{x_1}\exp(-nt)/x_1!}\times 1
=\frac{x_1(x_1-1)\cdots(x_1-k+1)}{n^k},
$$

because for fixed $j=2,\ldots,k$ the expression

$$W(j,x_1,k)=\int_{\mathbb{R}}\frac{\xi_{x_1-k}(x_j-y_j)\,\xi_{k}(y_j)}{\xi_{x_1}(x_j)}\,dy_j \tag{4.4}$$

is finally

$$
W(j,x_1,k)=\int_{\mathbb{R}}\frac{\frac{1}{\sqrt{2\pi(x_1-k)}}\exp\Big\{\frac{-(x_j-y_j)^2}{2(x_1-k)}\Big\}\,\frac{1}{\sqrt{2\pi k}}\exp\Big\{\frac{-y_j^2}{2k}\Big\}}{\frac{1}{\sqrt{2\pi x_1}}\exp\Big\{\frac{-x_j^2}{2x_1}\Big\}}\,dy_j
$$
$$
=\frac{\sqrt{x_1}}{\sqrt{2\pi}\sqrt{k(x_1-k)}}\exp\Big\{\frac{x_j^2}{2x_1}\Big\}\int_{\mathbb{R}}\exp\Bigg\{\frac{-y_j^2+\frac{2kx_j}{x_1}y_j-\frac{kx_j^2}{x_1}}{2\,\frac{k(x_1-k)}{x_1}}\Bigg\}\,dy_j
=\exp(0)\int_{\mathbb{R}}\xi_{\frac{k(x_1-k)}{x_1}}\!\Big(y_j-\frac{kx_j}{x_1}\Big)\,dy_j=1.
$$

Let $p=2$; then $\rho(\nu_{2,t})=t^k\,\nu_{2,\eta(2,k)}$, and one obtains

$$
C_{n,k,2,t}(x)=t^k\,\frac{(\nu_{2,nt}\ast\nu_{2,\eta(2,k)})(dx)}{\nu_{2,nt}(dx)}
=t^k\,\frac{\nu_{2,nt+\eta(2,k)}(dx)}{\nu_{2,nt}(dx)}
=t^k\,\frac{\xi_{2,nt+\eta(2,k)}(dx_1)\prod_{j=2}^{k}\xi_{x_1}(dx_j)}{\xi_{2,nt}(dx_1)\prod_{j=2}^{k}\xi_{x_1}(dx_j)}
$$
$$
=t^k\,\frac{x_1^{nt+\eta(2,k)-1}}{\Gamma[nt+\eta(2,k)]}\times\frac{\Gamma(nt)}{x_1^{nt-1}}
=t^k\,\frac{\Gamma(nt)}{\Gamma[nt+\eta(2,k)]}\,x_1^{\eta(2,k)},
$$

with the modified Lévy measure parameter $\eta(2,k)=k+1$.

For $p=3$ we have $\rho(\nu_{3,t})=t^k\,2^{-\eta(3,k)}\,\nu_{2,\eta(3,k)}$ with $\eta(3,k)=(1+k)/2$; then

$$
C_{n,k,3,t}(x)=t^k\,2^{-\eta(3,k)}\,\frac{(\nu_{3,nt}\ast\nu_{2,\eta(3,k)})(dx)}{\nu_{3,nt}(dx)}
=t^k\,2^{-(1+k)/2}\int_{\mathbb{R}^k}\frac{\nu_{3,nt}(x-y)\,\nu_{2,\eta(3,k)}(y)}{\nu_{3,nt}(x)}\,dy
$$
$$
=t^k\,2^{-(1+k)/2}\int_0^{x_1}\frac{\xi_{3,nt}(x_1-y_1)\,\xi_{2,\eta(3,k)}(y_1)}{\xi_{3,nt}(x_1)}\Bigg(\prod_{j=2}^{k}\int_{\mathbb{R}}\frac{\xi_{x_1-y_1}(x_j-y_j)\,\xi_{y_1}(y_j)}{\xi_{x_1}(x_j)}\,dy_j\Bigg)\,dy_1
$$
$$
=t^k\,2^{-(1+k)/2}\int_0^{x_1}\frac{\xi_{3,nt}(x_1-y_1)\,\xi_{2,\eta(3,k)}(y_1)}{\xi_{3,nt}(x_1)}\times 1\;dy_1
=t^k\,2^{-(1+k)/2}\int_0^{x_1}\frac{\frac{nt}{\sqrt{2\pi(x_1-y_1)^3}}\exp\Big\{\frac{-(nt)^2}{2(x_1-y_1)}\Big\}\,\frac{y_1^{\eta(3,k)-1}}{\Gamma[\eta(3,k)]}\exp(-y_1)}{\frac{nt}{\sqrt{2\pi x_1^3}}\exp\Big\{\frac{-(nt)^2}{2x_1}\Big\}}\,dy_1
$$
$$
=t^k\,2^{-(1+k)/2}\,\frac{x_1^{3/2}}{\Gamma[\eta(3,k)]}\exp\Big\{\frac{(nt)^2}{2x_1}\Big\}\int_0^{x_1}\frac{y_1^{\eta(3,k)-1}}{(x_1-y_1)^{3/2}}\exp\Big\{-y_1-\frac{(nt)^2}{2(x_1-y_1)}\Big\}\,dy_1.
$$
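To make the three closed forms above concrete, the following R sketch evaluates the correction term $C_{n,k,p,t}(x_j)$; the function name C_nkpt and its structure are ours rather than code from the source, and the inverse-Gaussian case is handled by plain numerical quadrature instead of a closed form.

# Sketch (our own naming): correction term C_{n,k,p,t}(x_j) for the three NST cases.
C_nkpt <- function(xj, n, k, t, p) {
  if (p == 1) {                       # normal Poisson: falling factorial over n^k
    if (xj < k) return(0)             # the formula requires x_j >= k
    prod(xj - 0:(k - 1)) / n^k
  } else if (p == 2) {                # normal gamma
    t^k * gamma(n * t) / gamma(n * t + k + 1) * xj^(k + 1)
  } else if (p == 3) {                # normal inverse-Gaussian: numerical integral
    integrand <- function(y)
      y^((k - 1) / 2) * (xj - y)^(-3 / 2) * exp(-y - (n * t)^2 / (2 * (xj - y)))
    t^k * 2^(-(1 + k) / 2) / gamma((1 + k) / 2) * xj^(3 / 2) *
      exp((n * t)^2 / (2 * xj)) *
      integrate(integrand, lower = 0, upper = xj)$value
  } else stop("p must be 1, 2 or 3")
}

For instance, C_nkpt(xj = 5, n = 10, k = 3, t = 1, p = 1) returns $5\cdot4\cdot3/10^3 = 0.06$, matching the normal Poisson formula directly.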

4.1.3. Bayesian Estimator

We introduce the Bayesian estimator of the normal-Poisson generalized variance using the conjugate prior of the Poisson distribution, namely the gamma distribution.

Theorem 4.1.3. Let $X_1,\cdots,X_n$ be i.i.d. random vectors with distribution $P(\mu,G_{1,t;j})\in G_{\nu_{1,t;j}}$, a normal Poisson model. For $t>0$ and $j\in\{1,2,\ldots,k\}$ fixed, under the assumption of a gamma prior distribution on $\mu_j$ with parameters $\alpha>0$ and $\beta>0$, the Bayesian estimator of $\det V_{F_{t;j}}(\mu)=\mu_j^k$ is given by

$$B_{n,t;j,\alpha,\beta}=\left(\frac{\alpha+n\bar{X}_j}{\beta+n}\right)^{k}.\tag{4.5}$$

Proof. Let $X_{j1},\ldots,X_{jn}$ given $\mu_j$ be Poisson distributed with mean $\mu_j$; then the probability mass function is given by

$$p(x_{ji}\mid\mu_j)=\frac{\mu_j^{x_{ji}}}{x_{ji}!}\exp(-\mu_j),\quad\forall x_{ji}\in\mathbb{N}.$$

Assuming that $\mu_j$ follows gamma$(\alpha,\beta)$, the prior probability density function of $\mu_j$ is written as

$$f(\mu_j;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,\mu_j^{\alpha-1}\exp(-\beta\mu_j),\quad\forall\mu_j>0,$$

with $\Gamma(\alpha):=\int_0^{\infty}x^{\alpha-1}e^{-x}\,dx$. Using the classical Bayes theorem, the posterior distribution of $\mu_j$ given an observation $x_{ji}$ can be expressed as

$$f(\mu_j\mid x_{ji};\alpha,\beta)=\frac{p(x_{ji}\mid\mu_j)\,f(\mu_j;\alpha,\beta)}{\int_{\mu_j}p(x_{ji}\mid\mu_j)\,f(\mu_j;\alpha,\beta)\,d\mu_j}=\frac{(\beta+1)^{\alpha+x_{ji}}}{\Gamma(\alpha+x_{ji})}\,\mu_j^{\alpha+x_{ji}-1}\exp\{-(\beta+1)\mu_j\},$$

which is the gamma density with parameters $\alpha'=\alpha+x_{ji}$ and $\beta'=\beta+1$. Then, with the random sample $X_{j1},\ldots,X_{jn}$, the posterior is gamma$(\alpha+n\bar{X}_j,\beta+n)$. Since the Bayesian estimator of $\mu_j$ is given by the expected value of the posterior distribution, i.e. $(\alpha+n\bar{X}_j)/(\beta+n)$, this concludes the proof.
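As a quick illustration of (4.5), the following R sketch computes the Bayesian estimator from a simulated Poisson sample; the function name bayes_gv and the chosen prior values are our own illustration, not part of the source.

# Sketch (our own naming): Bayesian estimator (4.5) of the normal-Poisson
# generalized variance mu_j^k under a gamma(alpha, beta) prior.
bayes_gv <- function(xj, k, alpha, beta) {
  n <- length(xj)                              # sample size
  ((alpha + n * mean(xj)) / (beta + n))^k      # posterior mean of mu_j, raised to k
}

# Example: Poisson(mu_j = 2) data, k = 3, gamma(1, 1) prior; true value 2^3 = 8.
set.seed(1)
xj <- rpois(50, lambda = 2)
bayes_gv(xj, k = 3, alpha = 1, beta = 1)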

4.2. Simulation Study

We implemented the generalized variance estimators for the normal gamma, normal inverse-Gaussian and normal Poisson models presented in the previous section in a simulation study, in order to examine the numerical behavior of the estimators. Fixing $t=1$, we carried out a Monte Carlo simulation using the R software (R Development Core Team, 2016); we considered several parameter configurations of $n$ and $k$ to see the effects of $n$ and $k$ on the generalized variance estimations. For each configuration we generated 1000 samples of dimension $k=2,4,6,8$. We considered sample sizes varying from $n=3$ to $n=1000$, and we fixed $j=1$. To evaluate the estimations we calculated the MSE of the generalized variance estimates. We report the numerical results for each model, i.e. the empirical expected value of the estimators with its standard error (Se) and the empirical mean square error (MSE). The data simulation procedure is described in the following steps (a condensed R sketch follows the list):

1. Fix $k=2$; randomly generate $n=k+1$ observations from the univariate gamma distribution $X_j$ with mean $\mu_j=1$, using gamma parameters scale = shape = 1.
2. For each $X_j=x_j$, generate the corresponding i.i.d. normal components from the $(k-1)$-variate normal distribution with mean zero and variance $X_j=x_j$; we obtain the normal components of the NST model, $X_j^c$, such that $X_j^c\sim N_{k-1}(0,X_j I_{k-1})$.
3. Combine $X_j$ and $X_j^c$ to obtain a $k$-variate normal gamma random sample, denoted by $X$.
4. Calculate the generalized variance of $X$ using the ML and UMVU estimators (and also the Bayesian estimator for $p=1$, using $\alpha=\bar{X}_j$ and $\beta=k$). Keep the generalized variance estimates as $T_{n;k,p,1}$, $U_{n;k,p,1}$ and $B_{n;k,p,1,\alpha,\beta}$.
5. Repeat steps 1-4 1000 times; we obtain 1000 generalized variance values for each estimator.
6. Calculate the expected values and the standard errors of the generalized variance estimates resulting from each estimator using the following formulas:
$$E(\hat{\psi})=\frac{1}{1000}\sum_{i=1}^{1000}\hat{\psi}_i,\qquad Se(\hat{\psi})=\sqrt{Var(\hat{\psi})}=\sqrt{\frac{1}{999}\sum_{i=1}^{1000}\big(\hat{\psi}_i-E(\hat{\psi})\big)^2},$$
where $\hat{\psi}$ is the estimate of $\det V_{G_{p,t;j}}(\mu)$ using the ML, UMVU and Bayesian estimators.
7. Calculate the mean square error (MSE) of each method over the 1000 data sets using the following formula:
$$MSE(\hat{\psi})=\big[E(\hat{\psi})-\det V_{G_{p,t;j}}(\mu)\big]^2+[Se(\hat{\psi})]^2.$$
8. Repeat steps 1-7 for $n=10,20,30,60,100,300,500$ and $1000$.
9. Repeat steps 1-8 for the other fixed values of $k$, where $k\in\{2,4,6,8\}$.
10. Repeat steps 1-9 for $\mu_j=5$.
11. Repeat steps 1-9 using the normal inverse-Gaussian model.
12. Repeat steps 1-10 using the normal Poisson model, with an additional small mean value $\mu_j=0.5$.

We provide scatterplots of some generated data from the normal gamma, normal inverse-Gaussian and normal Poisson models for the bivariate and trivariate cases in Appendix C.
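The condensed R sketch below covers steps 1-7 for the normal gamma model with the ML estimator only; the helper gen_sample, the seed, and the plug-in form $(\bar{X}_1)^{k+1}$ assumed for $T_{n;k,2,1}$ are our own choices for illustration, not code from the source.

# Steps 1-3 (our own helper): one k-variate normal gamma sample of size n.
gen_sample <- function(n, k) {
  xj <- rgamma(n, shape = 1, scale = 1)        # step 1: gamma part, mean mu_j = 1
  Xc <- matrix(rnorm(n * (k - 1), mean = 0,    # step 2: N(0, x_j) components,
               sd = sqrt(rep(xj, k - 1))),     #         row i uses variance xj[i]
               nrow = n)
  cbind(xj, Xc)                                # step 3: combine into k variates
}

set.seed(123)                                  # t = 1 is fixed throughout
n <- 10; k <- 2; R <- 1000
psi_true <- 1                                  # det V = mu_j^{k+1} = 1 for mu_j = 1
tml <- replicate(R, {                          # steps 4-5: Monte Carlo replications
  X <- gen_sample(n, k)
  mean(X[, 1])^(k + 1)                         # assumed ML plug-in (bar X_1)^{k+1}
})
E_psi  <- mean(tml)                            # step 6: empirical mean
Se_psi <- sd(tml)                              # step 6: Se with denominator 999
MSE    <- (E_psi - psi_true)^2 + Se_psi^2      # step 7
round(c(E = E_psi, Se = Se_psi, MSE = MSE), 4)

The UMVU and Bayesian columns of the tables are obtained the same way, replacing the plug-in line with $U_{n;k,p,1}$ or $B_{n;k,p,1,\alpha,\beta}$.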

4.2.1. Normal gamma

Table 4.1 and Table 4.2 show the expected values of the generalized variance estimates, with their standard errors in parentheses, together with the mean square error values of both the ML and UMVU methods for the normal gamma model. Setting $\mu_j=1,5$ and using equation (3.11), the generalized variances of the distribution are $\mu_j^{k+1}=1$ and $\mu_j^{k+1}=5^{k+1}$, respectively. From the results in Table 4.1 and Table 4.2 we can observe different performances of the ML estimator $T_{n;k,p,t}$ and the UMVU estimator $U_{n;k,p,t}$ of the generalized variance. The values of $T_{n;k,p,t}$ converge while the values of