
For $S = \{i_1, i_2, \dots, i_\ell\}$, with $1 \leq i_1 < i_2 < \dots < i_\ell \leq k$, a non-empty subset of $\{1,2,\dots,k\}$, and $\tau_S : \mathbb{R}^k \to \mathbb{R}^\ell$ the map defined by $\tau_S(\mathbf{x}) = (x_{i_1}, x_{i_2}, \dots, x_{i_\ell})^\top$, we define $\psi_S$ as the image measure of
\[
H_\ell(d\mathbf{x}_1, \dots, d\mathbf{x}_\ell) = \frac{1}{\ell!} \left( \det\left[ \tau_S \mathbf{x}_1 \cdots \tau_S \mathbf{x}_\ell \right] \right)^2 \psi(d\mathbf{x}_1) \cdots \psi(d\mathbf{x}_\ell)
\]
under the map $\psi_\ell : (\mathbb{R}^k)^\ell \to \mathbb{R}^k$, $(\mathbf{x}_1, \dots, \mathbf{x}_\ell) \mapsto \mathbf{x}_1 + \mathbf{x}_2 + \dots + \mathbf{x}_\ell$. By Proposition 5.2.3 and Expression (5.26), the modified Lévy measure $\rho(\nu)$ in (5.1.1) can be expressed as
\[
\rho(\nu_j) = \det(\Lambda)\,\delta_0 + \sum_{\emptyset \neq S \subset \{1,2,\dots,k\}} \det(\Lambda_{S'})\, \psi_S, \tag{5.27}
\]
where $\Lambda$ is a diagonal representation of $\Sigma$ in an orthonormal basis $\mathbf{e} = (\mathbf{e}_i)_{i=1,\dots,k}$ (see, e.g., Hassairi, 1999, page 384). Since $\Sigma$ is the Brownian part, it corresponds to the $k-1$ normal components on the right-hand side of (5.25); this implies $r = \operatorname{rank}\Sigma = k-1$ and $\det\Sigma = 0$. Therefore $\det\Lambda = 0$, with $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_k)$ such that $\lambda_j = 0$ and $\lambda_\ell > 0$ for all $\ell \neq j$. For all non-empty subsets $S$ of $\{1,2,\dots,k\}$ there exist real numbers $\alpha_S \geq 0$ such that
\[
\det(\Lambda_{S'})\, \psi_S = \Big( \prod_{i \notin S} \lambda_i \Big) \psi_S = \alpha_S \left[ \delta_{\mathbf{e}_j} \ast N(0,1;\mathbf{e}^c_j) \right]^{\ast k}, \tag{5.28}
\]
where $\mathbf{e}^c_j = (\mathbf{e}_1, \dots, \mathbf{e}_{j-1}, \mathbf{e}_{j+1}, \dots, \mathbf{e}_k)$ denotes the orthonormal basis induced from $\mathbf{e}$ by removing the component $\mathbf{e}_j$; i.e., $k-1$ is the dimension of $\mathbf{e}^c_j$. With respect to Kokonendji and Masmoudi (2006, Lemma 7), to make the measure $\nu$ of (5.28) precise, it is easy to see that $S = \{j\}$ is a singleton (i.e., a set with exactly one element) such that, for $\mathbf{x} = x_1\mathbf{e}_1 + \dots + x_k\mathbf{e}_k$,
\[
x_j^2\, \psi(d\mathbf{x}) = \beta\, \delta_{a\mathbf{e}_j}(d\mathbf{x}),
\]
with $\beta > 0$ and $a \neq 0$. Consequently, the complementary set is $S' = \{1,2,\dots,k\} \setminus \{j\}$. So, from (5.28), we obtain the $k$th convolution power of a single Poisson component at the $j$th coordinate $\mathbf{e}_j$ and a $(k-1)$-variate standard normal. That means
\[
K''_{\nu_j}(\boldsymbol{\theta}) = K_{\nu_j}(\boldsymbol{\theta}) \left( \widetilde{\boldsymbol{\theta}}^c_j \widetilde{\boldsymbol{\theta}}^{c\top}_j + \mathbf{I}^j_k \right),
\]
with the notations of (5.4). Let $B(\boldsymbol{\theta}) = \exp\big( \theta_j + \frac{1}{2}\sum_{\ell \neq j} \theta_\ell^2 \big)$ from (5.25). Since one checks that $\partial^2 \big( K_{\nu_j} - B \big)(\boldsymbol{\theta}) / \partial\theta_i^2 = 0$ for all $i = 1,\dots,k$, Proposition 5.2.4 implies that $K_{\nu_j} - B$ is an affine function on $\mathbb{R}^k$, and therefore
\[
K_{\nu_j}(\boldsymbol{\theta}) = \exp\Big( \theta_j + \frac{1}{2}\sum_{\ell \neq j} \theta_\ell^2 \Big) + \mathbf{u}^\top \boldsymbol{\theta} + b,
\]
for $(\mathbf{u}, b) \in \mathbb{R}^k \times \mathbb{R}$. Hence $F_j = F(\nu_j)$ is of normal-Poisson$_j$ type. This completes the proof of the theorem.
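As a sanity check of the displayed Hessian identity, we adopt a reading of the notations of (5.4), which are not reproduced here: $\widetilde{\boldsymbol{\theta}}^c_j$ is taken to be $\boldsymbol{\theta}$ with its $j$th coordinate replaced by $1$, and $\mathbf{I}^j_k$ the $k \times k$ identity matrix with a zero in entry $(j,j)$. Under that assumed reading, the following sympy sketch verifies symbolically that $B''(\boldsymbol{\theta}) = B(\boldsymbol{\theta})\big(\widetilde{\boldsymbol{\theta}}^c_j \widetilde{\boldsymbol{\theta}}^{c\top}_j + \mathbf{I}^j_k\big)$ for $k = 3$ and $j = 1$:

```python
import sympy as sp

# Check B''(theta) = B(theta) * (theta_tilde theta_tilde^T + I_k^j)
# for k = 3 and j = 1 (0-based index 0), with B taken from (5.25).
k, j = 3, 0
theta = sp.Matrix(sp.symbols(f"theta1:{k + 1}"))
B = sp.exp(theta[j] + sp.Rational(1, 2) * sum(theta[i]**2 for i in range(k) if i != j))

hessian = sp.hessian(B, list(theta))
# theta_tilde: theta with its j-th coordinate replaced by 1 (assumed reading of (5.4))
theta_tilde = sp.Matrix([1 if i == j else theta[i] for i in range(k)])
# I_k^j: identity with a zero in the (j, j) entry (assumed reading of (5.4))
I_jk = sp.diag(*[0 if i == j else 1 for i in range(k)])

# The difference simplifies to the zero matrix.
print(sp.simplify(hessian - B * (theta_tilde * theta_tilde.T + I_jk)))
```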

6. APPLICATION

6.1. Variance Modeling under Normality

For a given random vector $\mathbf{X} = (X_1, X_2, \dots, X_k)^\top$ on $\mathbb{R}^k$ of NST$_j$ models, we now assume that only the $k-1$ normal terms $\mathbf{X}^c_j$ of $\mathbf{X}$ are observed $(\mathbf{X}^c_{j1}, \dots, \mathbf{X}^c_{jn})$; therefore, $X_j$ is an unobserved random effect. Note that $j$ is fixed in $\{1,2,\dots,k\}$. Assuming $t = 1$ and following Boubacar Maïnassara and Kokonendji (2014, Section 4.2), with $\mathbf{X}$ having mean vector $\boldsymbol{\mu} = (\mu_1, \dots, \mu_k)^\top \in \mathbf{M}_{F_{1;j}}$ and covariance matrix $\mathbf{V} = \mathbf{V}(\boldsymbol{\mu})$, the vector $\mathbf{X}^c_j$ follows a $(k-1)$-variate normal distribution, denoted by
\[
\mathbf{X}^c_j \sim N_{k-1}\big(\boldsymbol{\mu}^c_j,\; X_j \mathbf{V}^c_j\big), \tag{6.1}
\]
with $\boldsymbol{\mu}^c_j = (\mu_1, \dots, \mu_{j-1}, \mu_{j+1}, \dots, \mu_k)^\top$. The $(k-1)\times(k-1)$ matrix $\mathbf{V}^c_j$, which does not depend on $\boldsymbol{\mu}^c_j$, is symmetric and positive definite such that $\det \mathbf{V}^c_j = 1$ or $\mathbf{V}^c_j = \mathbf{I}_{k-1}$. Thus, without loss of generality, $X_j$ in (6.1) can be the univariate positive stable Tweedie variable with mean $\mu_j > 0$ and unit variance $\mu_j^p$ for $p \geq 1$. It follows that the unit generalized variance of $\mathbf{X}^c_j$ is easily deduced as $\mu_j^{k-1}$.
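For illustration, model (6.1) can be simulated hierarchically. The following minimal Python sketch assumes $\mathbf{V}^c_j = \mathbf{I}_{k-1}$ and, for $p = 3$, an inverse-Gaussian mixing variable with unit scale parameter so that $\mathrm{Var}(X_j) = \mu_j^3$; the helper name `sample_nst` is ours, not the document's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nst(n, mu_j, mu_c, p=3, rng=rng):
    """Draw n observations of the observed normal part X^c_j of model (6.1)
    with V^c_j = I_{k-1}; the mixing variable X_j stays unobserved."""
    mu_c = np.asarray(mu_c, dtype=float)
    if p == 1:                                # normal Poisson case
        x_j = rng.poisson(mu_j, size=n)
    elif p == 3:                              # normal inverse-Gaussian case
        x_j = rng.wald(mu_j, 1.0, size=n)     # mean mu_j, variance mu_j**3
    else:
        raise NotImplementedError("only p = 1 and p = 3 sketched here")
    # X^c_j | X_j = x  ~  N_{k-1}(mu_c, x * I_{k-1})
    noise = rng.standard_normal((n, mu_c.size))
    return mu_c + np.sqrt(x_j)[:, None] * noise

Xc = sample_nst(n=100, mu_j=2.0, mu_c=[0.0, 0.0])  # k = 3; X_j unobserved
```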

For a fixed model with $p \geq 1$, this unknown part $\mu_j > 0$ of $X_j$ can be estimated through the generalized variance estimators of normal observations, in the sense of the "standardized generalized variance" (SenGupta, 1987):
\[
\widehat{\mu}_j = \left[ \det\left( \frac{1}{n-1} \sum_{i=1}^{n} \mathbf{X}^c_{ji} \mathbf{X}^{c\top}_{ji} - \overline{\mathbf{X}}^c_j \overline{\mathbf{X}}^{c\top}_j \right) \right]^{1/(k-1)} \quad \text{for } \det \mathbf{V}^c_j = 1,
\]
or
\[
\widehat{\mu}_j = \left[ \prod_{\ell \neq j} \left( \frac{1}{n-1} \sum_{i=1}^{n} X_{\ell i}^2 - \overline{X}_{\ell\cdot}^2 \right) \right]^{1/(k-1)} \quad \text{for } \mathbf{V}^c_j = \mathbf{I}_{k-1},
\]
with $\overline{\mathbf{X}}^c_j = (\mathbf{X}^c_{j1} + \dots + \mathbf{X}^c_{jn})/n$ and $\overline{X}_{\ell\cdot} = (X_{\ell 1} + \dots + X_{\ell n})/n$ for $\ell \neq j$; a computational sketch of both estimators is given below. See, e.g., Iliopoulos and Kourouklis (1998) and Shorrock and Zidek (1976) for improved versions of generalized variance estimators under the normality hypothesis. This statistical aspect of normal stable Tweedie$_j$ models in (6.1) points out the flexibility of these models compared with the classical multivariate normal model $N_{k-1}(\boldsymbol{\mu}^c_j, \boldsymbol{\Sigma})$, where the generalized variance $\det \boldsymbol{\Sigma}$ is replaced by the random effect $X_j \mathbf{V}^c_j$. In fact, for $\mathbf{V}^c_j = \mathbf{I}_{k-1}$ in (6.1), which corresponds to Part 2 of Definition 3.1.1, one has a kind of conditional homoscedasticity under the assumption of normality. For $\det \mathbf{V}^c_j = 1$, the model is connected to so-called stochastic volatility modeling; see, e.g., Barndorff-Nielsen (1997) for the particular case of the normal inverse Gaussian model ($p = 3$), with direct interest for financial modeling.
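Here is a minimal computational sketch of both displayed estimators; the helper name `mu_hat` is ours, and it implements the two formulas literally:

```python
import numpy as np

def mu_hat(Xc, identity_V=False):
    """Standardized generalized-variance estimate of mu_j from the
    observed normal components Xc, an array of shape (n, k-1)."""
    n, km1 = Xc.shape
    xbar = Xc.mean(axis=0)
    if identity_V:
        # Case V^c_j = I_{k-1}: product over the k-1 coordinates of
        # (1/(n-1)) * sum_i X_{li}^2 - Xbar_{l.}^2, then the (k-1)-th root.
        s2 = (Xc**2).sum(axis=0) / (n - 1) - xbar**2
        return float(np.prod(s2)) ** (1.0 / km1)
    # Case det V^c_j = 1: determinant of
    # (1/(n-1)) * sum_i X_i X_i^T - Xbar Xbar^T, then the (k-1)-th root.
    S = Xc.T @ Xc / (n - 1) - np.outer(xbar, xbar)
    return float(np.linalg.det(S)) ** (1.0 / km1)
```
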
However, for the normal Poisson case we have to handle the presence of zeros in the sample of $X_j$ when the Poisson parameter $\mu_j$ is close to zero. More precisely, and without loss of generality, within the framework of one-way analysis of variance and keeping the previous notations, since there are at least two normal components to be tested, the minimum value of $k$ is 3 (i.e., $k \geq 3$) for representing the number of levels $k-1$. The unknown stable Tweedie parameter $\mu_j > 0$ can play the role of the nuisance common variance for comparing all $\mu_\ell$ with $\ell \neq j$. The next part proposes numerical analyses, through some simulations, of the robustness of $\widehat{\mu}_j$. In order to apply this point of view, we can refer to SenGupta (1987) for a short numerical illustration; alternatively, in the context of a multivariate random effect model, it can be used as the distribution of the random effects when they are assumed to have conditional homoscedasticity.

To examine the numerical behavior of the estimator, we present the explicit cases $p = 1$ (normal Poisson) and $p = 3$ (normal inverse-Gaussian). In these cases $X_j$ is an unobserved Poisson or inverse-Gaussian variable with mean $\mu_j > 0$; for the Poisson case, $\mu_j$ is at the same time the variance of $X_j$. Hence, the parameter $\mu_j$ can be estimated through the generalized variance estimator of normal observations using $\widehat{\mu}_j$ with $p = 1$ and $p = 3$.

In the following, a simulation study of the standardized generalized variance estimations in (6.1) is presented. In this simulation we fixed $j = 1$ and set sample sizes $n = 30, 50, 100, 300, 500, 1000$ with dimensions $k = 3, 4, 6, 8$. We generated 1000 samples for each case. The steps of the data simulation procedure were as follows (a code sketch of steps 1-6 is given at the end of this subsection):

1. For the normal Poisson case ($p = 1$), fix $k = 3$ and randomly generate $n = 30$ observations of the univariate Poisson variable $X_j$ with mean $\mu_j = 0.5$.
2. For each $X_j = x_j$, generate the corresponding i.i.d. normal components from the $(k-1)$-variate normal distribution with mean zero and variance $X_j = x_j$; we obtain the normal components $\mathbf{X}^c_j$ of the NST model such that $\mathbf{X}^c_j \sim N_{k-1}(\mathbf{0}, X_j \mathbf{I}_{k-1})$.
3. Calculate the Poisson mean estimate $\widehat{\mu}_j$ based on the normal components $\mathbf{X}^c_j$; the Poisson component $X_j$ is assumed to be unobserved.
4. Repeat steps 1-3 a total of 1000 times to obtain 1000 values of $\widehat{\mu}_j$ from 1000 datasets.
5. Calculate the expected value and the variance of $\widehat{\mu}_j$, i.e., $E(\widehat{\mu}_j)$ and $\mathrm{Var}(\widehat{\mu}_j)$, using
\[
E(\widehat{\mu}_j) = \frac{1}{1000} \sum_{i=1}^{1000} \widehat{\mu}_j^{(i)} \quad \text{and} \quad \mathrm{Var}(\widehat{\mu}_j) = \frac{1}{999} \sum_{i=1}^{1000} \left( \widehat{\mu}_j^{(i)} - E(\widehat{\mu}_j) \right)^2.
\]
6. Calculate the mean square error (MSE) of $\widehat{\mu}_j$ over the 1000 datasets using
\[
\mathrm{MSE}(\widehat{\mu}_j) = \left[ E(\widehat{\mu}_j) - \mu_j \right]^2 + \mathrm{Var}(\widehat{\mu}_j).
\]
7. Repeat steps 1-6 for $n = 50, 100, 300, 500, 1000, 1500$ and $2000$.
8. Repeat steps 1-7 for $\mu_j = 1$ and $5$.
9. Repeat steps 1-8 for the other fixed values of $k$ with $k \in \{3,4,6,8\}$.
10. Repeat steps 1-9 for the normal inverse-Gaussian case ($p = 3$).

We report the expected values and MSE of $\widehat{\mu}_j$ in Tables 6.1 and 6.2 for the normal Poisson and normal inverse-Gaussian cases, respectively. We provide scatterplots of bivariate and trivariate normal components from some generated data in Appendix C. The results in Tables 6.1 and 6.2 show the behavior of $\widehat{\mu}_j$: as the sample size increases, the expected values of $\widehat{\mu}_j$ converge to the target values $\mu_j$ and their MSE decreases, for all dimensions $k$. The simulation results already show very good performance of $\widehat{\mu}_j$ at moderate sample sizes. Note that the presence of zeros in the sample when $\mu_j = 0.5$ for the Poisson component does not affect the estimation of $\mu_j$. For a clearer view of the performance of $\widehat{\mu}_j$, we provide bar graphs of the MSE of $\widehat{\mu}_j$ for $p = 1$ in Figures 6.1-6.3. The figures show that the MSE decreases as the sample size increases. From these results we conclude that $\widehat{\mu}_j$ is a consistent estimator of $\mu_j$. Notice also that $\widehat{\mu}_j$ produces smaller MSE for larger dimensions.
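To make steps 1-6 concrete for the normal Poisson case, here is a minimal Monte Carlo sketch reusing the `mu_hat` helper defined above; the driver function `mc_study` and the seed are our own illustrative choices, not the exact code of the study:

```python
import numpy as np

rng = np.random.default_rng(2024)

def mc_study(n, k, mu_j, reps=1000):
    """Steps 1-6: Monte Carlo behavior of mu_hat for the normal Poisson
    case (p = 1) with V^c_j = I_{k-1} and j = 1."""
    estimates = np.empty(reps)
    for r in range(reps):
        x_j = rng.poisson(mu_j, size=n)                               # step 1
        Xc = np.sqrt(x_j)[:, None] * rng.standard_normal((n, k - 1))  # step 2
        estimates[r] = mu_hat(Xc, identity_V=True)                    # step 3
    e_hat = estimates.mean()                                          # step 5
    var_hat = estimates.var(ddof=1)
    mse = (e_hat - mu_j) ** 2 + var_hat                               # step 6
    return e_hat, mse

# One configuration of Table 6.1: k = 3, n = 30, mu_j = 0.5
print(mc_study(n=30, k=3, mu_j=0.5))
```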