Generalized variance function and Lévy measures

parameter. Then

\[
\rho(\nu_{p,t;j}) =
\begin{cases}
t^{k}\,(p-1)^{-\eta(p,k)}\,\nu_{2,\eta(p,k);j} & \text{for } p > 1\\[4pt]
t^{k}\,\bigl(\delta_{e_j}\otimes\prod_{\ell\neq j}\xi_{0,1}\bigr)^{\ast k} & \text{for } p = 1.
\end{cases}
\tag{3.13}
\]

Table 3.1: Summary of $k$-variate NST models with power variance parameter $p = p(\alpha) \ge 1$, modified Lévy measure parameter $\eta(p,k) := 1 + k(p-1)^{-1}$ and support $S_p$ of the distributions, fixing $j = 1$.

Type                        $p = p(\alpha)$    $\eta(p,k) = 1 + k(p-1)^{-1}$    $S_p$
Normal Poisson              $p = 1$            $\eta = \infty$                  $\mathbb N \times \mathbb R^{k-1}$
Normal compound Poisson     $1 < p < 2$        $\eta > k + 1$                   $[0, \infty) \times \mathbb R^{k-1}$
Normal noncentral gamma     $p = 3/2$          $\eta = 2k + 1$                  $[0, \infty) \times \mathbb R^{k-1}$
Normal gamma                $p = 2$            $\eta = k + 1$                   $(0, \infty) \times \mathbb R^{k-1}$
Normal positive stable      $p > 2$            $1 < \eta < k + 1$               $(0, \infty) \times \mathbb R^{k-1}$
Normal inverse Gaussian     $p = 3$            $\eta = 1 + k/2$                 $(0, \infty) \times \mathbb R^{k-1}$

The proof of (3.13) is provided here in terms of the power parameter $p$, using Lemma 3.2.1 and following Boubacar Maïnassara and Kokonendji (2014, Theorem 3.5). From the cumulant function (3.5) of the univariate non-negative stable Tweedie distributions we obtain the first and second derivatives as follows:

\[
K'_{\xi_{p,1}}(\theta) =
\begin{cases}
\exp\theta & \text{for } p = 1\\
[\theta(1-p)]^{-1/(p-1)} & \text{for } p > 1
\end{cases}
\]

and also

\[
K''_{\xi_{p,1}}(\theta) =
\begin{cases}
\exp\theta = K'_{\xi_{p,1}}(\theta) & \text{for } p = 1\\
[\theta(1-p)]^{-p/(p-1)} = \bigl[K'_{\xi_{p,1}}(\theta)\bigr]^{p} & \text{for } p > 1.
\end{cases}
\]

Lemma 3.2.1. Let $f : \mathbb R \to \mathbb R$ and $g, h : \mathbb R^k \to \mathbb R$ be three functions, each twice differentiable and such that $h = f \circ g$. Then

\[
h'(\mathbf x) = f'(g(\mathbf x))\,g'(\mathbf x)
\quad\text{and}\quad
h''(\mathbf x) = f''(g(\mathbf x))\,g'(\mathbf x)\,g'(\mathbf x)^\top + f'(g(\mathbf x))\,g''(\mathbf x).
\]

Then, fixing $j = 1$ and using Lemma 3.2.1 with $h = K_{\nu_{p,t}}$, $f = tK_{\xi_{p,1}}$ and $g(\theta) = \theta_1 + \frac12\sum_{j=2}^{k}\theta_j^2$, so that $g'(\theta) = (1, \theta_2, \dots, \theta_k)^\top$ and $g''(\theta) = \mathrm{Diag}(0, 1, \dots, 1)$, we can write

\[
K''_{\nu_{p,t}}(\theta) = \left(\frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\,tK_{\xi_{p,1}}(g(\theta))\right)_{i,j=1,\dots,k}
= t\begin{pmatrix}\gamma & \mathbf a^\top\\ \mathbf a & \mathbf A\end{pmatrix}
\tag{3.14}
\]

with $\gamma = K''_{\xi_{p,1}}(g(\theta))$, $\mathbf a = \gamma\,(\theta_2, \dots, \theta_k)^\top$ and $\mathbf A = \gamma^{-1}\mathbf a\mathbf a^\top + K'_{\xi_{p,1}}(g(\theta))\,\mathbf I_{k-1}$. Therefore, using (3.12) it follows that

\[
\det K''_{\nu_{p,t}}(\theta) = t^{k}\,K''_{\xi_{p,1}}(g(\theta))\,\bigl[K'_{\xi_{p,1}}(g(\theta))\bigr]^{k-1}
=
\begin{cases}
t^{k}\exp\{k\,g(\theta)\} & \text{for } p = 1\\
t^{k}\,[g(\theta)(1-p)]^{-1-k(p-1)^{-1}} & \text{for } p > 1.
\end{cases}
\]

Taking $\eta(p,k) = 1 + k(p-1)^{-1}$ and $K_{\rho(\nu_{p,t})}(\theta) = \log\det K''_{\nu_{p,t}}(\theta)$, we obtain

\[
K_{\rho(\nu_{p,t})}(\theta) =
\begin{cases}
k\bigl(\theta_1 + \frac12\sum_{j=2}^{k}\theta_j^2\bigr) + \log t^{k} & \text{for } p = 1\\[4pt]
-\eta(p,k)\,\log\bigl(-\theta_1 - \frac12\sum_{j=2}^{k}\theta_j^2\bigr) + \log c_{p,k,t} & \text{for } p > 1
\end{cases}
\]

for $\theta \in \Theta(\rho(\nu_{p,t})) = \Theta(\nu_{p,t})$, with $c_{p,k,t} = t^{k}(p-1)^{-\eta(p,k)}$; this leads to (3.13).

Recall that the Monge-Ampère equation is generally stated as $\det\psi''(\theta) = r(\theta)$, where $\psi$ is an unknown smooth function and $r$ is a given positive function. Then, from the modified Lévy measure of $\nu_{p,t;j}$, the Monge-Ampère equation below is considered as the problem of the characterization of multivariate NST models through the generalized variance function:

\[
\det K''(\theta) = \exp\bigl\{K_{\rho(\nu_{p,t;j})}(\theta)\bigr\}, \qquad p \ge 1,
\tag{3.15}
\]

where $K$ is the unknown cumulant function to be determined. See Kokonendji and Masmoudi (2013) for the normal gamma model and some references on particular cases. In this work we use (3.15) for the characterization of the normal Poisson model ($p = 1$) by generalized variance in Section 5.2.2.
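As a quick numerical sanity check of the determinant formula above, and hence of the relation (3.15), one can assemble the Hessian $K''_{\nu_{p,t}}(\theta)$ from the univariate derivatives via Lemma 3.2.1 and compare its determinant with the closed form $t^{k}[(1-p)g(\theta)]^{-\eta(p,k)}$. The following Python sketch, which is illustrative and not part of the thesis, does this for the normal gamma case ($p = 2$, so $\eta(2,k) = k + 1$):

```python
import numpy as np

# Normal gamma case: p = 2, so the univariate derivatives reduce to
# K'_{xi}(theta) = (-theta)^(-1) and K''_{xi}(theta) = (-theta)^(-2).
p, k, t = 2.0, 4, 1.5
eta = 1.0 + k / (p - 1.0)                  # eta(p,k) = 1 + k/(p-1) = k + 1 here

theta = np.array([-2.0, 0.3, -0.5, 0.7])   # point with g(theta) < 0
g = theta[0] + 0.5 * np.sum(theta[1:] ** 2)
assert g < 0

k_prime = (-g) ** (-1.0)                   # K'_{xi_{p,1}}(g(theta))
k_second = (-g) ** (-2.0)                  # K''_{xi_{p,1}}(g(theta))

# Hessian from Lemma 3.2.1: t * [K'' g'(theta) g'(theta)^T + K' Diag(0,1,...,1)]
g_prime = np.concatenate(([1.0], theta[1:]))
hessian = t * (k_second * np.outer(g_prime, g_prime)
               + k_prime * np.diag([0.0] + [1.0] * (k - 1)))

lhs = np.linalg.det(hessian)
rhs = t ** k * ((1.0 - p) * g) ** (-eta)   # closed form t^k [(1-p) g]^{-eta}
print(lhs, rhs)                            # agree up to floating-point error
```

The same check works for any $p > 1$ by replacing the two derivative lines with $[\theta(1-p)]^{-1/(p-1)}$ and $[\theta(1-p)]^{-p/(p-1)}$ evaluated at $g(\theta)$.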

4. GENERALIZED VARIANCE ESTIMATIONS OF SOME NST MODELS

Generalized variance, i.e. the determinant of the covariance matrix expressed in terms of the mean vector, plays important roles in the statistical analysis of multivariate data. The notion was introduced by Wilks (1932) as a scalar measure of multivariate dispersion and is used to quantify overall multivariate scatter.

The estimation of the generalized variance, mainly from a decision-theoretic point of view, has attracted the interest of many researchers over the past four decades; see, for example, Shorrock and Zidek (1976), Kubokawa and Konno (1990), Gupta and Ofori-Nyarko (1995), Iliopoulos and Kourouklis (1998) and Bobotas and Kourouklis (2013) for estimation under multivariate normality. In the last two decades the generalized variance has been extended to non-normal distributions, in particular to natural exponential families (NEFs); see Kokonendji and Seshadri (1996), Kokonendji and Pommeret (2001), Kokonendji (2003) and Kokonendji and Pommeret (2007), who worked in the context of NEFs.

The uses of the generalized variance have also been discussed by several authors. For example, in the theory of statistical hypothesis testing it is used as a criterion for an unbiased critical region to have maximum Gaussian curvature (Isaacson, 1951); in descriptive statistics, Goodman (1968) proposed a classification of some groups according to their generalized variances; and in sampling theory it is used as a loss function in multiparametric sampling allocation (Arvanitis and Afonja, 1971).

In this chapter we discuss the ML and UMVU estimators of the generalized variance of the normal gamma, normal inverse Gaussian (NIG) and normal Poisson models. A Bayesian estimator of the generalized variance for the normal Poisson model is also introduced, and a numerical analysis through simulation studies is provided.

4.1. Generalized Variance Estimators

4.1.1. Maximum Likelihood Estimator

Let $X_1, \dots, X_n$ be i.i.d. random vectors with distribution $\mathbf P(\theta; p, t) \in G(\nu_{p,t;j})$ in a given NST family, i.e. for fixed $j \in \{1, 2, \dots, k\}$, $p \ge 1$ and $t > 0$. Denote by $\overline{\mathbf X} = (X_1 + \cdots + X_n)/n = (\overline X_1, \dots, \overline X_k)^\top$ the sample mean.

Theorem 4.1.1. The maximum likelihood estimator (MLE) of the generalized variance $\det\mathbf V_{G_{p,t;j}}(\boldsymbol\mu)$ is given by

\[
T_{n;k;p,t;j} = \det\mathbf V_{G_{p,t;j}}(\overline{\mathbf X}) = t^{1-p}\,\overline X_j^{\,p+k-1}.
\tag{4.1}
\]

Proof. The ML estimator (4.1) is directly obtained from the generalized variance function $\det\mathbf V_{G_{p,t;j}}(\boldsymbol\mu) = t^{1-p}\mu_j^{\,p+k-1}$ by substituting $\mu_j$ with its ML estimator $\overline X_j$.

Then for each model one has

\[
T_{n;k;p,t;j} = \det\mathbf V_{G_{p,t;j}}(\overline{\mathbf X}) =
\begin{cases}
\overline X_j^{\,k} & \text{for normal Poisson},\\
t^{-1}\,\overline X_j^{\,k+1} & \text{for normal gamma},\\
t^{-2}\,\overline X_j^{\,k+2} & \text{for normal inverse Gaussian}.
\end{cases}
\]

For all $p \ge 1$, $T_{n;k,p,t;j}$ is a biased estimator of $\det\mathbf V_{G_{p,t;j}}(\boldsymbol\mu) = t^{1-p}\mu_j^{\,p+k-1}$. For example, for $p = 1$ we have $\det\mathbf V_{G_{1,t;j}}(\boldsymbol\mu) = \mu_j^{k}$; to obtain an unbiased estimator in this case we need the factorial moment formula $\mathbb E\bigl[X_j(X_j-1)(X_j-2)\cdots(X_j-k+1)\bigr] = \mu_j^{k}$, where $X_j$ follows the univariate Poisson distribution with mean $\mu_j$.
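To illustrate the bias of the ML estimator, the following simulation sketch draws normal Poisson data using the NST construction ($X_{i1} \sim$ Poisson($\mu_1$) and, given $X_{i1}$, the remaining $k-1$ components i.i.d. $N(0, X_{i1})$) and compares the Monte Carlo mean of $T = \overline X_1^{\,k}$ with the true value $\mu_1^{k}$. All names and parameter values are illustrative, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_normal_poisson(n, k, mu1):
    """Illustrative sampler for the normal Poisson NST model (j = 1):
    X_1 ~ Poisson(mu1) and, given X_1, the other k-1 coordinates are
    i.i.d. N(0, X_1)."""
    x1 = rng.poisson(mu1, size=n).astype(float)
    rest = rng.normal(0.0, np.sqrt(x1)[:, None], size=(n, k - 1))
    return np.column_stack([x1, rest])

n, k, mu1 = 30, 3, 2.0
reps = 20_000
t_ml = np.empty(reps)
for r in range(reps):
    X = sample_normal_poisson(n, k, mu1)
    t_ml[r] = X[:, 0].mean() ** k          # T = (Xbar_1)^k, the MLE of mu1^k

print("true det V(mu):", mu1 ** k)         # 8.0
print("mean of MLE   :", t_ml.mean())      # about 8.4 here: biased upward
```

The upward bias is what Jensen's inequality predicts, since $x \mapsto x^{k}$ is convex on $[0, \infty)$.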

4.1.2. Uniformly Minimum Variance Unbiased Estimator

In order to avoid this lack of good properties when estimating $\det\mathbf V_{G_{p,t;j}}(\boldsymbol\mu) = t^{1-p}\mu_j^{\,k+p-1}$ with $T_{n;k,p,t;j}$, we are able to obtain directly the uniformly minimum variance unbiased (UMVU) estimator $U_{n;k,p,t}$ of $\det\mathbf V_{G_{p,t;j}}(\boldsymbol\mu)$. This is done through the following technique, valid for all integers $n > k$ (Kokonendji and Seshadri, 1996; Kokonendji and Pommeret, 2007; Kokonendji, 1994):

\[
U_{n;k,p,t} = C_{n,k,p,t}(n\overline{\mathbf X}),
\tag{4.2}
\]

where $C_{n,k,p,t} : \mathbb R^k \to [0, \infty)$ satisfies

\[
\nu_{n,k,p,t}(\mathrm d\mathbf x) = C_{n,k,p,t}(\mathbf x)\,\nu_{p,nt}(\mathrm d\mathbf x)
\tag{4.3}
\]

and $\nu_{n,k,p,t}(\mathrm d\mathbf x)$ is the image measure of

\[
\frac{1}{(k+1)!}\left[\det\begin{pmatrix}1 & 1 & \cdots & 1\\ \mathbf x_1 & \mathbf x_2 & \cdots & \mathbf x_{k+1}\end{pmatrix}\right]^{2}\,\nu_{p,t}(\mathrm d\mathbf x_1)\cdots\nu_{p,t}(\mathrm d\mathbf x_n)
\]

by the map $(\mathbf x_1, \dots, \mathbf x_n) \mapsto \mathbf x_1 + \cdots + \mathbf x_n$. The expression of $C_{n,k,p,t}(\mathbf x)$ for computing the UMVU estimator $U_{n;k,p,t}$ for $p = p(\alpha) \in [1, \infty)$ is stated in the following theorem.

Theorem 4.1.2. Let $X_1, \dots, X_n$ be i.i.d. random vectors with distribution $\mathbf P(\boldsymbol\mu, G_{p,t;j}) \in G(\nu_{p,t;j})$ in a given NST family, i.e. for fixed $p \ge 1$, $t > 0$, and with modified Lévy measure $\rho(\nu_{p,t})$ satisfying (3.13) with parameter $\eta(p,k) = 1 + k(p-1)^{-1}$. Then

\[
C_{n,k,p,t}(\mathbf x) = \frac{\bigl[\nu_{p,nt}\ast\rho(\nu_{p,t})\bigr](\mathrm d\mathbf x)}{\nu_{p,nt}(\mathrm d\mathbf x)};
\]

in particular, $C_{n,k,p,t}(\mathbf x)$ is

\[
\begin{cases}
n^{-k}\,x_j(x_j-1)(x_j-2)\cdots(x_j-k+1), \quad x_j \ge k, & \text{for normal Poisson}\\[4pt]
t^{k}\,\Gamma(nt)\,[\Gamma(nt+k+1)]^{-1}\,x_j^{\,k+1} & \text{for normal gamma}\\[4pt]
t^{k}\,2^{-1-k/2}\,[\Gamma(1+k/2)]^{-1}\,x_j^{3/2}\exp\left\{\dfrac{(nt)^2}{2x_j}\right\}
\displaystyle\int_{0}^{x_j} y_j^{\,k/2}\,(x_j-y_j)^{-3/2}\exp\left\{-\dfrac{(nt)^2}{2(x_j-y_j)}\right\}\mathrm dy_j & \text{for normal inverse Gaussian.}
\end{cases}
\]

Proof. From (4.3) we write

\[
C_{n,k,p,t}(\mathbf x) = \frac{\nu_{n;k;p,t}(\mathrm d\mathbf x)}{\nu_{p,nt}(\mathrm d\mathbf x)}.
\]

Following Kokonendji and Pommeret (2007, Theorem 1) and using (3.13) we have

\[
K_{\nu_{n;k;p,t}}(\theta) = nK_{\nu_{p,t}}(\theta) + \log\det K''_{\nu_{p,t}}(\theta) = K_{\nu_{p,nt}}(\theta) + K_{\rho(\nu_{p,t})}(\theta)
\]

for all $\theta \in \Theta(\nu_{p,1})$. Then it immediately follows that $\nu_{n;k;p,t} = \nu_{p,nt}\ast\rho(\nu_{p,t})$ is the convolution product of $\nu_{p,nt}$ by $\rho(\nu_{p,t})$. The proof for $C_{n,k,p,t}(\mathbf x)$ is established by considering each group of the NST models with respect to the different values of $p \ge 1$ and using (3.13). Indeed, for $p = 1$ and fixing $j = 1$ we have $\rho(\nu_{1,t}) = t^{k}\bigl(\delta_{e_1}\otimes\prod_{j=2}^{k}\xi_{0,1}\bigr)^{\ast k}$ and

\[
C_{n,k,1,t}(\mathbf x) = t^{k}\,\frac{\bigl[\nu_{1,nt}\ast(\delta_{e_1}\otimes\prod_{j=2}^{k}\xi_{0,1})^{\ast k}\bigr](\mathrm d\mathbf x)}{\nu_{1,nt}(\mathrm d\mathbf x)}
= t^{k}\,\frac{\xi_{1,nt}(x_1-k)}{\xi_{1,nt}(x_1)}\left\{\prod_{j=2}^{k}\int_{\mathbb R}\frac{\xi_{0,x_1-k}(x_j-y_j)\,\xi_{0,k}(y_j)}{\xi_{0,x_1}(x_j)}\,\mathrm dy_j\right\}
= t^{k}\,\frac{x_1!\,(nt)^{x_1-k}\exp(-nt)}{(x_1-k)!\,(nt)^{x_1}\exp(-nt)}\times 1
= \frac{x_1(x_1-1)\cdots(x_1-k+1)}{n^{k}},
\]

because for fixed $j = 2, \dots, k$ the expression

\[
W(j, x_1, k) = \int_{\mathbb R}\frac{\xi_{0,x_1-k}(x_j-y_j)\,\xi_{0,k}(y_j)}{\xi_{0,x_1}(x_j)}\,\mathrm dy_j
\tag{4.4}
\]

is equal to 1, since the convolution of the centered Gaussian densities $\xi_{0,x_1-k}$ and $\xi_{0,k}$ is $\xi_{0,x_1}$.
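Since, for normal Poisson, $C_{n,k,1,t}$ depends on the data only through $S_1 = n\overline X_1 = X_{11} + \cdots + X_{n1} \sim$ Poisson($n\mu_1$), a brief simulation can compare the UMVU estimator $U = n^{-k}S_1(S_1-1)\cdots(S_1-k+1)$ with the MLE $T = (S_1/n)^{k}$ by drawing $S_1$ directly. The Python sketch below is illustrative only, with $j = 1$ assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def falling_factorial(s, k):
    """s(s-1)...(s-k+1), vectorized; vanishes at integers s < k,
    matching the support condition x_j >= k."""
    out = np.ones_like(s, dtype=float)
    for i in range(k):
        out *= s - i
    return np.where(s >= k, out, 0.0)

n, k, mu1 = 30, 3, 2.0
reps = 20_000
s1 = rng.poisson(n * mu1, size=reps).astype(float)   # S_1 ~ Poisson(n*mu1)

u_umvu = falling_factorial(s1, k) / n ** k   # U = n^{-k} S_1(S_1-1)...(S_1-k+1)
t_ml = (s1 / n) ** k                         # T = (Xbar_1)^k

print("true det V(mu):", mu1 ** k)           # 8.0
print("mean of UMVU  :", u_umvu.mean())      # close to 8.0 (unbiased)
print("mean of MLE   :", t_ml.mean())        # above 8.0 (biased upward)
```

Unbiasedness here is exactly the Poisson factorial moment identity $\mathbb E[S_1(S_1-1)\cdots(S_1-k+1)] = (n\mu_1)^{k}$ recalled at the end of Section 4.1.1.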