
The density $f(x;\sigma,\lambda,p)=F'(x;\sigma,\lambda,p)$ is decreasing in $x\in(0,1)$. Thus, $F(x)$ is concave for $x\in(0,1)$. Since $F(x)>0$, concavity implies that $F(x)$ is also log-concave for $x\in(0,1)$. ∎

Theorem 3. The function $F(x)$ is neither convex nor concave on $x\in(0,1)$ for any $\sigma>1$, $p>1$ and $\lambda>0$.

Proof: For any $\sigma>1$, $p>1$ and $\lambda>0$, consider the second-order derivative $F''(x;\sigma,\lambda,p)$ of the cdf. For $F(x)$ to be convex, $F''(x)$ must be non-negative, that is, the sign-determining factor of $F''(x;\sigma,\lambda,p)$ must be non-negative for all $x\in(0,1)$. However, as $x\to1$ this factor becomes negative, so $F(x)$ is not convex on $(0,1)$ for any $\sigma>1$, $p>1$ and $\lambda>0$. As $x\to0$ the factor becomes positive, which implies that $F''(x)$ cannot be non-positive for all $x\in(0,1)$. Therefore, $F(x)$ is not concave on $(0,1)$ for any $\sigma>1$, $p>1$ and $\lambda>0$. ∎

Hence, $F(x)$ is convex on $(0,1)$ only in the base case $p=1$ with $\lambda(\sigma-1)\ge1$, as shown in Gómez-Déniz et al. (2014).
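To see these shape regimes concretely, the curvature of $F$ can be checked numerically. The following is a minimal Python/NumPy sketch for the baseline case $p=1$, assuming the Log-Lindley density $f(x;\sigma,\lambda)=\sigma^{2}(\lambda-\log x)x^{\sigma-1}/(1+\lambda\sigma)$ on $(0,1)$ of Gómez-Déniz et al. (2014); the parameter values and the function names `ll_pdf` and `classify_shape` are illustrative choices, not taken from the paper. Since $F'=f$, $F$ is concave exactly where $f$ is decreasing and convex where $f$ is increasing.

```python
import numpy as np

def ll_pdf(x, sigma, lam):
    """Baseline Log-Lindley density (p = 1), as in Gomez-Deniz et al. (2014):
    f(x; sigma, lambda) = sigma^2 (lambda - log x) x^(sigma - 1) / (1 + lambda*sigma)."""
    return sigma**2 * (lam - np.log(x)) * x**(sigma - 1.0) / (1.0 + lam * sigma)

def classify_shape(sigma, lam, n=4000):
    """Classify the cdf F on (0, 1) via the monotonicity of f = F':
    f decreasing -> F concave; f increasing -> F convex; otherwise neither."""
    x = np.linspace(1e-4, 1.0 - 1e-4, n)
    d = np.diff(ll_pdf(x, sigma, lam))
    if np.all(d <= 1e-12):
        return "concave"
    if np.all(d >= -1e-12):
        return "convex"
    return "neither convex nor concave"

# Illustrative parameter values (not from the paper).
for sigma, lam in [(0.5, 1.0), (2.0, 0.2), (2.0, 2.0)]:
    print(f"sigma={sigma}, lambda={lam}: F is {classify_shape(sigma, lam)}")
```

Checking the sign of the increments of $f=F'$ avoids differentiating $F$ twice numerically, which is unstable near $x=0$ where the density is steep.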

3. Application to insurance premium loading

Theorem 4. If $F(x;\sigma,\lambda,p)$ is the cdf of $LL(\sigma,\lambda,p)$ given by (5), then $1-F(1-x;\sigma,\lambda,p)$ is a convex function from $[0,1]$ to $[0,1]$ for $\sigma\le1$ and $\lambda,p>0$.

Proof: It has been shown in Theorem 2 that the cdf $F(x;\sigma,\lambda,p)$ of $LL(\sigma,\lambda,p)$ is concave for $\sigma\le1$. Hence, $1-F(1-x;\sigma,\lambda,p)$ is a convex function from $[0,1]$ to $[0,1]$.

Remark: $F(x;\sigma,\lambda,p)$ can be used as a distortion function to distort the survival function (sf) of a given random variable, as stated in the corollary below.

Corollary 2. Let $X$ be a risk with sf $G(x)$ and let $Z$ be the distorted random variable with survival function $H(x)=F[G(x);\sigma,\lambda,p]$ for $\sigma\le1$ and $\lambda,p>0$. Then
$$P_{\sigma,\lambda}[X]=E[Z]=\int_0^{\infty}F[G(x);\sigma,\lambda,p]\,dx$$
is a premium principle such that
i. $P_{net}[X]\le P_{\sigma,\lambda}[X]\le P_{max}[X]$;
ii. $P_{\sigma,\lambda}[aX+b]=aP_{\sigma,\lambda}[X]+b$;
iii. if $X_1$ precedes $X_2$ under first-order stochastic dominance, that is, if $G_1(x)\le G_2(x)$ for all $x$, then $P_{\sigma,\lambda}[X_1]\le P_{\sigma,\lambda}[X_2]$;
iv. if $X_1$ precedes $X_2$ under second-order stochastic dominance, that is, if $\int_x^{\infty}G_1(t)\,dt\le\int_x^{\infty}G_2(t)\,dt$ for all $x$, then $P_{\sigma,\lambda}[X_1]\le P_{\sigma,\lambda}[X_2]$;
where $P_{net}[X]=E[X]$ is the net premium (average loss), $P_{max}[X]=E[\max\{X_1,\ldots,X_n\}]$ is the maximum premium of the insurance product, and $G_1(x)$ and $G_2(x)$ are the sfs of two non-negative risk random variables, respectively.

Proof: From Theorem 2 above we know that $F(x;\sigma,\lambda,p)$ is concave for $x\in(0,1)$ when $\sigma\le1$ and $\lambda,p>0$, and that it is an increasing function of $x$ with $F(0;\sigma,\lambda,p)=0$ and $F(1;\sigma,\lambda,p)=1$. Therefore, by the definition of the distortion premium principle in (6) and the properties thereof established in Wang (1996), the results follow immediately.
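As a numerical illustration of property (i) in Corollary 2, the sketch below evaluates the distorted premium $P[X]=\int_0^{\infty}F[G(x);\sigma,\lambda]\,dx$ for an assumed Exponential risk with mean 1, using the baseline ($p=1$) Log-Lindley cdf $F(u;\sigma,\lambda)=u^{\sigma}(1+\lambda\sigma-\sigma\log u)/(1+\lambda\sigma)$ as the distortion function. The risk model, truncation point and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ll_cdf(u, sigma, lam):
    """Baseline Log-Lindley cdf on (0, 1), used here as a distortion function:
    F(u; sigma, lambda) = u^sigma (1 + lambda*sigma - sigma*log u) / (1 + lambda*sigma)."""
    u = np.clip(u, 1e-300, 1.0)  # guard against log(0) in the far tail
    return u**sigma * (1.0 + lam * sigma - sigma * np.log(u)) / (1.0 + lam * sigma)

def distorted_premium(survival, sigma, lam, upper, n=200_000):
    """P[X] = integral from 0 to 'upper' of F[G(x); sigma, lambda] dx,
    computed by the trapezoidal rule; 'upper' truncates a light-tailed risk."""
    x = np.linspace(0.0, upper, n)
    vals = ll_cdf(survival(x), sigma, lam)
    dx = x[1] - x[0]
    return float(np.sum(0.5 * (vals[1:] + vals[:-1])) * dx)

# Illustrative risk: X ~ Exponential with mean 1, so the net premium is E[X] = 1.
G = lambda x: np.exp(-x)
premium = distorted_premium(G, sigma=0.5, lam=1.0, upper=60.0)
print("net premium    :", 1.0)
print("loaded premium :", premium)  # exceeds 1, since F is concave for these values
```

The loading arises because a concave, increasing distortion with $F(0)=0$ and $F(1)=1$ satisfies $F(u)\ge u$ on $[0,1]$, so each survival probability is inflated before integration.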

4. Parameter Estimation: Maximum Likelihood Estimation

The likelihood function for a random sample of size $n$ from $LL(\sigma,\lambda,p)$ is $L(\sigma,\lambda,p)=\prod_{i=1}^{n}f(x_i;\sigma,\lambda,p)$, and the log-likelihood function is then $l(\sigma,\lambda,p)=\sum_{i=1}^{n}\log f(x_i;\sigma,\lambda,p)$. The first- and second-order partial derivatives of $l$ with respect to $\sigma$, $\lambda$ and $p$ are obtained by direct differentiation, and setting $\partial l/\partial\sigma=\partial l/\partial\lambda=\partial l/\partial p=0$ gives the likelihood equations for the maximum likelihood estimates. For the entries of the information matrix we require expectations of the form $E\big[(\lambda-\log X)^{-r}\big]$, $r=1,2$; for $p=1$, this reduces to the result given in Gómez-Déniz et al. (2014). Assembling these expectations yields the expected information matrix, which can be inverted to obtain the asymptotic variance-covariance matrix of the maximum likelihood estimates.
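To make the estimation step concrete, the sketch below fits the baseline ($p=1$) model $LL(\sigma,\lambda)$ by numerically maximizing the log-likelihood $l(\sigma,\lambda)=2n\log\sigma-n\log(1+\lambda\sigma)+\sum_i\log(\lambda-\log x_i)+(\sigma-1)\sum_i\log x_i$ with SciPy. The data are placeholder values on $(0,1)$, the function names and starting values are illustrative assumptions, and a finite-difference observed information matrix is used as a stand-in for the expected information matrix discussed above.

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(theta, x):
    """Negative log-likelihood of the baseline LL(sigma, lambda) model (p = 1):
    l = 2n log(sigma) - n log(1 + lambda*sigma)
        + sum log(lambda - log x_i) + (sigma - 1) sum log x_i."""
    sigma, lam = theta
    if sigma <= 0 or lam < 0:
        return np.inf
    n = x.size
    logx = np.log(x)
    ll = (2 * n * np.log(sigma) - n * np.log1p(lam * sigma)
          + np.sum(np.log(lam - logx)) + (sigma - 1.0) * np.sum(logx))
    return -ll

def fit_ll(x, start=(1.0, 1.0)):
    """Maximize the log-likelihood numerically (Nelder-Mead)."""
    res = minimize(negloglik, start, args=(x,), method="Nelder-Mead")
    return res.x, res

def observed_information(theta, x, eps=1e-4):
    """Observed information = Hessian of the negative log-likelihood at the MLE,
    approximated by central finite differences; its inverse estimates the
    asymptotic variance-covariance matrix of (sigma_hat, lambda_hat)."""
    theta = np.asarray(theta, dtype=float)
    k = theta.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            H[i, j] = (negloglik(theta + ei + ej, x) - negloglik(theta + ei - ej, x)
                       - negloglik(theta - ei + ej, x) + negloglik(theta - ei - ej, x)) / (4 * eps**2)
    return H

# Placeholder data on (0, 1), purely for illustration (not from the paper).
rng = np.random.default_rng(0)
x = rng.beta(2.0, 1.5, size=500)
theta_hat, res = fit_ll(x)
cov = np.linalg.inv(observed_information(theta_hat, x))
print("MLE (sigma, lambda):", theta_hat)
print("asymptotic std. errors:", np.sqrt(np.diag(cov)))
```

Under the usual regularity conditions, inverting the information matrix evaluated at the maximum likelihood estimates gives approximate standard errors, which is exactly the use of the variance-covariance matrix described above.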

5. Numerical Applications