
where $Y_1$ and $Y_2$ are independent when $\psi = 1$. The value of $\ln\psi$ is $\theta$, with $\theta = \mathbf{X}^{T}\boldsymbol{\gamma}$, where $\boldsymbol{\gamma}$ is a vector of parameters. The joint probability $p_{11}$ can, following Dale [13] and Palmgren [14], be written in terms of $p_{1}$, $p_{2}$, and $\psi$ as

$$p_{11}=\begin{cases}\dfrac{a-\sqrt{a^{2}+b}}{2(\psi-1)}, & \psi\neq 1\\[6pt] p_{1}\,p_{2}, & \psi=1\end{cases} \qquad (7)$$

The other three joint probabilities are easily recovered from the marginals: $p_{10}=p_{1}-p_{11}$, $p_{01}=p_{2}-p_{11}$, and $p_{00}=1-p_{10}-p_{01}-p_{11}$.
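As a quick illustration of equation (7) and the marginal relations above, the sketch below computes the four joint cell probabilities from given marginals and a given odds ratio. Since the definitions of $a$ and $b$ are not restated in this section, the code assumes Dale's usual choices $a = 1+(p_{1}+p_{2})(\psi-1)$ and $b = -4\psi(\psi-1)p_{1}p_{2}$; the function name is illustrative only.

```python
import numpy as np

def joint_cell_probs(p1, p2, psi):
    """Joint cell probabilities of (Y1, Y2) from the marginals p1, p2 and the
    odds ratio psi, as in equation (7). The quantities a and b follow Dale's
    formulation (an assumption; they are defined earlier in the paper)."""
    if np.isclose(psi, 1.0):
        p11 = p1 * p2                                  # independence case
    else:
        a = 1.0 + (p1 + p2) * (psi - 1.0)
        b = -4.0 * psi * (psi - 1.0) * p1 * p2
        p11 = (a - np.sqrt(a**2 + b)) / (2.0 * (psi - 1.0))
    # the remaining three cells are recovered from the marginals
    p10 = p1 - p11
    p01 = p2 - p11
    p00 = 1.0 - p10 - p01 - p11
    return p11, p10, p01, p00

# sanity checks: the four cells sum to one and reproduce the marginals
p11, p10, p01, p00 = joint_cell_probs(0.6, 0.3, 2.5)
assert np.isclose(p11 + p10 + p01 + p00, 1.0)
assert np.isclose(p11 + p10, 0.6) and np.isclose(p11 + p01, 0.3)
```

The assertions simply confirm that the reconstructed cells sum to one and return the specified marginals.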

2.2 Parameter estimation using the maximum likelihood method for classical bivariate binary logistic regression

The maximum likelihood method requires that the distribution of the model be known. It works by maximizing the likelihood function. If a random sample of $n$ observations is measured on bivariate binary data, then the bivariate random variables $Y_{1i}, Y_{2i}$, $i = 1, 2, 3, \ldots, n$, can be represented equivalently by $Y_{11i}, Y_{10i}, Y_{01i}, Y_{00i}$, which follow a multinomial distribution with probabilities $p_{11i}, p_{10i}, p_{01i}, p_{00i}$. The likelihood of the bivariate random variables is therefore

$$L(\boldsymbol{\beta})=\prod_{i=1}^{n}P\left(Y_{11i}=y_{11i},Y_{10i}=y_{10i},Y_{01i}=y_{01i},Y_{00i}=y_{00i}\right)=\prod_{i=1}^{n}p_{11i}^{y_{11i}}\,p_{10i}^{y_{10i}}\,p_{01i}^{y_{01i}}\,p_{00i}^{y_{00i}} \qquad (8)$$

The parameter vector $\boldsymbol{\beta}=\left[\boldsymbol{\beta}_{1},\boldsymbol{\beta}_{2},\theta\right]$ is obtained by maximizing equation (8), that is, by differentiating its logarithm with respect to the parameters:

$$\ln L(\boldsymbol{\beta})=\ln\prod_{i=1}^{n}p_{11i}^{y_{11i}}\,p_{10i}^{y_{10i}}\,p_{01i}^{y_{01i}}\,p_{00i}^{y_{00i}}=\sum_{i=1}^{n}\left[y_{11i}\ln p_{11i}+y_{10i}\ln p_{10i}+y_{01i}\ln p_{01i}+y_{00i}\ln p_{00i}\right] \qquad (9)$$

The first derivative of equation (9) is used to estimate $\hat{\boldsymbol{\beta}}$:

$$\frac{\partial\ln L(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}}=\sum_{i=1}^{n}\left[\frac{y_{11i}}{p_{11i}}\frac{\partial p_{11i}}{\partial\boldsymbol{\beta}}+\frac{y_{10i}}{p_{10i}}\frac{\partial p_{10i}}{\partial\boldsymbol{\beta}}+\frac{y_{01i}}{p_{01i}}\frac{\partial p_{01i}}{\partial\boldsymbol{\beta}}+\frac{y_{00i}}{p_{00i}}\frac{\partial p_{00i}}{\partial\boldsymbol{\beta}}\right] \qquad (10)$$

The second derivative of equation (9) is used to estimate the standard deviation of $\hat{\boldsymbol{\beta}}$:

$$\frac{\partial^{2}\ln L(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}\,\partial\boldsymbol{\beta}^{T}}=\sum_{i=1}^{n}\left[-\frac{y_{11i}}{p_{11i}^{2}}\frac{\partial p_{11i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{11i}}{\partial\boldsymbol{\beta}^{T}}+\frac{y_{11i}}{p_{11i}}\frac{\partial^{2}p_{11i}}{\partial\boldsymbol{\beta}\,\partial\boldsymbol{\beta}^{T}}-\frac{y_{10i}}{p_{10i}^{2}}\frac{\partial p_{10i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{10i}}{\partial\boldsymbol{\beta}^{T}}+\frac{y_{10i}}{p_{10i}}\frac{\partial^{2}p_{10i}}{\partial\boldsymbol{\beta}\,\partial\boldsymbol{\beta}^{T}}-\frac{y_{01i}}{p_{01i}^{2}}\frac{\partial p_{01i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{01i}}{\partial\boldsymbol{\beta}^{T}}+\frac{y_{01i}}{p_{01i}}\frac{\partial^{2}p_{01i}}{\partial\boldsymbol{\beta}\,\partial\boldsymbol{\beta}^{T}}-\frac{y_{00i}}{p_{00i}^{2}}\frac{\partial p_{00i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{00i}}{\partial\boldsymbol{\beta}^{T}}+\frac{y_{00i}}{p_{00i}}\frac{\partial^{2}p_{00i}}{\partial\boldsymbol{\beta}\,\partial\boldsymbol{\beta}^{T}}\right] \qquad (11)$$

Furthermore, the expected value of this second derivative of the log-likelihood is calculated. These expectations become the elements of the Hessian matrix, and the variance-covariance matrix estimate is obtained from its inverse:

$$E\left[\frac{\partial^{2}\ln L(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}\,\partial\boldsymbol{\beta}^{T}}\right]=-\sum_{i=1}^{n}\left[\frac{1}{p_{11i}}\frac{\partial p_{11i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{11i}}{\partial\boldsymbol{\beta}^{T}}+\frac{1}{p_{10i}}\frac{\partial p_{10i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{10i}}{\partial\boldsymbol{\beta}^{T}}+\frac{1}{p_{01i}}\frac{\partial p_{01i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{01i}}{\partial\boldsymbol{\beta}^{T}}+\frac{1}{p_{00i}}\frac{\partial p_{00i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{00i}}{\partial\boldsymbol{\beta}^{T}}\right] \qquad (12)$$
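To make the construction in equations (8)-(10) concrete, the sketch below evaluates the log-likelihood of equation (9) and approximates the score vector of equation (10) by central finite differences instead of the analytic derivatives. The names `cell_probs`, `X`, and `y_cells` are hypothetical: they stand for a user-supplied function returning $(p_{11i},p_{10i},p_{01i},p_{00i})$ for one covariate vector (for instance built from equation (7)), the covariate matrix, and the per-observation cell indicators $(y_{11i},y_{10i},y_{01i},y_{00i})$.

```python
import numpy as np

def log_likelihood(beta, X, y_cells, cell_probs):
    """Log-likelihood of equation (9). `cell_probs(beta, x)` is assumed to
    return the four cell probabilities (p11, p10, p01, p00) for one covariate
    vector x; `y_cells[i]` holds the indicators (y11, y10, y01, y00)."""
    ll = 0.0
    for x_i, y_i in zip(X, y_cells):
        p = np.asarray(cell_probs(beta, x_i))
        ll += np.dot(np.asarray(y_i), np.log(p))   # sum of y * ln p over cells
    return ll

def score_numeric(beta, X, y_cells, cell_probs, eps=1e-6):
    """Central finite-difference stand-in for the score vector of equation (10)."""
    beta = np.asarray(beta, dtype=float)
    grad = np.zeros_like(beta)
    for j in range(beta.size):
        step = np.zeros_like(beta)
        step[j] = eps
        grad[j] = (log_likelihood(beta + step, X, y_cells, cell_probs)
                   - log_likelihood(beta - step, X, y_cells, cell_probs)) / (2 * eps)
    return grad
```

In practice the analytic derivatives in equations (10)-(12) are preferred; the numerical score is shown only to keep the sketch short.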
The parameter $\theta$ contains the association, which indicates that $Y_{1}$ and $Y_{2}$ are dependent. Its derivatives are

$$\frac{\partial\ln L(\boldsymbol{\beta})}{\partial\theta}=\sum_{i=1}^{n}\left[\frac{y_{11i}}{p_{11i}}\frac{\partial p_{11i}}{\partial\theta}+\frac{y_{10i}}{p_{10i}}\frac{\partial p_{10i}}{\partial\theta}+\frac{y_{01i}}{p_{01i}}\frac{\partial p_{01i}}{\partial\theta}+\frac{y_{00i}}{p_{00i}}\frac{\partial p_{00i}}{\partial\theta}\right]=\sum_{i=1}^{n}\left[\left(\frac{y_{11i}}{p_{11i}}-\frac{y_{10i}}{p_{10i}}-\frac{y_{01i}}{p_{01i}}+\frac{y_{00i}}{p_{00i}}\right)\frac{\partial p_{11i}}{\partial\theta}\right] \qquad (13)$$

$$\frac{\partial^{2}\ln L(\boldsymbol{\beta})}{\partial\theta^{2}}=\sum_{i=1}^{n}\left[\left(-\frac{y_{11i}}{p_{11i}^{2}}\frac{\partial p_{11i}}{\partial\theta}+\frac{y_{10i}}{p_{10i}^{2}}\frac{\partial p_{10i}}{\partial\theta}+\frac{y_{01i}}{p_{01i}^{2}}\frac{\partial p_{01i}}{\partial\theta}-\frac{y_{00i}}{p_{00i}^{2}}\frac{\partial p_{00i}}{\partial\theta}\right)\frac{\partial p_{11i}}{\partial\theta}+\left(\frac{y_{11i}}{p_{11i}}-\frac{y_{10i}}{p_{10i}}-\frac{y_{01i}}{p_{01i}}+\frac{y_{00i}}{p_{00i}}\right)\frac{\partial^{2}p_{11i}}{\partial\theta^{2}}\right] \qquad (14)$$

$$\frac{\partial^{2}\ln L(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}\,\partial\theta}=\sum_{i=1}^{n}\left[-\frac{y_{11i}}{p_{11i}^{2}}\frac{\partial p_{11i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{11i}}{\partial\theta}+\frac{y_{11i}}{p_{11i}}\frac{\partial^{2}p_{11i}}{\partial\boldsymbol{\beta}\,\partial\theta}-\frac{y_{10i}}{p_{10i}^{2}}\frac{\partial p_{10i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{10i}}{\partial\theta}+\frac{y_{10i}}{p_{10i}}\frac{\partial^{2}p_{10i}}{\partial\boldsymbol{\beta}\,\partial\theta}-\frac{y_{01i}}{p_{01i}^{2}}\frac{\partial p_{01i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{01i}}{\partial\theta}+\frac{y_{01i}}{p_{01i}}\frac{\partial^{2}p_{01i}}{\partial\boldsymbol{\beta}\,\partial\theta}-\frac{y_{00i}}{p_{00i}^{2}}\frac{\partial p_{00i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{00i}}{\partial\theta}+\frac{y_{00i}}{p_{00i}}\frac{\partial^{2}p_{00i}}{\partial\boldsymbol{\beta}\,\partial\theta}\right] \qquad (15)$$

$$E\left[\frac{\partial^{2}\ln L(\boldsymbol{\beta})}{\partial\boldsymbol{\beta}\,\partial\theta}\right]=-\sum_{i=1}^{n}\left[\frac{1}{p_{11i}}\frac{\partial p_{11i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{11i}}{\partial\theta}+\frac{1}{p_{10i}}\frac{\partial p_{10i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{10i}}{\partial\theta}+\frac{1}{p_{01i}}\frac{\partial p_{01i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{01i}}{\partial\theta}+\frac{1}{p_{00i}}\frac{\partial p_{00i}}{\partial\boldsymbol{\beta}}\frac{\partial p_{00i}}{\partial\theta}\right] \qquad (16)$$

The estimation of these parameters is completed iteratively. The Newton-Raphson method is an iterative method that is often used for logistic regression models [15], so it is used here to obtain the parameter estimates of the bivariate binary logistic regression. The next step is to test the significance of the parameters, which is done with the likelihood ratio test.
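A minimal sketch of the Newton-Raphson update described above, assuming user-supplied functions for the score vector (equations (10) and (13)) and the Hessian assembled from equations (11)-(16); the function and argument names are illustrative, not taken from the paper.

```python
import numpy as np

def newton_raphson(score, hessian, beta0, tol=1e-8, max_iter=100):
    """Iterate beta <- beta - H(beta)^{-1} g(beta) until the update is small.
    `score` and `hessian` are assumed to return the gradient vector and the
    (square) Hessian matrix of the log-likelihood evaluated at beta."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        g = score(beta)
        H = hessian(beta)
        delta = np.linalg.solve(H, g)        # solve H * delta = g
        beta = beta - delta
        if np.max(np.abs(delta)) < tol:      # converged: update is negligible
            break
    return beta
```

At convergence, the variance-covariance estimate of $\hat{\boldsymbol{\beta}}$ is obtained from the inverse of the matrix of expectations in equations (12) and (16), as described above, and the likelihood ratio test is then used to assess parameter significance.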

2.3 Parameter estimation using the Bayesian method for Bayesian bivariate binary logistic regression