STATISTIKA UGM YOGYAKARTA

Introduction to Mathematical Statistics 2

By: Indri Rivani Purwanti (10990), Gempur Safar (10877), Windu Pramana Putra Barus (10835), Adhiarsa Rakhman (11063)

Lecturer: Prof. Dr. Sri Haryatmi Kartiko, S.Si., M.Sc.

THE USE OF MATHEMATICAL STATISTICS

Introduction to Mathematical Statistics (IMS) can be applied across the whole statistics curriculum, for example in:

• Statistical Methods I and II
• Introduction to Probability Models
• Maximum Likelihood Estimation
• Waiting Times Theory
• Analysis of Life-Testing Models
• Introduction to Reliability
• Nonparametric Statistical Methods
• etc.

STATISTICAL METHODS

In Statistical Methods, Introduction to Mathematical Statistics is used to:

• introduce and explain random variables, probability models, and the kinds of cases that can be solved with the right probability model;
• determine the mean (expected value), variance, and covariance of random variables;
• determine confidence intervals for certain random variables;
• etc.

  Lee J. Bain & Max Engelhardt

Probability Models

Mathematical Statistics also describes the probability models discussed by statisticians. IMS is used to help students master how to choose the right probability model for a given random variable.


INTRODUCTION TO RELIABILITY

The most basic concept is the reliability function, which corresponds to the probability that failure occurs after time t. If a random variable X represents the lifetime, or failure time, of a unit, then the reliability of the unit at time t is defined to be

$R(t) = P(X > t) = 1 - F_X(t)$

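As an illustration (added here, not in the original slides), the following minimal Python sketch evaluates $R(t) = 1 - F_X(t)$ for an assumed exponential lifetime model; the mean lifetime of 100 is an arbitrary choice.

```python
import numpy as np
from scipy import stats

# Assumed example: exponential lifetime with mean 100 (arbitrary choice).
mean_life = 100.0
lifetime = stats.expon(scale=mean_life)

t = np.array([10.0, 50.0, 100.0, 200.0])
reliability = 1.0 - lifetime.cdf(t)   # R(t) = P(X > t) = 1 - F_X(t)

for ti, ri in zip(t, reliability):
    print(f"R({ti:6.1f}) = {ri:.4f}")
```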

MAXIMUM LIKELIHOOD ESTIMATION

IMS also introduces us to the MLE.

Let $L(\theta) = f(x_1, \ldots, x_n; \theta)$, $\theta \in \Omega$, be the joint pdf of $X_1, \ldots, X_n$. For a given set of observations $(x_1, \ldots, x_n)$, a value $\hat{\theta}$ in $\Omega$ at which $L(\theta)$ is a maximum is called the maximum likelihood estimate of $\theta$. That is, $\hat{\theta}$ is a value of $\theta$ that satisfies

$f(x_1, \ldots, x_n; \hat{\theta}) = \max_{\theta \in \Omega} f(x_1, \ldots, x_n; \theta)$
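A short numerical sketch (added for illustration, not from the slides): for an assumed exponential sample with mean $\theta$, maximizing the log-likelihood numerically reproduces the closed-form MLE, which for this model is the sample mean; the data below are simulated with an arbitrary true mean of 100.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed example data: an exponential sample with true mean 100 (arbitrary).
rng = np.random.default_rng(0)
x = rng.exponential(scale=100.0, size=50)

def neg_log_likelihood(theta):
    # Exponential pdf with mean theta: f(x; theta) = (1/theta) * exp(-x/theta)
    return -np.sum(-np.log(theta) - x / theta)

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1e4), method="bounded")
print("numerical MLE :", result.x)
print("closed form   :", x.mean())   # for this model the MLE is the sample mean
```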

  

ANALYSIS OF LIFE-TESTING MODELS

Most of the statistical analysis for parametric life-testing models has been developed for the exponential and Weibull models. The exponential model is generally easier to analyze because of the simplicity of its functional form. The Weibull model is more flexible, and thus it provides a more realistic model in many applications, particularly those involving wearout and aging.
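To illustrate the flexibility remark (an added sketch, not part of the slides): the exponential model has a constant hazard rate, while a Weibull model with shape parameter greater than 1 has an increasing hazard, which is why it suits wearout and aging; the parameter values below are arbitrary.

```python
import numpy as np

t = np.linspace(0.5, 5.0, 5)

# Exponential with mean theta: hazard h(t) = 1/theta (constant).
theta = 2.0
h_exp = np.full_like(t, 1.0 / theta)

# Weibull with scale lam and shape beta: h(t) = (beta/lam) * (t/lam)**(beta - 1).
lam, beta = 2.0, 2.5          # beta > 1 gives an increasing hazard (wearout)
h_weib = (beta / lam) * (t / lam) ** (beta - 1)

for ti, he, hw in zip(t, h_exp, h_weib):
    print(f"t = {ti:4.2f}   exponential hazard = {he:.3f}   Weibull hazard = {hw:.3f}")
```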

  

NONPARAMETRIC STATISTICAL METHODS

The IMS also introduces us to nonparametric methods for solving statistical problems, such as:

• one-sample sign test
• binomial test
• two-sample sign test
• Wilcoxon paired-sample signed-rank test
• Wilcoxon and Mann-Whitney tests
• correlation tests / tests of independence
• Wald-Wolfowitz runs test
• etc.

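As a quick illustration (added here; the paired data are invented for the example), scipy.stats implements several of the tests listed above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.normal(50, 5, size=20)
after = before + rng.normal(2, 3, size=20)   # hypothetical paired measurements

# Wilcoxon paired-sample signed-rank test.
w_stat, w_p = stats.wilcoxon(before, after)
print("Wilcoxon signed-rank: statistic =", w_stat, " p-value =", round(w_p, 4))

# Mann-Whitney (Wilcoxon rank-sum) test for two independent samples.
u_stat, u_p = stats.mannwhitneyu(before, after)
print("Mann-Whitney U      : statistic =", u_stat, " p-value =", round(u_p, 4))
```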

  

RELATIONSHIPS BETWEEN TYPES OF CONVERGENCE (KETERKAITAN KONVERGENSI): EXAMPLE

We consider the sequence of "standardized" variables

$Z_n = \dfrac{Y_n - np}{\sqrt{npq}}$, with the simplified notation $\sigma_n = \sqrt{npq}$,

where $Y_n \sim \mathrm{BIN}(n, p)$. The MGF of $Z_n$ is

$M_{Z_n}(t) = E\left(e^{tZ_n}\right) = e^{-npt/\sigma_n} M_{Y_n}(t/\sigma_n) = e^{-npt/\sigma_n}\left(p e^{t/\sigma_n} + q\right)^n = \left(p e^{qt/\sigma_n} + q e^{-pt/\sigma_n}\right)^n.$

By using the series expansion $e^u = 1 + u + u^2/2 + \cdots$,

$p e^{qt/\sigma_n} + q e^{-pt/\sigma_n} = 1 + \dfrac{pq\,t^2}{2\sigma_n^2} + \cdots = 1 + \dfrac{t^2}{2n} + \dfrac{d(n)}{n},$

so that

$M_{Z_n}(t) = \left[1 + \dfrac{t^2}{2n} + \dfrac{d(n)}{n}\right]^n,$

where $d(n) \to 0$ as $n \to \infty$. Therefore

$\lim_{n \to \infty} M_{Z_n}(t) = e^{t^2/2},$

which is the MGF of the standard normal distribution, and hence $Z_n \xrightarrow{d} Z \sim N(0, 1)$.
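A small numerical check of this limit (an added illustration; n = 1000 and p = 0.3 are arbitrary choices): the empirical distribution of the standardized binomial is compared with the standard normal cdf.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p = 1000, 0.3                       # arbitrary example values
q = 1 - p

y = rng.binomial(n, p, size=100_000)
z = (y - n * p) / np.sqrt(n * p * q)   # standardized binomial Z_n

# Compare a few empirical probabilities with the N(0, 1) cdf.
for c in (-1.0, 0.0, 1.0, 2.0):
    print(f"P[Z_n <= {c:+.1f}]  empirical = {np.mean(z <= c):.4f}   "
          f"normal = {stats.norm.cdf(c):.4f}")
```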

  

APPROXIMATION FOR THE BINOMIAL DISTRIBUTION

With the continuity correction,

$P[a \le Y_n \le b] \approx \Phi\!\left(\dfrac{b + 0.5 - np}{\sqrt{npq}}\right) - \Phi\!\left(\dfrac{a - 0.5 - np}{\sqrt{npq}}\right).$

Example: A certain type of weapon has probability p of working successfully. We test n weapons, and the stockpile is replaced if the number of failures, X, is at least one. How large must n be to have P[X ≥ 1] = 0.99 when p = 0.95? Use the normal approximation.

X: number of failures, X ~ BIN(n, 0.05)
p: probability of working successfully = 0.95
q: probability of failure = 0.05

We require

$P[X \ge 1] = 1 - P[X \le 0] = 0.99,$

so by the normal approximation with continuity correction,

$1 - \Phi\!\left(\dfrac{0.5 - 0.05n}{0.218\sqrt{n}}\right) = 0.99 \quad\Longleftrightarrow\quad \Phi\!\left(\dfrac{0.5 - 0.05n}{0.218\sqrt{n}}\right) = 0.01,$

where $0.218 = \sqrt{(0.05)(0.95)}$. Since $\Phi(-2.33) \approx 0.01$,

$\dfrac{0.5 - 0.05n}{0.218\sqrt{n}} = -2.33 \quad\Longrightarrow\quad 0.05n - 0.5 = (2.33)(0.218)\sqrt{n}.$

Squaring both sides gives

$0.0025n^2 - 0.05n + 0.25 = 0.258n \quad\Longrightarrow\quad 0.0025n^2 - 0.308n + 0.25 = 0,$

whose larger root is $n \approx 122.4$, so about n = 123 weapons are needed under this approximation.
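A quick check of this calculation (an added sketch, not in the slides): solving the quadratic numerically and comparing the approximate probability with the exact binomial value at the resulting n.

```python
import numpy as np
from scipy import stats

p_fail = 0.05                            # probability that a weapon fails
# Quadratic from squaring the normal-approximation equation:
# 0.0025 n^2 - 0.308 n + 0.25 = 0
roots = np.roots([0.0025, -0.308, 0.25])
n = int(np.ceil(roots.real.max()))
print("n from the normal approximation:", n)

# Compare approximate and exact P[X >= 1] at that n.
sigma = np.sqrt(n * p_fail * (1 - p_fail))
approx = 1 - stats.norm.cdf((0.5 - n * p_fail) / sigma)
exact = 1 - stats.binom.cdf(0, n, p_fail)   # = 1 - 0.95**n
print("approximate P[X >= 1]:", round(approx, 4))
print("exact       P[X >= 1]:", round(exact, 4))
```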

ASYMPTOTIC NORMAL DISTRIBUTIONS

If $Y_1, Y_2, \ldots$ is a sequence of random variables and $m$ and $c$ are constants such that

$Z_n = \dfrac{\sqrt{n}\,(Y_n - m)}{c} \xrightarrow{d} Z \sim N(0, 1)$ as $n \to \infty$,

then $Y_n$ is said to have an asymptotic normal distribution with asymptotic mean $m$ and asymptotic variance $c^2/n$.

Example: The random sample involves n = 40 lifetimes of electrical parts, $X_i \sim \mathrm{EXP}(100)$, so $\theta = 100$, $E(X_i) = 100$, and $\mathrm{Var}(X_i) = 100^2$. By the CLT,

$Z_n = \dfrac{\sqrt{n}\,(\bar{X} - 100)}{100} \xrightarrow{d} Z \sim N(0, 1),$

so $\bar{X}$ has an asymptotic normal distribution with asymptotic mean $m = 100$ and asymptotic variance $c^2/n = 100^2/40 = 250$.
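A simulation check of this example (added for illustration): the sample mean of 40 EXP(100) lifetimes should be roughly normal with mean 100 and variance 250.

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta = 40, 100.0
reps = 100_000

samples = rng.exponential(scale=theta, size=(reps, n))
xbar = samples.mean(axis=1)

print("mean of X-bar     :", round(xbar.mean(), 2), " (theory: 100)")
print("variance of X-bar :", round(xbar.var(), 2), " (theory: 100**2/40 = 250)")
```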

ASYMPTOTIC DISTRIBUTION OF CENTRAL ORDER STATISTICS

Theorem

Let $X_1, \ldots, X_n$ be a random sample from a continuous distribution with a pdf $f(x)$ that is continuous and nonzero at the $p$th percentile $x_p$, for $0 < p < 1$. If $k/n \to p$ (with $k - np$ bounded), then the sequence of $k$th order statistics $X_{k:n}$ is asymptotically normal with mean $x_p$ and variance $c^2/n$, where

$c^2 = \dfrac{p(1 - p)}{\left[f(x_p)\right]^2}.$

• Example

Let $X_1, \ldots, X_n$ be a random sample from an exponential distribution, $X_i \sim \mathrm{EXP}(1)$, so that $f(x) = e^{-x}$ and $F(x) = 1 - e^{-x}$, $x > 0$. For odd $n$, let $k = (n + 1)/2$, so that $Y_k = X_{k:n}$ is the sample median. If $p = 0.5$, then the median is $x_{0.5} = -\ln(0.5) = \ln 2$, since

$F(x_{0.5}) = 1 - e^{-x_{0.5}} = 0.5 \iff e^{-x_{0.5}} = 0.5 \iff x_{0.5} = -\ln 0.5 = \ln 2,$

and

$c^2 = \dfrac{0.5(1 - 0.5)}{\left[f(\ln 2)\right]^2} = \dfrac{0.25}{(0.5)^2} = 1.$

Thus, $X_{k:n}$ is asymptotically normal with asymptotic mean $x_{0.5} = \ln 2$ and asymptotic variance $c^2/n = 1/n$.
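A simulation of this example (added for illustration; n = 201 and the number of replications are arbitrary): for large odd n the sample median of EXP(1) data should be approximately normal with mean ln 2 and variance 1/n.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 201                     # odd sample size, k = (n + 1) / 2
reps = 20_000

samples = rng.exponential(scale=1.0, size=(reps, n))
medians = np.median(samples, axis=1)

print("mean of sample median     :", round(medians.mean(), 4),
      " (theory: ln 2 =", round(np.log(2), 4), ")")
print("variance of sample median :", round(medians.var(), 5),
      " (theory: 1/n =", round(1 / n, 5), ")")
```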

THEOREM

If $Z_n = \dfrac{\sqrt{n}\,(Y_n - m)}{c} \xrightarrow{d} Z \sim N(0, 1)$, then $Y_n \xrightarrow{p} m$.

Proof

Since $Z_n = \sqrt{n}\,(Y_n - m)/c \iff Y_n = m + \dfrac{c}{\sqrt{n}} Z_n$, and using $E(Z_n) = 0$ and $\mathrm{Var}(Z_n) = 1$,

$E(Y_n) = E\!\left(m + \dfrac{c}{\sqrt{n}} Z_n\right) = m + \dfrac{c}{\sqrt{n}} \cdot 0 = m,$

$\mathrm{Var}(Y_n) = \mathrm{Var}\!\left(m + \dfrac{c}{\sqrt{n}} Z_n\right) = \dfrac{c^2}{n} \cdot 1 = \dfrac{c^2}{n}.$

By Chebyshev's inequality,

$P\left[\,|Y_n - E(Y_n)| < \varepsilon\,\right] \ge 1 - \dfrac{\mathrm{Var}(Y_n)}{\varepsilon^2},$

so

$P\left[\,|Y_n - m| < \varepsilon\,\right] \ge 1 - \dfrac{c^2}{n \varepsilon^2} \to 1$ as $n \to \infty$,

and therefore $Y_n \xrightarrow{p} m$.
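A numerical illustration of this theorem (an added sketch; the EXP(100) model and the choice ε = 20 are arbitrary): taking $Y_n = \bar{X}$ with $m = 100$ and $c = 100$, the probability $P[|\bar{X} - 100| < \varepsilon]$ approaches 1 as n grows and stays above the Chebyshev bound $1 - c^2/(n\varepsilon^2)$.

```python
import numpy as np

rng = np.random.default_rng(5)
theta, eps, reps = 100.0, 20.0, 5_000    # arbitrary example values

for n in (40, 160, 640):
    xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
    empirical = np.mean(np.abs(xbar - theta) < eps)
    bound = 1 - theta**2 / (n * eps**2)   # Chebyshev: 1 - c^2 / (n * eps^2)
    print(f"n = {n:4d}   empirical P[|Xbar - 100| < 20] = {empirical:.3f}   "
          f"Chebyshev bound = {bound:.3f}")
```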

THEOREM

For a sequence of random variables, if $Y_n \xrightarrow{p} Y$, then $Y_n \xrightarrow{d} Y$.

For the special case $Y = c$, the limiting distribution is the degenerate distribution $P[Y = c] = 1$. This was the condition we initially used to define stochastic convergence.

If $Y_n \xrightarrow{p} c$, then for any function $g(y)$ that is continuous at $c$, $g(Y_n) \xrightarrow{p} g(c)$.

THEOREM

If $X_n$ and $Y_n$ are two sequences of random variables such that $X_n \xrightarrow{p} c$ and $Y_n \xrightarrow{p} d$, then:

1. $aX_n + bY_n \xrightarrow{p} ac + bd$.
2. $X_n Y_n \xrightarrow{p} cd$.
3. $X_n / c \xrightarrow{p} 1$, for $c \ne 0$.
4. $1/X_n \xrightarrow{p} 1/c$ if $P[X_n \ne 0] = 1$ for all $n$, and $c \ne 0$.
5. $\sqrt{X_n} \xrightarrow{p} \sqrt{c}$ if $P[X_n \ge 0] = 1$ for all $n$.

Example

Suppose that $Y \sim \mathrm{BIN}(n, p)$ and let $\hat{p} = Y/n$. Then

$E(\hat{p}) = E(Y)/n = np/n = p,$

$\mathrm{Var}(\hat{p}) = \mathrm{Var}(Y)/n^2 = npq/n^2 = pq/n.$

By Chebyshev's inequality,

$P\left[\,|\hat{p} - E(\hat{p})| < \varepsilon\,\right] \ge 1 - \dfrac{\mathrm{Var}(\hat{p})}{\varepsilon^2},$

so

$P\left[\,|\hat{p} - p| < \varepsilon\,\right] \ge 1 - \dfrac{pq}{n\varepsilon^2}$ and $\lim_{n \to \infty} P\left[\,|\hat{p} - p| < \varepsilon\,\right] = 1,$

which means $\hat{p} = Y/n \xrightarrow{p} p$. Thus it follows from the theorem above that $\hat{p}(1 - \hat{p}) \xrightarrow{p} p(1 - p)$.
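A brief simulation of this example (added illustration; p = 0.3 is an arbitrary choice), showing $\hat{p}$ and $\hat{p}(1 - \hat{p})$ settling near $p$ and $p(1 - p)$ as n grows:

```python
import numpy as np

rng = np.random.default_rng(6)
p = 0.3                      # arbitrary true success probability

for n in (100, 10_000, 1_000_000):
    y = rng.binomial(n, p)
    p_hat = y / n
    print(f"n = {n:9d}   p_hat = {p_hat:.4f}   "
          f"p_hat*(1 - p_hat) = {p_hat * (1 - p_hat):.4f}   "
          f"(targets: {p}, {p * (1 - p):.4f})")
```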

  

Theorem (Slutsky's Theorem)

If $X_n$ and $Y_n$ are two sequences of random variables such that $X_n \xrightarrow{p} c$ and $Y_n \xrightarrow{d} Y$, then:

1. $X_n + Y_n \xrightarrow{d} c + Y$.
2. $X_n Y_n \xrightarrow{d} cY$.
3. $Y_n / X_n \xrightarrow{d} Y/c$, for $c \ne 0$.

Note that as a special case $X_n$ could be an ordinary numerical sequence such as $X_n = n/(n - 1)$.

If $Y_n \xrightarrow{d} Y$, then for any continuous function $g(y)$, $g(Y_n) \xrightarrow{d} g(Y)$.

If $\dfrac{\sqrt{n}\,(Y_n - m)}{c} \xrightarrow{d} Z \sim N(0, 1)$, and if $g(y)$ has a nonzero derivative at $y = m$, $g'(m) \ne 0$, then

$\dfrac{\sqrt{n}\,\left[g(Y_n) - g(m)\right]}{c\, g'(m)} \xrightarrow{d} Z \sim N(0, 1).$
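A numerical illustration of the last result, the delta method (an added sketch, not from the slides): taking $Y_n = \hat{p}$ from a BIN(n, p) sample, $m = p$, $c = \sqrt{p(1 - p)}$, and $g(y) = y(1 - y)$ with $p \ne 0.5$ so that $g'(p) \ne 0$, the standardized quantity should be approximately N(0, 1); n = 400 and p = 0.3 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 400, 0.3                  # arbitrary example values (p != 0.5 so g'(p) != 0)
q = 1 - p
c = np.sqrt(p * q)               # asymptotic standard-deviation parameter for p_hat

def g(y):
    return y * (1 - y)

g_prime = 1 - 2 * p              # g'(p)

p_hat = rng.binomial(n, p, size=100_000) / n
z = np.sqrt(n) * (g(p_hat) - g(p)) / (c * g_prime)

print("mean of z     :", round(z.mean(), 3), " (theory: ~0)")
print("variance of z :", round(z.var(), 3), " (theory: ~1)")
```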

Any questions?