The Use of Mathematical Statistics



Introduction of Mathematical Statistics 2
By:
Indri Rivani Purwanti (10990)
Gempur Safar (10877)
Windu Pramana Putra Barus (10835)
Adhiarsa Rakhman (11063)
Lecturer:
Prof. Dr. Sri Haryatmi Kartiko, S.Si., M.Sc.
STATISTIKA UGM YOGYAKARTA
THE USE OF MATHEMATICAL STATISTICS
Introduction to Mathematical Statistics (IMS) can be
applied across the whole range of statistics subjects, such as:
 Statistical Methods I and II
 Introduction to Probability Models
 Maximum Likelihood Estimation
 Waiting-Times Theory
 Analysis of Life-Testing Models
 Introduction to Reliability
 Nonparametric Statistical Methods
 etc.
STATISTICAL METHODS
In Statistical Methods, Introduction of Mathematical Statistics is used to:
 introduce and explain random variables, probability models, and the cases that can be solved by the right probability models;
 determine the mean (expected value), variance, and covariance of random variables;
 determine the confidence intervals of certain random variables;
 etc.
Lee J. Bain & Max Engelhardt
Probability Models
Mathematical Statistics also describes the probability models discussed by statisticians.
The IMS is used to help students master how to choose the right probability model for certain random variables.
INTRODUCTION OF RELIABILITY
The most basic concept is the reliability function, which corresponds to the probability of failure after time t.
The reliability concept:
If a random variable X represents the lifetime or time to failure of a unit, then the reliability of the unit at time t is defined to be:
R(t) = P(X > t) = 1 − F_X(t)
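As an illustration, the reliability function can be evaluated directly once a lifetime distribution is chosen. A minimal sketch, assuming a hypothetical exponential lifetime with mean 100 hours (the distribution and numbers are illustrative, not from the slides):

```python
import math

# Reliability R(t) = P(X > t) = 1 - F_X(t), here for a hypothetical
# exponential lifetime with mean 100, where F_X(t) = 1 - exp(-t/mean).
def reliability(t, mean=100.0):
    cdf = 1.0 - math.exp(-t / mean)   # F_X(t)
    return 1.0 - cdf                  # R(t)

print(reliability(0.0))    # every unit survives past t = 0 -> 1.0
print(reliability(100.0))  # survival past the mean lifetime, e^-1 ≈ 0.3679
```

R(t) is decreasing in t, as a reliability function must be: the longer the required survival time, the smaller the probability of surviving it.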
MAXIMUM LIKELIHOOD ESTIMATION
IMS introduces us to the MLE:
Let L(θ) = f(x1, …, xn; θ), θ ∈ Ω, be the joint pdf of X1, …, Xn. For a given set of observations (x1, …, xn), a value θ̂ in Ω at which L(θ) is a maximum is called the maximum likelihood estimate of θ. That is, θ̂ is a value of θ that satisfies
f(x1, …, xn; θ̂) = max_{θ ∈ Ω} f(x1, …, xn; θ)
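As a concrete sketch of the definition: for an exponential model the maximizer of L(θ) is known in closed form, θ̂ = 1/x̄, so we can check numerically that the likelihood at θ̂ dominates nearby values of θ. The data below are hypothetical:

```python
import math

# Log-likelihood of an exponential sample:
# log f(x1,...,xn; theta) = n*log(theta) - theta*sum(xs)
def log_likelihood(theta, xs):
    n = len(xs)
    return n * math.log(theta) - theta * sum(xs)

xs = [1.2, 0.7, 2.5, 0.4, 1.8]      # hypothetical observations
theta_hat = len(xs) / sum(xs)       # closed-form MLE: 1 / sample mean

# theta_hat should satisfy L(theta_hat) = max over theta of L(theta)
for theta in (0.5 * theta_hat, 0.9 * theta_hat, 1.1 * theta_hat, 2.0 * theta_hat):
    assert log_likelihood(theta_hat, xs) >= log_likelihood(theta, xs)

print(round(theta_hat, 4))  # -> 0.7576
```

Maximizing the log-likelihood instead of L(θ) itself is the usual trick: the logarithm is monotone, so both have the same maximizer.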
ANALYSIS OF LIFE-TESTING MODELS
Most of the statistical analyses for parametric life-testing models have been developed for the exponential and Weibull models.
The exponential model is generally easier to analyze because of the simplicity of its functional form.
The Weibull model is more flexible, and thus it provides a more realistic model in many applications, particularly those involving wearout and aging.
NONPARAMETRIC STATISTICAL METHODS
The IMS also introduces to us the nonparametric methods of solving a statistical problem, such as:
 one-sample sign test
 binomial test
 two-sample sign test
 Wilcoxon paired-sample signed-rank test
 Wilcoxon and Mann-Whitney tests
 correlation tests / tests of independence
 Wald-Wolfowitz runs test
 etc.
RELATIONSHIPS AMONG CONVERGENCE CONCEPTS
EXAMPLE
We consider the sequence of "standardized" variables

Z_n = (Y_n − np) / √(npq)

with the simplified notation σ_n = √(npq). By using the series expansion e^u = 1 + u + u²/2 + ⋯, the MGF of Z_n is

M_{Z_n}(t) = e^{−npt/σ_n} M_{Y_n}(t/σ_n)
           = e^{−npt/σ_n} (q + p e^{t/σ_n})^n
           = [q e^{−pt/σ_n} + p e^{qt/σ_n}]^n
           = [q(1 − pt/σ_n + p²t²/(2σ_n²) − ⋯) + p(1 + qt/σ_n + q²t²/(2σ_n²) + ⋯)]^n
           = [1 + t²/(2n) + d(n)/n]^n

where d(n)/n collects the remaining higher-order terms and d(n) → 0 as n → ∞. Hence

lim_{n→∞} M_{Z_n}(t) = e^{t²/2}

which is the MGF of the standard normal distribution, so Z_n →d Z ~ N(0, 1).
APPROXIMATION FOR THE BINOMIAL DISTRIBUTION

P(a ≤ Y_n ≤ b) ≈ Φ((b + 0.5 − np)/√(npq)) − Φ((a − 0.5 − np)/√(npq))
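This continuity-corrected approximation can be compared against the exact binomial sum. A sketch with hypothetical values n = 100, p = 0.4:

```python
import math
from math import comb

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binom_exact(a, b, n, p):
    # exact P(a <= Y <= b) for Y ~ BIN(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a, b + 1))

def binom_normal_approx(a, b, n, p):
    # P(a <= Yn <= b) ~ Phi((b+0.5-np)/sqrt(npq)) - Phi((a-0.5-np)/sqrt(npq))
    mu, sd = n * p, math.sqrt(n * p * (1 - p))
    return phi((b + 0.5 - mu) / sd) - phi((a - 0.5 - mu) / sd)

# hypothetical illustration: n = 100, p = 0.4
print(round(binom_exact(35, 45, 100, 0.4), 4))
print(round(binom_normal_approx(35, 45, 100, 0.4), 4))
```

With n this large the two values agree to roughly two decimal places, which is why the normal approximation is usable in the example that follows.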
Example:
A certain type of weapon has probability p of working successfully. We test n weapons, and the stockpile is replaced if the number of failures, X, is at least one. How large must n be to have P[X ≥ 1] = 0.99 when p = 0.95? Use the normal approximation.
X : number of failures
p : probability of working successfully = 0.95
q : probability of failure = 0.05

P[X ≥ 1] = 0.99
1 − P[X = 0] = 0.99
1 − Φ((0 + 0.5 − 0.05n)/√(n(0.05)(0.95))) = 0.99
Φ((0.5 − 0.05n)/(0.218√n)) = 0.01
(0.05n − 0.5)/(0.218√n) = 2.33
Squaring both sides: 0.0025n² − 0.05n + 0.25 = 0.258n
0.0025n² − 0.308n + 0.25 = 0
n = (−b + √(b² − 4ac))/(2a) = (0.308 + √(0.308² − 4(0.0025)(0.25)))/(2(0.0025)) ≈ 122
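The arithmetic of this example can be re-checked numerically: the quadratic root and the resulting probability are reproduced below (0.218 ≈ √(0.05·0.95), as in the slides):

```python
import math

# Solve 0.0025 n^2 - 0.308 n + 0.25 = 0 and take the larger root
a, b, c = 0.0025, -0.308, 0.25
n = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(n))  # -> 122

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Check: with n = 122, the approximate P[X >= 1] should be close to 0.99
n_int = 122
p_fail = 1.0 - phi((0.5 - 0.05 * n_int) / (0.218 * math.sqrt(n_int)))
print(round(p_fail, 3))  # close to 0.99
```

Plugging n = 122 back into the approximation confirms that the requirement P[X ≥ 1] = 0.99 is met.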
ASYMPTOTIC NORMAL DISTRIBUTIONS
If Y1, Y2, … is a sequence of random variables and m and c are constants such that

Z_n = (Y_n − m)/(c/√n) →d Z ~ N(0, 1)

as n → ∞, then Y_n is said to have an asymptotic normal distribution with asymptotic mean m and asymptotic variance c²/n.
Example:
The random sample involves n = 40 lifetimes of electrical parts, X_i ~ EXP(100), so that E(X) = θ = 100 and Var(X) = θ² = 100². By the CLT,

Z = (X̄_n − 100)/(100/√40)

is approximately standard normal, and X̄_n has an asymptotic normal distribution with mean m = 100 and variance c²/n = 100²/40 = 250.
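A quick simulation sketch of this example (the sample size and distribution are from the slides; the replication count is an arbitrary choice):

```python
import random
import statistics

# Means of n = 40 lifetimes, Xi ~ EXP(mean 100), should be approximately
# N(100, 100^2/40 = 250).  expovariate takes the rate 1/mean.
random.seed(1)
n, reps = 40, 20000
means = [statistics.fmean(random.expovariate(1 / 100) for _ in range(n))
         for _ in range(reps)]

print(round(statistics.fmean(means), 1))     # close to 100
print(round(statistics.variance(means), 0))  # close to 250
```

The empirical mean and variance of the simulated sample means match the asymptotic values m = 100 and c²/n = 250.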
ASYMPTOTIC DISTRIBUTION OF CENTRAL ORDER STATISTICS
 Theorem
Let X1, …, Xn be a random sample from a continuous distribution with a pdf f(x)
that is continuous and nonzero at the pth percentile, xp, for 0 < p < 1. If k/n  p
(with k – np bounded), then the sequence of kth order statistics, Xk:n, is
asymptotically normal with mean xp and variance c2/n, where
c² = p(1 − p)/[f(x_p)]²
• Example
Let X1, …, Xn be a random sample from an exponential distribution, Xi ~ EXP(1), so
that f(x) = e-x and F(x) = 1 – e-x; x > 0. For odd n, let k = (n+1)/2, so that Yk = Xk:n is
the sample median. If p = 0.5, then the median is x0.5 = - ln (0.5) = ln 2 and
c² = 0.5(1 − 0.5)/[f(ln 2)]² = 0.25/(0.5)² = 1

where x_{0.5} solves F(x_{0.5}) = 0.5: 1 − e^{−x_{0.5}} = 0.5 ⟹ e^{−x_{0.5}} = 0.5 ⟹ x_{0.5} = −ln(1/2) = ln 2.
Thus, Xk:n is asymptotically normal with asymptotic mean x0.5 = ln 2 and
asymptotic variance c2/n = 1/n.
THEOREM
If Z_n = √n (Y_n − m)/c →d Z ~ N(0, 1), then Y_n →p m.
Proof
By Chebyshev's inequality,
P[|Y_n − E(Y_n)| < ε] ≥ 1 − Var(Y_n)/ε²
Writing Z_n = √n (Y_n − m)/c as Y_n = cZ_n/√n + m,
E(Y_n) = E(cZ_n/√n + m) = (c/√n) E(Z_n) + m = (c/√n)·0 + m = m
Var(Y_n) = Var(cZ_n/√n + m) = (c²/n) Var(Z_n) = (c²/n)·1 = c²/n
so that
P[|Y_n − m| < ε] ≥ 1 − c²/(nε²)
lim_{n→∞} P[|Y_n − m| < ε] ≥ lim_{n→∞} [1 − c²/(nε²)] = 1
and therefore Y_n →p m.
THEOREM
For a sequence of random variables, if Y_n →p Y, then Y_n →d Y.
For the special case Y = c, the limiting distribution is the degenerate distribution P[Y = c] = 1. This was the condition we initially used to define stochastic convergence.
If Y_n →p c, then for any function g(y) that is continuous at c,
g(Y_n) →p g(c)
THEOREM
If X_n and Y_n are two sequences of random variables such that X_n →p c and Y_n →p d, then:
1. aX_n + bY_n →p ac + bd.
2. X_nY_n →p cd.
3. X_n/c →p 1, for c ≠ 0.
4. 1/X_n →p 1/c if P[X_n ≠ 0] = 1 for all n, c ≠ 0.
5. √X_n →p √c if P[X_n ≥ 0] = 1 for all n.
Example
Suppose that Y ~ BIN(n, p) and let p̂ = Y/n. Then
E(p̂) = E(Y/n) = np/n = p
Var(p̂) = Var(Y/n) = npq/n² = pq/n
By Chebyshev's inequality,
P[|p̂ − E(p̂)| < ε] ≥ 1 − Var(p̂)/ε²
P[|p̂ − p| < ε] ≥ 1 − pq/(nε²)
lim_{n→∞} P[|p̂ − p| < ε] ≥ lim_{n→∞} [1 − pq/(nε²)] = 1
so p̂ = Y/n →p p, and thus it follows that p̂(1 − p̂) →p p(1 − p).
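A simulation sketch of this convergence, with a hypothetical p = 0.3: as n grows, p̂ concentrates around p, and by the continuity result p̂(1 − p̂) approaches p(1 − p) = 0.21.

```python
import random

# p_hat = Y/n for Y ~ BIN(n, p): the proportion of successes in n
# Bernoulli(p) trials concentrates around p as n grows.
random.seed(3)
p = 0.3
for n in (100, 10000):
    y = sum(random.random() < p for _ in range(n))  # simulate Y ~ BIN(n, p)
    p_hat = y / n
    print(n, round(p_hat, 3))

# continuity: p_hat*(1 - p_hat) should be close to p*(1 - p) = 0.21
print(round(p_hat * (1 - p_hat), 3))
```

The n = 10000 estimate is visibly tighter around 0.3 than the n = 100 one, which is exactly what the Chebyshev bound 1 − pq/(nε²) predicts.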
Theorem
Slutsky's Theorem: If X_n and Y_n are two sequences of random variables such that X_n →p c and Y_n →d Y, then:
1. X_n + Y_n →d c + Y.
2. X_nY_n →d cY.
3. Y_n/X_n →d Y/c, for c ≠ 0.
Note that as a special case Xn could be an ordinary numerical sequence such as Xn = n/(n-1).
If Y_n →d Y, then for any continuous function g(y), g(Y_n) →d g(Y).
If √n (Y_n − m)/c →d Z ~ N(0, 1), and if g(y) has a nonzero derivative at y = m, g′(m) ≠ 0, then

√n [g(Y_n) − g(m)]/(c g′(m)) →d Z ~ N(0, 1)
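A simulation sketch of this result (often called the delta method), using Y_n = the mean of n EXP(1) variables (so m = 1, c = 1) and g(y) = ln y with g′(1) = 1; the sample sizes are arbitrary choices:

```python
import math
import random
import statistics

# sqrt(n)*(g(Yn) - g(m))/(c*g'(m)) should be approximately N(0, 1).
# Here g(y) = ln y, m = 1, c = 1, g'(1) = 1, so the statistic reduces
# to sqrt(n) * ln(mean of n EXP(1) draws).
random.seed(4)
n, reps = 200, 5000
zs = [math.sqrt(n) * math.log(statistics.fmean(random.expovariate(1.0)
                                               for _ in range(n)))
      for _ in range(reps)]

print(round(statistics.fmean(zs), 2))  # near 0
print(round(statistics.stdev(zs), 2))  # near 1
```

The simulated statistics have mean near 0 and standard deviation near 1, consistent with the claimed N(0, 1) limit.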
Any Question ? ? ?