Transcript: Estimation

Point estimation, interval estimation
• Point estimation
• Desirable properties of point estimators
• Interval estimation
• Confidence intervals
Estimator
Assume that we have a sample (x1, x2, ..., xn) from a given population. All parameters of
the population are known except some parameter θ. We want to determine the
unknown parameter θ from the given observations. In other words, we want to
determine a number, or a range of numbers, from the observations that can be
taken as a value of θ.
Estimator – a method of estimation.
Estimate – the result of applying an estimator to a particular sample.
Point estimation – as the name suggests, the estimation of a population
parameter with a single number.
The problem of statistics is not to find estimates but to find estimators. An estimator is not
rejected because it gives one bad result for one sample. It is rejected when it
gives bad results in the long run, i.e. when it gives bad results for many, many samples.
An estimator is accepted or rejected depending on its sampling properties:
it is judged by the properties of the distribution of the estimates it gives
rise to.
Properties of estimator
Since an estimator gives rise to an estimate that depends on the sample points (x1, x2, ..., xn),
the estimate is a function of the sample points. The sample points are random variables,
therefore the estimate is itself a random variable and has a probability distribution. We want
an estimator to have several desirable properties, such as:
1. Consistency
2. Unbiasedness
3. Minimum variance
In general it is not possible for an estimator to have all of these properties.
Note that an estimator is a sample statistic, i.e. it is a function of the sample elements.
Properties of estimator: Consistency
For many estimators the variance of the sampling distribution decreases as
the sample size increases. We would like the estimator to stay as close as possible
to the parameter it estimates as the sample size increases.
We want to estimate θ and tn is an estimator. If tn tends to θ in probability as n
increases then the estimator is called consistent, i.e. for any given ε > 0 and δ > 0 there is
an integer n0 such that for all sample sizes n > n0 the following condition is
satisfied:

$$P(|t_n - \theta| < \varepsilon) > 1 - \delta$$
The property of consistency is a limiting property. It does not require any particular behaviour
of the estimator for a finite sample size.
If there is one consistent estimator then you can construct infinitely many others. For
example, if tn is consistent then (n/(n-1))tn is also consistent.
Example: (1/n)Σxi and (1/(n-1))Σxi are both consistent estimators for the population
mean.
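The consistency property can be checked numerically. Below is a minimal Python sketch (assuming numpy is available; the population, ε and sample sizes are illustrative choices, not from the notes) that estimates P(|tn − μ| < ε) for the sample mean at several sample sizes and shows it approaching 1.

# Minimal sketch: empirical check of consistency of the sample mean.
# Population, epsilon and sample sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0            # illustrative population mean and std
eps = 0.1                       # the epsilon in P(|t_n - mu| < eps)
n_trials = 1000                 # repeated samples per sample size

for n in (10, 100, 1000, 10000):
    samples = rng.normal(mu, sigma, size=(n_trials, n))
    t_n = samples.mean(axis=1)                   # estimator: the sample mean
    coverage = np.mean(np.abs(t_n - mu) < eps)   # estimate of P(|t_n - mu| < eps)
    print(f"n = {n:6d}   P(|t_n - mu| < {eps}) ~ {coverage:.3f}")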
Properties of estimator: Unbiasedness
If an estimator tn estimates θ then the difference between them, (tn − θ), is called the
estimation error. The bias of the estimator is defined as the expectation value of this
difference:

$$B = E(t_n - \theta) = E(t_n) - \theta$$
If the bias is equal to zero then the estimator is called unbiased. For example, the sample
mean is an unbiased estimator:
$$E(\bar{x} - \mu) = E\left(\frac{1}{n}\sum_{i=1}^{n} x_i - \mu\right) = \frac{1}{n}\sum_{i=1}^{n} E(x_i) - \mu = E(x) - \mu = 0$$
Here we used the fact that expectation and summation can be interchanged (remember
that expectation is an integral for continuous random variables and a summation
for discrete random variables) and that the expectation of each sample point is
equal to the population mean.
Knowledge of the population distribution was not necessary for the derivation of the
unbiasedness of the sample mean. This result holds for samples taken from a
population with any distribution for which the first moment exists.
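As a quick illustration, the following sketch (numpy assumed, parameters purely illustrative) draws many samples from a non-normal population and shows that the average of the sample means is close to the population mean.

# Minimal sketch: the sample mean is unbiased for any population with a
# first moment. Population choice (exponential) and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
mu = 3.0                                           # exponential with mean 3
samples = rng.exponential(mu, size=(100000, 20))   # many samples of size 20
xbar = samples.mean(axis=1)                        # one sample mean per row
print("average of sample means:", xbar.mean())     # close to mu = 3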
Example of a biased estimator: the sample variance.
Given a sample of size n from a population with unknown mean (μ) and variance
(σ²), we estimate the mean as we already know and the variance (intuitively) as:

$$t_n = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n}\sum_{i=1}^{n} x_i^2 - \bar{x}^2$$
What is the bias of this estimator? We could derive the distribution of tn and then use it to
find its expectation value. If the population has a normal distribution then tn is a
multiple of a χ² distribution with n-1 degrees of freedom. Let us use a direct
approach instead:
$$E(t_n) = \frac{1}{n}\sum_{i=1}^{n} E(x_i^2) - E\Big(\big(\tfrac{1}{n}\sum_{i=1}^{n} x_i\big)^2\Big) = E(x^2) - \frac{1}{n^2}\Big(\sum_{i=1}^{n} E(x_i^2) + \sum_{i \neq j} E(x_i x_j)\Big) = E(x^2) - \frac{1}{n^2}\big(n E(x^2) + n(n-1) E(x)^2\big)$$

$$= E(x^2) - \frac{1}{n} E(x^2) - \frac{n-1}{n} E(x)^2 = \frac{n-1}{n}\big(E(x^2) - E(x)^2\big) = \frac{n-1}{n}\sigma^2$$
The sample variance tn is therefore not an unbiased estimator for the population variance. That is why,
when the mean and the variance are both unknown, the following equation is used for the
sample variance:

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$$
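A simulation makes the bias visible. The sketch below (numpy assumed, all numbers illustrative) compares the average of the 1/n estimator with ((n−1)/n)σ² and the average of the 1/(n−1) estimator with σ².

# Minimal sketch: bias of the 1/n sample variance versus the 1/(n-1) version.
# Numbers (population, n, number of trials) are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 0.0, 3.0, 5                  # true variance sigma^2 = 9
samples = rng.normal(mu, sigma, size=(200000, n))

var_biased   = samples.var(axis=1, ddof=0)  # t_n = (1/n)     * sum (x_i - xbar)^2
var_unbiased = samples.var(axis=1, ddof=1)  # s^2 = (1/(n-1)) * sum (x_i - xbar)^2

print("E(t_n) ~", var_biased.mean(),   " theory:", (n - 1) / n * sigma**2)
print("E(s^2) ~", var_unbiased.mean(), " theory:", sigma**2)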
Property of estimator: mean square error and bias
The expectation value of the square of the difference between an estimator and the
expectation of the estimator is called its variance:

$$V = E\big(t_n - E(t_n)\big)^2$$

Exercise: What is the variance of the sample mean?
As we noted, if tn is an estimator for θ then the difference between them is the error of the
estimation. The expectation value of this error is the bias. The expectation value of the square
of this error is called the mean square error (m.s.e.):

$$M_\theta = E(t_n - \theta)^2$$

It can be expressed through the bias and the variance of the estimator:

$$M_\theta(t_n) = E(t_n - \theta)^2 = E\big(t_n - E(t_n) + E(t_n) - \theta\big)^2 = E\big(t_n - E(t_n)\big)^2 + \big(E(t_n) - \theta\big)^2 = V(t_n) + B^2(t_n)$$

(The cross term vanishes because E(tn) − θ is a constant and E(tn − E(tn)) = 0.)
The m.s.e. is equal to the square of the estimator's bias plus the variance of the estimator. If the
bias is 0 then the m.s.e. is equal to the variance. In estimation there is usually a trade-off
between unbiasedness and minimum variance. In an ideal world we would like to
have a minimum variance unbiased estimator, but that is not always possible.
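The decomposition M = V + B² can be verified numerically. The sketch below (numpy assumed, illustrative parameters) does so for the biased variance estimator tn.

# Minimal sketch: numerical check of m.s.e. = variance + bias^2 for the
# biased variance estimator t_n. All numeric choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)
sigma2, n = 4.0, 10
samples = rng.normal(0.0, np.sqrt(sigma2), size=(200000, n))
t_n = samples.var(axis=1, ddof=0)           # biased estimator of sigma^2

mse  = np.mean((t_n - sigma2) ** 2)         # M = E(t_n - theta)^2
var  = np.var(t_n)                          # V = E(t_n - E(t_n))^2
bias = np.mean(t_n) - sigma2                # B = E(t_n) - theta

print("m.s.e.           :", mse)
print("variance + bias^2:", var + bias**2)  # should match the m.s.e. closely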
Intuitive estimators: plug-in
One such estimator is the plug-in estimator. It has only an intuitive basis. If the parameter we want to
estimate is expressed as θ = t(F) then the estimator is taken as θ̂ = t(F̂), where F is
the population distribution and F̂ is its sample equivalent (the empirical distribution).
Example: the population mean is calculated as:

$$\mu = \int x f(x)\,dx$$

Since the sample is drawn from the population with density f(x), the sample mean
is the plug-in estimator for the population mean.
Exercise: What is the plug-in estimator for the population variance? What is the plug-in
estimator for the covariance? Hint: the population variance and covariance are
calculated as:

$$\sigma^2 = \int (x - \mu)^2 f(x)\,dx \quad \text{and} \quad \mathrm{cov}(X, Y) = \int (x - \mu_x)(y - \mu_y) f(x, y)\,dx\,dy$$

Replace the integration with summation and divide by the number of elements in the
sample. Since the sample was drawn from the population with the given distribution,
it is not necessary to multiply by f(x).
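One possible reading of the hint, written as code: the sketch below (numpy assumed; the function names are ours, chosen for illustration) replaces the integrals by sample averages, dividing by n without the 1/(n-1) correction.

# Minimal sketch of plug-in estimators: integrals over f(x) are replaced by
# averages over the sample (divide by n, no 1/(n-1) correction).
# Function names and test data are illustrative assumptions.
import numpy as np

def plugin_variance(x):
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 2)          # (1/n) sum (x_i - xbar)^2

def plugin_covariance(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean((x - x.mean()) * (y - y.mean()))

rng = np.random.default_rng(4)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)
print(plugin_variance(x), plugin_covariance(x, y))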
Least-squares estimator
Another well known and popular estimator is the least-squares estimator. If we have a
sample and we think (because of some knowledge we had before) that all
parameters of interest enter through the mean value of the population, then the
least-squares method estimates θ by minimising the weighted sum of squared differences
between the observations and the mean value:

$$\sum_{i=1}^{n} w_i \big(x_i - \mu(\theta)\big)^2 \rightarrow \min$$
Exercise: Verify that if the only unknown parameter is the mean of the population and all
wi are equal to each other, then the least-squares estimator results in the
sample mean.
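A numerical check of this exercise can be sketched as follows (numpy and scipy assumed; the data and weights are illustrative): minimising the weighted sum of squares with equal weights returns the sample mean.

# Minimal sketch for the exercise: with equal weights w_i and mu(theta) = theta,
# minimising sum w_i (x_i - theta)^2 gives the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
x = rng.normal(10.0, 2.0, size=30)               # illustrative sample
w = np.ones_like(x)                              # equal weights

loss = lambda theta: np.sum(w * (x - theta) ** 2)
res = minimize_scalar(loss)

print("least-squares estimate:", res.x)
print("sample mean           :", x.mean())       # the two agree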
Interval estimation
Estimation of the parameter alone is not sufficient. It is necessary to analyse how
confident we can be about this particular estimate. One way of doing this is by
defining confidence intervals. If we have estimated θ we want to know if the
"true" parameter is close to our estimate. In other words, we want to find an
interval that satisfies the following relation:

$$P(G_L \le \theta \le G_U) \ge 1 - \alpha$$

I.e. the probability that the "true" parameter θ is in the interval (GL, GU) is at least 1 − α.
An actual realisation of this interval, (gL, gU), is called a 100(1 − α)% confidence
interval, the limits of the interval are called the lower and upper confidence limits, and 1 − α
is called the confidence level.
Example: If the population variance (σ²) is known and we estimate the population mean, then

$$Z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \quad \text{is normal } N(0,1)$$

We can find from the table that the probability that Z is more than 1 is equal to 0.1587.
The probability that Z is less than -1 is again 0.1587. These values come from the
tables of the standard normal distribution.
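These table values can also be reproduced with software; a minimal sketch using scipy (assumed available):

# Minimal sketch: the tail values quoted above, computed with scipy instead
# of a printed table of the standard normal distribution.
from scipy.stats import norm

print("P(Z > 1)  =", norm.sf(1.0))    # ~0.1587
print("P(Z < -1) =", norm.cdf(-1.0))  # ~0.1587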
Interval estimation: Cont.
Now we can find a confidence interval for the sample mean. Since:

$$P(-1 \le Z \le 1) = P(Z \le 1) - P(Z \le -1) = 1 - P(Z \ge 1) - P(Z \le -1) = 1 - 2 \times 0.1587 = 0.6826$$

then for μ we can write

$$P\Big(-1 \le \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \le 1\Big) = P\big(\bar{x} - \sigma/\sqrt{n} \le \mu \le \bar{x} + \sigma/\sqrt{n}\big) = 0.6826$$

The confidence level that the "true" value is within 1 standard error (the standard deviation of the
sampling distribution) of the sample mean is 0.6826. The probability that the "true"
value is within 2 standard errors of the sample mean is 0.9545.
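Putting it together, a minimal sketch (numpy and scipy assumed; the sample is simulated and σ is treated as known) that computes the one- and two-standard-error intervals and their confidence levels:

# Minimal sketch: confidence intervals for the mean with known sigma.
# The sample below is simulated; all numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
mu_true, sigma, n = 50.0, 8.0, 25
x = rng.normal(mu_true, sigma, size=n)

xbar = x.mean()
se = sigma / np.sqrt(n)                     # standard error of the mean

for k in (1, 2):
    level = norm.cdf(k) - norm.cdf(-k)      # 0.6826 for k=1, 0.9545 for k=2
    print(f"{level:.4f} interval: ({xbar - k*se:.2f}, {xbar + k*se:.2f})")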
What we did here was to find the sampling distribution and use it to define confidence
intervals. Here we used a two-sided symmetric interval. Intervals do not have to be
two-sided or symmetric, and under some circumstances non-symmetric intervals
might be better. For example, it might be better to diagnose a patient for a particular
treatment than not: if the doctor makes an error and does not treat the patient, the patient
might die, but if the doctor makes a mistake and starts the treatment, it can be
stopped and the mistake corrected at some later time.
Interval estimation: Cont.
Above we considered the case when the population variance is known in advance. That is
rarely the case in real life. When both the population mean and variance are
unknown we can still find confidence intervals. In this case we calculate the
sample mean and variance and then consider the distribution of the statistic:

$$Z = \frac{\bar{x} - \mu}{s/\sqrt{n}}$$

Here s² is the sample variance.
Since it is the ratio of a standard normal random variable to the square root of a χ²
random variable divided by its n-1 degrees of freedom, Z has Student's t distribution
with n-1 degrees of freedom. In this case we can use a table of the t distribution to
find confidence levels.
It is not surprising that when we do not know the population variance, confidence intervals
for the same confidence level become larger. That is the price we pay for what
we do not know.
If the number of degrees of freedom becomes large then the t distribution is approximated
well by the normal distribution. For n > 100 we can use the normal distribution to find
confidence levels and intervals.
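As a closing sketch (numpy and scipy assumed; the data are simulated), the interval based on the t distribution when σ is unknown, together with a check that the t quantile approaches the normal quantile as the degrees of freedom grow:

# Minimal sketch: a t-based confidence interval when sigma is unknown, and a
# check that the t quantile approaches the normal quantile for large n.
# The simulated sample and the 95% level are illustrative assumptions.
import numpy as np
from scipy.stats import t, norm

rng = np.random.default_rng(7)
x = rng.normal(50.0, 8.0, size=15)          # simulated sample, sigma unknown
n, alpha = len(x), 0.05

xbar, s = x.mean(), x.std(ddof=1)           # sample mean and sample std (1/(n-1))
t_crit = t.ppf(1 - alpha / 2, df=n - 1)     # wider than the normal 1.96
half = t_crit * s / np.sqrt(n)
print(f"95% t interval: ({xbar - half:.2f}, {xbar + half:.2f})")

for df in (5, 30, 100, 1000):
    print(f"df = {df:5d}  t quantile = {t.ppf(0.975, df):.3f}  "
          f"normal quantile = {norm.ppf(0.975):.3f}")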