Review Lecture 3
Applied Business Forecasting and Regression Analysis
Review Lecture 3
Statistical Inference
Statistical Inference
A market research firm interviews a random sample of 2500 adults. Results: 66% find shopping for clothes frustrating and time consuming.
That is the truth about the 2500 people in the sample.
What is the truth about the almost 210 million American adults who make up the population?
Since the sample was chosen at random, it is reasonable to think that these 2500 people represent the entire population pretty well.
Statistical Inference
Therefore, the market researchers turn the fact that 66% of the sample find shopping frustrating into an estimate that about 66% of all adults feel this way.
Using a fact about a sample to estimate the truth about the whole population is called statistical inference.
To think about inference, we must keep straight whether a number describes a sample or a population.
Parameters and Statistics
A parameter is a number that describes the population. A parameter is a fixed number, but in practice we do not know its value.
A statistic is a number that describes a sample. The value of a statistic is known when we have taken a sample, but it can change from sample to sample.
We often use a statistic to estimate an unknown parameter.
Example
A public opinion poll in Ohio wants to determine whether registered voters in the state approve of a measure to ban smoking in all public areas. They select a simple random sample of 50 registered voters from each county in the state and ask whether they approve or disapprove of the measure. The proportion of registered voters in the state who approve of banning smoking in public areas is an example of a (parameter or statistic).
Example
A survey conducted by the marketing department of Black Flag asked whether the purchasers of a new type of roach disk found it effective in killing roaches. Seventy-nine percent of the respondents agreed that the roach disk was effective. The number 79% is a (parameter or statistic).
Example
In the marketing research example, the survey asked a nationwide random sample of 2500 adults if they agreed or disagreed that “I like buying new clothes, but shopping is often frustrating and time consuming.”
Of the respondents, 1650 said they agreed.
The proportion of the sample who agreed that clothes shopping is often frustrating is:
P̂ = 1650/2500 = 0.66 = 66%
Example
The number P̂ = 0.66 is a statistic.
The corresponding parameter is the proportion (call it P) of all adult U.S. residents who would have said “agree” if asked the same question.
We don’t know the value of the parameter P, so we use P̂ as its estimate.
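As a quick illustration, here is a minimal Python sketch of this arithmetic (the variable names are only for illustration): the sample proportion is a statistic computed from the data, used to estimate the unknown parameter P.

```python
# Minimal sketch of the clothes-shopping example: the sample proportion
# p_hat is a statistic computed from the data; the population proportion
# P it estimates is the unknown parameter.
agreed = 1650   # respondents who agreed (from the survey)
n = 2500        # sample size

p_hat = agreed / n
print(f"p_hat = {p_hat:.2f} ({p_hat:.0%})")   # 0.66, i.e. 66%
```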
Introduction to Inference
The purpose of inference is to draw conclusions from data. Conclusions take into account the natural variability in the data; therefore, formal inference relies on probability to describe chance variation.
We will go over the two most prominent types of formal statistical inference:
Confidence intervals, for estimating the value of a population parameter.
Tests of significance, which assess the evidence for a claim.
Both types of inference are based on the sampling distribution of statistics.
Introduction to Inference
Since both methods of formal inference are based on sampling distributions, they require a probability model for the data.
The model is most secure and inference is most reliable when the data are produced by a properly randomized design.
When we use statistical inference we assume that the data come from a randomly selected sample or a randomized experiment.
Estimating with Confidence
Community banks are banks with less than a billion dollars of assets. There are approximately 7500 such banks in the United States. In many studies of the industry these banks are considered separately from banks that have more than a billion dollars of assets. The latter banks are called “large institutions.” The Community Bankers Council of the American Bankers Association (ABA) conducts an annual survey of community banks. For the 110 banks that make up the sample in a recent survey, the mean assets are x̄ = 220 (in millions of dollars). What can we say about μ, the mean assets of all community banks?
Estimating with Confidence
The sample mean x̄ is the natural estimator of the unknown population mean μ.
We know that:
x̄ is an unbiased estimator of μ.
The law of large numbers says that the sample mean must approach the population mean as the size of the sample grows.
Therefore, the value x̄ = 220 appears to be a reasonable estimate of the mean assets for all community banks.
What if we want to do more than just provide a point estimate?
Estimating with Confidence
If we have a way to estimate this parameter from sample data (using an estimator, for example the sample mean), and we know the sampling distribution of the estimator, we can use this knowledge to construct a probability statement involving both the estimator and the true value of the parameter we are trying to estimate.
This statement is manipulated mathematically to yield confidence limits.
Confidence Interval
A level C confidence interval for a parameter has the following form:
An interval calculated from the data, usually of the form
Estimate ± [factor] × [standard deviation of the estimate]
The value of the factor will depend upon the level of confidence desired and the sampling distribution of the estimator.
Confidence Interval
Suppose we are investigating a continuous random variable X, which is normally distributed with mean μ and variance σ².
We can estimate the population mean μ using the sample mean x̄, calculated from a random sample of n observations; σ² can also be estimated using the sample variance s².
Confidence Interval
We can also estimate the standard deviation of the sample mean.
The standard deviation of the sample mean is also called the standard error (SE):
SE(x̄) = s / √n
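As a small illustration, the following Python sketch computes the standard error of a sample mean; the data values are made up purely for this example.

```python
import math
import statistics

# Hypothetical sample (values invented for illustration only)
sample = [210, 198, 305, 187, 240, 266, 154, 222, 199, 251]

n = len(sample)
s = statistics.stdev(sample)   # sample standard deviation s
se = s / math.sqrt(n)          # SE(x_bar) = s / sqrt(n)
print(f"n = {n}, s = {s:.1f}, SE(x_bar) = {se:.1f}")
```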
Confidence Interval
The value of the factor will depend upon the level of confidence desired and the distribution of the estimator.
The sampling distribution of x̄ is exactly N(μ, σ/√n) when the population has the N(μ, σ) distribution.
The central limit theorem says that this same sampling distribution is approximately correct for large samples whenever the population mean and standard deviation are μ and σ.
Confidence Interval for a Population Mean
To construct a level C confidence interval, first catch the central area C under a Normal curve.
Since all Normal distributions are the same in the standard scale, we obtain what we need from the standard Normal curve.
Confidence Interval for a Population Mean
The figure in the previous slide shows the relationship between the central area C and the points ±z* that mark off this area.
Values of z* for many choices of C can be found from the standard Normal table (Table A).
Here are some examples:

z*        C
1.645     90%
1.96      95%
2.575     99%
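If software is preferred over Table A, the sketch below (assuming SciPy is available) recovers these critical values from the standard Normal distribution: z* leaves central area C, so each tail holds (1 − C)/2.

```python
from scipy.stats import norm

# z* is the point with central area C, i.e. upper-tail area (1 - C)/2
for C in (0.90, 0.95, 0.99):
    z_star = norm.ppf(1 - (1 - C) / 2)
    print(f"C = {C:.0%}: z* = {z_star:.3f}")
# Prints roughly 1.645, 1.960, and 2.576 (the table rounds the last to 2.575)
```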
Confidence Interval for a Population Mean
Choose an SRS of size n from a population having unknown mean μ and known standard deviation σ. A level C confidence interval for μ is
x̄ ± z* σ/√n
Here z* is the critical value with area C between −z* and z* under the standard Normal curve. The quantity
z* σ/√n
is the margin of error. The interval is exact when the population distribution is normal and is approximately correct when n is large in other cases.
Example: Banks’ loan-to-deposit ratio
The ABA survey of community banks also asked about the loan-to-deposit ratio (LTDR), a bank’s total loans as a percent of its total deposits. The mean LTDR for the 110 banks in the sample is x̄ = 76.7 and the standard deviation is s = 12.3. This sample is sufficiently large for us to use s as the population standard deviation σ here. Find a 95% confidence interval for the mean LTDR for community banks.
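One way to work this example, as a Python sketch using the numbers from the slide (x̄ = 76.7, s = 12.3 used as σ, n = 110, z* = 1.96):

```python
import math

x_bar, sigma, n = 76.7, 12.3, 110
z_star = 1.96                            # critical value for C = 95%

margin = z_star * sigma / math.sqrt(n)   # margin of error, about 2.3
low, high = x_bar - margin, x_bar + margin
print(f"95% CI for mu: {x_bar} +/- {margin:.1f} = ({low:.1f}, {high:.1f})")
# About (74.4, 79.0)
```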
How Confidence Intervals Behave
The margin of error z*σ/√n for estimating the mean of a Normal population illustrates several important properties that are shared by all confidence intervals in common use.
A higher confidence level increases z* and therefore increases the margin of error for intervals based on the same data.
If the margin of error is too large, there are two ways to reduce it:
Use a lower confidence level (smaller C, hence smaller z*).
Increase the sample size (larger n).
Example: Banks’ loan-to-deposit ratio
Suppose there were only 25 banks in the survey of community banks, and that x̄ and s are unchanged. The margin of error increases from 2.3 to
z*σ/√n = 1.96 × 12.3/√25 = 4.8
A 95% confidence interval for μ is then 76.7 ± 4.8 = (71.9, 81.5).
Example: Banks’ loan-to-deposit ratio
Suppose that we demand a 99% confidence interval for the mean LTDR rather than 95% when n is 110. The margin of error increases from 2.3 to
z*σ/√n = 2.575 × 12.3/√110 = 3.0
What is the 99% confidence interval?
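The sketch below (assuming SciPy) recomputes the margin of error z*σ/√n for the three scenarios above, showing how a smaller sample or a higher confidence level widens the interval.

```python
import math
from scipy.stats import norm

sigma = 12.3   # sample standard deviation used in place of the population value

for C, n in [(0.95, 110), (0.95, 25), (0.99, 110)]:
    z_star = norm.ppf(1 - (1 - C) / 2)
    margin = z_star * sigma / math.sqrt(n)
    print(f"C = {C:.0%}, n = {n:3d}: margin of error = {margin:.1f}")
# 95%, n = 110 -> 2.3;  95%, n = 25 -> 4.8;  99%, n = 110 -> 3.0
```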
Tests of Significance
Confidence intervals are appropriate when our goal is to estimate a population parameter.
The second type of inference is directed at assessing the evidence provided by the data in favor of some claim about the population.
A significance test is a formal procedure for comparing observed data with a hypothesis whose truth we want to assess.
The hypothesis is a statement about the parameters in a population or model.
The results of a test are expressed in terms of a probability that measures how well the data and the hypothesis agree.
Example: Banks’ net income
The community bank survey described in the previous lecture also asked about net income and reported the percent change in net income between the first half of last year and the first half of this year. The mean change for the 110 banks in the sample is x̄ = 8.1%. Because the sample size is large, we are willing to use the sample standard deviation s = 26.4% as if it were the population standard deviation σ. The large sample size also makes it reasonable to assume that x̄ is approximately Normal.
Example: Banks’ net income
Is the 8.1% mean increase in a sample good evidence that the net income for all banks has changed?
The sample result might happen just by chance even if the true mean change for all banks is μ = 0%.
To answer this question we ask another:
Suppose that the truth about the population is that μ = 0% (this is our hypothesis).
What is the probability of observing a sample mean at least as far from zero as 8.1%?
Example: Banks’ net income
The answer is:
P(x̄ ≥ 8.1) = P(Z ≥ (8.1 − 0)/(26.4/√110)) = P(Z ≥ 3.22) = 1 − 0.9994 = 0.0006
Because this probability is so small, we see that the sample mean x̄ = 8.1 is incompatible with a population mean of μ = 0.
We conclude that the income of community banks has changed since last year.
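A minimal Python sketch of this calculation (assuming SciPy for the Normal CDF):

```python
import math
from scipy.stats import norm

# Under H0 (mu = 0), how likely is a sample mean at least as large as 8.1%?
x_bar, sigma, n = 8.1, 26.4, 110

z = (x_bar - 0) / (sigma / math.sqrt(n))   # about 3.22
p_upper = 1 - norm.cdf(z)                  # about 0.0006
print(f"z = {z:.2f}, P(x_bar >= 8.1 | mu = 0) = {p_upper:.4f}")
```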
Tests of Significance: Formal details
The first step in a test of significance is to state a claim that we will try to find evidence against.
Null hypothesis H0
The statement being tested in a test of significance is called the null hypothesis.
The test of significance is designed to assess the strength of the evidence against the null hypothesis.
Usually the null hypothesis is a statement of “no effect” or “no difference.” We abbreviate “null hypothesis” as H0.
Tests of Significance: Formal details
A null hypothesis is a statement about a population, expressed in terms of some parameter or parameters.
The null hypothesis in our bank survey example is
H0: μ = 0
It is convenient also to give a name to the statement we hope or suspect is true instead of H0.
This is called the alternative hypothesis and is abbreviated as Ha.
In our bank survey example the alternative hypothesis states that the percent change in net income is not zero. We write this as
Ha: μ ≠ 0
Tests of Significance: Formal details
Since Ha expresses the effect that we hope to find evidence for, we often begin with Ha and then set up H0 as the statement that the hoped-for effect is not present.
Stating Ha is not always straightforward.
It is not always clear whether Ha should be one-sided or two-sided.
The alternative Ha: μ ≠ 0 in the bank net income example is two-sided.
In any given year, income may increase or decrease, so we include both possibilities in the alternative hypothesis.
Example: Have we reduced processing time?
Your company hopes to reduce the mean time required to process customer orders. At present, this mean is 3.8 days. You study the process and eliminate some unnecessary steps. Did you succeed in decreasing the average process time? You hope to show that the mean is now less than 3.8 days, so the alternative hypothesis is one-sided, Ha: μ < 3.8. The null hypothesis is, as usual, the “no change” value, H0: μ = 3.8.
Tests of Significance: Formal details
Test statistics
We will learn the form of significance tests in a number of common situations. Here are some principles that apply to most tests and that help in understanding the form of tests:
The test is based on a statistic that estimates the parameter appearing in the hypotheses.
Values of the estimate far from the parameter value specified by H0 give evidence against H0.
Tests of Significance: Formal details
A test statistic measures compatibility between the null hypothesis and the data.
Many test statistics can be thought of as a distance between a sample estimate of a parameter and the value of the parameter specified by the null hypothesis.
Example: Banks’ income
The hypotheses:
H0: μ = 0
Ha: μ ≠ 0
The estimate of μ is the sample mean x̄.
Because Ha is two-sided, large positive and negative values of x̄ (large increases and decreases of net income in the sample) count as evidence against the null hypothesis.
Example: Banks’ income
The test statistic
The null hypothesis is H0: μ = 0, and the sample gave x̄ = 8.1. The test statistic for this problem is the standardized version of x̄:
z = (x̄ − μ0) / (σ/√n)
This statistic is the distance between the sample mean and the hypothesized population mean in the standard scale of z-scores.
z = (8.1 − 0) / (26.4/√110) = 3.22
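As a sketch, this standardized test statistic can be wrapped in a small helper function (the name z_statistic is only illustrative, not from any library):

```python
import math

def z_statistic(x_bar: float, mu_0: float, sigma: float, n: int) -> float:
    """One-sample z statistic: distance from mu_0 in standard-error units."""
    return (x_bar - mu_0) / (sigma / math.sqrt(n))

print(round(z_statistic(8.1, 0.0, 26.4, 110), 2))   # 3.22
```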
Tests of Significance: Formal details
The test of significance assesses the evidence against the null hypothesis and provides a numerical summary of this evidence in terms of probability.
P-value
The probability, computed assuming that H0 is true, that the test statistic would take a value as extreme as or more extreme than that actually observed is called the P-value of the test. The smaller the P-value, the stronger the evidence against H0 provided by the data.
To calculate the P-value, we must use the sampling distribution of the test statistic.
Example: Banks’ income
The P-value
In our banking example we found that the test statistic for testing H0: μ = 0 versus Ha: μ ≠ 0 is
z = (8.1 − 0) / (26.4/√110) = 3.22
If the null hypothesis is true, we expect z to take a value not far from 0.
Because the alternative is two-sided, values of z far from 0 in either direction count as evidence against H0.
So the P-value is:
P(Z ≥ 3.22) + P(Z ≤ −3.22) = 2(1 − 0.9994) = 2(0.0006) = 0.0012
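The same two-sided P-value as a Python sketch (assuming SciPy); the slide’s 0.0012 comes from the rounded table value 0.9994, while the unrounded answer is closer to 0.0013.

```python
from scipy.stats import norm

z = 3.22                                  # observed test statistic
p_two_sided = 2 * (1 - norm.cdf(abs(z)))  # both tails count against H0
print(f"two-sided P-value = {p_two_sided:.4f}")   # about 0.0013
```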
Example: Banks’ income
The P-value for banks’ income:
The two-sided P-value is the probability (when H0 is true) that x̄ takes a value at least as far from 0 as the actually observed value.
Tests of Significance: Formal details
We know that smaller P-values indicate stronger evidence against the null hypothesis.
But how strong is strong evidence?
One approach is to announce in advance how much evidence against H0 we will require to reject H0.
We compare the P-value with a level that says “this evidence is strong enough.”
The decisive level is called the significance level.
It is denoted by the Greek letter α.
Tests of Significance: Formal details
If we choose α = 0.05, we are requiring that the data give evidence against H0 so strong that it would happen no more than 5% of the time (1 in 20) when H0 is true.
Statistical significance
If the P-value is as small as or smaller than α, we say that the data are statistically significant at level α.
Tests of Significance: Formal details
You need not actually find the P-value to assess significance at a fixed level α.
You can compare the observed test statistic z with a critical value that marks off area α in one or both tails of the standard Normal curve.
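A sketch of this fixed-level approach in Python (assuming SciPy), comparing |z| to the two-sided critical value for a chosen α:

```python
from scipy.stats import norm

alpha = 0.05
z = 3.22                           # observed test statistic

z_crit = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided test at alpha = 0.05
significant = abs(z) >= z_crit
print(f"|z| = {abs(z):.2f}, critical value = {z_crit:.2f}, "
      f"reject H0 at alpha = {alpha}: {significant}")
```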
Two Types of Error
In tests of hypothesis, there are simply two hypotheses, and we must accept one and reject the other.
We hope that our decision will be correct, but sometimes it will be wrong.
There are two types of incorrect decisions:
If we reject H0 when in fact H0 is true, this is called a Type I error.
If we accept H0 when in fact Ha is true, this is called a Type II error.
Two Types of Error
Significance and Type I error
The significance level α of any fixed-level test is the probability of a Type I error.
That is, α is the probability that the test will reject the null hypothesis H0 when in fact H0 is true.