Heteroscedasticity and Autocorrelation


Heteroscedasticity
What does it mean? The variance of the error term is not constant across observations.
What are its consequences? Heteroscedasticity does not destroy the unbiasedness and
consistency properties of the OLS estimators, but:
- the least squares estimators are no longer efficient: OLS is not BLUE;
- the formulae used to estimate the coefficient standard errors are no longer correct,
so the variances of the OLS estimators are not the ones given by the usual OLS formulas;
- t-test and F-test results may be misleading (if the error variance is positively related to an
independent variable, the estimated standard errors are biased downwards and hence the t-values
are inflated; the simulation sketch after this list illustrates the effect);
- confidence intervals based on these standard errors will be wrong.
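To see the downward bias concretely, here is a minimal Monte Carlo sketch in Python with
numpy and statsmodels (neither tool is named in the original slides, and the data-generating
process, with error standard deviation growing with the regressor, is an illustrative
assumption). The average "naive" OLS standard error falls short of the actual sampling
spread of the slope estimates.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, reps = 200, 2000
    slopes, naive_se = [], []
    for _ in range(reps):
        x = rng.uniform(1, 10, n)
        u = rng.normal(0, x)                  # error sd proportional to x: heteroscedastic
        y = 1.0 + 2.0 * x + u
        res = sm.OLS(y, sm.add_constant(x)).fit()
        slopes.append(res.params[1])
        naive_se.append(res.bse[1])           # standard error from the usual OLS formula

    print("empirical sd of slope estimates:", np.std(slopes))
    print("mean naive OLS standard error:  ", np.mean(naive_se))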
Heteroscedasticity
How can you detect the problem? Plot the residuals against each of the regressors,
or use one of the more formal tests (White's test, the Breusch-Pagan-Godfrey test,
the Goldfeld-Quandt test, etc.); a code sketch of two of these tests follows below.
- How do you select a test? We cannot tell for sure which will work in a given situation;
pay attention to several factors, e.g. the level of significance, the power of the test
(the probability of rejecting a false null hypothesis) and its sensitivity to outliers.
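As an illustration of the formal tests, here is a minimal sketch using statsmodels'
het_breuschpagan and het_white functions on simulated data (the data-generating process is
an assumption for demonstration; in practice you would pass your own fitted model's
residuals and regressor matrix).

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan, het_white

    rng = np.random.default_rng(1)
    x = rng.uniform(1, 10, 200)
    y = 1.0 + 2.0 * x + rng.normal(0, x)      # simulated heteroscedastic data (assumption)
    X = sm.add_constant(x)
    res = sm.OLS(y, X).fit()

    # Both tests regress functions of the squared residuals on the regressors;
    # a small p-value rejects the null hypothesis of homoscedasticity.
    bp_lm, bp_pval, bp_f, bp_fpval = het_breuschpagan(res.resid, X)
    w_lm, w_pval, w_f, w_fpval = het_white(res.resid, X)
    print("Breusch-Pagan LM p-value:", bp_pval)
    print("White test LM p-value:   ", w_pval)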
How can I remedy the problem? Respecify the model: look for other missing variables,
perhaps take logs or choose some other appropriate functional form, or make sure
relevant variables are expressed "per capita".
- If the sample is large, use White's heteroscedasticity-corrected standard errors for
the OLS estimators.
- Make educated guesses about the likely pattern of heteroscedasticity, transform the
original data in such a way that the transformed data show no heteroscedasticity, and
then apply OLS to the transformed data (weighted least squares). Both remedies are
sketched in code below.
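The two remedies translate directly into statsmodels calls. A minimal sketch, assuming
(for the weighted version) that the error variance is proportional to x squared; the
cov_type="HC1" option requests one of White's heteroscedasticity-consistent covariance
estimators:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    x = rng.uniform(1, 10, 500)
    y = 1.0 + 2.0 * x + rng.normal(0, x)
    X = sm.add_constant(x)

    # Remedy 1: keep the OLS point estimates, replace the standard errors
    # with White's heteroscedasticity-consistent ("robust") version.
    robust = sm.OLS(y, X).fit(cov_type="HC1")
    print("robust standard errors:", robust.bse)

    # Remedy 2: guess the pattern Var(u_i) proportional to x_i^2 and transform the data;
    # WLS with weights 1/x_i^2 is OLS applied to the transformed (homoscedastic) data.
    wls = sm.WLS(y, X, weights=1.0 / x**2).fit()
    print("WLS standard errors:   ", wls.bse)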
Autocorrelation
What is meant by autocorrelation? The error terms are not independent from
observation to observation: u_t depends on one or more past values of u.
What are its consequences? The least squares estimators remain unbiased,
consistent and asymptotically normally distributed, but they are
no longer "efficient" (i.e. they do not have the lowest variance).
Consequently the standard errors and t-values will also be affected,
and the usual t, F and χ² tests can no longer be validly applied.
If there is positive autocorrelation the standard errors will be underestimated
and the t-values biased upwards (see the simulation sketch after this list).
The variance of the error term will also be underestimated under positive
autocorrelation, so that R² will be exaggerated.
Forecasts based on the OLS regression model will be inefficient (they have
larger variances than those from other techniques).
More seriously, autocorrelation may be a symptom of model
misspecification.
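A minimal Monte Carlo sketch of the underestimation, with an AR(1) error process and a
trending regressor (both are illustrative assumptions): the average standard error from
the usual OLS formula falls well below the actual sampling spread of the slope.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n, reps, rho = 100, 2000, 0.8
    x = np.arange(n) / n                      # trending regressor (assumption)
    X = sm.add_constant(x)
    slopes, naive_se = [], []
    for _ in range(reps):
        eps = rng.normal(0, 1, n)
        u = np.zeros(n)
        for t in range(1, n):                 # AR(1) errors: u_t = rho*u_{t-1} + eps_t
            u[t] = rho * u[t - 1] + eps[t]
        res = sm.OLS(1.0 + 2.0 * x + u, X).fit()
        slopes.append(res.params[1])
        naive_se.append(res.bse[1])

    print("empirical sd of slope estimates:", np.std(slopes))
    print("mean naive OLS standard error:  ", np.mean(naive_se))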
Autocorrelation
How can you detect the problem? Plot the residuals against
time or against their own lagged values, calculate the Durbin-Watson
statistic, or use some other test of autocorrelation such as the Breusch-Godfrey test
(see the sketch below).
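A minimal sketch of both diagnostics in statsmodels, run on simulated AR(1) data (the
data-generating process is an assumption; in practice you pass your own fitted results).
A Durbin-Watson value near 2 indicates no first-order autocorrelation; values well below
2 indicate positive autocorrelation.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson
    from statsmodels.stats.diagnostic import acorr_breusch_godfrey

    rng = np.random.default_rng(4)
    n = 200
    x = np.arange(n) / n
    eps = rng.normal(0, 1, n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = 0.7 * u[t - 1] + eps[t]        # positively autocorrelated errors
    res = sm.OLS(1.0 + 2.0 * x + u, sm.add_constant(x)).fit()

    print("Durbin-Watson statistic:", durbin_watson(res.resid))
    lm, lm_pval, fval, f_pval = acorr_breusch_godfrey(res, nlags=2)
    print("Breusch-Godfrey LM p-value:", lm_pval)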
How can you remedy the problem? The remedy depends on the nature of the
interdependence among the disturbances u_t.
Consider possible re-specification of the model: a different functional
form, missing variables, lags etc.
The common practice is to assume that the disturbances are generated by some
mechanism, e.g. the first-order autoregressive AR(1) scheme, which assumes that the
disturbance in the current time period is linearly related to the disturbance in the
previous time period: u_t = ρ·u_{t-1} + ε_t, with |ρ| < 1 and ε_t white noise.
If the AR(1) scheme is valid and the coefficient of autocorrelation ρ is known, transform
the data through the GLS procedure (quasi-differencing) and apply OLS to the transformed data.
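When ρ must be estimated rather than known, the usual feasible version iterates between
estimating ρ from the residuals and re-running the quasi-differenced regression
(Cochrane-Orcutt). A minimal sketch using statsmodels' GLSAR on the same kind of simulated
AR(1) data as above (the data-generating step is repeated so the snippet runs on its own):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 200
    x = np.arange(n) / n
    eps = rng.normal(0, 1, n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = 0.7 * u[t - 1] + eps[t]
    y = 1.0 + 2.0 * x + u
    X = sm.add_constant(x)

    # GLSAR with rho=1 means "one autoregressive lag"; iterative_fit alternates between
    # estimating rho from the residuals and refitting the transformed (GLS) model.
    model = sm.GLSAR(y, X, rho=1)
    res = model.iterative_fit(maxiter=8)
    print("estimated rho:", model.rho)
    print("GLS coefficients:", res.params)
    print("GLS standard errors:", res.bse)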