Introduction to Time Series Analysis
Gloria González-Rivera
University of California, Riverside
and
Jesús Gonzalo, U. Carlos III de Madrid
Spring 2002
Copyright (© MTS-2002GG): You are free to use and modify these slides for educational purposes, but
if you improve this material, please send us your new version.
Brief Review of Probability
• Sample Space: Ω, the set of possible outcomes of some random experiment
• Outcome: ω, a single element of the Sample Space
• Event: E, a subset of the Sample Space
• Field: F = {E : E ⊆ Ω}, the collection of Events we will be considering
• Random Variables: Z : Ω → S, a function from the Sample Space Ω to a State Space S
• State Space: S, a space containing the possible values of a random variable; common choices are the natural numbers N, the reals R, k-vectors R^k, the complex numbers C, the positive reals R+, etc.
• Probability: P : F → [0, 1], obeying the three rules that you must know very well
• Distribution: a function B → [0, 1], where B = {A : A ⊆ R} denotes the Borel sets (intervals, etc.)
Brief Review (cont)
• Random Vectors: Z = (Z1, Z2, ..., Zn) is an n-dimensional random vector if its components Z1, ..., Zn
are one-dimensional real-valued random variables.
If we interpret t = 1, ..., n as equidistant instants of time, Zt can stand for the outcome of an experiment
at time t. Such a time series may, for example, consist of Toyota share prices Zt on n successive days.
The new aspect now, compared to a one-dimensional random variable, is that we can talk about
the dependence structure of the random vector.
• Distribution function $F_Z$ of Z: it is the collection of the probabilities
$$F_Z(z) = P(Z_1 \le z_1, \ldots, Z_n \le z_n) = P(\{\omega : Z_1(\omega) \le z_1, \ldots, Z_n(\omega) \le z_n\})$$
Stochastic Processes
We suppose that the exchange rate €/$ at every fixed instant t
between 5 p.m. and 6 p.m. this afternoon is random. Therefore we
can interpret it as a realization $Z_t(\omega)$ of the random variable $Z_t$, and
so we observe $Z_t(\omega)$ between 5 p.m. and 6 p.m. In order to make a guess at 6 p.m.
about the exchange rate $Z_{19}(\omega)$ at 7 p.m. (indexing time by the 24-hour clock), it is reasonable to look at
the whole evolution of $Z_t(\omega)$ between 5 and 6 p.m. A
mathematical model describing this evolution is called a stochastic
process.
Stochastic Processes (cont)
A stochastic process is a collection of time-indexed random variables
$$(Z_t,\; t \in T) = (Z_t(\omega),\; t \in T,\; \omega \in \Omega)$$
defined on some space Ω.
Suppose that
(1) For a fixed t: $Z_t : \Omega \to \mathbb{R}$. This is just a random variable.
(2) For a fixed ω: $Z_{\cdot}(\omega) : T \to \mathbb{R}$. This is a realization or sample function.
Changing the time index, we can generate several random variables:
$$Z_{t_1}(\omega),\; Z_{t_2}(\omega),\; \ldots,\; Z_{t_n}(\omega)$$
from which a realization is
$$z_{t_1},\; z_{t_2},\; \ldots,\; z_{t_n}$$
This collection of random variables is called a STOCHASTIC PROCESS.
A realization of the stochastic process is called a TIME SERIES.
Examples of stochastic processes
E1: Let the index set be T = {1, 2, 3} and let the space of outcomes Ω be the possible
outcomes associated with tossing one die:
Ω = {1, 2, 3, 4, 5, 6}
Define
Z(t, ω) = t + [value on die]² · t = t + ω² t
Therefore for a particular ω, say ω = 3, the realization or path would be (10, 20, 30).
Q1: Draw all the different realizations (six) of this stochastic process.
Q2: Think of an economically relevant variable as a stochastic process and write down an
example similar to E1 with it. Specify very clearly the sample space and the “rule” that
generates the stochastic process.
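As an illustration of E1 (and a starting point for Q1), here is a minimal Python sketch that enumerates the six realizations of Z(t, ω) = t + ω² t; the loop variable w plays the role of ω:

```python
# Enumerate the six realizations (paths) of E1: Z(t, w) = t + w**2 * t,
# one path per die outcome w in the sample space {1, ..., 6}.
T = [1, 2, 3]
for w in range(1, 7):
    path = [t + w**2 * t for t in T]   # the realization for this outcome w
    print(f"omega = {w}: {path}")      # omega = 3 gives [10, 20, 30], as in E1
```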
E2: A Brownian motion B = (B_t, t ∈ [0, ∞)):
• It starts at zero: B_0 = 0
• It has stationary, independent increments
• For every t > 0, B_t has a normal N(0, t) distribution
• It has continuous sample paths: “no jumps”.
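A minimal simulation sketch (assuming NumPy) of one Brownian sample path built from its independent N(0, dt) increments; the step size dt and horizon T are illustrative choices, not part of the slides:

```python
# A discretized sample path of a Brownian motion on [0, 1]: start at 0 and add
# independent N(0, dt) increments, so B_t is approximately N(0, t).
import numpy as np

n, T = 1000, 1.0
dt = T / n
increments = np.random.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(increments)))   # B[0] = 0, B[k] ~ B_{k*dt}
print(B[-1], "is a draw from roughly N(0, 1), since Var(B_T) = T = 1")
```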
Distribution of a Stochastic Process
In analogy to random variables and random vectors we want to introduce non-random
characteristics of a stochastic process, such as its distribution, expectation, etc., and
describe its dependence structure. This is a task much more complicated than the
description of a random vector. Indeed, a non-trivial stochastic process Z = (Z_t, t ∈ T)
with infinite index set T is an infinite-dimensional object; it can be understood as the
infinite collection of the random variables Z_t, t ∈ T. Since the values of Z are functions
on T, the distribution of Z should be defined on subsets of a certain “function space”, i.e.
$$P(Z \in A), \quad A \in F,$$
where F is a collection of suitable subsets of this space of functions. This approach is
possible, but requires advanced mathematics, and so we will try something simpler.
The finite-dimensional distributions (fidis) of the stochastic process Z are the
distributions of the finite-dimensional vectors
$$(Z_{t_1}, \ldots, Z_{t_n}),$$
for all possible choices of times $t_1, \ldots, t_n \in T$ and every $n \ge 1$.
Stationarity
Consider the joint probability distribution of the collection of random variables
$$F(z_{t_1}, z_{t_2}, \ldots, z_{t_n}) = P(Z_{t_1} \le z_{t_1}, Z_{t_2} \le z_{t_2}, \ldots, Z_{t_n} \le z_{t_n})$$
1st order stationary process if $F(z_{t_1}) = F(z_{t_1+k})$ for any $t_1, k$
2nd order stationary process if $F(z_{t_1}, z_{t_2}) = F(z_{t_1+k}, z_{t_2+k})$ for any $t_1, t_2, k$
n-th order stationary process if $F(z_{t_1}, \ldots, z_{t_n}) = F(z_{t_1+k}, \ldots, z_{t_n+k})$ for any $t_1, \ldots, t_n, k$
Definition.
A process is strongly (strictly) stationary if it is an n-th order stationary
process for any n.
Moments
$$E(Z_t) = \mu_t = \int_{-\infty}^{\infty} z_t\, f(z_t)\, dz_t$$
$$\mathrm{Var}(Z_t) = \sigma_t^2 = E(Z_t - \mu_t)^2 = \int_{-\infty}^{\infty} (z_t - \mu_t)^2 f(z_t)\, dz_t$$
$$\mathrm{Cov}(Z_{t_1}, Z_{t_2}) = E[(Z_{t_1} - \mu_{t_1})(Z_{t_2} - \mu_{t_2})]$$
$$\rho(t_1, t_2) = \frac{\mathrm{Cov}(Z_{t_1}, Z_{t_2})}{\sqrt{\sigma^2_{t_1}}\,\sqrt{\sigma^2_{t_2}}}$$
Moments (cont)
For a strictly stationary process:
$$\mu_t = \mu \quad \text{and} \quad \sigma_t^2 = \sigma^2,$$
because $F(z_t) = F(z_{t+k})$ implies $\mu_t = \mu_{t+k} = \mu$ for all $t, k$, provided that $E(|Z_t|) < \infty$ and $E(Z_t^2) < \infty$.
Similarly,
$$F(z_{t_1}, z_{t_2}) = F(z_{t_1+k}, z_{t_2+k}) \;\Rightarrow\; \mathrm{Cov}(Z_{t_1}, Z_{t_2}) = \mathrm{Cov}(Z_{t_1+k}, Z_{t_2+k}) \;\Rightarrow\; \rho(t_1, t_2) = \rho(t_1+k, t_2+k)$$
Let $t_1 = t + k$ and $t_2 = t$; then
$$\rho(t_1, t_2) = \rho(t+k, t) = \rho(t, t+k) = \rho_k$$
The correlation between any two random variables depends only on the time difference.
Weak Stationarity
A process is said to be n-th order weakly stationary if all its joint
moments up to order n exist and are time invariant.
Covariance stationary process (2nd order weakly stationary):
• constant mean
• constant variance
• covariance function depends on time difference between R.V.
Autocovariance and Autocorrelation Functions
For a covariance stationary process:
$$E(Z_t) = \mu, \qquad \mathrm{Var}(Z_t) = \sigma^2, \qquad \mathrm{Cov}(Z_t, Z_s) = \gamma_{|s-t|}$$
$$\rho_k = \frac{\mathrm{Cov}(Z_t, Z_{t+k})}{\sqrt{\mathrm{Var}(Z_t)}\,\sqrt{\mathrm{Var}(Z_{t+k})}} = \frac{\gamma_k}{\gamma_0}$$
$\gamma_k$: autocovariance function, $\gamma : k \to \mathbb{R}$
$\rho_k$: autocorrelation function (ACF), $\rho : k \to [-1, 1]$
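A minimal sketch (assuming NumPy) of the sample versions of $\gamma_k$ and $\rho_k$ defined above; the 1/n normalization and the white-noise test series are conventional illustrative choices, not prescribed by the slides:

```python
# Sample autocovariance gamma_k and autocorrelation rho_k = gamma_k / gamma_0
# for an observed series z, following the definitions above.
import numpy as np

def sample_acf(z, max_lag):
    z = np.asarray(z, dtype=float)
    n, zbar = len(z), z.mean()
    gamma = np.array([np.sum((z[:n - k] - zbar) * (z[k:] - zbar)) / n
                      for k in range(max_lag + 1)])
    return gamma, gamma / gamma[0]          # autocovariances and ACF

gamma, rho = sample_acf(np.random.normal(size=500), max_lag=10)
print(np.round(rho, 2))                      # rho_0 = 1; other lags near 0 for white noise
```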
Properties of the autocorrelation function
1. If $\gamma_0 = \mathrm{Var}(Z_t)$, then $\rho_0 = 1$.
2. Since $\rho_k$ is a correlation coefficient, $|\rho_k| \le 1$.
3. $\rho_k = \rho_{-k}$ and $\gamma_k = \gamma_{-k}$, since
$$\gamma_{-k} = E[(Z_{t-k} - \mu)(Z_{(t-k)+k} - \mu)] = E[(Z_{t-k} - \mu)(Z_t - \mu)] = \gamma_k$$
Partial Autocorrelation Function (conditional correlation)
This function gives the correlation between two random variables
that are k periods apart once the linear dependence on the intermediate
variables $Z_{t+1}, \ldots, Z_{t+k-1}$ is removed.
Let $Z_t$ and $Z_{t+k}$ be two random variables; the PACF is given by
$$\mathrm{Corr}(Z_t, Z_{t+k} \mid Z_{t+1}, \ldots, Z_{t+k-1})$$
Motivation
Think about a regression model (without loss of generality, assume that $E(Z_t) = 0$):
$$Z_{t+k} = \phi_{k1} Z_{t+k-1} + \phi_{k2} Z_{t+k-2} + \cdots + \phi_{kk} Z_t + e_{t+k}$$
where $e_{t+k}$ is uncorrelated with $Z_{t+k-j}$, $j \ge 1$.
(1) Multiply by $Z_{t+k-j}$:
$$Z_{t+k-j} Z_{t+k} = \phi_{k1} Z_{t+k-1} Z_{t+k-j} + \phi_{k2} Z_{t+k-2} Z_{t+k-j} + \cdots + \phi_{kk} Z_t Z_{t+k-j} + e_{t+k} Z_{t+k-j}$$
(2) Take expectations:
$$\gamma_j = \phi_{k1}\gamma_{j-1} + \phi_{k2}\gamma_{j-2} + \cdots + \phi_{kk}\gamma_{j-k}$$
Dividing by the variance of the process:
$$\rho_j = \phi_{k1}\rho_{j-1} + \phi_{k2}\rho_{j-2} + \cdots + \phi_{kk}\rho_{j-k}, \qquad j = 1, 2, \ldots, k$$
These are the Yule-Walker equations:
$$\begin{aligned}
\rho_1 &= \phi_{k1}\rho_0 + \phi_{k2}\rho_1 + \cdots + \phi_{kk}\rho_{k-1}\\
\rho_2 &= \phi_{k1}\rho_1 + \phi_{k2}\rho_0 + \cdots + \phi_{kk}\rho_{k-2}\\
&\;\;\vdots\\
\rho_k &= \phi_{k1}\rho_{k-1} + \phi_{k2}\rho_{k-2} + \cdots + \phi_{kk}\rho_0
\end{aligned}$$
Solving the system for k = 1, 2, 3:
1  11  0  11  1
1  21  0  22 1
 2  21 1  22  0
1
 22 
1  31  0  32 1  33  2
 2  31 1  32  0  33 1
 3  31  2  32 1  33  0
1
1
1
1
2
1
1
1
 33 
1
2
1
1
2
1
1
1 2
1  3
1  2
1 1
1 1
Examples of stochastic processes
E4: Define
$$Z_t = \begin{cases} Y_t & \text{if } t \text{ is even}\\ Y_{t+1} & \text{if } t \text{ is odd}\end{cases}$$
where $Y_t$ is a stationary time series. Is $Z_t$ weakly stationary?
E5: Define the process
$$S_t = X_1 + \cdots + X_t,$$
where the $X_i$ are iid $(0, \sigma^2)$. Show that for $h > 0$
$$\mathrm{Cov}(S_{t+h}, S_t) = t\,\sigma^2,$$
and therefore $S_t$ is not weakly stationary.
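A minimal simulation sketch (assuming NumPy) that checks the covariance in E5 numerically; the specific values of t, h, σ and the Gaussian choice for the X_i are illustrative assumptions:

```python
# Simulation check of E5: for S_t = X_1 + ... + X_t with iid X_i ~ (0, sigma^2),
# Cov(S_{t+h}, S_t) should equal t * sigma^2 (here t = 50, sigma = 1).
import numpy as np

sigma, t, h, reps = 1.0, 50, 20, 20000
X = np.random.normal(0.0, sigma, size=(reps, t + h))
S = X.cumsum(axis=1)                                  # S[:, j-1] is S_j in each replication
cov = np.cov(S[:, t + h - 1], S[:, t - 1])[0, 1]      # sample Cov(S_{t+h}, S_t)
print(round(cov, 1), "vs theoretical", t * sigma**2)  # close to 50
```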
Examples of stochastic processes (cont)
E6: White Noise Process
A sequence of uncorrelated random variables is called a white noise process:
$\{a_t\}$ with $E(a_t) = \mu_a$ (normally $\mu_a = 0$), $\mathrm{Var}(a_t) = \sigma_a^2$, and $\mathrm{Cov}(a_t, a_{t+k}) = 0$ for $k \ne 0$.
Autocovariance and autocorrelation:
$$\gamma_k = \begin{cases}\sigma_a^2 & k = 0\\ 0 & k \ne 0\end{cases} \qquad
\rho_k = \begin{cases}1 & k = 0\\ 0 & k \ne 0\end{cases} \qquad
\phi_{kk} = \begin{cases}1 & k = 0\\ 0 & k \ne 0\end{cases}$$
[Plot: ACF and PACF of white noise against lag k = 1, 2, 3, 4, ...: zero at every non-zero lag.]
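A minimal sketch (assuming NumPy) illustrating E6: for simulated Gaussian white noise the sample autocorrelations at non-zero lags are approximately zero (the Gaussian distribution is just a convenient choice here):

```python
# E6 check: the sample ACF of simulated white noise is close to zero at every lag k >= 1.
import numpy as np

a = np.random.normal(0.0, 1.0, size=2000)               # white noise with sigma_a = 1
abar = a.mean()
rho = [np.sum((a[:-k] - abar) * (a[k:] - abar)) / np.sum((a - abar) ** 2)
       for k in range(1, 6)]                             # sample rho_1, ..., rho_5
print(np.round(rho, 3))                                  # all close to 0
```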
Dependence: Ergodicity
• See Reading 1 from Leo Breiman (1969) “Probability and Stochastic Processes: With
a View Toward Applications”
• We want to allow as much dependence as the Law of Large Numbers (LLN) lets us
• Stationarity is not enough, as the following example shows:
E7: Let {U_t} be a sequence of iid r.v.'s uniformly distributed on [0, 1] and let Z be N(0, 1),
independent of {U_t}.
Define $Y_t = Z + U_t$. Then $Y_t$ is stationary (why?), but
$$\bar Y_n = \frac{1}{n}\sum_{t=1}^{n} Y_t \;\longrightarrow\; Z + \frac{1}{2} \;\ne\; E(Y_t) = \frac{1}{2}$$
The problem is that there is too much dependence in the sequence {Yt}. In fact the
correlation between Y1 and Yt is always positive for any value of t.
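A minimal simulation sketch (assuming NumPy) of E7, showing why the time average fails here; n is an arbitrary illustrative sample size:

```python
# E7 check: with Y_t = Z + U_t (one draw of Z shared by the whole path), the time
# average converges to Z + 1/2 rather than to E(Y_t) = 1/2.
import numpy as np

n = 100_000
Z = np.random.normal()                  # a single N(0, 1) draw for the entire realization
U = np.random.uniform(size=n)           # iid Uniform[0, 1]
Y = Z + U
print(Y.mean(), "vs Z + 1/2 =", Z + 0.5, "vs E(Y_t) =", 0.5)
```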
Ergodicity for the mean
Objective: estimate the mean of the process $\{Z_t\}$, $\mu = E(Z_t)$.
We need to distinguish between:
1. Ensemble average: $\bar z = \frac{1}{m}\sum_{i=1}^{m} Z_t^{(i)}$ (average across m realizations at a fixed t)
2. Time average: $\bar z = \frac{1}{n}\sum_{t=1}^{n} Z_t$
Which estimator is the most appropriate? The ensemble average.
Problem: it is impossible to calculate, because in practice we observe only one realization of the process.
Under which circumstances can we use the time average?
Is the time average an unbiased and consistent estimator of the mean?
Ergodicity for the mean (cont)
Reminder. Sufficient conditions for consistency of an estimator:
$$\lim_{T\to\infty} E(\hat\theta_T) = \theta \quad \text{and} \quad \lim_{T\to\infty} \mathrm{Var}(\hat\theta_T) = 0$$
1. The time average is asymptotically unbiased:
$$E(\bar z) = \frac{1}{n}\sum_{t=1}^{n} E(Z_t) = \frac{1}{n}\sum_{t=1}^{n} \mu = \mu$$
2. The time average is consistent for the mean:
$$\mathrm{Var}(\bar z) = \frac{1}{n^2}\sum_{t=1}^{n}\sum_{s=1}^{n} \mathrm{Cov}(Z_t, Z_s) = \frac{\gamma_0}{n^2}\sum_{t=1}^{n}\sum_{s=1}^{n} \rho_{t-s}$$
$$= \frac{\gamma_0}{n^2}\sum_{t=1}^{n} (\rho_{t-1} + \rho_{t-2} + \cdots + \rho_{t-n})$$
$$= \frac{\gamma_0}{n^2}\big[(\rho_0 + \rho_1 + \cdots + \rho_{n-1}) + (\rho_{-1} + \rho_0 + \cdots + \rho_{n-2}) + \cdots + (\rho_{-(n-1)} + \rho_{-(n-2)} + \cdots + \rho_0)\big]$$
Ergodicity for the mean (cont)
0
0
n 1
k
var(z )  2  ( n  k )  k 
(1 
) k

n k   ( n 1)
n k
n
0
k
lim var(z )  lim
(1 
) k  0

n 
n  n
n
k


0

k
k
A covariance-stationary process is ergodic for the mean if
$$\mathrm{plim}\; \bar z = E(Z_t) = \mu$$
A sufficient condition for ergodicity for the mean is
$$\sum_{k=0}^{\infty} |\gamma_k| < \infty \quad \text{or} \quad \sum_{k=0}^{\infty} |\rho_k| < \infty,$$
which in particular requires $\rho_k \to 0$ as $k \to \infty$.
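A minimal sketch (assuming NumPy) evaluating the Var(z̄) expression above for a hypothetical geometric ACF $\rho_k = 0.8^{|k|}$ (an assumption for illustration, not a process from the slides); this ACF is absolutely summable, so the variance of the time average shrinks to zero:

```python
# Numerical illustration of Var(z_bar) = (gamma_0 / n) * sum_k (1 - |k|/n) * rho_k:
# with an absolutely summable ACF, the variance of the time average shrinks to 0.
import numpy as np

def var_time_average(gamma0, rho_fn, n):
    k = np.arange(-(n - 1), n)
    return gamma0 / n * np.sum((1 - np.abs(k) / n) * rho_fn(np.abs(k)))

rho_fn = lambda k: 0.8 ** k                     # hypothetical ACF with sum_k |rho_k| < infinity
for n in (10, 100, 1000):
    print(n, round(var_time_average(1.0, rho_fn, n), 4))   # decreases roughly like 1/n
```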
Ergodicity under Gaussianity
If $\{Z_t\}$ is a stationary Gaussian process, then
$$\sum_{k=-\infty}^{\infty} |\rho_k| < \infty$$
is sufficient to ensure ergodicity for all moments.
Where are We?
The Prediction Problem as a Motivating Problem:
Predict $Z_{t+1}$ given some information set $I_t$ at time t:
$$\min_{\hat Z_{t+1}} E[Z_{t+1} - \hat Z_{t+1}]^2 \qquad \text{Solution: } \hat Z_{t+1} = E[Z_{t+1} \mid I_t]$$
The conditional expectation can be modeled in a parametric way or
in a non-parametric way. In this course we will choose the former.
Parametric models can be linear or non-linear; again, we will choose
the former. Summarizing, the models we are going to study and use
in this course will be
Parametric and linear models
Some Problems
P1: Let {Zt} be a sequence of uncorrelated real-valued variables with zero means and unit variances,
and define the “moving average”
$$Y_t = \sum_{i=0}^{r} a_i Z_{t-i}$$
for constants $a_0, a_1, \ldots, a_r$. Show that Y is weakly stationary and find its autocovariance function.
P2: Show that a Gaussian process is strongly stationary if and only if it is weakly stationary
P3: Let X be a stationary Gaussian process with zero mean, unit variance, and autocovariance function
c. Find the autocovariance functions of the processes
$$X^2 = \{X(t)^2 : -\infty < t < \infty\} \quad \text{and} \quad X^3 = \{X(t)^3 : -\infty < t < \infty\}$$
Appendix: Transformations
• Goal: To lead to a more manageable process
• Log transformation reduces certain types of heteroskedasticity. If we assume $\mu_t = E(Z_t)$ and $\mathrm{Var}(Z_t) = k\,\mu_t^2$, the delta method shows that the variance of the log is roughly constant:
$$\mathrm{Var}(f(Z_t)) \approx [f'(\mu_t)]^2\,\mathrm{Var}(Z_t) \;\Rightarrow\; \mathrm{Var}(\log Z_t) \approx (1/\mu_t)^2\,\mathrm{Var}(Z_t) = k$$
• Differencing eliminates the trend (not very informative about the nature of the trend)
• Differencing + Log = Relative Change
$$\log(Z_t) - \log(Z_{t-1}) = \log\!\Big(\frac{Z_t}{Z_{t-1}}\Big) = \log\!\Big(1 + \frac{Z_t - Z_{t-1}}{Z_{t-1}}\Big) \approx \frac{Z_t - Z_{t-1}}{Z_{t-1}}$$