MEAN AND HIGHER MOMENTS
Lecture VIII
EXPECTED VALUE
Definition 4.1.1. Let X be a discrete random variable taking the value $x_i$ with probability $P(x_i)$, $i = 1, 2, \ldots$. Then the expected value $E[X]$ is defined to be

$$E[X] = \sum_{i=1}^{\infty} x_i P(x_i)$$

if the series converges absolutely.
We can write $E[X] = \sum^{+} x_i P(x_i) + \sum^{-} x_i P(x_i)$, where in the first summation we sum over $i$ such that $x_i > 0$ and in the second summation we sum over $i$ such that $x_i < 0$.

If $\sum^{+} x_i P(x_i) = \infty$ and $\sum^{-} x_i P(x_i) = -\infty$, then $E[X]$ does not exist.

If $\sum^{+} x_i P(x_i) = \infty$ and $\sum^{-}$ is finite, then we say $E[X] = \infty$.

If $\sum^{-} x_i P(x_i) = -\infty$ and $\sum^{+}$ is finite, then we say $E[X] = -\infty$.
ROLLING DICE

$$E[X] = \sum_{i=1}^{6} x_i P(x_i)$$

Number   Probability   $x_i P(x_i)$
1        0.167         0.167
2        0.167         0.333
3        0.167         0.500
4        0.167         0.667
5        0.167         0.833
6        0.167         1.000
                E[X] = 3.500
EXPECTED VALUE OF TWO DICE

Each of the 36 (Die 1, Die 2) combinations occurs with probability 1/36. The row of the table for Die 2 = 1 is:

Die 1   Die 2   Number   $x_i P(x_i)$
1       1       2        0.056
2       1       3        0.083
3       1       4        0.111
4       1       5        0.139
5       1       6        0.167
6       1       7        0.194

Summing $x_i P(x_i)$ over all 36 combinations gives $E[X] = 7$.
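Both dice tables can be reproduced with a few lines of code. Below is a minimal Python sketch (my own illustration, not part of the lecture) computing both expectations:

```python
from itertools import product

# Expected value of a single fair die: E[X] = sum x_i * P(x_i) = 3.5
faces = range(1, 7)
e_one = sum(x * (1 / 6) for x in faces)

# Expected value of the sum of two fair dice: each of the 36
# (die1, die2) combinations has probability 1/36, so E[X] = 7.
e_two = sum((d1 + d2) * (1 / 36) for d1, d2 in product(faces, faces))

print(f"E[one die]  = {e_one:.3f}")   # 3.500
print(f"E[two dice] = {e_two:.3f}")   # 7.000
```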
Expectation has several applications in risk theory. In general, the expected value is the probability-weighted average of the possible outcomes. For example, if we assume that the crop yield follows a binomial distribution as depicted in Figure 1, the expected return on the crop, given that the price is $3 and the cost per acre is $40, becomes:
EXPECTED PROFIT ON CROP

Profit per acre is $3 × Yield − $40.

Yield   Probability   Yield × Prob.   Profit × Prob.
15      0.0001        0.0016          0.0005
20      0.0016        0.0315          0.0315
25      0.0106        0.2654          0.3716
30      0.0425        1.2740          2.1234
35      0.1115        3.9017          7.2460
40      0.2007        8.0263          16.0526
45      0.2508        11.2870         23.8282
50      0.2150        10.7495         23.6490
55      0.1209        6.6513          15.1165
60      0.0403        2.4186          5.6435
65      0.0060        0.3930          0.9372
        E[Yield] = 45                 E[Profit] = 95
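The expected values in the table follow from the same probability-weighted sum. Here is a short Python sketch (an illustration; the yields, probabilities, price, and cost are copied from the table above):

```python
yields = [15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
probs = [0.0001, 0.0016, 0.0106, 0.0425, 0.1115, 0.2007,
         0.2508, 0.2150, 0.1209, 0.0403, 0.0060]

price, cost = 3.0, 40.0  # $3 per unit of yield, $40 per acre

# Expected yield and expected profit as probability-weighted sums
e_yield = sum(x * p for x, p in zip(yields, probs))
e_profit = sum((price * x - cost) * p for x, p in zip(yields, probs))

print(f"E[yield]  = {e_yield:.2f}")   # ~45.00
print(f"E[profit] = {e_profit:.2f}")  # ~95.00
```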
In the parlance of risk theory, the expected value
of the wheat crop is termed the actuarial or fair
value of the game. It is the value that a risk-neutral individual would be willing to pay for the bet.
Another point about this value: it is sometimes called the population mean, as opposed to the sample mean. Specifically, the sample mean is an observed quantity based on a sample drawn from the underlying data generating process. The sample mean is defined as:
$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$
RANDOM SAMPLE

Observation   Yield   Profit
1             40      80
2             40      80
3             40      80
4             50      110
5             50      110
6             45      95
7             35      65
8             25      35
9             40      80
…             …       …
20            45      95
Mean          38.75   76.25
Table 4 presents a sample of 20 observations drawn from the theoretical distribution above. Note that the sample mean for yield is smaller than the population mean (38.75 for the sample mean versus 45.00 for the population mean). It follows that the sample mean for profit is smaller than the population mean for profit.
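The sampling experiment can be sketched in Python. The snippet below is illustrative only: the seed and the use of random.choices are my assumptions, not part of the lecture, so a particular draw will not reproduce the 38.75 in Table 4, only the qualitative point that sample means deviate from the population mean.

```python
import random

yields = [15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
probs = [0.0001, 0.0016, 0.0106, 0.0425, 0.1115, 0.2007,
         0.2508, 0.2150, 0.1209, 0.0403, 0.0060]

random.seed(42)  # illustrative seed; any finite sample will differ from E[X]
sample = random.choices(yields, weights=probs, k=20)

x_bar = sum(sample) / len(sample)                  # sample mean of yield
profit_bar = sum(3 * x - 40 for x in sample) / 20  # sample mean of profit

print(f"sample mean yield  = {x_bar:.2f}  (population mean 45.00)")
print(f"sample mean profit = {profit_bar:.2f}  (population mean 95.00)")
```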
ST. PETERSBURG PARADOX
The Saint Petersburg paradox involves the valuation of gambles with an infinite expected value.

The simplest form of the paradox involves the value of a series of coin flips: What is the expected value of a bet whose payoff doubles with each consecutive head, so that a payoff of $2^i$ occurs with probability $(1/2)^i$?

$$E[G] = \sum_{i=1}^{\infty} 2^i \left(\frac{1}{2}\right)^i = \sum_{i=1}^{\infty} 1 = \infty$$

In theory, the expected value of this bet is infinite, but no one is willing to pay an infinite price. This unwillingness to pay an infinite price for the gamble led to the development of expected utility theory.
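One way to see the paradox concretely is by simulation. The sketch below (my own illustration) plays the game many times; the average payoff of any finite number of plays stays modest even though the theoretical expectation is infinite:

```python
import random

def st_petersburg() -> int:
    """Play once: the payoff doubles for each consecutive head."""
    payoff = 2
    while random.random() < 0.5:  # heads with probability 1/2
        payoff *= 2
    return payoff

random.seed(0)
n = 100_000
avg = sum(st_petersburg() for _ in range(n)) / n
print(f"average payoff over {n} plays: {avg:.2f}")  # finite; grows roughly like log2(n)
```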
Theorem 4.1.3. Let (X, Y) be a bivariate discrete random variable taking the value $(x_i, y_j)$ with probability $P(x_i, y_j)$, $i, j = 1, 2, \ldots$, and let $\phi(\cdot, \cdot)$ be an arbitrary function. Then

$$E[\phi(X, Y)] = \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} \phi(x_i, y_j) P(x_i, y_j)$$
Theorem 4.1.4. Let (X, Y) be a bivariate continuous random variable with joint density function $f(x, y)$, and let $\phi(\cdot, \cdot)$ be an arbitrary function. Then

$$E[\phi(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \phi(x, y) f(x, y)\, dx\, dy$$
Theorem 4.1.5. If $\alpha$ is a constant, then $E[\alpha] = \alpha$.
Theorem 4.1.6. If X and Y are random variables and $\alpha$ and $\beta$ are constants, then $E[\alpha X + \beta Y] = \alpha E[X] + \beta E[Y]$.
Theorem 4.1.7. If X and Y are independent random variables, then $E[XY] = E[X]\,E[Y]$.
This last series of theorems is important for simplifying decision making under risk. In the crop example we have

$$\pi = P X - C$$

where $\pi$ is profit, $P$ is the price of the output, $X$ is the yield level, and $C$ is the cost per acre.
The distribution of profit, along with its expected value, depends on the distributions of P, X, and C. In the example above, we assumed that P and C are constant at $p$ and $c$. The expected value of profit is then:

$$E[\pi] = E[p X - c] = p E[X] - c$$
As a first step, assume that cost is also a random variable:

$$E[\pi(X, C)] = E[p X - C] = p E[X] - E[C]$$
Next, assume that price and yield are random, but cost is constant:

$$E[\pi(X, P)] = E[P X - c] = \int \int (p x - c) f(x, p)\, dx\, dp$$
By assuming that P and X are independent (a firm-level assumption),

$$E[\pi(X, P)] = E[P X] - c = E[P]\,E[X] - c$$
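The independence step can be checked numerically. In the sketch below, the price and yield distributions are made-up illustrative numbers; the point is only that the probability-weighted sum over the independent joint distribution equals $E[P]E[X] - c$:

```python
from itertools import product

prices, p_probs = [2.5, 3.0, 3.5], [0.25, 0.50, 0.25]  # illustrative
crop, x_probs = [40, 45, 50], [0.3, 0.4, 0.3]          # illustrative
c = 40.0

# Independence: the joint probability is the product of the marginals
e_profit = sum(pp * px * (p * x - c)
               for (p, pp), (x, px) in product(zip(prices, p_probs),
                                               zip(crop, x_probs)))

e_p = sum(p * pp for p, pp in zip(prices, p_probs))
e_x = sum(x * px for x, px in zip(crop, x_probs))

print(f"E[PX - c]    = {e_profit:.4f}")
print(f"E[P]E[X] - c = {e_p * e_x - c:.4f}")  # identical by Theorem 4.1.7
```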
MOMENTS
Another frequently used set of functions of a random variable is the moments of its distribution:

$$\mu_r(X) = E[X^r] = \int_{-\infty}^{\infty} x^r f(x)\, dx$$

where $r$ is a nonnegative integer.
From this definition, it is obvious that the mean is the first moment of the distribution function. The second moment is defined as

$$\mu_2(X) = E[X^2] = \int_{-\infty}^{\infty} x^2 f(x)\, dx$$
The higher moments can similarly be represented as moments around the mean, or central moments:

$$\tilde{\mu}_r(X) = E\left[(X - E[X])^r\right]$$
As examples, the first, second, third, and fourth moments of the uniform distribution on [0, 1] can be computed as:

$$\mu_1(X) = \int_0^1 x \cdot 1\, dx = \left.\frac{1}{2} x^2 \right|_0^1 = \frac{1}{2}$$

$$\mu_2(X) = \int_0^1 x^2\, dx = \left.\frac{1}{3} x^3 \right|_0^1 = \frac{1}{3}$$

$$\mu_3(X) = \int_0^1 x^3\, dx = \left.\frac{1}{4} x^4 \right|_0^1 = \frac{1}{4}$$

$$\mu_4(X) = \int_0^1 x^4\, dx = \left.\frac{1}{5} x^5 \right|_0^1 = \frac{1}{5}$$
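These closed forms are easy to verify numerically. Below is a minimal sketch using a midpoint Riemann sum on [0, 1] (the grid size is arbitrary):

```python
# Numerical check: mu_r = integral_0^1 x^r dx = 1/(r + 1) for X ~ U(0, 1)
n = 100_000
dx = 1.0 / n

for r in range(1, 5):
    # midpoint Riemann sum approximating the r-th raw moment
    mu_r = sum(((i + 0.5) * dx) ** r for i in range(n)) * dx
    print(f"mu_{r} = {mu_r:.6f}  (exact {1 / (r + 1):.6f})")
```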
Definition 4.2.1. The second central moment of the distribution defines the variance of the distribution:

$$V(X) = E\left[(X - E[X])^2\right] = E[X^2] - \left(E[X]\right)^2$$
The last equality is derived by:

$$E\left[(X - E[X])^2\right] = E\left[X^2 - 2 X E[X] + (E[X])^2\right]$$
$$= E[X^2] - 2 E[X] E[X] + (E[X])^2$$
$$= E[X^2] - (E[X])^2$$
Put another way, the variance can be derived as

$$V(X) = \mu_2 - \mu_1^2 = \tilde{\mu}_2$$
From these definitions, we see that for the uniform distribution

$$V(X) = \mu_2 - \mu_1^2 = \frac{1}{3} - \left(\frac{1}{2}\right)^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}$$
This can be verified directly:

$$V(X) = \int_0^1 \left(x - \frac{1}{2}\right)^2 dx = \int_0^1 \left(x^2 - x + \frac{1}{4}\right) dx = \frac{1}{3} - \frac{1}{2} + \frac{1}{4} = \frac{4}{12} - \frac{6}{12} + \frac{3}{12} = \frac{1}{12}$$
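Both routes to $V(X) = 1/12$ can be checked with the same midpoint-rule approach used for the moments above (again, an illustration rather than part of the lecture):

```python
# Midpoint grid on [0, 1] used to verify V(X) = 1/12 two ways
n = 100_000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]

mu1 = sum(x * dx for x in xs)                    # ~1/2, first raw moment
mu2 = sum(x ** 2 * dx for x in xs)               # ~1/3, second raw moment
direct = sum((x - 0.5) ** 2 * dx for x in xs)    # central moment computed directly

print(f"mu2 - mu1^2 = {mu2 - mu1 ** 2:.6f}")  # ~0.083333 = 1/12
print(f"direct      = {direct:.6f}")          # same value
```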