III-ValueAsShape - University of Cambridge


Risk Management & Real Options
III. The value is a shape
Stefan Scholtes
Judge Institute of Management
University of Cambridge
MPhil Course 2004-05
An interesting worksheet function…


• Insert =randbetween(5,15) in a spreadsheet cell and press F9
• Every time a different integer between 5 and 15 is generated
• Each integer has the same chance (= 1/11) to appear
• How would you check this?
Simulating the roll of a die
• =randbetween(1,6) simulates the roll of a die
• =randbetween(1,6)+randbetween(1,6) simulates the roll of two dice
• What are the possible values?
• What's the distribution of these values?
Can we use this tool for management analysis?
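Both questions ("how would you check this?" and the shape of two dice) can be answered by tallying many draws. A Python sketch standing in for the spreadsheet; the randbetween helper mirrors Excel's function:

```python
import random
from collections import Counter

def randbetween(low, high):
    """Stand-in for Excel's RANDBETWEEN: uniform integer in [low, high]."""
    return random.randint(low, high)

random.seed(0)
trials = 100_000

# Roll two dice many times and record how often each total appears.
counts = Counter(randbetween(1, 6) + randbetween(1, 6) for _ in range(trials))
freq = {total: counts[total] / trials for total in range(2, 13)}
```

The possible totals are 2 to 12, and the recorded frequencies trace out a triangular shape peaking at 7 (true probability 6/36).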
2 September 2004
© Scholtes 2004
Page 2
The Monty Hall Game Revisited
Recall:
• Contestant chooses one of three closed doors to win the prize behind the door
• Behind one of the doors is a £30K sports car, behind the other two doors are goats
• Monty Hall, who knows which door hides the sports car, will now open one of the remaining doors with a goat behind it
• Now the contestant is asked if she wishes to switch to the other door
Should she switch?



LET’S SIMULATE THE GAME
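The simulation the slide calls for can be sketched in Python (the function name and trial count are illustrative; the course uses a spreadsheet instead):

```python
import random

def monty_hall(switch, trials=100_000, seed=0):
    """Estimate the contestant's win probability for a fixed strategy."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)     # door hiding the sports car
        choice = rng.randrange(3)  # contestant's first pick
        # Monty opens a goat door among the rest; switching therefore wins
        # exactly when the first pick was a goat.
        wins += (choice != car) if switch else (choice == car)
    return wins / trials

stay = monty_hall(switch=False)  # about 1/3
swap = monty_hall(switch=True)   # about 2/3
```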
The value shape of the Monty Hall decisions
[Bar chart: outcome frequencies (0–70%) of the "Don't swap" and "Swap" strategies, split by prize won (Goat vs Car)]
Don’t over-interpret distributions



• The fact that one shape is preferable over another does not mean that the corresponding decision always leads to a better outcome
• What's the chance that swapping turns out to be the wrong decision?
• In the presence of uncertainty it is important to distinguish between "good decisions" and "good outcomes"
• Good decisions may not have good outcomes
Probabilities

• Objective probability: Assign to a scenario the frequency of its occurrence in the long run
  ‒ Based on the "law of averages": if we repeat a random experiment over and over again and record the number of times a particular scenario occurs, then the percentage of times this scenario occurs converges to some number in the long run
  ‒ A law of nature, not a law of mathematics
  ‒ Basis is the notion of a "repeated random experiment" – flipping a coin, rolling a die
  ‒ Objective through data recording
• Subjective probability: Tool for the quantification and communication of uncertainty, even if there is no repeated experiment
  ‒ Most "experiments" in business and economics are never repeated
  ‒ What's the chance of selling more than 1 million cars of a new model?
  ‒ We build a mental analogy to a repeated experiment, but there is no repeated experiment
• Why do we work with probabilities?
  ‒ They enable visualisation of uncertainties as shapes, technically called "distributions"
Uncertainties as shapes: Histograms
[Histogram: frequency (0–18%) over bins ranging from about -5.0 Mio to 47.8 Mio; the height of a bar is the chance that a randomly picked value falls in that bin]
• Bin: a range of possible values
• Bin value: typically the mid-point of the range
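A histogram can be built directly from simulated values. A minimal sketch (the profit distribution below is an assumed stand-in, not the model behind the chart):

```python
import random

random.seed(0)
# 10,000 draws from an assumed profit model (mean 5 Mio, sd 10 Mio).
samples = [random.gauss(5e6, 10e6) for _ in range(10_000)]

# Split the observed range into 20 equal bins and record the fraction
# of samples in each; bar heights are the bin frequencies.
lo, hi = min(samples), max(samples)
bins = 20
width = (hi - lo) / bins
freq = [0] * bins
for x in samples:
    i = min(int((x - lo) / width), bins - 1)
    freq[i] += 1
freq = [f / len(samples) for f in freq]

# Bin values: mid-points of the ranges, as on the slide.
bin_values = [lo + (i + 0.5) * width for i in range(bins)]
```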
Uncertainties as shapes: Value-at-risk charts
Cumulative distribution function: probability that the realised value is less than the target value
[Value-at-risk chart: cumulative probability (0–100%) against target values from -10,000,000 to 30,000,000]
Uncertainties as shapes: Value-at-risk charts
Cumulative distribution function: probability that the realised value is less than the target value
[Same value-at-risk chart, annotated: the 10% value-at-risk is about 2.5 Mio]
There is a 10% chance of losing 2.5 Mio or more!
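Reading a value-at-risk off simulated outcomes amounts to taking a percentile. A sketch with assumed numbers (not the model behind the chart):

```python
import random

random.seed(0)
# 10,000 simulated outcomes from an assumed profit model.
outcomes = sorted(random.gauss(5e6, 10e6) for _ in range(10_000))

# The 10% value-at-risk is the loss exceeded with 10% probability,
# i.e. the negative of the 10th percentile of the outcomes.
p10 = outcomes[int(0.10 * len(outcomes))]
value_at_risk = -p10

# Chance of a negative realised value = fraction of outcomes below zero.
p_negative = sum(1 for x in outcomes if x < 0) / len(outcomes)
```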
Uncertainties as shapes: Value-at-risk charts
Cumulative distribution function: probability that the realised value is less than the target value
[Same chart, further annotated: 10% value-at-risk is about 2.5 Mio; 30% chance that the realised value is negative]
Uncertainties as shapes: Value-at-risk charts
Cumulative distribution function: probability that the realised value is less than the target value
[Same chart, fully annotated: 20% chance that the realised value is above 7,000,000; 30% chance that it is negative; 10% value-at-risk is about 2.5 Mio]
The shape calculator: Monte Carlo Simulation

• Key question: What is the distribution of a bottom-line system performance measure?
  ‒ Depends on the distributions of the uncertain inputs to the valuation model (demand, cost, price, etc.)
• Monte Carlo simulation allows us to compute the shape of output measures from the shapes of the uncertain inputs
• Monte Carlo is for distributions what the calculator is for numbers
  ‒ Can type in "input" shapes and press a button to obtain "output" shapes
  ‒ "Distribution-in-distribution-out" models
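A "distribution-in-distribution-out" model in miniature (every input distribution and parameter below is invented for illustration):

```python
import random

random.seed(0)

def profit():
    """One scenario: draw every uncertain input once and plug the numbers
    into the value model. All distributions here are invented."""
    demand = random.triangular(50_000, 150_000, 90_000)  # units sold
    price = random.gauss(12.0, 1.5)                      # selling price
    unit_cost = random.uniform(6.0, 9.0)                 # cost per unit
    return demand * (price - unit_cost)

# Repeat MANY times: the recorded outputs form the output distribution.
outputs = [profit() for _ in range(20_000)]
mean_profit = sum(outputs) / len(outputs)
```

Plotting a histogram of `outputs` gives exactly the kind of shape shown on the earlier slides.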
Can’t we do this intuitively?


• Suppose x and y are two uncertain model inputs
• Both x and y can achieve values between 0 and 1, with every number having the same chance of being picked
  ‒ Uniform distribution on [0,1]
• If x has been picked first and is known, then that information does not change the chances for the y-values
  ‒ "Statistical independence"
• What's the shape of the following "output measures":
  ‒ x+y?
  ‒ x*y?
  ‒ 1/x?
  ‒ y/x?
  ‒ Max{x,y}?
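These shapes are easy to explore by simulation; a sketch for x+y and Max{x,y}:

```python
import random

random.seed(0)
n = 50_000
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]

# x+y: values pile up around 1 (triangular shape on [0,2]).
sums = [x + y for x, y in zip(xs, ys)]
near_peak = sum(1 for s in sums if 0.9 < s < 1.1) / n
near_edge = sum(1 for s in sums if s < 0.2) / n

# Max{x,y}: P(max <= t) = t^2, so large values are the likely ones.
maxes = [max(x, y) for x, y in zip(xs, ys)]
share_above_half = sum(1 for m in maxes if m > 0.5) / n
```

Intuitively, a sum near 0 or near 2 needs both inputs to be extreme at once, which is rare; that is why the middle dominates.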
Sampling from a distribution

• What drives MCS? You can ask your computer to draw a number from a distribution
• Repeatedly drawing from a distribution and recording the frequency of occurrences will reproduce the histogram of the distribution
• Generate one scenario by drawing once from all the distributions of the uncertainties in the model and plugging the numbers into the model
• Record the performance measure(s) and repeat this MANY times
• Simulation software helps you to do the book-keeping and to produce nice graphs
A road-map to Monte Carlo Simulation




• Start with a traditional valuation spreadsheet
  ‒ Logically correct representation of value creation
  ‒ Filled with projections for uncertain values
• Determine key uncertainty drivers through sensitivity analysis / tornado diagram on the uncertainties
• Determine the distributions of these uncertainties (most of the work!)
  ‒ Use data where possible
  ‒ Subjective expertise otherwise
• Perform a Monte Carlo simulation
  ‒ Use MCS software (e.g. XLSim, @Risk, Crystal Ball)
A bit more about the mechanics…
A word about software


• Professional packages: @Risk and Crystal Ball
  ‒ Widespread in financial services
  ‒ Professional version expensive
  ‒ Student versions expire after some months
  ‒ Overkill for most applications
  ‒ Difficult to learn and remember if not used on a regular basis
• We use Savage's XLSim (or Sim.xla in the old version)
  ‒ Fine for most practical applications
  ‒ Easy to learn, very close to ordinary Excel
  ‒ Good student version with Decision Making with Insight
  ‒ Inexpensive professional upgrade through the web
Key Question

• Monte Carlo simulation is a "distribution calculator"
  ‒ MCS produces distributions of model outputs (performance measures) from distributions of model inputs
• Where do we get input distributions from?
  ‒ This is the modelling part – a (rather technical) lecture course in its own right
  ‒ Objective element: use historical / market data wherever possible
  ‒ Subjective element: use expert opinions and industry experience
• Creative challenge: Which industry / market data is relevant to my uncertain inputs, and how can I combine this data with subjective information, e.g. expert opinions, to produce sensible input distributions?
Examples of standard distributions:
I. Discrete distributions

• Discrete distribution: Uncertain input can take on finitely many values, each with a certain probability
  ‒ Probabilities add up to 1
  ‒ Can be generated with XLSim using gen_discrete()
• Example: Value of a card in Black Jack
  ‒ 2,3,4,5,6,7,8,9,10 at face value
  ‒ Jack, Queen, King = 10
  ‒ Ace = 11 (or 1 if the total is above 21)
  ‒ Values 2,3,4,5,6,7,8,9,11 each have probability 1/13; value 10 has probability 4/13
• Challenge: Set up an Excel simulation of Black Jack, assuming a very large number of decks and two players with fixed stopping strategies
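XLSim's gen_discrete() can be mimicked in plain Python with random.choices; a sketch of the card-value distribution above:

```python
import random
from collections import Counter

random.seed(0)

# Card value in Black Jack under the "very many decks" assumption:
# a discrete distribution on finitely many values.
values = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
weights = [1, 1, 1, 1, 1, 1, 1, 1, 4, 1]  # thirteenths; they sum to 13

draws = random.choices(values, weights=weights, k=130_000)
freq = Counter(draws)
```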
A histogram is a visualisation of a discrete distribution
[Histogram of demand over 261 days: frequency (0–30%) against demand bins from 19 to 46]
Examples of standard distributions:
II. Uniform distribution
All values over a range are equally likely
Need only specify the range
Gen_Uniform(-1, 4)
[Histogram: frequencies of about 0.20 for bins of width 1 across the range -1 to 4; 90% of sampled values lie between -0.75 and 3.75, with 5% in each tail]
The rand() function


• Insert =rand() in a spreadsheet cell and press F9
  ‒ Every time a different decimal number between 0 and 1 is generated
  ‒ The chance for any one number between 0 and 1 is as large as for any other
  ‒ This is a "uniform" random number between 0 and 1
• What's the chance that "=rand()" produces a number
  ‒ below 0.3?
  ‒ above 0.6?
  ‒ between 0.2 and 0.5?
• What's the shape of the sum of two (or 3 or 4) uniform random variables?
  ‒ Use XLSim
  ‒ Can you explain this intuitively?
Examples of standard distributions:
III. Normal distribution
Reason for its prevalence is the Central Limit Theorem:
Aggregation of many independent random effects leads to a bell shape
Need to specify mean and standard deviation
Gen_Normal(2, 3)
[Histogram: bell-shaped frequencies (up to about 0.14) for bins of width 1 across the range -6 to 10; 95% of sampled values lie between -3.88 and 7.88]
95% rule: roughly 95% of sampled values lie within 2 standard deviations of the mean
Sampling from a normal distribution


• Standard Excel: =norminv(rand(),m,s) samples from a normal distribution with mean m and standard deviation s
• XLSim: gen_normal(m,s)
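The =norminv(rand(),m,s) trick is inverse-transform sampling: push a uniform [0,1] draw through the inverse normal CDF. The same idea in Python (statistics.NormalDist, Python 3.8+):

```python
import random
import statistics
from statistics import NormalDist

random.seed(0)
m, s = 2.0, 3.0

# Each call: a uniform draw mapped through the inverse CDF of Normal(m, s),
# exactly what =norminv(rand(), m, s) does in the spreadsheet.
samples = [NormalDist(m, s).inv_cdf(random.random()) for _ in range(50_000)]

mean_hat = statistics.mean(samples)
sd_hat = statistics.stdev(samples)
```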
Log-normal distributions



• Shape for products of random variables
• Multiplicative version of the central limit theorem: The product of many independent random variables is a random variable of the form Y=exp(X), where X is normal
  ‒ Y has the so-called log-normal distribution
  ‒ Suppose Y can be interpreted as the return for a random continuous interest rate X
• Example: Annual return with random daily returns
  ‒ Prevalent model for stock price returns

1 + r_annual = (1 + r_1)(1 + r_2)…(1 + r_365) ≈ exp(r_1 + r_2 + … + r_365)

The sum of independent daily rates in the exponent is normal, so the annual growth factor is (approximately) log-normal.
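The multiplicative story can be checked by simulation. Here the daily rates are assumed normal and independent, scaled so that their sum is Normal(5%, 30%) over a 365-day year, as on the next slides:

```python
import math
import random
import statistics

random.seed(0)
days = 365
mu_d, sd_d = 0.05 / days, 0.30 / math.sqrt(days)

def annual_factor():
    """Compound 365 random daily rates: exp of a normal sum, log-normal."""
    R = sum(random.gauss(mu_d, sd_d) for _ in range(days))
    return math.exp(R)

factors = [annual_factor() for _ in range(5_000)]
mean_factor = statistics.mean(factors)
# For a log-normal exp(R), the mean is exp(mu + sd^2/2), about 1.10 here.
```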
Distribution of annual interest rate
R = r_1 + r_2 + … + r_365 ~ Normal(5%, 30%)
[Density plot of R: symmetric bell shape centred near 0.05, over the range -1 to 4]
Distribution of return if interest is paid
daily and money is re-invested (hold on to stock)
exp(R) = exp(r_1 + r_2 + … + r_365) is log-normal
[Density plot of exp(R): skewed bell shape over the range -1 to 4, with a longer right tail]
Examples of standard distributions:
IV. Triangular distribution
User needs to specify the lowest, most likely and highest value
Gen_Triang(-2, 0, 10)
[Histogram: skewed triangular frequencies (up to about 0.18) across the range -2 to 10; 90% of sampled values lie between -0.90 and 7.55, with 5% in each tail]
Sampling from a triangular distribution


• XLSim: gen_triang(L,M,H)
  ‒ L: lowest value
  ‒ M: most likely value
  ‒ H: highest value
• Careful: the most likely value is NOT the mean if the triangular distribution is skewed (i.e. if M is not in the middle between L and H)
• Often easier to communicate than the normal distribution
• Gives similar results to the normal but allows for skewness
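Python's standard library has this shape built in as random.triangular (note the argument order: low, high, mode). The skewness warning from the slide is easy to verify:

```python
import random
import statistics

random.seed(0)
L, M, H = -2, 0, 10  # as in Gen_Triang(-2, 0, 10)

# random.triangular takes (low, high, mode) - note the argument order.
samples = [random.triangular(L, H, M) for _ in range(100_000)]

mode = M                          # most likely value
mean = statistics.mean(samples)   # close to (L + M + H) / 3, about 2.67
```

For this right-skewed triangle the mean sits well above the most likely value.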
Using historical data I: Fitting a distribution


• Begin by inspecting your data
  ‒ Draw a histogram
  ‒ Does the rough shape match any shape you know?
• Choose from a variety of distributional shapes
  ‒ Some software packages, such as @Risk or Crystal Ball, offer many shapes to choose from
• Fit the shape to the data
  ‒ Simple cases: Parameters of the distribution are easy to estimate from historic data (e.g. historic average and stdev for the normal distribution)
  ‒ @Risk and Crystal Ball do that for you
  ‒ Can use a "goodness of fit" measure to compare the fit of distributions (chi-squared statistic)
  ‒ Unlikely that you need to know the technical details – if in doubt, ask an expert
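The "simple case" from the last bullet, sketched with synthetic stand-in data:

```python
import random
import statistics

random.seed(0)
# Synthetic "historic" data: 50 past demand observations.
data = [random.gauss(100, 15) for _ in range(50)]

# Fit a normal distribution by estimating its two parameters from the
# historic average and standard deviation.
m_hat = statistics.mean(data)
s_hat = statistics.stdev(data)

def sample_demand():
    """Draw from the fitted distribution inside the simulation."""
    return random.gauss(m_hat, s_hat)

d = sample_demand()
```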
When is data fitting appropriate?

• You need to have an idea about the shape of the distribution
  ‒ E.g. a central limit theorem argument for normal or log-normal distributions
• Data fitting is particularly appropriate if you have a good reason to assume that the distribution has a particular shape AND only relatively few data points (e.g. 20 rather than 200)
• If you have plenty of data (e.g. 1000s of data points) you may as well sample directly from the data…
Using historic data II: Sampling from the data

• The main challenge for statistics used to be to make an inference on the basis of little data
• Now we are often overwhelmed with data – can we use this?
• Suppose you have 1000 data entries
  ‒ Put them in range b1:b1000 and put the numbers 1 to 1000 in range a1:a1000
  ‒ Use =vlookup(randbetween(1,1000),a1:b1000,2) to sample from the data
  ‒ Alternative: use gen_resample(b1:b1000) in XLSim
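The same resampling trick outside the spreadsheet, where random.choice plays the role of the VLOOKUP/RANDBETWEEN pair (the demand figures are a stand-in):

```python
import random

random.seed(0)
# Stand-in for 1000 historic demand entries in range b1:b1000.
data = [random.randint(19, 46) for _ in range(1000)]

# Sampling from the data = picking a random row, exactly what
# VLOOKUP(RANDBETWEEN(1,1000), a1:b1000, 2) does in the spreadsheet.
def resample():
    return random.choice(data)

draws = [resample() for _ in range(10_000)]
```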
Using historic data II: Sampling from the data

• Can also use this technique to sample correlated data, e.g. price and demand from the same day
  ‒ Suppose price and demand are in columns b1:c1000
  ‒ Put the numbers 1 to 1000 in a1:a1000
  ‒ First sample a row number x between, say, 1 and 1000
  ‒ Then use =vlookup(x,a1:c1000,2) to sample the x-th price and =vlookup(x,a1:c1000,3) to sample the corresponding demand
  ‒ Alternative: use the gen_ResampleSync function in XLSim
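A sketch of synchronised resampling (the price-demand history is fabricated for illustration):

```python
import random

random.seed(0)
# Fabricated paired history: (price, demand) for 1000 days, with demand
# falling as price rises, so the two columns are correlated.
rows = []
for _ in range(1000):
    price = random.uniform(8, 12)
    demand = 500 - 30 * price + random.gauss(0, 10)
    rows.append((price, demand))

# Synchronised resampling: draw ONE row index and keep both columns,
# preserving the correlation - the idea behind gen_ResampleSync.
def resample_sync():
    return random.choice(rows)

price, demand = resample_sync()
```

Sampling price and demand independently would destroy exactly the correlation the technique is meant to keep.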
Taking account of trends in sampling

• To simulate the future from the past, you can use regression:
  Demand at time t = Expected demand at time t + Random error
• Key assumption: expected demand changes with time (e.g. linear growth) but the random error is independent of time (e.g. normal with mean 0 and standard deviation 20)
• Do a linear regression of your (say, daily) demand data, giving a relation
  Expected demand = a + b*t
  and generate a worksheet with past data errors, one for each day:
  Data error = actual demand – expected demand
• To sample demand for day k (in the future), sample one of the errors and add it to the projected expected value a + b*k
• Be careful not to project linear growth too far into the future!
What if you don’t get it right?

• The garbage-in-garbage-out principle applies to all quantitative analyses
• Sloppiness is a prevalent reason for garbage-in!
• Always spell out your assumptions and be prepared to defend them
• Make sure you have intuitive explanations for your results
• Always do a careful sensitivity analysis on your distributions
  ‒ Work with different input distributions in the case of MCS if you are not sure about the distribution
  ‒ E.g. use ranges of means or standard deviations and check if the "model story" changes as the parameters change
What if you don’t get it right?


• Even if you don't get it right, MCS improves on a projection-based analysis
• MCS is similar to the way in which shaking a ladder improves your trust in the stability of the ladder
  ‒ You know that you won't apply the correct forces to the ladder – but you would be silly not to shake it before you climb up
  ‒ The art is to find the right balance between shaking the ladder too little and shaking it too hard
Simulating a process

• We can use MCS to simulate a process with uncertain inputs
• Example 1: Customers at a helpline
  ‒ Customers ring up at a helpline, are put in a queue and served successively
  ‒ Shall we open a second line?
• Example 2: Holding inventory
  ‒ How much inventory do we need to hold if we want to be sure that we can meet customer demand 99% of the time?
• Worked spreadsheet examples are on the web
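Example 2 can be sketched directly; the demand distribution and numbers here are assumptions, not the worked example from the web:

```python
import random

random.seed(0)

def service_level(stock, trials=20_000):
    """Fraction of simulated days on which daily demand (assumed
    Normal(mean=30, sd=6)) is fully met from the given stock."""
    met = sum(random.gauss(30, 6) <= stock for _ in range(trials))
    return met / trials

# Search upwards for the smallest stock giving at least 99% service.
stock = 30
while service_level(stock) < 0.99:
    stock += 1
# stock ends up near mean + 2.33 standard deviations, around 44 units
```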
Summary MCS



• Monte Carlo simulations generate the distribution of performance measures from the distributions of inputs
• Most important: they allow the visualisation of risk profiles
  ‒ Histograms and cumulative distribution functions
  ‒ Supported by software such as XLSim, @Risk or Crystal Ball
• Monte Carlo is a great toy that helps you test your intuition!!!
Group work for today
EasyBeds