How To
Measure Anything:
Finding the Value of
‘Intangibles’ in Business
How to Measure Anything
• It started 12 years ago…
• I have conducted over 60 major risk/return
analysis projects so far that included a
variety of “impossible” measurements
• I found such a high need for
measuring difficult things that I
decided I had to write a book
• The book was released in July 2007
with the publisher John Wiley & Sons
• This is a “sneak preview” of many of
the methods in the book
How To Measure Anything
• “I love this book. Douglas Hubbard helps us create a path to know the
answer to almost any question, in business, in science or in life.” Peter
Tippett, Ph.D., M.D. Chief Technology Officer at CyberTrust and inventor
of the first antivirus software
• “Doug Hubbard has provided an easy-to-read, demystifying explanation of
how managers can inform themselves to make less risky, more profitable
business decisions.” Peter Schay, EVP and COO of The Advisory Council
• “As a reader you soon realize that actually everything can be measured
while learning how to measure only what matters. This book cuts through
conventional clichés and business rhetoric and it offers practical steps to
using measurements as a tool for better decision making.” Ray Gilbert,
EVP Lucent
• “This book is remarkable in its range of measurement applications and its
clarity of style. A must read for every professional who has ever exclaimed
‘Sure, that concept is important but can we measure it?’” Dr. Jack Stenner,
CEO and co-founder of MetaMetrics, Inc.
A Few Examples
• IT
– Risk of IT
– The value of better information
– The value of better security
– The risk of obsolescence
– The value of productivity when headcount is not reduced
– The value of infrastructure
• Business
– Market forecasts
– The risk/return of expanding operations
– Business valuations for venture capital
• Military
– Forecasting fuel for Marines in the battlefield
– Measuring the effectiveness of combat training to reduce roadside
bomb/IED casualties
Does Your Model Consider…
1. Research shows most subject matter experts are statistically
overconfident. If you don’t account for this, you will underestimate
risk in every model you make.
2. Most Monte Carlo models are created with little or no empirical
measures of any kind.
3. If a model includes the results of empirical measurements, they are
usually not the “high payoff” measures.
4. There is a way to make tradeoffs between higher-risk/higher-return
investments and lower-risk/lower-return investments that results in
an optimal portfolio. The final deliverable in most Monte Carlo
analyses is a histogram – not a risk/return-optimized recommendation.
@RISK can support anything we are talking about!
Three Illusions of Intangibles
(The “howtomeasureanything.com” approach)
• The perceived impossibility of
measurement is an illusion caused by not
understanding:
– the Concept of measurement
– the Object of measurement
– the Methods of measurement
• See my “Everything is Measurable”
article in CIO Magazine (go to the “articles”
link on www.hubbardresearch.com)
The Approach
• Model what you know now
• Compute the value of additional information
• Where economically justified, conduct observations that reduce uncertainty
• Update the model and optimize the decision
Uncertainty, Risk & Measurement
• Measuring Uncertainty, Risk, and the Value of Information are closely
related concepts, important measurements themselves, and
precursors to most other measurements
• The “Measurement Theory” definition of measurement: “A
measurement is an observation that results in information
(reduction of uncertainty) about a quantity.”
Calibrated Estimates
• Decades of studies show that most managers are
statistically “overconfident” when assessing their
own uncertainty
– Studies showed that bookies were great at assessing odds
subjectively, while doctors were terrible
• Studies also show that measuring your own
uncertainty about a quantity is a general skill that
can be taught with a measurable improvement
• Training can “calibrate” people so that of all the
times they say they are 90% confident, they will be
right 90% of the time
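As a concrete illustration, here is a minimal Python sketch (with made-up quiz data) of how calibration is scored: collect a person’s stated 90% confidence intervals for known quantities and check how often the true value lands inside.

```python
def calibration_score(intervals, actuals):
    """Fraction of stated 90% confidence intervals that contain the
    true value. A calibrated estimator scores close to 0.90."""
    hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, actuals))
    return hits / len(actuals)

# Hypothetical quiz: stated 90% CIs vs. the true answers
intervals = [(1000, 5000), (10, 50), (1850, 1900)]
actuals = [4200, 75, 1889]
print(calibration_score(intervals, actuals))  # 0.67 -- overconfident (2 of 3 hits)
```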
Calibrated Estimates: Ranges
Calibrated probability assessment results from various studies (percent of answers falling within the stated 90% confidence interval; target is 90%):
• Harvard MBAs – General Trivia: 40%
• Chemical Co. Employees – General Industry: 50%
• Chemical Co. Employees – Company-Specific: 48%
• Computer Co. Managers – General Business: 17%
• Computer Co. Managers – Company-Specific: 36%
• AIE Seminar (before training) – General Trivia & IT: 50%
• AIE Seminar (some training) – General Trivia & IT: 80%
Calibration Results
• 73% of people who go through calibration training achieve calibration
• The remaining 27% seem “stuck” in overconfidence
– for these we use “calibration factors” to adjust all ranges, or we seek confirmation from calibrated persons
– Fortunately, they are not usually the persons relied on for most estimates
[Chart: “Percent within expected 90% confidence interval” across five successive calibration tests, against the 90% target. Percent of participants finished (calibrated) on each test: Test 1: 0%, Test 2: 0%, Test 3: 7%, Test 4: 38%, Test 5: 55%.]
1997 Calibration Experiment
• In January 1997, I conducted a calibration training experiment with 16 IT industry analysts
and 16 CIOs to test whether calibrated people were better at putting odds on uncertain future events.
• The analysts were calibrated, and all 32 subjects were asked to predict 20 IT industry events
• Example: Steve Jobs will be CEO of Apple again, by Aug 8, 1997 – True or False? Are you
50%, 60%…90%, 100% confident?
[Chart: percent correct vs. assessed chance of being correct, comparing the calibrated Giga analysts to the un-calibrated Giga clients against the “ideal” confidence line, with statistical error bars; the number next to each point is the number of responses. Source: Hubbard Decision Research]
The Value of Information
EVI = \sum_{i=1}^{k} p(r_i)\,\max\!\left\{ \sum_{j=1}^{z} V_{1,j}\,p(\theta_j \mid r_i),\; \sum_{j=1}^{z} V_{2,j}\,p(\theta_j \mid r_i),\; \ldots,\; \sum_{j=1}^{z} V_{l,j}\,p(\theta_j \mid r_i) \right\} - EV^*

(where r_i is a possible measurement result, \theta_j a possible state, V_{l,j} the value of choosing alternative l when state j holds, and EV^* is the expected value of the best alternative given only current information)
The formula for the value of information has been around for almost
60 years. It is widely used in many parts of industry and government
as part of “decision analysis” methods – but it is still mostly unheard
of in the parts of business where it might do the most good.
What it means:
1. Information reduces uncertainty
2. Reduced uncertainty improves decisions
3. Improved decisions have observable consequences with
measurable value
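As a sanity check on the formula, here is a minimal Python sketch that applies it to the perfect-information case of the factory example on the next slide. The payoff matrix encodes that slide’s numbers, and with perfect information each result r_i reveals the state exactly.

```python
import numpy as np

# States: new product succeeds / fails; alternatives: invest / don't invest.
V = np.array([[50.0, -10.0],   # invest: save $50M if success, lose $10M net if failure
              [0.0,   0.0]])   # don't invest: nothing gained or lost
p_theta = np.array([0.8, 0.2])          # prior: 20% chance the product fails
p_r = p_theta                           # perfect information: each result reveals a state
p_theta_given_r = np.eye(2)             # posterior is certain after each result

ev_star = (V @ p_theta).max()           # EV* = best expected value with current info = 38
evi = sum(p_r[i] * (V @ p_theta_given_r[i]).max() for i in range(2)) - ev_star
print(evi)                              # 2.0 -> $2 million, matching the EOL answer below
```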
The EOL Method
• The simplest approach computes the change in “Expected Opportunity Loss”
• “Opportunity Loss” is the loss (compared to the alternative) if it turns out you made the wrong decision
• Expected Opportunity Loss (EOL) is the cost of being wrong times the chance of being wrong
• The reduction in EOL from more information is the value of the information.
• In the case of perfect information (if that were possible), the value of information is equal to the EOL.
• Simple binary example: You are about to make a $20 million investment to upgrade the equipment in a factory to make a new product. If the new product does well, you save $50 million in manufacturing. If not, you lose (net) $10 million. There is a 20% chance of the new product failing. What is it worth to have perfect certainty about this investment, if that were possible?
• Answer: 20% x $10 million = $2 million
Information Value w/Ranges
• The value of information is computed a little differently with a distribution, but the same basic concepts apply
• For each variable, there is a “threshold” where the investment just breaks even
• If the threshold is within the range of possible values, then there is a chance that you would make a different decision with better measurements
[Diagram: a distribution with its 90% confidence interval around the mean and 5% tails on either side; the threshold falls inside the interval, and the area of the distribution beyond the threshold is the “threshold probability”.]
Normal Distribution Information Value
• The “expected value” of the variable is the mean of the range of
possible values
• A threshold is a point where the value just begins to make some
difference in a decision – a breakeven
• The expected value is on one side of the threshold
• If the true value is on the opposite side of the threshold from the
mean, then the best decision would have been different from one
based on the mean
• The “Threshold Probability” is the chance that this variable could
have a value that would change the decision
[Chart: normal distribution of “Productivity Improvement in Process X” spanning roughly 5%–25%. Example threshold: 18% productivity improvement; probability that the true value is under the required threshold: 16.25%.]
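A minimal sketch of the threshold-probability computation, assuming a normal distribution implied by a calibrated 90% confidence interval. The slide does not state the interval behind its 16.25% figure, so the CI below is hypothetical:

```python
from scipy.stats import norm

# Hypothetical calibrated 90% CI for productivity improvement, in percent
ci_lower, ci_upper = 14.0, 34.0
threshold = 18.0                             # breakeven from the business case

mean = (ci_lower + ci_upper) / 2             # normal distribution implied by the CI
sigma = (ci_upper - ci_lower) / (2 * 1.645)  # a 90% CI spans +/-1.645 sigma
p_below = norm.cdf(threshold, mean, sigma)   # the threshold probability
print(f"{p_below:.2%}")                      # ~16.2%, near the slide's 16.25%
```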
Normal Distribution VIA
• The curve on the other side of the threshold is divided up into hundreds of “slices”
• Each slice has an assigned quantity (such as a potential productivity improvement) and a probability of occurrence
• For each assigned quantity, there is an Opportunity Loss
• Each slice’s Opportunity Loss is multiplied by its probability to compute its Expected Opportunity Loss
[Chart: the same “Productivity Improvement in Process X” distribution (5%–25%), sliced below the threshold. Example slice: productivity improvement 15%, Opportunity Loss $1,855,000, probability 0.0053%, EOL $98.31.]
Normal Distribution VIA
• Total EOL for all slices equals the EOL for the variable
• Since EOL = 0 with perfect information, the Expected Value of Perfect Information (EVPI) = sum(EOLs)
• Even though perfect information is not usually practical, this method gives us an upper bound for the information value, which can be useful by itself
• Many of the EVPIs in a business case will be zero
• I do this with a macro in Excel, but it can also be estimated – see the sketch below
[Chart: the full area below the threshold shaded. Total of all EOLs = $58,989 – this is the value of perfect information about the potential productivity improvement.]
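A minimal Python sketch of that slicing logic, standing in for the Excel macro. It assumes a normal distribution from a calibrated 90% CI and an opportunity loss that grows linearly with the shortfall below the threshold; the CI and loss-per-point inputs are hypothetical, so it will not reproduce the $58,989 total:

```python
import numpy as np
from scipy.stats import norm

def evpi_by_slices(ci_lower, ci_upper, threshold, loss_per_unit, n_slices=1000):
    """EVPI estimated by slicing the distribution on the 'wrong' side of the
    threshold: the sum over slices of (opportunity loss x probability)."""
    mean = (ci_lower + ci_upper) / 2
    sigma = (ci_upper - ci_lower) / (2 * 1.645)    # 90% CI spans +/-1.645 sigma
    edges = np.linspace(mean - 6 * sigma, threshold, n_slices + 1)  # lower tail
    mids = (edges[:-1] + edges[1:]) / 2            # each slice's assigned quantity
    probs = np.diff(norm.cdf(edges, mean, sigma))  # each slice's probability
    losses = (threshold - mids) * loss_per_unit    # opportunity loss per slice
    return float(np.sum(probs * losses))           # total EOL = EVPI

# Hypothetical inputs: 90% CI of 14%-34% improvement, 18% breakeven,
# and each percentage point of shortfall costing $500,000
print(f"${evpi_by_slices(14, 34, 18, 500_000):,.0f}")  # ~ $260,000 for these inputs
```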
Increasing Value & Cost of Info.
• The value of information levels off while the cost of information accelerates
• Information value grows fastest at the beginning of information collection
• Use iterative measurements that err on the side of “small bites” at the steep part of the slope
• Key terms:
– EVPI – Expected Value of Perfect Information
– ECI – Expected Cost of Information
– EVI – Expected Value of Information
– ENBI – Expected Net Benefit of Information
[Chart: dollar value/cost versus measurement accuracy, from low to high accuracy. EVI rises quickly and levels off toward the EVPI ceiling while ECI accelerates; ENBI peaks in between – aim for the range around the maximum ENBI.]
The Measurement Inversion
• After the information values for over 4,000 variables were computed, a pattern emerged: the highest-value measurements were almost never measured, while most measurement effort was spent on less relevant factors
– Costs were measured more than the more uncertain benefits
– Small “hard” benefits were measured more than large “soft” benefits
• Also, we found that, if anything, fewer measurements were required after the information values were known.
[Chart: “Measurement Attention vs. Relevance” – typical attention is highest where economic relevance is lowest, and vice versa.]
See my article “The IT Measurement Inversion” in CIO Magazine
(it is also on my website at www.hubbardresearch.com under the “articles” link)
Next Step: Observations
• Now that we know what to measure, we
can think of observations that would
reduce uncertainty
• The value of the information limits what
methods we should use, but we have a
variety of methods available
• Take the “Nike Method”: Just Do It – don’t
let imagined difficulties get in the way of
starting observations
Some Useful Suggestions for
Making Empirical Observations
• It has been done before
• You have more data than you think
• You need less data than you think
• It is more economical than you think
• The existence of noise is not a lack of signal
“It’s amazing what you can see when you look” – Yogi Berra
Statistics Goes to War
• Several clever sampling methods exist that can
measure more with less data than you might
think
• Examples: estimating the population of fish in
the ocean, estimating the number of tanks
produced by Germany in WWII, working from
extremely small samples, etc.
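As one concrete instance, here is a minimal sketch of the WWII serial-number estimator (the classic “German tank problem”); the sample serial numbers below are made up:

```python
def estimate_total(serials):
    """Minimum-variance unbiased estimate of the total count from sampled
    serial numbers: the max observed, scaled up by the average gap."""
    m, k = max(serials), len(serials)
    return m * (1 + 1 / k) - 1

print(estimate_total([61, 19, 56, 24, 16]))  # ~72 tanks from just 5 serials
```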
Measuring to the Threshold
• Measurements usually have value because there is some point where the quantity makes a difference
• It’s often much harder to ask “How much is X?” than “Is X enough?”
[Chart: “Chance the Median is Below the Threshold” – one curve per number sampled (2 to 20), plotted against the number of samples that fell below the threshold (0 to 10); the vertical axis runs from 50% down to 0.1%.]
Measuring to the Threshold (continued)
Use this chart when using small samples to determine the probability that the median of a population is below a defined threshold:
1. Find the curve beneath the number of samples taken
2. Identify the dashed line marked by the number of samples that fell below the threshold
3. Follow the curve identified in step 1 until it intersects the vertical dashed line identified in step 2
4. Find the value on the vertical axis directly left of the point identified in step 3; this value is the chance the median of the population is below the threshold
• Example: You want to determine how much time your staff spends on one activity. You sample 12 of them, and only two spend less than 1 hour a week at this activity. What is the chance that the median time all staff spend is more than 1 hour per week? Look up 12 on the top row, follow the curve until it intersects the “2” line on the bottom row, and look up the number to the left: the chance the median is below 1 hour is just over 1%, so the chance it is more is nearly 99%.
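The slides do not state how the chart was derived, but one way to reproduce its values is a Bayesian sign-test argument: with a uniform prior on the fraction of the population below the threshold, the posterior after the sample is a Beta distribution, and the median is below the threshold exactly when that fraction exceeds 0.5. A minimal sketch under that assumption:

```python
from scipy.stats import beta

def p_median_below(n_sampled, n_below):
    """Chance the population median is below the threshold, given that
    n_below of n_sampled observations fell below it (uniform prior on
    the population fraction below the threshold)."""
    # Posterior on that fraction is Beta(n_below + 1, n_sampled - n_below + 1);
    # the median is below the threshold when the fraction exceeds 0.5.
    return beta.sf(0.5, n_below + 1, n_sampled - n_below + 1)

print(f"{p_median_below(12, 2):.1%}")  # 1.1% -- the "just over 1%" in the example
```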
The “Math-less” Statistics Table
• Measurement is based on observation, and most observations are just samples
• Reducing your uncertainty with random samples is not made intuitive in most statistics texts
• This table makes computing a 90% confidence interval easy
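In the same small-sample spirit, one result this kind of table encodes is the book’s “Rule of Five”: the chance that the population median lies between the smallest and largest of n random samples. A minimal sketch:

```python
def p_median_within_min_max(n):
    """Chance the population median lies between the min and max of n
    random samples. Each sample falls below the median with probability
    1/2, so only the all-below and all-above cases miss it."""
    return 1 - 2 * 0.5 ** n

print(p_median_within_min_max(5))  # 0.9375 -- the "Rule of Five"
```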
The Simplest Method
• “Bayesian” methods in statistics use new information to update prior
knowledge
• Bayesian methods can be even more elaborate than other statistical
methods BUT…
• It turns out that calibrated people are already mostly “instinctively
Bayesian”
• The instinctive Bayesian approach:
– Assess your initial subjective uncertainty with a calibrated probability
– Gather and study new information about the topic (it could be qualitative or
even tangentially related)
– Give another subjective calibrated probability assessment with this new
information
• In studies where people were asked to do this, their results were
usually not irrational compared to what would be computed with
Bayesian statistics – and calibrated people do even better
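For reference, the Bayesian benchmark those subjective updates are compared against is just Bayes’ rule. A minimal sketch with hypothetical numbers:

```python
# Hypothetical: a calibrated prior of 60% that a project hits its target,
# then a pilot succeeds. Assumed likelihoods of seeing that evidence:
p_hit = 0.60
p_pilot_given_hit = 0.80
p_pilot_given_miss = 0.30

p_pilot = p_pilot_given_hit * p_hit + p_pilot_given_miss * (1 - p_hit)
p_hit_given_pilot = p_pilot_given_hit * p_hit / p_pilot   # Bayes' rule
print(f"{p_hit_given_pilot:.0%}")  # 80% -- the rational post-update confidence
```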
Comparison of Methods
[Diagram: a 2x2 map. The vertical axis runs from overconfident (stated uncertainty is lower than rational) to under-confident (stated uncertainty is higher than rational); the horizontal axis runs from “ignores prior knowledge; emphasizes new data” to “ignores new data; emphasizes prior knowledge”. The overconfident corners are labeled “Gullible” (new-data side) and “Stubborn” (prior-knowledge side), with the typical un-calibrated expert between them; the under-confident corners are “Vacillating, Indecisive” and “Overly Cautious”. Non-Bayesian statistics, Bayesian statistics, and the calibrated expert plot near the center.]
• Traditional non-Bayesian statistics (what you probably learned in the first semester of stats) assumes you knew nothing prior to the samples you took – this is almost never true in reality
• Most un-calibrated experts are overconfident and slightly overemphasize new information
• Calibrated experts are not overconfident, but slightly ignore prior knowledge
• Bayesian analysis is the perfect balance: neither under- nor over-confident, and it uses both new and old information
Analyzing the Distribution
• How are you assessing the resulting histogram from a Monte Carlo simulation?
• Is this a “good” distribution or a “bad” one? How would you know?
[Chart: a histogram of Return on Investment (ROI) from -25% to 125%, with the “expected” ROI marked; the area below ROI = 0% is the risk of a negative ROI, and the remainder is the probability of a positive ROI.]
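A minimal sketch of reading both quantities off simulation output; the synthetic normal samples below stand in for an @RISK run:

```python
import numpy as np

rng = np.random.default_rng(7)
roi = rng.normal(loc=0.30, scale=0.25, size=100_000)  # stand-in for @RISK output

print(f"Expected ROI: {roi.mean():.0%}")                # ~30%
print(f"Risk of negative ROI: {(roi < 0).mean():.1%}")  # ~11.5%
```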
Quantifying Risk Aversion
• The simplest element of Harry Markowitz’s Nobel Prize-winning method, “Modern Portfolio Theory”, is documenting how much risk an investor accepts for a given return.
• The “Investment Boundary” states how much risk an investor is willing to accept for a given return.
• For our purposes, we modified Markowitz’s approach a bit.
[Chart: the acceptable risk/return boundary, with an example investment plotted inside the investment region.]
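A minimal sketch of applying such a boundary; the boundary points and the use of “probability of a negative ROI” as the risk measure are hypothetical:

```python
import numpy as np

# Hypothetical investment boundary: the maximum acceptable probability of a
# negative ROI for a given expected ROI (points along the documented curve)
boundary_roi = np.array([0.10, 0.25, 0.50, 1.00])
boundary_max_risk = np.array([0.05, 0.15, 0.30, 0.40])

def acceptable(expected_roi, p_negative_roi):
    """An investment is acceptable if its risk is on or below the boundary."""
    limit = np.interp(expected_roi, boundary_roi, boundary_max_risk)
    return p_negative_roi <= limit

print(acceptable(0.40, 0.20))  # True: 20% risk is under the ~24% limit at 40% ROI
```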
Approach Summary
Define Decision Model → Calibrate Estimators → Populate Model with Calibrated Estimates & Measurements → Conduct Value of Information Analysis (VIA) → Measure according to VIA results and update model → Analyze Remaining Risk → Optimize Decision
Connecting The Dots
• The EPA needed to compute the ROI of the Safe Drinking Water Information System (SDWIS)
• As with any AIE project, we built a spreadsheet model that connected the expected effects of the system to relevant impacts – in this case, public health and its economic value
Reactions: Safe Water
• “I didn’t think that just defining the problem quantitatively would
result in something that eloquent. I wasn’t getting my point across
and the AIE approach communicated the benefits much better.” Jeff
Bryan, SDWIS Program Chief
• “Until [AIE], nobody understood the concept of the value of the
information and what to look for. They had to try to measure
everything, couldn’t afford it, so opted for nothing…
• “Translating software to environmental and health impacts was
amazing. I think people were frankly stunned anyone could make
that connection…
• “The result I found striking was the level of agreement of people with
disparate views of what should be done. From my view, where
consensus is difficult to achieve, the agreement was striking” Mark
Day, Deputy CIO and CTO for the Office of Environmental
Information
Forecasting Fuel for Battle
• The US Marine Corps with
the Office of Naval Research
needed a better method for
forecasting fuel for wartime
operations
• The VIA showed that the big
uncertainty was really supply
route conditions, not whether
they were engaging the enemy
• Consequently, we performed
a series of experiments with
supply trucks rigged with
GPS and fuel-flow meters.
Reactions: Fuel for the Marines
• “The biggest surprise was that we can save so much
fuel. We freed up vehicles because we didn’t have to
move as much fuel. For a logistics person that's critical.
Now vehicles that moved fuel can move ammunition.“
Luis Torres, Fuel Study Manager, Office of Naval
Research
• “What surprised me was that [the model] showed most
fuel was burned on logistics routes. The study even
uncovered that tank operators would not turn tanks off if
they didn’t think they could get replacement starters.
That’s something that a logistician in 100 years
probably wouldn’t have thought of.” Chief Warrant
Officer Terry Kunneman, Bulk Fuel Planning, HQ
Marine Corps
Final Tips
• Learn about calibration, computing information
values, and risk/return tradeoffs
• You can use the information value calculations
within @RISK
• Don’t let “exception anxiety” cause you to avoid
any observations at all – the existence of noise
does not mean the lack of signal
• Just do it – you learn about how to measure it by
just starting to take some observations
Questions?
Doug Hubbard
Hubbard Decision Research
[email protected]
www.hubbardresearch.com
630 858 2788