Enhancing Monte Carlo Techniques
for Economic Capital Estimation
Christopher C. Finger
Austrian Workshop on Credit Risk Management
Vienna
February 2, 2001
Outline
• Introduction
– Quantities of interest -- portfolio capital, marginal capital
– Troubles with direct Monte Carlo
• Dealing with size -- portfolio compression
• Dealing with the model – importance sampling
• Dealing with the model – analytic marginal capital
• Conclusions
Model-based risk capital
• A natural way to define risk capital is as a level required to
guarantee solvency with some (high) degree of confidence.
• Once capital is established for the portfolio, examine the
contribution to capital of new positions or increased lines.
• Applications of capital contributions
– Internal credit charges or capital allocation, giving hurdle rates
of return or capital budgets
– Pricing -- charge for addition to capital, not just expected loss
VaR as risk capital
[Figure: distribution of portfolio value at horizon. Risk capital at level p is the gap between the expected horizon value (or the total notional value) and the worst-case horizon value at level p.]
Holding risk capital in this way assures that
the likelihood of bankruptcy-causing losses is p.
Contribution of a new exposure to portfolio risk capital
[Figure: distribution of the base portfolio plus the new exposure, against portfolio value at horizon, with the risk capital for the base portfolio and the larger risk capital for the new portfolio marked.]
Increase in capital comes from an increase in the total portfolio value
and a decrease in the worst case level.
VaR as capital
• Analytic shortcuts are available for marginal standard
deviation, but marginal VaR is more difficult.
• A common practice is to define capital as a multiple of
standard deviation and use the previous results.
• An established but little used result:
– The derivative of VaR with respect to a single exposure weight
is the conditional expectation, given that the realized loss is
VaR, of the loss on the exposure in question.
• This leads to the base capital term, but the size penalty is
more difficult to obtain.
The trouble with standard Monte Carlo
• The model presents a tough problem:
– Small default probabilities
– Discrete exposure distributions
– Portfolio distribution smoothes out very slowly
• Typical applications make things harder
– Large portfolios
– Economic capital = extreme portfolio percentiles
– Focus on capital contributions; values can be comparable to
portfolio MC error
• “Going faster when you’re lost don’t help a bit.”
Naïve compression on ISDA benchmark portfolio
• Initial portfolio
– $30 billion total value, 1680 exposures
– Investment grade, average rating of A
– Diversified across nine industries, average correlation of 43%
• Compressed portfolio
– Maintain largest 2.3% of exposures
– Bucket remaining exposures homogeneously by industry/rating
– Resulting portfolio has 478 (28% of 1680) exposures
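The bucketing step described above can be sketched in a few lines. This is a minimal illustration, not RiskMetrics code: the `compress` helper and the `size`/`industry`/`rating` keys are hypothetical. It keeps the largest names individually and collapses the rest into one homogeneous group per industry/rating bucket, preserving each bucket's total notional.

```python
from collections import defaultdict

def compress(exposures, keep_fraction=0.023):
    """Naive portfolio compression: keep the largest exposures
    individually and replace the rest with one homogeneous group per
    industry/rating pair. `exposures` is a list of dicts with
    hypothetical keys 'size', 'industry', 'rating'."""
    ranked = sorted(exposures, key=lambda e: e["size"], reverse=True)
    n_keep = max(1, int(round(keep_fraction * len(ranked))))
    kept, rest = ranked[:n_keep], ranked[n_keep:]

    # Group the tail by industry/rating bucket.
    buckets = defaultdict(list)
    for e in rest:
        buckets[(e["industry"], e["rating"])].append(e["size"])

    compressed = list(kept)
    for (industry, rating), sizes in buckets.items():
        n = len(sizes)
        avg = sum(sizes) / n
        # One homogeneous group of n average-size names; total
        # notional of the bucket is preserved exactly.
        compressed.append({"size": avg * n, "industry": industry,
                           "rating": rating, "count": n})
    return compressed
```

Because the tail is replaced by average-size names within each bucket, the compressed portfolio matches the original in total value and broad composition while carrying far fewer positions.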
Size distribution of original and compressed portfolios
[Figure: cumulative size against fraction of exposures for the original and compressed portfolios. In the original portfolio, 20% of the exposures account for 70% of the portfolio size; the compressed portfolio is almost homogeneous.]
Comparison of portfolio results
[Figure: cumulative probability against portfolio value ($B) for the original and compressed portfolios: a near perfect fit in the middle of the distribution and a very good fit in the tails as well.]
Comparison of portfolio results
                   Original   Compressed   % Diff
  Mean ($B)          32.7       32.7        0
  St. Dev. (bp)      45.9       46.2        0.6
  5% loss (bp)       11.2       11.4        1.8
  1% loss (bp)       150        150        -0.2
  0.1% loss (bp)     633        679         7.6
All simulation estimates are within one standard error.
Comparison of marginal statistics
• Add an additional exposure in the most concentrated industry
– for each investment grade rating
– “small” exposure
• average size in the base portfolio ($18M, 0.06% of the portfolio)
– “large” exposure
• maximum size in the base portfolio ($74M, 0.25% of the portfolio)
• Capital statistics
– increase in portfolio standard deviation
– increase in 0.1% loss
• Report increase as percentage of the new exposure size
Comparison of marginal statistics
[Figure: marginal capital by rating (Aaa, Aa, A, Baa) for the original and compressed portfolios, in four panels: St. Dev. for a small exposure, St. Dev. for a large exposure, 0.1% loss for a small exposure, and 0.1% loss for a large exposure. The compressed portfolio overestimates capital in some cases and underestimates it in others.]
The real problem is the lack of convergence with any method.
Overview of CreditMetrics
Single exposures follow a discrete distribution. A BBB obligor today can move to one of eight states at the horizon, with probabilities determined exogenously to the model:

  State at horizon   AAA      AA       A        BBB      BB      B       CCC     Default
  Probability        0.00%    0.11%    5.28%    86.71%   6.12%   1.27%   0.23%   0.28%
  Instrument value   100.9%   100.8%   100.7%   Par      97.5%   95.8%   83.2%   Rec.
Overview of CreditMetrics
Correlations driven by asset value distributions
• Assume a connection between asset value and credit rating.
• Transition probabilities give us “thresholds” on the asset return over one year:

    Z_Def < Z_CCC < Z_B < Z_BB < Z_BBB < Z_A < Z_AA

  The first threshold is set so the tail below Z_Def contains the default probability, the second so the next region contains the CCC probability, and so on.
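The threshold construction can be sketched directly from a transition-probability row. A minimal illustration using only the standard library; the `rating_thresholds` helper is hypothetical, not CreditMetrics code:

```python
from statistics import NormalDist

def rating_thresholds(probs):
    """Given a transition-probability row ordered from the best state
    (e.g. AAA) down to Default, return the asset-return thresholds
    [Z_Def, Z_CCC, ..., Z_AA]: the tail below Z_Def carries the
    default probability, the next region the CCC probability, etc."""
    nd = NormalDist()
    cum = 0.0
    thresholds = []
    for p in probs[:0:-1]:  # from Default up to the second-best state
        cum += p
        # Guard against the cumulative probability reaching 1 (top
        # states with zero probability): the threshold is +infinity.
        thresholds.append(nd.inv_cdf(cum) if cum < 1.0 else float("inf"))
    return thresholds
```

Applied to the BBB row above, the first threshold is the inverse normal CDF at 0.28%, roughly a 2.8-standard-deviation drop in asset return.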
Overview of CreditMetrics
One correlation parameter gives all joint probabilities
[Figure: joint asset returns of Obligor 1 and Obligor 2. Similar asset returns produce joint defaults; opposite asset returns produce different credit moves.]
Equity factor model gives obligor correlations
based on mappings to industry indices.
[Figure: factor structure. The first obligor depends strongly on its industry (the Banking index) and somewhat on specific, idiosyncratic movements; the second obligor depends weakly on its industry (the Beverage index) and mostly on specific movements. The industry indices themselves are correlated.]
With few factors, conditioning on market move
makes many calculations easier.
[Figure: the unconditional asset distribution and the asset distribution conditional on a down factor move; the area below the default threshold under the conditional density is the conditional default probability.]
Conditional on factor move, all rating changes are independent.
To be more specific, at least in a simple case …
• Default condition -- obligor i defaults if its asset return A_i falls below the default threshold Z.
• Represent obligors through regressions on a common index X:

    A_i = w X + sqrt(1 - w^2) ε_i

• Given a value for the index, the conditional default condition

    ε_i < (Z - w X) / sqrt(1 - w^2)

leads to the conditional default probability

    p(X) = Φ( (Z - w X) / sqrt(1 - w^2) )

• Given X, defaults are conditionally independent and the portfolio
follows a binomial distribution.
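In this simple homogeneous case the whole loss distribution can be built by mixing the conditional binomial over the factor. A minimal sketch under the stated one-factor assumptions; the function names and the grid integration are illustrative, not the talk's implementation:

```python
from math import comb, exp, pi, sqrt
from statistics import NormalDist

nd = NormalDist()

def cond_default_prob(z_def, w, x):
    """p(X) = Phi((Z - w*X) / sqrt(1 - w^2)): one obligor's default
    probability conditional on the factor return X."""
    return nd.cdf((z_def - w * x) / sqrt(1.0 - w * w))

def default_count_pmf(n_obligors, z_def, w, n_grid=2001, span=6.0):
    """Unconditional distribution of the number of defaults: given X
    the count is Binomial(n, p(X)); integrate over the standard
    normal factor on a simple grid."""
    dx = 2.0 * span / (n_grid - 1)
    pmf = [0.0] * (n_obligors + 1)
    for i in range(n_grid):
        x = -span + i * dx
        weight = exp(-0.5 * x * x) / sqrt(2.0 * pi) * dx  # phi(x) dx
        p = cond_default_prob(z_def, w, x)
        for k in range(n_obligors + 1):
            pmf[k] += weight * comb(n_obligors, k) * p**k * (1.0 - p)**(n_obligors - k)
    return pmf
```

Since the unconditional default probability is Φ(Z), the mean of the mixed distribution recovers n·Φ(Z), which is a quick sanity check on the conditioning.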
Example portfolio
• Proxy a large bank lending book by two homogeneous
groups (and one common factor):
– A-rated -- 1700 exposures representing 75% of holdings,
20% asset correlation within group
– BB-rated -- 400 exposures representing 25% of holdings,
50% asset correlation within group
– 32% asset correlation between groups
• Total notional of $60B, “unit exposure” of $25M
• Higher concentration in investment grade, but lower grade
exposures are larger and more highly correlated.
Biggest issue is high sensitivity to extreme factor moves
[Figure: portfolio value against factor return. There are many factor scenarios where value does not change; value changes the most where we do not simulate much.]
Importance sampling involves “cheating” and forcing
the scenarios where they are most interesting
[Figure: two panels of portfolio value against factor return. Shifting the factor scenarios into the region where the portfolio is more sensitive means the new scenarios capture the sensitive area better.]
Importance sampling stated in mathematical terms
• Goal is to estimate the expectation of V = v(X), where X ~ f

                        Straight MC             Importance sampling
  Integral expression   ∫ v(x) f(x) dx          ∫ v(x) [f(x) / g(x)] g(x) dx
  Random variates       X_i ~ f                 Y_i ~ g
  Estimate of V         (1/n) Σ_i v(X_i)        (1/n) Σ_i v(Y_i) f(Y_i) / g(Y_i)
• Trick is to choose g to reduce the variance of the estimate
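A one-factor version of the estimator is easy to sketch: sample the factor from a shifted normal g and reweight each draw by the likelihood ratio f/g. A minimal illustration; the `is_estimate` helper and the choice of a simple mean shift are assumptions, not the talk's implementation:

```python
import random
from math import exp

def is_estimate(v, n, shift, seed=0):
    """Estimate E[v(X)] for X ~ N(0,1) by sampling Y ~ N(shift, 1)
    and weighting each draw by the likelihood ratio
    f(Y)/g(Y) = exp(-shift*Y + shift**2 / 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)
        total += v(y) * exp(-shift * y + 0.5 * shift * shift)
    return total / n
```

For a deep tail probability such as P(X < -3) ≈ 0.00135, shifting the sampling mean to -3 puts roughly half the scenarios where the payoff is nonzero, which is exactly the "cheating" described above; direct sampling would see such a scenario only about once per thousand draws.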
Result is greater precision with fewer scenarios
required, particularly at extreme loss levels
[Figure: loss estimates at the 5%, 1%, and 0.1% percentile levels. Left panel: 2,000 simulations optimized for the 5% loss, direct Monte Carlo against importance sampling. Right panel: the 0.1% loss with 10,000 simulations for direct MC against only 100 for importance sampling.]
Intuition for the conditional loss result comes from
considering Monte Carlo estimation of VaR
• Suppose 1000 scenarios to estimate 1% VaR
• Losses in each scenario (in descending order of total loss)

  Scenario   Pos. 1   Pos. 2   …   Pos. N   Total
  1          37       12       …   60       2500
  2          35       39       …   57       2312
  3          29       27       …   58       2297
  …          …        …        …   …        …
  9          32       10       …   62       1689
  10         31       31       …   54       1500
  11         28       23       …   53       1476
• A small position change will not change the ordering, so VaR will
change by the amount that the position's loss changes in scenario 10.
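This intuition takes only a few lines of code. A minimal sketch (the `marginal_var` helper is illustrative): sort scenarios by total loss, find the VaR scenario, and read off the position's own loss there as its marginal VaR.

```python
import math

def marginal_var(total_losses, position_losses, q):
    """With n scenarios sorted by total loss in descending order, VaR
    at level q is the total loss in scenario ceil(q*n); the marginal
    VaR of a position is its own loss in that same scenario (the
    conditional expected position loss given L = VaR)."""
    n = len(total_losses)
    order = sorted(range(n), key=lambda i: total_losses[i], reverse=True)
    k = order[math.ceil(q * n) - 1]  # index of the VaR scenario
    return total_losses[k], position_losses[k]
```

In the scenario table above, with 1,000 scenarios the 1% VaR scenario is the 10th worst, so the marginal VaR of each position is simply its own loss in scenario 10.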
The factor is the greatest determinant of portfolio value
[Figure: MC scenarios of portfolio value ($M) against factor return, with conditional-standard-deviation bands. Examine the conditional portfolio distribution, given the factor return.]
Since exposures are conditionally independent, the
conditional portfolio distribution is close to normal
[Figure: the conditional portfolio value distribution, which is close to normal.]
Practical assumptions on the conditional distribution
• Assumptions
  1. One factor drives all correlations
  2. The factor is normally distributed
  3. The portfolio is conditionally normal
  4. Exposures are independent given the factor return
• The first is not necessary, though results are only practical for a reduced set of factors
• The second is part of the CreditMetrics assumptions, but can be relaxed
• The third is not essential (results require a small modification for an arbitrary standardized distribution)
• The fourth is crucial to the analysis
Size penalty analytic results
• Notation:
  – L – portfolio loss; l_q – portfolio VaR at level q
  – l(Z) – conditional portfolio loss; l_i(Z) – conditional exposure loss
  – σ(Z) – conditional portfolio SD; σ_i(Z) – conditional exposure SD
• Capital estimates are expectations over the conditional distribution of the factor, given that VaR is realized
[Figure: portfolio value against factor return, highlighting the factor region where the portfolio value equals the VaR level.]
Size penalty analytic results
• Base capital (conditional expected loss on a unit exposure)

    B = E[ l_i(Z) | L = l_q ]

• Size penalty

    S = (1/2) · E[ ( σ_i²(Z) + (l_i(Z) − B)² ) · (l_q − l(Z)) / σ²(Z) | L = l_q ]

  – σ_i²(Z) is the conditional variance of the new exposure and (l_i(Z) − B)² is the variance of the conditional mean; together they form the variance contribution of the new exposure
  – the factor (l_q − l(Z)) gives a positive contribution if the factor move is less than expected, negative otherwise
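The base-capital term can be checked by brute force: sample the factor, draw the portfolio loss from its conditional normal distribution (assumption 3 above), and average the exposure's conditional loss over scenarios that land near the VaR level. This is a rough sketch of the definition, not the analytic result; the `base_capital` helper, its arguments, and the band-conditioning device are all assumptions for illustration:

```python
import random

def base_capital(l_i, cond_mean, cond_sd, l_q, n=200_000, band=0.01, seed=1):
    """Monte Carlo estimate of B = E[ l_i(Z) | L = l_q ].
    l_i(z): conditional expected loss of the new exposure;
    cond_mean(z), cond_sd(z): conditional mean/SD of portfolio loss.
    Conditioning on {L = l_q} is approximated by a narrow band."""
    rng = random.Random(seed)
    total, hits = 0.0, 0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        loss = rng.gauss(cond_mean(z), cond_sd(z))
        if abs(loss - l_q) <= band * abs(l_q):
            total += l_i(z)
            hits += 1
    return total / hits if hits else float("nan")
```

Because the portfolio only reaches l_q on large adverse factor moves, B comes out well above the exposure's unconditional expected loss, which is the sense in which capital charges exceed expected-loss charges.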
Why is this useful?
Capital at 50bp for additional investment grade exposure
[Figure: capital (bp) against unit exposure for an additional investment grade exposure at the 50bp loss level.]
Capital and size penalty
for additional investment grade exposure
[Figure: capital (bp) against unit exposure for an additional investment grade exposure, at the 10bp, 50bp, 1%, and 5% loss levels.]
Capital and size penalty
for additional speculative grade exposure
[Figure: capital (%) against unit exposure for an additional speculative grade exposure, at the 10bp, 50bp, 1%, and 5% loss levels.]
Express capital charges as a grid
• Base capital (basis points)

                 5%      1%      50bp    10bp
  Inv grade      26.8    47.7    57.3    90.8
  Spec grade     695     1070    1220    1650

• Incremental capital (basis points) per unit exposure

                 5%      1%      50bp    10bp
  Inv grade      0.15    0.23    0.27    0.43
  Spec grade     22.2    26.2    27.5    30.9
Conclusions
• Inherent features of a direct Monte Carlo approach will cause
convergence problems, particularly with capital calculations
• Practical assumptions go a long way, regardless of the model
– Portfolio capital based on compressed portfolio
– Capital contribution based on generic new exposures rather than
for each unique exposure in the portfolio
• For CreditMetrics particularly, a reduced factor approach
allows for variance reduction and hybrid techniques for the
most difficult quantities to obtain through Monte Carlo