Hi asja,
(Great questions, I really admire the way you refuse to take things at face value!)
Although these formulas from Dowd use integrals (i.e., continuous functions), we can understand them in discrete terms. If we think about a uniform distribution of losses from 1 to 100, then each outcome has probability of 1/100 and f(x) = 1%. The probability property says the sum of f(x) must equal 1.0, and here that's true: 100 * 1% = 100%.
The ES(95%) is the average tail loss of the worst 5%. The 1/(1 - confidence) here is 1/(1 - 95%) = 1/5% = 20.
(As I mentioned in the tutorial, Dowd unfortunately uses alpha for confidence, whereas everywhere else, for us, alpha = significance = 1 - confidence.)
So, now instead of 100 outcomes with probability of 1% each, we have:
5 outcomes with probability of 1% * 20 = 20%
(and, as 5 * 20% = 1.0, this satisfies the "normalization" criterion... which essentially treats the 5% tail like its own probability distribution... this is what the EVT distributions do: they "zoom" into the tail with a distribution unto itself)
In the case of a continuous distribution, the 1/(1 - confidence) probably will make f(x) > 1.0, but that's fine: the area under the tail curve will be 1.0, as the probability is not f(x) but f(x)dx.
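To make the discrete version concrete, here is a small sketch (my own code, not from Dowd; the variable names are mine) that applies the 1/(1 - confidence) rescaling to a uniform loss distribution from 1 to 100:

```python
# Sketch: ES on a discrete uniform distribution of losses 1..100,
# each with probability 1/100, i.e., f(x) = 1%.

losses = list(range(1, 101))       # loss of 1 is mildest, 100 is worst
p = 1 / len(losses)                # 1% per outcome

conf = 0.95
scale = 1 / (1 - conf)             # = 20: rescales the tail probabilities

# The worst 5% are the 5 largest losses: 96..100
tail = [x for x in losses if x > conf * len(losses)]
tail_weights = [p * scale for _ in tail]    # each 1% * 20 = 20%

# "Normalization": the rescaled tail is itself a probability distribution
assert abs(sum(tail_weights) - 1.0) < 1e-9

es = sum(w * x for w, x in zip(tail_weights, tail))
print(round(es, 2))  # 98.0 = average of 96..100
```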
This leads to your second question... the weighted sum must equal 1.0 because the 1/(1 - alpha) weight is conditional on p >= alpha, and is zero for all other outcomes. To use the discrete analogy:
Given a discrete uniform distribution from 1 to 100, p(x) = 1%.
If conf = 90%, then under the ES, outcomes 1-90 get weight of 0, and 91-100 (ten outcomes) each get 10 * 1% = 10%, and 10 * 10% = 100%
(with ES @ 90% = the average of only those 10 tail losses, since all other outcomes are excluded), or
If conf = 95%, then under the ES, outcomes 1-95 get weight of 0, and 96-100 (five outcomes) each get 20 * 1% = 20%, and 5 * 20% = 100%.
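The conditional-weight view above can be sketched as a small helper (my own illustration; the function name is hypothetical, not from Dowd): below-threshold outcomes get weight zero, tail outcomes get p * 1/(1 - conf), and the weights must sum to 1.0.

```python
# Sketch: ES of a discrete uniform loss distribution 1..n, using the
# conditional weights 0 (below the confidence threshold) or p/(1-conf).

def es_discrete_uniform(n, conf):
    p = 1 / n
    scale = 1 / (1 - conf)
    weights = [0.0 if x <= conf * n else p * scale
               for x in range(1, n + 1)]
    # Definitional requirement: weights form a probability distribution
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * x for w, x in zip(weights, range(1, n + 1)))

print(round(es_discrete_uniform(100, 0.90), 2))  # 95.5 = average of 91..100
print(round(es_discrete_uniform(100, 0.95), 2))  # 98.0 = average of 96..100
```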
In case you didn't notice, I added an ES calc to today's OpRisk practice question from the sample exam. You'll note the 95% ES uses a 1/(1 - 95%) to get the average in the tail; the XLS may be helpful. See: http://forum.bionicturtle.com/viewthread/1422/
Below is a 2006 practice question:
===========
2006.5. Given the following 30 ordered simulated percentage returns of an asset, calculate the
VaR and expected shortfall (both expressed in terms of returns) at a 90% confidence level.
-16, -14, -10, -7, -7, -5, -4, -4, -4, -3, -1, -1, 0, 0, 0, 1, 2, 2, 4, 6, 7, 8, 9,
11, 12, 12, 14, 18, 21, 23
a. VaR (90%) = 10, Expected shortfall = 14
b. VaR (90%) = 10, Expected shortfall = 15
c. VaR (90%) = 14, Expected shortfall = 15
d. VaR (90%) = 18, Expected shortfall = 22
ANSWER: B
Ten percent of the observations will fall at or below the 3rd lowest observation of
the 30 listed. Therefore, the VaR equals 10. The expected shortfall is the mean
of the observations exceeding the VaR. Thus, the expected shortfall equals (16
+ 14) / 2 = 15.
============
But if we use the weighted average method, the 3rd return (-10) needs to be included: (-16/30 - 14/30 - 10/30) / 0.1. Could you advise?
I'd like to noodle on this, but my instant reaction is: wow, you are right and the question is wrong (I think!).
First, let's understand that both ES = 15 and ES = 13.333 (yours) are valid ES values, but for different conditional quantiles (i.e., for different confidence thresholds).
I think, by averaging the final two, the answer mistakenly returns the ES @ 1 - 2/30; i.e., @ the 93.3rd %ile
... I hesitate because calibrating the quantile on a discrete distribution is open to variation/dispute (so, clearly the answer is not obvious)
... but your formula is indeed consistent with Dowd's formula (who is authoritative) for ES on a discrete distribution
I'll get back on this, but my initial (<100% certain) view is that you are correct.
...On the weighting: before the weights, it is just a probability distribution that must sum to 1.0 (by definition of a probability distribution; i.e., the area under a PDF must = 1.0 and the CDF must limit to 1.0... the probability distribution must "contain" all possible outcomes). Then the weights, as they often do, must sum to 1.0, or they would violate this definitional requirement of a distribution... David
For VaR in this case,
Jorion would return -10 @ 90% (i.e., 3rd datapoint), but
Dowd would return -7; i.e., the observation at rank 10% * (n + 1) = 3.1 from the worst, so effectively the 4th from the worst (the 27th ranked out of 30)
(or Dowd is okay to use the midpoint between -10 and -7)
...so this is an "open issue" with discrete distributions
...however, it still *seems* to me that the answer's approach (not yours) is taking the average of a 6.7% (i.e., 2/30) tail
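For completeness, here is a sketch (my own code, just reproducing the thread's arithmetic; variable names are mine) that computes both readings of the 2006 question side by side:

```python
# The 30 ordered simulated percentage returns from the 2006 question
returns = [-16, -14, -10, -7, -7, -5, -4, -4, -4, -3, -1, -1, 0, 0, 0,
           1, 2, 2, 4, 6, 7, 8, 9, 11, 12, 12, 14, 18, 21, 23]
n = len(returns)           # 30 observations
conf = 0.90

# The exam answer's approach: VaR at the 3rd worst observation,
# ES = average of only the losses beyond the VaR (a 2/30 = 6.7% tail)
ordered = sorted(returns)
var_jorion = -ordered[2]                        # 3rd worst = -10, so VaR = 10
beyond_var = [r for r in ordered if r < -var_jorion]
es_answer = -sum(beyond_var) / len(beyond_var)  # (16 + 14) / 2 = 15

# The weighted-average approach (consistent with Dowd's discrete formula):
# include every observation in the worst 10%, each weighted (1/n)/(1-conf)
k = round(n * (1 - conf))                       # worst 3 of 30
worst_10pct = ordered[:k]                       # [-16, -14, -10]
es_weighted = -sum(r / n for r in worst_10pct) / (1 - conf)

print(var_jorion, es_answer, round(es_weighted, 2))  # 10 15.0 13.33
```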