P2 Dowd Chapter 4 queries

amresh

Hi David/other members,
1. The AIMs don't mention order statistics or bootstrap methods for estimating confidence intervals. Can we skip them?
2. Could you please explain correlation-weighted HS?

Waiting on chapter 3 queries as well. :)

Happy learning,
Amresh
 

amresh

One more:
Dowd says an advantage of age-weighted HS is that it allows us to grow the sample size, yet under its demerits it is stated that the "effective sample size is reduced." Please explain.
 

David Harper CFA FRM

Hi @amresh If I may work your queries last in, first out (LIFO), your third question is the easiest. From Dowd:

Dowd (page 94, emphasis mine): "Finally, we can also modify age-weighting in a way that makes our risk estimates more efficient and effectively eliminates any remaining ghost effects. Since age-weighting allows the impact of past extreme events to decline as past events recede in time, it gives us the option of letting our sample size grow over time. (Why can’t we do this under equal-weighted HS? Because we would be stuck with ancient observations whose information content was assumed never to date.) Age-weighting allows us to let our sample period grow with each new observation, so we never throw potentially valuable information away. This would improve efficiency and eliminate ghost effects, because there would no longer be any ‘jumps’ in our sample resulting from old observations being thrown away.

However, age-weighting also reduces the effective sample size, other things being equal, and a sequence of major profits or losses can produce major distortions in its implied risk profile. In addition, Pritsker shows that even with age-weighting, VaR estimates can still be insufficiently responsive to changes in underlying risk. Furthermore, there is the disturbing point that the BRW approach is ad hoc, and that except for the special case where λ=1, we cannot point to any asset-return process for which the BRW approach is theoretically correct."

It does appear contradictory, but I think the keys are (1) the ghost effect and (2) his term "effective." Say λ = 0.92.
  • The reduction of effective sample size refers to how, really, only the most recent returns inform the estimate. If λ = 0.92, then notice that the most recent ten observations constitute over half of the total weight (56.56%) and the most recent twenty constitute over 80% (81.13%); see the sketch after this list. This is the meaning of reducing the effective sample size.
  • This "reduction" is actually consistent with the first statement which is the alleviation of the dreaded ghosting effect; i.e., where under a simple HS, an outlier has equal weight for its entire participation in the the window, but then drops off abruptly. Here in age-weighted, the weights are getting small in the distant tail (they are not much informing the "effective" sample) so they feather out softly rather than ghost in and out abruptly.
 

David Harper CFA FRM

Hi @amresh Sorry for the delay responding to your first two questions:
  1. Basically, yes, you can skip them. In regard to Chapter 4, the FRM's attention is really on basic historical simulation, bootstrapped HS, and the conceptual (not quantitative) weighted HS approaches. As you seem to understand the difference, we do care about bootstrapped HS as a means to generate VaR and/or ES (this is a common method; many would even say it's the best), but the FRM historically has not gotten into the more advanced issue of bootstrapping the confidence intervals (e.g., I just can't imagine a query into Dowd 4.3.2)
  2. I don't personally have experience with the correlation-weighted HS (I'm not aware that it's ever come up on the FRM exam). Dowd says "We can also adjust our historical returns to reflect changes between historical and current correlations," such that conceptually I view it as a generalization of the volatility-weighted HS ("This approach is a major generalisation of the HW approach, because it gives us a weighting system that takes account of correlations as well as volatilities."). I would summarize Dowd's approach in the following way:
  • Whereas basic historical simulation gives equal weight to all returns in the measurement window, Dowd introduces several "improvements" that adjust the historical returns in order to achieve a more predictive VaR or coherent (e.g., ES) measure, including age-weighted, which is (by far) the most common
  • Correlation- or covariance-weighting generalizes volatility-weighted HS; recall that the two-position covariance = volatility(1) × volatility(2) × correlation(1,2), such that the n-position covariance matrix is the matrix product (diagonal volatility matrix) × (correlation matrix) × (diagonal volatility matrix).
  • In a manner similar to how we can use the Cholesky decomposition to generate correlated random standard normals (the Cholesky decomposition has historically been assigned in the FRM but currently doesn't have a link due to the new MCS reading), the Cholesky can be used to transform historical returns into returns that reflect the current correlation/covariance matrix. That's the whole point, as Dowd writes in Example 4.1: "The historical correlation between our two positions is 0.3, and we wish to adjust our historical returns R to reflect a current correlation of 0.9." There is a sketch of the mechanics below. I hope that helps
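Purely as illustration (this is my own sketch with simulated data; only the 0.3 and 0.9 correlations come from Dowd's Example 4.1 setup), the adjustment un-does the old Cholesky factor and imposes the new one:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative data: two return series with a historical correlation near 0.3
T = 1000
C_hist_target = np.array([[1.0, 0.3],
                          [0.3, 1.0]])
R = np.linalg.cholesky(C_hist_target) @ rng.standard_normal((2, T))

# Desired "current" correlation of 0.9, per the Example 4.1 setup
C_new = np.array([[1.0, 0.9],
                  [0.9, 1.0]])

# Cholesky factors: each correlation matrix C satisfies C = A @ A.T
C_hist = np.corrcoef(R)              # empirical historical correlation
A_hist = np.linalg.cholesky(C_hist)
A_new = np.linalg.cholesky(C_new)

# Standardize, un-do the old structure, then impose the new one:
# R_adj = A_new @ inv(A_hist) @ R_std
R_std = R / R.std(axis=1, keepdims=True)
R_adj = A_new @ np.linalg.solve(A_hist, R_std)

print(round(np.corrcoef(R)[0, 1], 3))      # ~0.3 before the adjustment
print(round(np.corrcoef(R_adj)[0, 1], 3))  # 0.9 after the adjustment
```

The same diagonal-volatility product from the previous bullet, Σ = diag(σ) × C × diag(σ), extends this from correlation-weighting to full covariance-weighting.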
 

tosuhn

Hi @David Harper CFA FRM CIPM & @Nicole Manley, can I check on the testability of both correlation-weighted historical simulation and filtered historical simulation?
I noticed that the content is rather short in the notes and is not mentioned in the instructional videos.
Hope to hear from you soon!

Many thanks.
Regards,
Sun
 

ShaktiRathore

Please refer to the AIMs and read them carefully; look for keywords like define/describe/calculate, etc. These keywords' definitions are given in the FRM study guide, and you can infer from them what is actually desired by the AIM.
Thanks
 

David Harper CFA FRM

Hi @tosuhn I think (as usual) that @ShaktiRathore has a good point here. The AIM reads: "Distinguish among the age-weighted, the volatility-weighted, the correlation-weighted and the filtered historical simulation approaches." The use of distinguish is interesting (as opposed to describe or calculate); it is consistent with low testability of volatility-weighted, correlation-weighted, and filtered historical simulation. The age-weighted HS is the most testable; then "distinguish" is consistent with a superficial awareness of the difference between it and the others. Thanks,
 

bpdulog

Hi all,

I have a few questions myself on this chapter. Specifically the notes state:

"Simple HS only allows VaR estimate at discrete confidence levels."

What is meant by this statement?

In addition, we are shown how to perform a bootstrapped HS. What is the advantage of using this method over simple HS, age-weighted HS, or volatility-weighted HS? It has the ability to generate multiple estimates, but it seems like it isn't much more accurate in practice; is that correct? Thanks in advance
 

David Harper CFA FRM

@bpdulog
  • That is literally Dowd's statement ("Simple HS only allows VaR estimate at discrete confidence levels.") but, having read the book dozens of times ;) I am certain he does not literally mean that simple HS cannot handle non-discrete confidence levels (itself an interesting term, yes?). He is referring to the classic challenge posed by simple HS: say you have 500 sorted losses (two years); then a 99.8% or 99.0% VaR is more natural to the dataset than 99.997% because 1/500 = 0.20%. It's not that we can't retrieve any 99.xxx% VaR, but rather that when applying simple HS at "non-discrete confidence levels" we have two (or really three or more) approaches depending on our interpolation method.
  • Re bootstrap: here is Dowd's appendix, which speaks to the advantage of bootstrapped HS http://trtl.bz/dowd-appendix-bootstrap The bottom line is that, because it simulates, it should give better (and easier to create) confidence intervals around the parameter estimates; see the sketch below. In my code practice and learning, I experience it as very common, maybe primarily because in code it's easier to just run a sample, especially if you aren't even sure how to deduce the analytical CI. I hope that helps!
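For what it's worth, here is a minimal Python sketch of both points, using simulated (illustrative) data: the "natural" quantiles of 500 sorted losses fall on multiples of 1/500 = 0.20%, and the bootstrap yields a confidence interval directly from the resampled VaR estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative data: 500 daily losses (~two years), fat-tailed
losses = rng.standard_t(df=5, size=500)

# Simple HS: 99% VaR is a quantile of the sorted losses; between the
# multiples of 0.20%, the answer depends on the interpolation choice
var_99 = np.quantile(losses, 0.99)  # default linear interpolation

# Bootstrapped HS: resample with replacement, re-estimate the 99% VaR each
# time, and read a confidence interval off the simulated distribution
B = 5000
boot_vars = np.array([
    np.quantile(rng.choice(losses, size=losses.size, replace=True), 0.99)
    for _ in range(B)
])
ci_lo, ci_hi = np.percentile(boot_vars, [2.5, 97.5])

print(f"simple HS 99% VaR:  {var_99:.3f}")
print(f"bootstrap mean VaR: {boot_vars.mean():.3f}")
print(f"bootstrap 95% CI:   [{ci_lo:.3f}, {ci_hi:.3f}]")
```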
 