It is a two-step process: first the counting, then the 'confidencing' (the statistical interpretation).
First, they count the exceptions (exceedances): how many times over the last year (250 trading days) did the actual loss exceed the VaR forecast? Statistically, this count is the test statistic.
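The counting step can be sketched in a few lines. This is a minimal illustration with made-up numbers, not any bank's actual procedure; the function name and toy data are assumptions:

```python
# Count VaR exceptions (exceedances): days on which the realised loss
# exceeded the VaR forecast. Hypothetical data, for illustration only.
def count_exceptions(pnl, var):
    """An exception occurs when the loss (negative P&L) exceeds the VaR."""
    return sum(1 for p, v in zip(pnl, var) if -p > v)

pnl = [-2.1, 0.5, -0.3, -1.8, 1.2]   # toy daily P&L
var = [1.5, 1.5, 1.5, 1.5, 1.5]      # toy 99% one-day VaR forecasts
print(count_exceptions(pnl, var))    # -> 2 (days 1 and 4 breach the VaR)
```

Over a real backtesting window the lists would hold 250 daily observations, and the resulting count is the statistic fed into the second step.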
Second, they interpret the statistic. The question is: "Is the model good? Could these exceptions plausibly be the ordinary randomness of an accurate model?" The null hypothesis is that the bank's model is accurate, so there are two possible errors: Type I = mistakenly rejecting the null (but oops, the model was good), and Type II = mistakenly accepting (failing to reject) the null (the model was actually bad). The three zones are merely their decision to prefer a Type I error over a Type II error. At 99% confidence, they 'red zone' a model that produces 10 exceptions out of 250. That is, an accurate model can statistically be expected to produce fewer than 10 exceptions about 99% (actually 99.97%) of the time. Since the rule is so biased against Type II errors, this *decision* implies they will reject some good models.
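Those zone probabilities come straight from the binomial distribution: under the null, the number of exceptions X in 250 days is Binomial(n = 250, p = 0.01). A quick sketch (standard library only) reproduces the traffic-light cutoffs; the function name is my own:

```python
from math import comb

def binom_cdf(k, n=250, p=0.01):
    """P(X <= k) for X ~ Binomial(n, p): the probability that an accurate
    99% VaR model produces at most k exceptions in n trading days."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Traffic-light boundaries for n = 250, p = 0.01:
print(round(binom_cdf(4), 4))   # ~0.8922: green zone is 0-4 exceptions
print(round(binom_cdf(9), 4))   # ~0.9997: red zone starts at 10 exceptions
```

So a good model stays at 9 or fewer exceptions about 99.97% of the time, which is why 10 exceptions triggers the red zone, and why the remaining ~0.03% of good models get rejected anyway (the accepted Type I error).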