Type I and type II errors

shanlane

Active Member
Hello,

I know this may be splitting hairs, but certain readings refer to Type I errors (the Neyman-Pearson rule) as accepting a bad model, while other readings refer to Type II errors (VaR backtesting) as accepting a bad model. I know it all depends on what the null hypothesis is, but is there a convention the test uses? In other words, are we supposed to assume that the null is that the model is good? That the model is bad? Will the test have to explicitly tell us what the null is?

Thanks!

Shannon
 

David Harper CFA FRM
Subscriber
Hi Shannon,

That is an astute observation, and it is a genuine counter-example (i.e., the Neyman-Pearson rule, at least as de Servigny defines it, treats the Type I error as the more costly error of extending a loan to a company that subsequently defaults, so its null is "the company will default") ... however, I have come to regard this as the exception.

It is mechanically easier to swap the definition of the null in the default-prediction setting; i.e., to go from null = "will default" to null = "won't default."
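To make the swap concrete, here is a tiny Python sketch (purely illustrative; the helper function and labels are my own, not from de Servigny, and it assumes a binary outcome). The very same wrong decision, lending to a company that then defaults, is a Type I error under one null and a Type II error under the other:

```python
def error_type(null: str, truth: str, decision: str) -> str:
    """Classify a wrong decision as Type I or Type II under a given null.

    Type I  = rejecting the null when it is true.
    Type II = failing to reject the null when it is false.
    """
    if decision == truth:
        return "no error"
    # A wrong binary decision either rejects a true null (Type I)
    # or retains a false null (Type II).
    return "Type I" if truth == null else "Type II"

# The costly mistake: we decide "won't default" (extend the loan),
# but the company truly defaults.
print(error_type(null="will default", truth="will default", decision="won't default"))
# -> Type I (de Servigny's framing: we rejected a true null)
print(error_type(null="won't default", truth="will default", decision="won't default"))
# -> Type II (swapped null: we retained a false null)
```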

But this is not the case with the VaR backtest: if we want to define the VaR backtest null as "our 99.0% VaR model is inaccurate," we are left with the question of what the inaccuracy level is (95%? 90%?). Basel's own market risk VaR backtest illustrates this (Table 1, page 320): there is only one column for Type I errors but several columns for Type II errors, because in the event of a Type II error it is unclear what the true accuracy is!
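To see why a single Type II column is impossible, here is a short sketch using scipy's binomial functions (the 250-day window matches the Basel table; the reject-at-5-exceptions cutoff is my own hypothetical rule, chosen because 5 exceptions begins the Basel yellow zone). The Type I probability is pinned down by the null, but the Type II probability shifts with every assumed "true" coverage:

```python
from scipy.stats import binom

n = 250        # backtest window in trading days, as in the Basel table
cutoff = 5     # hypothetical decision rule: reject the model at 5+ exceptions

# Type I: the null ("99% VaR is accurate") is true, so exceptions are
# Binomial(250, 0.01), yet sampling variation carries us past the cutoff.
# The null fully determines this probability, hence one column.
p_type1 = binom.sf(cutoff - 1, n, 0.01)        # P(X >= 5 | p = 0.01)
print(f"P(Type I)  = {p_type1:.4f}")

# Type II: the model is actually inaccurate but we fail to reject it.
# "Inaccurate" has no single meaning (really 98% coverage? 97%?), so the
# probability changes with each assumed truth, hence several columns.
for true_coverage in (0.98, 0.97, 0.96):
    p_type2 = binom.cdf(cutoff - 1, n, 1 - true_coverage)   # P(X <= 4 | true p)
    print(f"P(Type II) = {p_type2:.4f} if the model is really {true_coverage:.0%} VaR")
```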

I think it is therefore much more natural to expect that the null is that the model is good (e.g., null: the 99% VaR model is accurate), where the significance level (1 - confidence) then equals the probability of a Type I error. For example, if the null is that the 99% VaR model is good/accurate, then 1% is the probability that we mistakenly reject the good model: the mean of our distribution is accurate, but due to sampling variation we land in the 1% rejection tail anyway and make the "mistake" of rejecting. This is consistent with our intuition in selecting the model ("do we want a 1% or 5% model?").
... consider the alternative (no pun intended). If the null is that the model is bad, then the Type I error becomes the probability that we mistakenly accept the bad model (which I think contradicts our intuitive setup).
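As a quick numerical check of the "significance level = probability of Type I error" point above (again only a sketch; the 250-day window and the 5% significance level are my own assumptions): back out the rejection cutoff implied by the chosen significance, and the tail probability beyond it is exactly the chance of rejecting a genuinely accurate model.

```python
from scipy.stats import binom

n, p_null = 250, 0.01   # null: the 99% VaR model is accurate over 250 days
alpha = 0.05            # chosen significance level (1 - confidence)

# Find the smallest exception count k whose upper-tail probability,
# under the null, does not exceed alpha.
k = 0
while binom.sf(k - 1, n, p_null) > alpha:
    k += 1

# Because the binomial is discrete, the realized Type I probability
# sits at or just below the chosen significance level.
p_type1 = binom.sf(k - 1, n, p_null)
print(f"Reject the model at {k}+ exceptions: P(Type I) = {p_type1:.4f} (alpha = {alpha})")
```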

In summary (sorry for the length): the Bernoulli default (yes/no) setting is easier to treat than the market risk VaR, but I recommend expecting null hypothesis = good model (if nothing else, it is consistent with Basel). Thanks,
 