Monte Carlo and GBM

hsuwang

Member
Hello David,

I don't know if this is something you've already talked about, but forgive my short memory. If Monte Carlo simulation is driven by a GBM process, and since GBM does not model fat tails, what is the adjustment that needs to be made for MCS to model heavy tails? Is it the use of GARCH(1,1)? But if we use GARCH(1,1), wouldn't it model mean reversion, which we don't want for stock prices? Or, better, do we use EWMA instead?


Thanks!
 

David Harper CFA FRM

Subscriber
Hi Jack & Ajsa,

We only use GBM to introduce MCS; i.e., Jorion says it's "common" for equities (not sure I agree) and, of course, GBM is the process we study in Hull.

So, Jack your point is good. MCS is *very* flexible and by no means requires GBM. This is a whole field unto itself (for further study, I highly recommend Vol IV of Carol Alexander's Market Risk Analysis).

In MCS, we can use different distributional assumptions (heavy-tailed). Using GARCH in MCS is common (EWMA is possible, too); further, there are variations on the plain-vanilla GARCH(1,1) that we study. Also, as it's more practical to model portfolios of risk factors, MCS can incorporate dependencies/correlations between risk factors. So, our "learning assumption" is normal i.i.d., but MCS can incorporate:

* non-normal distributions (e.g., student's t); C.A. really showed me how powerful simple normal-mixture distributions can be (mix two normals and you get fat tails)
* non independent time series (positive autocorrelation implies greater VaR than i.i.d.)
* time varying variance (GARCH)
* dependence between risk factors

(probably other dimensions I haven't thought of)
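The normal-mixture point above can be sketched in a few lines of Python (all parameters here are illustrative assumptions, not from the thread): mix two normals with different volatilities and the result shows clearly positive excess kurtosis, i.e., fat tails, even though each component is normal.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative (assumed) parameters: with prob 0.9 draw from a "calm"
# normal, with prob 0.1 from a "stressed" normal with 3x the volatility.
sigma_calm, sigma_stress, p_stress = 0.01, 0.03, 0.10

normal = rng.normal(0.0, sigma_calm, n)
regime = rng.random(n) < p_stress
mixture = np.where(regime,
                   rng.normal(0.0, sigma_stress, n),
                   rng.normal(0.0, sigma_calm, n))

def excess_kurtosis(x):
    # Sample excess kurtosis: zero for a normal, positive for fat tails
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

print(excess_kurtosis(normal))   # near zero
print(excess_kurtosis(mixture))  # clearly positive: fat tails
```

For these parameters, the mixture's theoretical excess kurtosis is about 5.3, even though both components are themselves normal.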

Re: "GARCH(1,1), wouldn’t it model mean-reversion we don’t want for stock prices?"
In practice, the weight on the mean reversion factor (i.e., the gamma weight) is low ... I've not seen a dataset with a truly strong pull to the mean ... the tendency of volatility to CLUSTER (the autoregressive "AR" in GARCH)--i.e., high vol tending to follow high vol--gets more weight. C.A. recommends AGARCH over GARCH: it produces higher volatility following a negative return than a positive return.
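To see the clustering in simulation, here is a minimal GARCH(1,1) path generator (parameters are assumed for illustration): note how small the weight on the long-run variance, gamma = 1 - alpha - beta, is relative to the persistence alpha + beta, and that the squared returns come out positively autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative (assumed) GARCH(1,1) parameters: gamma = 1 - alpha - beta
# is the small weight on the long-run variance.
alpha, beta = 0.08, 0.90
long_run_var = 0.01**2
omega = (1 - alpha - beta) * long_run_var   # omega = gamma * V_L

n = 2_000
returns = np.empty(n)
var = long_run_var
for t in range(n):
    returns[t] = rng.normal(0.0, np.sqrt(var))
    # GARCH(1,1) update: next variance from omega, last return^2, last variance
    var = omega + alpha * returns[t]**2 + beta * var

# Volatility clustering: squared returns are positively autocorrelated
r2 = returns**2
ac1 = np.corrcoef(r2[:-1], r2[1:])[0, 1]
print(ac1)
```

With these parameters, the first-order autocorrelation of squared returns is on the order of 0.2: high vol tends to follow high vol, exactly the clustering described above.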

...so, I would say your question sort of highlights the chief advantage of MCS: design flexibility.

David
 

David Harper CFA FRM

Subscriber
Hi ajsa,

EWMA is without mean reversion
(if the weight, gamma, in GARCH is set to zero, then GARCH has no mean reversion and "collapses" to EWMA)

it may be tempting to say the long-run variance = 0, because omega = gamma * V_L = 0 * V_L = 0 anyway,
but that would require alpha + beta < 1.0 in GARCH...(!)...and GARCH would then effectively mean revert to a variance of zero.
...so a key point here is that for both EWMA & GARCH, the weights sum to 1.0 and:
EWMA has no weight to assign to a long run variance but GARCH does.

it's a common test question: GARCH mean reverts (owing to omega term), but EWMA does not.
(and also, for this reason, the forecast of EWMA is today's EWMA: a straight line going forward;
GARCH forecast is only interesting b/c of the mean reversion term!)
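The contrast shows up directly in the k-day-ahead forecast. Using the standard result (per Hull) that the expected future variance is V_L + (alpha + beta)^k * (current variance - V_L), a short sketch with assumed illustrative parameters:

```python
import numpy as np

# Assumed parameters for illustration. GARCH's gamma = 1 - alpha - beta
# gives weight to the long-run variance V_L; EWMA has no such weight.
alpha, beta = 0.08, 0.90
V_L = 0.01**2
current_var = 0.02**2   # today's variance, above the long run

horizons = np.arange(0, 60)
# k-day-ahead expected variance: GARCH decays toward V_L at rate
# (alpha + beta)^k; EWMA's forecast is flat at today's variance.
garch_fcst = V_L + (alpha + beta)**horizons * (current_var - V_L)
ewma_fcst = np.full_like(garch_fcst, current_var)

print(garch_fcst[0], garch_fcst[-1])  # starts at current_var, decays toward V_L
```

The GARCH term structure is "interesting" precisely because of the mean reversion term: starting above the long-run variance, the forecast declines toward V_L, while the EWMA forecast is a flat line at today's variance.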

David
 

hsuwang

Member
Hello David,

Can you please check to see if I'm right with this because I think it is hard to grasp the concept:

If we are scaling a 1-day VaR to a 1-year VaR,
Positive autocorrelation will cause the square-root rule to "understate" the true VaR (with autocorrelation, VaR will be greater because a positive return is likely to be followed by another positive return).

but, instead if we are scaling a 1-year VaR down to a 1-day Var,
Positive autocorrelation will cause the square-root rule to "overstate" the true VaR.

Am I right?

Thank you!
 

David Harper CFA FRM

Subscriber
Hi Jack,

Exactly correct!

You can verify the scale up here:
http://www.bionicturtle.com/premium/spreadsheet/4.a.1_two_asset_var_relative_vs_absolute/

I have autocorrelation = 0.25, so the autocorrelated VaR (VaR under AR(1)) is greater than the i.i.d. VaR; i.e., the square-root rule (SRR) understates, as you say, when scaling up.

Then scaling down is the mind-bender: if you use the SRR to scale down, you end up with a daily VaR that, if scaled back up with AR(1), would be larger than your original. So the SRR must "overstate" when scaling down.
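This can be checked with the standard variance formula for a T-day sum of AR(1) returns, Var = sigma^2 * [T + 2 * sum_{i=1}^{T-1} (T - i) * rho^i]. A sketch with rho = 0.25 as in the XLS (T = 10 is an assumed horizon for illustration):

```python
import numpy as np

def scale_factor(T, rho):
    """Variance multiplier for a T-day sum of AR(1) returns with
    first-order autocorrelation rho (rho = 0 recovers the iid case, T)."""
    i = np.arange(1, T)
    return T + 2.0 * np.sum((T - i) * rho**i)

T, rho = 10, 0.25            # rho = 0.25 as in the linked XLS
iid_mult = np.sqrt(T)        # square-root rule multiplier
ar1_mult = np.sqrt(scale_factor(T, rho))

print(iid_mult, ar1_mult)    # AR(1) multiplier exceeds sqrt(T)
```

Since ar1_mult > sqrt(T), scaling a daily VaR up by sqrt(T) understates the true T-day VaR; and dividing a T-day VaR by sqrt(T) (rather than by the larger ar1_mult) leaves a daily VaR that overstates.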

David
 

ajsa

New Member
Hi David,

I understand EWMA's weight decays exponentially. I wonder if GARCH has the same property?

Thanks.
 

David Harper CFA FRM

Subscriber
Hi asja - yes, indeed, the lagged variance in GARCH(1,1) is the recursive solution to exponentially declining weights, just like EWMA except beta is the weight (decay factor) instead of lambda - David
 

ajsa

New Member
Hi David,

so is it also true that the weight (the alpha or (1-lamda)) of the innovation term also decays exponentially?

Thanks,.
 

David Harper CFA FRM

Subscriber
Hi asja,

No, the weighted return (innovation) is not the solution to the recursion, like the weighted variance is. If we think about the EWMA (and this part applies to GARCH, too), it is an infinite series (see Linda Allen Ch 2):

weight 1 = (1-lambda)
weight 2 = (1-lambda)*lambda^1
weight 3 = (1-lambda)*lambda^2

the EWMA applies these weights to the series of historical squared returns (see my XLS for illustration); i.e.,
weight 1 * lag return1^2
weight 2 * lag return2^2

..this is the "raw" EWMA: a historical series of squared returns, each given a declining weight.

Then the solution to the infinite series is the recursive EWMA, so in a very real sense, all of the historical squared-returns EXCEPT the most recent, are "rolled into" the lagged variance * lambda
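The equivalence of the "raw" weighted series and the recursive form can be verified numerically (simulated returns; lambda = 0.94, the familiar RiskMetrics value, is assumed here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.94
returns = rng.normal(0.0, 0.01, 500)   # simulated daily returns

# Raw EWMA: explicitly apply weights (1-lam)*lam^k to the lagged squared
# returns, most recent return first.
r2 = returns[::-1]**2                       # r2[0] is the latest return^2
weights = (1 - lam) * lam**np.arange(len(r2))
raw_ewma = np.sum(weights * r2)

# Recursive EWMA: all but the latest return^2 are "rolled into"
# lam * lagged variance at each step.
rec_var = 0.0
for r in returns:
    rec_var = lam * rec_var + (1 - lam) * r**2

print(raw_ewma, rec_var)   # identical up to floating point
```

The two numbers agree to machine precision: the recursion is exactly the collapsed form of the raw declining-weight series.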

David
 

ajsa

New Member
Hi David,

"weight 1 = (1-lambda)
weight 2 = (1-lambda)*lambda^1
weight 3 = (1-lambda)*lambda^2"
so I feel the weight of the lagged return^2 decays exponentially by lambda for each lag.. Am I missing anything?

Thanks.
 

ajsa

New Member
Hi David,

sorry for being verbose.. just to double check, can I say the weights of both the lagged variance and the lagged return^2 decay exponentially by lambda?

Thanks.
 

David Harper CFA FRM

Subscriber
Hi asja - No, the raw series is a series of weights that decline (by the ratio lambda) on the historical squared returns; this raw series has no lagged variance.
... the lagged variance gets "introduced" by the elegant recursive solution that collapses all the returns^2, except the most recent, into the lagged variance. So the term (lambda * lagged variance) contains all of that information...David
 

ajsa

New Member
Hi David,

I agree that the raw series has no lagged variance.. but you also said "the lagged variance in GARCH(1,1) is the recursive solution to exponentially declining weights, just like EWMA except beta is the weight (decay factor) instead of lambda". If we expand the recursion one level at a time, we get an older lagged variance whose weight declines by lambda or beta..

BTW, the variance is also called the observation, and return^2 is called the innovation, right?
Thanks.
 

David Harper CFA FRM

Subscriber
asja - yes, I guess I agree with the above; I've lost the point we are trying to figure out. "Innovation" - yes, I recognize that term; the other, not so much.

David
 