R39-P2-T5 Tuckman Ch9 Model 1 - Simulation: Why always scale dw by SQRT(1/12) every month?

Karim_B

Active Member
Subscriber
Hi @David Harper CFA FRM
In R39-P2-T5 Tuckman Ch9 Model 1 - Simulation:
Why do we always scale the random value by SQRT(1/12) for every month when calculating dw?

Current calculation in P2.T5.Tuckman_Ch9.xlsx for the 3 month rate:
[screenshot]

Intuitively, I thought we should be scaling by SQRT(0.25), which is the size of a 3-month step.

The tree values make more sense to me because each node's value depends on the preceding values, so there is an evolution over time; in the simulation it seems we don't take the passing of time into consideration, and I'm not sure why.

Edit: Additional question - assuming the calculation above is correct, why do we show the (t) [or I think the video said it should be called dt] column in the table if we don't use it in the calculations?

Thanks
Karim
 

Karim_B

Active Member
Subscriber
Hi @David Harper CFA FRM
I've realized now that it's the r(t) values that build on each other, so it makes sense that we scale each value by a single time step.

However, I still don't get the purpose of the (t) [or dt] column in the table since it's not used in the calculations.

Screenshot for anyone who was confused like me where you can see the r(t) values build on each other:
[screenshot]

Thanks
Karim
 

David Harper CFA FRM

Subscriber
Hi @Karim_B Exactly. I do think the presentation can be improved (as is often the case!). I think you noticed that the model assumes one-month (Δt = 1/12), not three-month, time steps. There is a fundamental difference between the tree (upper panel) and the simulation (lower panel). The tree is a "map" of one standard deviation jumps; it does not purport to represent a future scenario path but rather is--at least the way that I think about it--a "visual definition" of the model. On the other hand, the simulation is a single randomized trial of a step-by-step path (evolution) of the short-term rate.

You are correct that the (t) column is not functional; I inserted it merely to show the timeline in yearly units (in addition to the monthly units already displayed). But I acknowledge this is a potentially confusing redundancy. The formula is dw = SQRT(dt)*NORM.S.INV(RAND()), where NORM.S.INV(RAND()) is simply a random standard normal; in this case dw = sqrt(1/12)*[z <-- N(0, 1)], such that dw is a random (non-standard) normal with µ = 0 and σ = sqrt(1/12).
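For a concrete picture, here is a minimal Python sketch of a Model 1 path simulated month by month (illustrative only: the 5.00% starting rate is an assumption, not necessarily the value in the XLS; Model 1 has no drift, so dr = σ*dw):

import numpy as np

rng = np.random.default_rng(0)
dt = 1/12          # one-month time step, used for every step
r = 0.0500         # assumed starting short rate (illustrative)
sigma = 0.0160     # annual basis-point volatility (1.60%)

for month in range(1, 13):            # simulate one year, month by month
    z = rng.standard_normal()         # random standard normal, N(0,1)
    dw = np.sqrt(dt) * z              # dw is scaled by sqrt(1/12) at every step
    r = r + sigma * dw                # Model 1: dr = sigma*dw; r(t) builds on the prior value
    print(f"month {month:2d}: r = {r:.4%}")

Each row only needs the one-month sqrt(1/12) because the passage of time is captured by r(t) accumulating the prior steps, not by stretching any single step.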

I will definitely incorporate your feedback when I iterate these XLS. Thank you!
 

QuantFFM

Member


Hi David,

I don't quite get dw and the scaling...

I don't get the link between dw as a normal random variable vs. a standard normal random variable, and the scaling by sqrt(dt).
[screenshot attached]



1. In a simple and practical way for the exam: do I always have to scale dw by sqrt(dt), or does it depend on whether it is a normal variable or a standard normal variable? When do I have to scale dw by sqrt(dt)?

2. Why is the annual volatility of 1.6% not scaled by sqrt(dt) to get the monthly volatility?


Thanks in advance, and have a good start to the week.

Regards
 

David Harper CFA FRM

Subscriber
Hi @QuantFFM dw is how Tuckman specifies the models; he scales the random standard normal rather than scaling the annual basis point volatility input (which would be more intuitive to me, too!). But I'm not sure it matters because the essential random shock is the product of three variables:

[random normal Z = N^(-1)(random p)] * σ[annual basis point volatility] * sqrt(Δt/12_months); i.e.,
  • Random normal Z: a random standard normal, by definition µ = 0, σ = 1.0
  • The annual basis point volatility; e.g., 1.60% or 160 basis points per annum. As usual, inputs should be in per annum terms
  • Scaling factor per the usual square root rule (SRR), which assumes i.i.d. Notice I elaborated the full SRR to sqrt(Δt/12_months) because the denominator is whatever the time dimension of the volatility input is; in this case, and as usual, per annum = 12 months. So to your second point, of course the 1-year volatility input can be scaled to monthly with 1.60% * SQRT(1 month/12 months), when our tree step is one month (i.e., the numerator) and our input volatility is per 12 months (i.e., the denominator); see the quick arithmetic just below this list. Because we can assume this re-scaled monthly volatility is normal, it is randomized by multiplying by a random standard normal Z.
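For example, just working the numbers with the 1.60% input already quoted above: 1.60% * SQRT(1/12) = 1.60% * 0.2887 ≈ 0.46% per month, which is the monthly basis-point volatility that multiplies the random standard normal Z.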
It's just the case that the models are specified as (random normal Z) * σ[annual basis point volatility] * sqrt(Δt) = [(random normal Z) * sqrt(Δt)] * σ[annual basis point volatility] = dw * σ[annual basis point volatility], so that dw is not a random standard normal but instead a random normal, with standard deviation of sqrt(Δt), that scales (i.e., is a multiplier on) the annual basis point volatility. To me it's no different than itemizing all three with Z*σ*sqrt(Δt), and if he gains an advantage by using (dw), I don't really know what it is! To your point, I don't know why his version is better than: (random normal Z) * σ[annual basis point volatility] * sqrt(Δt) = (random normal Z) * [sqrt(Δt) * σ(annual basis point volatility)] = Z * σ[t-period basis point volatility].
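To illustrate that the grouping doesn't matter, here is a quick Python sketch (not from the XLS; the single draw and the 100,000 trials are purely illustrative):

import numpy as np

rng = np.random.default_rng(42)
dt, sigma = 1/12, 0.0160                         # monthly step; 1.60% annual basis-point volatility

z = rng.standard_normal()                        # one random standard normal draw
shock_a = z * sigma * np.sqrt(dt)                # Z * sigma * sqrt(dt)
shock_b = (z * np.sqrt(dt)) * sigma              # dw * sigma, with dw = Z*sqrt(dt) (Tuckman's grouping)
shock_c = z * (sigma * np.sqrt(dt))              # Z * (monthly volatility of ~0.46%)
print(np.isclose(shock_a, shock_b), np.isclose(shock_a, shock_c))   # True True

dw = np.sqrt(dt) * rng.standard_normal(100_000)  # many draws of dw
print(dw.std(), np.sqrt(dt))                     # both ~0.2887, i.e., dw ~ N(0, sqrt(1/12))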

I hope that's helpful, have a good week yourself!
 

QuantFFM

Member

Hi David,
Thanks a lot for your always-detailed answers.
 