Relationship between Jensen's Alpha and error term?

Arka Bose

Active Member
I am a bit curious to know whether Jensen's alpha has any relation to the error term in the security characteristic line? The security characteristic line regresses (Ri - Rf) against (Rm - Rf).

In this case, if I regress, then (Ri - Rf) = α + β(Rm - Rf) + ei,
then Ri = α + [Rf + β(Rm - Rf)] + ei,
then Ri = α + E(Ri) + ei (using CAPM: E(Ri) = Rf + β(Rm - Rf)),
then Ri - E(Ri) = α + ei; thus Jensen's alpha = α + ei?
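
For what it's worth, here is a minimal simulation sketch of that algebra (hypothetical numbers; numpy and statsmodels are assumed tools, not anything prescribed in the thread). It regresses excess returns on excess market returns and checks that, for every observation, the distance of the realized return from the CAPM line equals the intercept plus that observation's residual, i.e., α + e(i):

```python
# Sketch only: simulated returns, not real data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
rf = 0.002                                              # assumed monthly risk-free rate
rm = rf + 0.005 + rng.normal(0, 0.04, 120)              # simulated market returns
ri = rf + 1.2 * (rm - rf) + rng.normal(0, 0.02, 120)    # simulated asset returns

X = sm.add_constant(rm - rf)                            # regressors: constant + (Rm - Rf)
fit = sm.OLS(ri - rf, X).fit()
alpha_hat, beta_hat = fit.params

# distance of each realized return from the CAPM line Rf + beta*(Rm - Rf)
distance = ri - (rf + beta_hat * (rm - rf))
print(np.allclose(distance, alpha_hat + fit.resid))     # True: distance = alpha + e(i)
```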
 

David Harper CFA FRM

Subscriber
Hi @arkabose

The SML regresses R(i) against β (systematic risk) such that we get a regression line in the typical format (y = mx + b + e): R(i) = Rf + [R(m) - Rf]*β + e(i), where the slope is R(m) - Rf and e(i) is the typical regression error (residual, noise), which is uncorrelated with beta and has an expected value of 0; E(e) = 0. So I *think* it's a fallacy to produce the following (your) equation simply as regression output:

(Ri-Rf) = α + β(Rm-Rf) + ei

because, in the context of a univariate regression, you have generated two constants (Rf and alpha). Let me put that another way. I *think* that if you selected an R(f) and then ran your regression, in the single-factor SML/CAPM context, the resulting alpha would inform a recalibrated risk-free rate; i.e., the intercept is where beta is zero, after all. So where is alpha? It's not really here in the SML, except as a theoretical insertion, E(alpha) = 0: the SML plots expected returns, expected returns lie on the line, and the expected alpha is zero. The regression line, if it is the SML, is a line which expects zero alpha:

E[R(i) - Rf] = β(Rm - Rf) + E[alpha] = β(Rm - Rf) + 0 = β(Rm - Rf); i.e., https://en.wikipedia.org/wiki/Security_market_line

Jensen's alpha is determined by a realized (not expected) return and denotes a narrow type of "alpha": it is just the difference R(i) - E[R(i)]; i.e., the vertical distance from the line. It is the realization of a difference from the line which is anticipated by the error term (that is how I see it). In this way, I do see them as conceptually related. And it further shows the "problem": if we observe +2.0% alpha, for example, we cannot immediately discern whether it is skill or luck. Luck would be random sampling variation anticipated by the noise term; skill would be an uncorrelated but non-random difference from the SML.
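
To illustrate the skill-versus-luck point, a rough sketch (a simulated fund whose true alpha is zero; numpy and statsmodels are assumed tools): even with no skill, the measured alpha is generally nonzero, which is why the t-statistic/p-value of the intercept is the usual first check:

```python
# Sketch only: a zero-alpha fund can still show a positive measured alpha.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
rf = 0.002
rm = rf + 0.005 + rng.normal(0, 0.04, 60)               # five years of monthly data
fund = rf + 1.0 * (rm - rf) + rng.normal(0, 0.03, 60)   # true alpha = 0, plus noise

fit = sm.OLS(fund - rf, sm.add_constant(rm - rf)).fit()
print(f"measured alpha: {fit.params[0]:.4%} per month")
print(f"t-stat: {fit.tvalues[0]:.2f}, p-value: {fit.pvalues[0]:.2f}")
# a nonzero measured alpha with a small t-stat is consistent with luck (noise)
```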

Just two other notes:
  • As Bodie explains, there is a key difference between expected (ex ante) and realized (ex post) returns. SML/CAPM is the line itself: it predicts (ex ante) an alpha of zero; but realized returns do not fall on the line, so ex post we do observe alpha. In summary, expected[alpha] = 0, but realized[alpha] is generally nonzero
  • It's only because we are referring to CAPM/SML that the intercept is the risk-free rate; in another (more useful, e.g., Grinold) context, alpha is the uncorrelated intercept. That's a little confusing. In a multifactor context, alpha is typically the regression intercept (as I've written here in the forum many times)--i.e., the excess return which cannot be attributed to the beta factors--but Jensen's alpha is not a regression intercept, it's just the difference between a single observation and the regression line (see the sketch just below). I hope that helps!
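
A rough sketch of that last distinction, using hypothetical three-factor data (numpy and statsmodels assumed; the factor names are placeholders): the regression intercept is the multifactor "alpha," while a Jensen-style alpha for a single period is that period's distance from the return explained by the beta exposures alone (which works out to the intercept plus that period's residual):

```python
# Sketch only: simulated three-factor data, not a prescribed model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120
factors = rng.normal(0, 0.03, (n, 3))                       # e.g., market, size, value premia
betas = np.array([1.1, 0.4, -0.2])
excess = 0.001 + factors @ betas + rng.normal(0, 0.02, n)   # fund excess returns

fit = sm.OLS(excess, sm.add_constant(factors)).fit()
print(f"regression (intercept) alpha: {fit.params[0]:.4%} per period")

# Jensen-style alpha for one period t: realized excess return minus the part
# attributable to the beta factors; this equals intercept + residual for t
t = -1
beta_return = factors[t] @ fit.params[1:]
print(f"single-period alpha: {excess[t] - beta_return:.4%}")
```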
 