This thread from last year might be helpful; note I have a link to an XLS that conducts MLE for GARCH(1,1).
(I do plan this year to have published up to member page a nicely working version of the MLE for GARCH)
But the short version is: MLE is another method to generate estimators from sample data. Just as OLS gives us estimators (slope, intercept) to produce a sample regression function, where we hope the estimators are consistent (i.e., they approach the "true" population slope and intercept as the sample grows), MLE produces estimators by a different criterion. Under the classical regression assumptions, OLS = MLE. In GARCH(1,1), MLE is commonly used to produce good (consistent) estimators (alpha, beta, gamma) from the sample. Put another way: under the methodological assumptions (which can vary), what GARCH(1,1) parameters (estimators) give us a function that is "most likely" to have generated the sample data we happened to observe?
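For a concrete sense of how this works in practice, here is a minimal sketch of GARCH(1,1) MLE in Python. It uses the common (omega, alpha, beta) parameterization, where omega corresponds to gamma * V_L in Hull's notation; the function names, starting values, and simulated "true" parameters are all illustrative assumptions, not part of the original discussion. The idea is simply: write down the (Gaussian) log-likelihood of the observed returns as a function of the GARCH parameters, then let a numerical optimizer find the parameter values that maximize it (equivalently, minimize the negative log-likelihood).

```python
# Illustrative sketch: MLE for GARCH(1,1) via numerical optimization.
# Parameterization: var[t] = omega + alpha * r[t-1]^2 + beta * var[t-1],
# where omega = gamma * V_L in Hull's notation. All names are hypothetical.
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, returns):
    """Negative Gaussian log-likelihood of returns under GARCH(1,1)."""
    omega, alpha, beta = params
    n = len(returns)
    var = np.empty(n)
    var[0] = np.var(returns)  # seed the recursion with the sample variance
    for t in range(1, n):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    # Gaussian log-likelihood with constants dropped; we minimize the negative
    return 0.5 * np.sum(np.log(var) + returns ** 2 / var)

# Simulate returns from a GARCH(1,1) with known (assumed) parameters
rng = np.random.default_rng(42)
true_omega, true_alpha, true_beta = 1e-5, 0.08, 0.90
n = 2000
r = np.empty(n)
v = true_omega / (1 - true_alpha - true_beta)  # start at long-run variance
for t in range(n):
    r[t] = np.sqrt(v) * rng.standard_normal()
    v = true_omega + true_alpha * r[t] ** 2 + true_beta * v

# Maximize the likelihood = minimize the negative log-likelihood
res = minimize(
    garch11_neg_loglik,
    x0=[1e-4, 0.1, 0.8],  # arbitrary starting guess
    args=(r,),
    method="L-BFGS-B",
    bounds=[(1e-8, None), (1e-6, 1.0), (1e-6, 1.0)],  # keep variances positive
)
omega_hat, alpha_hat, beta_hat = res.x
```

With a couple thousand observations, the fitted alpha_hat and beta_hat should land in the neighborhood of the true 0.08 and 0.90, which is the "consistency" property in action: more data pulls the MLE estimators toward the true parameters.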
David
append: for what it's worth, from an exam perspective, the details of MLE are unlikely to matter (although I have about four good book references I will be glad to supply, if interested). Rather, what's relevant is that MLE gives consistent estimators, where consistency is one of the estimator properties covered in Gujarati Chapter 5 (alongside, e.g., unbiased, efficient, BLUE). The term 'consistent' has a specific meaning.
...sorry, let me retract that statement minimizing MLE: MLE has its own bullet in the Study Guide. I had forgotten that Hull 21.5 (a chapter I recommended they re-introduce!) is about MLE, so that is the assigned source. David