Accuracy for GARCH models

2018-03-15 16:47:43

How does one calculate the accuracy of forecasts given by GARCH models, considering that GARCH is run on returns? Assuming GARCH is a derivative of a regression-based prediction model, would the usual statistics such as R-squared, MAPE/SMAPE, etc., be the right indicators of performance? Unlike ARIMA, where predictive power dies down after some forecast interval, I find that GARCH produces forecasts for almost any horizon specified. How would one identify whether there is any randomness in the forecasted values?


  • The best way to check the accuracy of a GARCH model is to use the methodology of Hansen and Lunde (2005). In that paper they compared the accuracy of 330 ARCH-type models and concluded that GARCH(1,1) was superior in their sample.

    The paper describes at great length the way to do it. But in a nutshell:

    Estimate your ARCH-type model on monthly data using a window from $[t_{start}, t_{end}]$. Compute the volatility forecast for month $t_{end+1}$.

    Compute the realized volatility during month $t_{end+1}$ using daily data (e.g. the sum of squared daily returns).

    Subtract your forecasted volatility from realized volatility and square it. Do this for several months and check the sum of squared residuals.

    You can do this exercise at many frequencies; my example above uses the monthly frequency, so just replace the appropriate definitions for other choices (a rough sketch of the monthly version is shown below).
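
    For concreteness, here is a minimal sketch of the monthly version of this exercise in Python. It assumes the `arch` package and a `pd.Series` of daily returns; the function name `evaluate_garch_forecasts` and the `n_test_months` parameter are my own placeholders rather than anything from the paper, and the squared-error loss used here is only one of the loss functions Hansen and Lunde consider.

    ```python
    import numpy as np
    import pandas as pd
    from arch import arch_model  # pip install arch

    def evaluate_garch_forecasts(daily_returns: pd.Series, n_test_months: int = 24) -> float:
        """Rolling out-of-sample check of GARCH(1,1) monthly variance forecasts.

        daily_returns: daily returns indexed by date (percent returns scale best
        for the arch optimizer). Returns the mean squared error between the
        forecast variance and the realized-variance proxy.
        """
        # Monthly returns for estimating the GARCH model; realized variance
        # for each month is proxied by the sum of squared daily returns.
        monthly_returns = daily_returns.resample("M").sum()
        realized_var = (daily_returns ** 2).resample("M").sum()

        errors = []
        for i in range(len(monthly_returns) - n_test_months, len(monthly_returns)):
            # Estimation window: all months strictly before month i.
            window = monthly_returns.iloc[:i]
            fit = arch_model(window, vol="GARCH", p=1, q=1, mean="Constant").fit(disp="off")

            # One-step-ahead (month i) variance forecast from the fitted model.
            forecast_var = fit.forecast(horizon=1).variance.iloc[-1, 0]

            # Squared difference between realized and forecast variance.
            errors.append((realized_var.iloc[i] - forecast_var) ** 2)

        return float(np.mean(errors))
    ```

    A lower mean squared error across the evaluation months indicates a model whose variance forecasts track realized volatility more closely, which is how you would compare, say, GARCH(1,1) against alternative ARCH-type specifications.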

    2018-03-15 17:42:47