
What is an error correction model?


In time series econometrics, when we want to model a non-stationary variable using a single explanatory variable that is also non-stationary, we use what is known as an error correction model. To do this, we proceed in several steps.

The first step is to test the stationarity of these time series, using one of several non-stationarity tests[1]. If the variables are non-stationary, the standard Ordinary Least Squares (OLS) regression is said to be spurious or illusory. To avoid this problem, we can estimate an OLS model on a linear transformation of the variables (in most cases, taking the first difference makes variables that are non-stationary in levels stationary). However, from an economic point of view, it is common to want to work with the variables in levels rather than in first differences. How, then, can we ensure that the regression is not spurious?

If the variables are cointegrated (i.e., there exists a linear combination of the series that is integrated of an order lower than the order of integration of each individual series, usually of order zero and therefore stationary), it is possible to estimate a relevant and statistically sound regression. Cointegration between two variables was conceptualized by Engle and Granger (1987). Their method, however, does not allow several cointegration relationships to be distinguished. It was not until a few years later that Johansen (1991) developed a procedure capable of testing for the existence of up to (n – 1) cointegration relationships between n variables (n > 2). We will return to the vector error correction model (VECM), which models a more complex relationship with several explanatory variables, in a future insight.
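To illustrate this first step, here is a minimal sketch in Python using pandas and statsmodels, in which the ADF test (null hypothesis: presence of a unit root, see note [1]) is applied to two simulated series in levels and in first differences. The series y and x, the simulated data-generating process, and the 5% threshold mentioned in the comments are assumptions made purely for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical example: two I(1) series built as a random walk (x)
# and a noisy linear function of it (y), so that they are cointegrated.
rng = np.random.default_rng(0)
x = pd.Series(np.cumsum(rng.normal(0.1, 1.0, 500)), name="x")
y = pd.Series(1.0 + 0.8 * x.to_numpy() + rng.normal(0.0, 1.0, 500), name="y")

def adf_pvalue(series: pd.Series) -> float:
    """p-value of the ADF test; H0 = presence of a unit root."""
    return adfuller(series.dropna(), autolag="AIC")[1]

for s in (y, x):
    p_levels = adf_pvalue(s)        # test in levels
    p_diff = adf_pvalue(s.diff())   # test in first differences
    print(f"{s.name}: ADF p-value in levels = {p_levels:.3f}, "
          f"in first differences = {p_diff:.3f}")
# A series is treated as I(1) when H0 is not rejected in levels
# but is rejected (at, say, 5%) in first differences.
```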

If two series are cointegrated (i.e., the residuals estimated from the long-term relationship are stationary), we use the following error correction model (ECM):

ΔYt = γΔXt + δ(Yt-1 – ĉXt-1 – â) + εt, with δ < 0 (1)

Where t denotes time, (Yt-1 – ĉXt-1 – â) represents the cointegration relationship and corresponds to the estimated residuals of the regression of the explained variable, Y, on the explanatory variable, X. ĉ is the estimated cointegration coefficient and â the estimated constant of the cointegration relationship. Note that δ must be significantly negative for the equation to induce a return of Yt to its long-term equilibrium value (ĉXt-1 + â); if this is not the case, the regression is misleading. The ECM therefore allows us to model both short-term dynamics (captured by the first-differenced variables) and long-term dynamics (captured by the variables in levels).
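As a concrete sketch of how equation (1) can be estimated in the spirit of Engle and Granger's two-step approach: the long-term relationship is first estimated by OLS, its residuals provide the error-correction term, and the short-term equation is then estimated on the differenced variables. This is a minimal illustration in Python with statsmodels, not the author's own implementation; the function estimate_ecm and the variable names are assumptions, and y and x stand for two cointegrated I(1) pandas Series such as those simulated above.

```python
import pandas as pd
import statsmodels.api as sm

def estimate_ecm(y: pd.Series, x: pd.Series):
    """Two-step estimation of the ECM in equation (1) (illustrative sketch)."""
    # Step 1: long-term (cointegrating) regression Y_t = a + c X_t + u_t by OLS;
    # its residuals are the error-correction term (Y_t - ĉ X_t - â).
    long_run = sm.OLS(y, sm.add_constant(x)).fit()
    ect = long_run.resid

    # Step 2: short-term regression
    # ΔY_t = γ ΔX_t + δ (Y_{t-1} - ĉ X_{t-1} - â) + ε_t
    rhs = pd.concat({"d_x": x.diff(), "ect_lag": ect.shift(1)}, axis=1).dropna()
    ecm = sm.OLS(y.diff().loc[rhs.index], rhs).fit()

    # δ must be significantly negative for Y to revert to its
    # long-term equilibrium value ĉ X_{t-1} + â.
    print(f"gamma = {ecm.params['d_x']:.3f}, "
          f"delta = {ecm.params['ect_lag']:.3f}, "
          f"p-value(delta) = {ecm.pvalues['ect_lag']:.3f}")
    return long_run, ecm

# Example call on the simulated series from the previous sketch:
# long_run, ecm = estimate_ecm(y, x)
```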

To summarize, the correct specification and estimation of an ECM proceeds in four steps:

1. Test the order of integration of the time series studied using non-stationarity tests (e.g., ADF and/or PP);

2. If the variables are integrated of the same order, regress the explained variable, Y, on the explanatory variable, X, using OLS;

3. Test the stationarity of the residuals estimated in this previous regression; if they are stationary, the two series are cointegrated (steps 2 and 3 can also be run in a single call, as sketched after this list);

4. Estimate the ECM in (1) and ensure that δ is significantly negative.
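Steps 2 and 3 of this checklist (regressing Y on X and testing the stationarity of the residuals) can also be run in a single call with the Engle-Granger test available in statsmodels. Below is a minimal sketch under the same assumptions as before: the simulated pair y and x and the 5% threshold are chosen purely for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import coint

# Same illustrative cointegrated pair as in the first sketch.
rng = np.random.default_rng(0)
x = pd.Series(np.cumsum(rng.normal(0.1, 1.0, 500)), name="x")
y = pd.Series(1.0 + 0.8 * x.to_numpy() + rng.normal(0.0, 1.0, 500), name="y")

# Engle-Granger cointegration test: H0 = no cointegration between y and x.
# Internally it regresses y on x by OLS and applies an ADF-type test
# to the residuals (steps 2 and 3 of the checklist above).
t_stat, p_value, _ = coint(y, x)
print(f"Engle-Granger statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
if p_value < 0.05:  # 5% threshold chosen for the example
    print("Residuals look stationary: y and x appear cointegrated, "
          "so the ECM in (1) can be estimated (step 4).")
else:
    print("No evidence of cointegration: a regression in levels would be "
          "spurious; work with differenced variables instead.")
```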

Julien Moussavi

Notes

[1] In the literature, it is common to encounter the term "stationarity test," which is something of a misnomer, because the null hypothesis tested is most often the presence of a unit root and therefore the non-stationarity of the time series. This is the case for the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests. Among the traditional tests, only the Kwiatkowski, Phillips, Schmidt, and Shin (KPSS) test takes the absence of a unit root, and therefore the stationarity of the time series, as its null hypothesis.
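To illustrate this difference in null hypotheses, here is a minimal sketch in Python with statsmodels, run on a hypothetical random-walk series (the simulated series and the interpretation in the comments are assumptions made for the example).

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

# Hypothetical non-stationary series: a pure random walk.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(0.0, 1.0, 500))

adf_p = adfuller(walk, autolag="AIC")[1]              # H0: unit root (non-stationarity)
kpss_p = kpss(walk, regression="c", nlags="auto")[1]  # H0: stationarity
# For a random walk we expect a high ADF p-value (H0 not rejected) and a
# low KPSS p-value (H0 rejected): both tests point to non-stationarity,
# even though their null hypotheses are reversed. statsmodels may warn
# that the KPSS p-value lies outside its tabulated range.
print(f"ADF p-value = {adf_p:.3f}, KPSS p-value = {kpss_p:.3f}")
```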
