Boosting: Why You Can Use the HP Filter

Peter C. B. Phillips, Zhentao Shi

Abstract

We propose a procedure of iterating the HP filter to produce a smarter smoothing device, called the boosted HP (bHP) filter, based on L2-boosting in machine learning. Limit theory shows that the bHP filter asymptotically recovers trend mechanisms that involve integrated processes, deterministic drifts, and structural breaks, covering the most common trends that appear in current modeling methodology. A stopping criterion automates the algorithm, giving a data-determined method for data-rich environments. The methodology is illustrated in simulations and with three real data examples that highlight the differences between simple HP filtering, the bHP filter, and an alternative autoregressive approach.
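A minimal sketch of the boosting recursion described in the abstract. It assumes statsmodels' hpfilter as the base smoother (the authors ship their own implementations; this is just a stand-in) and uses a fixed number of boosting rounds in place of the paper's data-determined stopping criterion:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def boosted_hp(x, lamb=1600, rounds=5):
    """L2-boosting of the HP filter: repeatedly fit the HP trend to the
    remaining cycle and accumulate it into the overall trend estimate."""
    x = np.asarray(x, dtype=float)
    cycle = x.copy()            # before any round, the "residual" is the series itself
    trend = np.zeros_like(x)
    for _ in range(rounds):
        round_cycle, round_trend = hpfilter(cycle, lamb=lamb)
        trend += round_trend    # add the trend component extracted in this round
        cycle = round_cycle     # keep boosting on what is left over
    return cycle, trend

# Example: a random walk with drift, ordinary HP vs. five boosting rounds.
rng = np.random.default_rng(0)
x = np.cumsum(0.1 + rng.standard_normal(200))
hp_cycle, hp_trend = hpfilter(x, lamb=1600)
bhp_cycle, bhp_trend = boosted_hp(x, lamb=1600, rounds=5)
```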

It’s important to use a lower penalty parameter for shorter time series; otherwise the HP filter will tend to over-penalize (over-smooth) the data.

In some ways the choice of lambda for the HP filter has less to do with the frequency of the data and more to do with how much data you have. The real reason lambda tends to align with data frequency is that data of similar frequency tend to come in samples of similar length. If you have the same frequency but a significantly different sample length, it can make sense to use a different lambda penalty.
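For instance, with statsmodels' hpfilter the penalty is an explicit argument, so two quarterly samples of very different lengths need not share the textbook value of 1600; the specific values below are purely illustrative:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)
long_series = np.cumsum(rng.standard_normal(400))   # ~100 years of quarterly data
short_series = np.cumsum(rng.standard_normal(60))    # ~15 years of quarterly data

# Textbook quarterly penalty for the long sample ...
_, trend_long = hpfilter(long_series, lamb=1600)
# ... but a lighter penalty for the short sample, so the trend is not forced
# to be nearly linear over the whole window (illustrative value only).
_, trend_short = hpfilter(short_series, lamb=400)
```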

It probably makes sense to bias toward setting lambda too high to begin with, rather than too low. Setting it too low causes the HP filter to effectively over-fit the data, meaning the residual/cycle component has too little variance, which can’t really be undone by further filtering/boosting. On the other hand, if lambda is too high initially, the filter under-fits: the trend will be biased and the residual/cycle will be too “large,” which can always be fixed by reapplying the filter, progressively shaving off more non-stationary variation.
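Following that logic, here is a sketch of the “start high, then boost” strategy. It uses an ADF test on the remaining cycle as a simple data-determined stopping rule in the spirit of the automated criterion mentioned in the abstract; the paper's own criteria may differ in detail, and the 5% threshold, the round budget, and the inflated starting lambda are all assumptions:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.stattools import adfuller

def boosted_hp_adf(x, lamb=1600, max_rounds=20, alpha=0.05):
    """Boost the HP filter until the remaining cycle looks stationary
    (ADF test rejects a unit root) or the round budget is exhausted."""
    x = np.asarray(x, dtype=float)
    cycle, trend = x.copy(), np.zeros_like(x)
    for rounds in range(1, max_rounds + 1):
        cycle, round_trend = hpfilter(cycle, lamb=lamb)
        trend += round_trend
        if adfuller(cycle)[1] < alpha:   # p-value below alpha: stop shaving off trend
            break
    return cycle, trend, rounds

# Deliberately over-smooth at first (large lambda), then let boosting
# remove the remaining non-stationary variation.
rng = np.random.default_rng(2)
x = np.cumsum(0.05 + rng.standard_normal(200))
cycle, trend, rounds_used = boosted_hp_adf(x, lamb=1600 * 10)
```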

Repo for bHP filter


References

Hamilton, James D. (2018). “Why You Should Never Use the Hodrick-Prescott Filter.” Review of Economics and Statistics, 100(5), 831–843.

