A Neural Phillips Curve and a Deep Output Gap

Philippe Goulet Coulombe (Link, Video)

Abstract

Many problems plague the estimation of Phillips curves. Among them is the hurdle that the two key components, inflation expectations and the output gap, are both unobserved. Traditional remedies include creating reasonable proxies for the notable absentees or extracting them via some form of assumptions-heavy filtering procedure. I propose an alternative route: a Hemisphere Neural Network (HNN) whose peculiar architecture yields a final layer where components can be interpreted as latent states within a Neural Phillips Curve. There are benefits. First, HNN conducts the supervised estimation of nonlinearities that arise when translating a high-dimensional set of observed regressors into latent states. Second, computations are fast. Third, forecasts are economically interpretable. Fourth, inflation volatility can also be predicted by merely adding a hemisphere to the model. Among other findings, the contribution of real activity to inflation appears severely underestimated in traditional econometric specifications. Also, HNN captures out-of-sample the 2021 upswing in inflation and attributes it first to an abrupt and sizable disanchoring of the expectations component, followed by a wildly positive gap starting from late 2020. The unique path of HNN’s gap comes from dispensing with unemployment and GDP in favor of an amalgam of nonlinearly processed alternative tightness indicators – some of which are skyrocketing as of early 2022.

Hard to say how interesting this is without diving into the architecture. At a high level it doesn’t sound particularly innovative: it’s basically a GAM where each component is a neural net. I could be wrong though; I haven’t read the paper, just watched the video above.
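
In Phillips curve terms, the GAM reading would amount to something like the following, where each additive component is a nonlinear function (a small neural net) of its own block of regressors. The notation here is my own guess from the abstract, not the paper's:

```latex
% Stylized guess at the decomposition implied by the abstract:
% inflation = expectations component + real-activity (gap) component + noise,
% with each component a nonlinear function of its own group of regressors.
\pi_t = \underbrace{f_{E}\big(X^{E}_{t}\big)}_{\text{expectations}}
      + \underbrace{f_{g}\big(X^{g}_{t}\big)}_{\text{output gap}}
      + \varepsilon_t
```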

The separation of the network into different “hemispheres” is really what gives the model structure and allows you to interpret the various components. Without it, the output neurons would be effectively meaningless, since each would combine information from the various inputs in complicated, black-box ways. By separating the pieces instead, you end up with distinct components, each mapping to a particular group of inputs. Summing these components in a simple linear fashion is also a form of structure: it ensures that their contributions remain additive and separable.
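
To make that concrete, here is a minimal sketch of what a hemisphere-style architecture could look like in PyTorch. This is my guess at the structure from the abstract and video, not the paper's actual implementation; the input groups, layer sizes, and names are all placeholders.

```python
import torch
import torch.nn as nn

class Hemisphere(nn.Module):
    """A small MLP that sees only its own group of regressors and
    outputs a single scalar component (e.g. expectations or the gap)."""
    def __init__(self, n_inputs, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

class HemisphereNet(nn.Module):
    """Each hemisphere maps a disjoint block of inputs to one latent
    component; the prediction is the linear sum of the components,
    which is what keeps them separable and interpretable."""
    def __init__(self, group_sizes):
        super().__init__()
        self.hemispheres = nn.ModuleList(Hemisphere(n) for n in group_sizes)

    def forward(self, x_groups):
        # x_groups: list of tensors, one per input group (expectations data,
        # real-activity data, etc.), each of shape (batch, group_size)
        components = [h(x) for h, x in zip(self.hemispheres, x_groups)]
        prediction = torch.stack(components, dim=0).sum(dim=0)
        return prediction, components  # components are the interpretable parts

# Toy usage: two input groups of 10 and 25 regressors (sizes made up).
model = HemisphereNet(group_sizes=[10, 25])
x = [torch.randn(8, 10), torch.randn(8, 25)]
y_hat, parts = model(x)
```

Because the final prediction is just the sum of the hemisphere outputs, each output can be read off as the contribution of its own block of inputs, which is where the interpretability comes from.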

