
Econometrics? Keep it very simple.

Readers of this blog might have noticed that I prefer to detrend time series using a moving average – and not the advanced, routinely and widely used Hodrick-Prescott filter. Part of this was laziness. But I also do not like the HP filter: what is it, exactly? Why does it wag its tails so much at the ends of the sample? What is the ‘right’ smoothing parameter? James D. Hamilton has answered my questions (while implying that a load of research is at least suspect, if not worthless):

Why You Should Never Use the Hodrick-Prescott Filter: “Here’s why. (1) The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process. (2) Filtered values at the end of the sample are very different from those in the middle, and are also characterized by spurious dynamics. (3) A statistical formalization of the problem typically produces values for the smoothing parameter vastly at odds with common practice, e.g., a value for λ far below 1600 for quarterly data. (4) There’s a better alternative. A regression of the variable at date t+h on the four most recent values as of date t offers a robust approach to detrending that achieves all the objectives sought by users of the HP filter with none of its drawbacks.”
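
Hamilton’s alternative is easy to implement. Below is a minimal sketch in Python with numpy – my own reading of the abstract, not Hamilton’s code – assuming quarterly data, his suggested horizon h = 8 and p = 4 lags; the function name hamilton_filter is mine.

```python
import numpy as np

def hamilton_filter(y, h=8, p=4):
    """Hamilton-style regression filter: regress y[t+h] on a constant
    and the p most recent values y[t], ..., y[t-p+1]. The residuals
    are the cyclical component, the fitted values the trend.
    Assumes y is a 1-D array of quarterly observations."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    n = T - h - p + 1                      # number of usable regressions
    # Regressors for dates t = p-1, ..., T-h-1: a constant plus y[t-j]
    X = np.column_stack([np.ones(n)] +
                        [y[p - 1 - j : T - h - j] for j in range(p)])
    target = y[p - 1 + h:]                 # left-hand side: y[t+h]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    trend = X @ beta                       # fitted values
    cycle = target - trend                 # residuals
    return trend, cycle
```

Note that the regression is one-sided: the cycle at date t+h uses only data through date t, so end-of-sample estimates are not revised as new observations arrive – the opposite of the HP filter’s behaviour in point (2) above.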

  1. May 23, 2017 at 4:17 pm

    Roger Farmer also has some good reasons to avoid the H-P filter:
    http://rogerfarmerblog.blogspot.se/2014/12/real-business-cycle-theory-and-high.html

  2. Yu
    May 24, 2017 at 4:27 am

    Another interesting result is the “optimal” lambda in the HP filter. On page 40 of the paper, the author uses maximum likelihood to estimate the “best” lambda. He finds that the “best” lambdas are around 1 for many quarterly macro variables, including GDP, government spending, and employment.

    The length of the underlying data will also change the “history” produced by the HP filter. Because the filter is two-sided, if I use it to preprocess data for a model, the model implicitly assumes people know the “future” data exactly. For example, suppose the data cover four quarters and are preprocessed by the HP filter: the filtered value at Q1 then depends on the Q2–Q4 data.

    A moving average seems to mitigate this problem. But another problem kicks in: what type of average? Simple, exponential, nearest-neighbor, etc. I am wondering how to pick one method, or whether to simply try them all.
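
    For illustration, here is a minimal sketch of two of these one-sided alternatives in Python with pandas; the default window length is an arbitrary choice for illustration, not a recommendation, and the function name detrend_trailing is hypothetical.

    ```python
    import pandas as pd

    def detrend_trailing(y, window=8, kind="simple"):
        """Detrend with a one-sided (trailing) moving average, so the
        'trend' at time t uses only data up to and including t.
        kind: 'simple' for an equal-weight window, 'exponential' for
        an exponentially weighted average. y is a pandas Series."""
        if kind == "simple":
            trend = y.rolling(window=window, min_periods=window).mean()
        elif kind == "exponential":
            trend = y.ewm(span=window, adjust=False).mean()
        else:
            raise ValueError(f"unknown kind: {kind!r}")
        return y - trend  # cyclical component
    ```

    Because both windows are trailing, neither version lets the detrended value at Q1 depend on Q2–Q4 data, which is exactly the look-ahead problem with the two-sided HP filter.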

  3. merijntknibbe
    May 24, 2017 at 5:45 am

    Good point. For ‘regular’, more or less known variation, like seasonal variation, ARIMA models (which calculate an ‘average’ season) do a good job. There are rules of thumb, too. Industrial production is, for a whole number of reasons, quite volatile; a rule of thumb is not to look at monthly data but at two-month averages of de-seasonalized data. There are many options.

    But maybe we should not always look at detrended data. A case in point is Germany. De-seasonalized average unemployment is (after about thirty years) finally becoming lowish, but actual unemployment is, in summer, about one point lower than de-seasonalized unemployment and may become genuinely low – reaching a tipping point when it comes to labour market dynamics.

    Also, models often use detrended quarterly national accounting data. Despite the detrending, this does not seem right (partly for the reason you give!), though it enables the econometrician to work with more data. Anyway – ‘try them all’ (or at least perform a robustness check) seems sound advice.
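
    As a concrete version of ‘try them all’, one can compute the cycle under several detrending methods and check how strongly they agree. A rough sketch, reusing the hypothetical detrend_trailing function sketched above; gdp is a hypothetical quarterly pandas Series (e.g. log real GDP), and hpfilter is statsmodels’ HP implementation:

    ```python
    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter

    # gdp is a hypothetical quarterly pandas Series (e.g. log real GDP);
    # hpfilter returns a (cycle, trend) pair.
    cycles = pd.DataFrame({
        "hp_1600": hpfilter(gdp, lamb=1600)[0],    # standard HP cycle
        "hp_1": hpfilter(gdp, lamb=1)[0],          # ML-style small lambda
        "ma_simple": detrend_trailing(gdp, kind="simple"),
        "ma_exp": detrend_trailing(gdp, kind="exponential"),
    })
    print(cycles.corr())  # low correlations => conclusions are not robust
    ```

    If the correlations across methods are low, whatever result one gets from any single detrending choice should be treated with suspicion.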

  4. May 24, 2017 at 5:17 pm

    Interesting exchange, though I’ve never formally studied econometrics. Informally I went through many of the classifications for time-series analyses, mostly in physics papers and some in economics, which go through how one distinguishes a deterministic, possibly chaotic process from a stochastic or random one.
    One gets into the whole issue of ‘unit roots’, and also the relation between differential equations and discrete ones, Markov processes versus higher-order Markov processes, correlated versus independent variables…

    In general, my conclusion was that one often cannot distinguish the processes, since one can fit a time series in many ways. This also gets into issues of ergodicity.
    To me, some of the ancient classic papers on this (which are less technical) are Eugene Slutsky’s 1937 Econometrica paper (the Slutsky–Yule effect, or spurious cycles) and one by Elliott Montroll in the ’70s or ’80s in the Bulletin of the American Mathematical Society (on chaos theory). The more technical reviews were in physics, and a few in theoretical economics.

    I could never find anyone who would give a sort of review or overview of these issues (except in papers, and there’s too much for me to remember). Most people work on very specific applications.
    I assume these applications work in a sense, but I could never decide why these problems were more interesting than others; then again, I am not involved in creating the scientific consensus as to which problems are important. (Forming a ‘scientific consensus’ is also an aggregation problem: how many terms in a time series, or members of a population (who is in versus out), do you use to find a pattern?)
