## Econometrics and the problem of unjustified assumptions

from **Lars Syll**

There seems to be a pervasive human aversion to uncertainty, and one way to reduce feelings of uncertainty is to invest faith in deduction as a sufficient guide to truth. Unfortunately, such faith is as logically unjustified as any religious creed, since a deduction produces certainty about the real world only when its assumptions about the real world are certain …

Unfortunately, assumption uncertainty reduces the status of deductions and statistical computations to exercises in hypothetical reasoning – they provide best-case scenarios of what we could infer from specific data (which are assumed to have only specific, known problems). Even more unfortunate, however, is that this exercise is deceptive to the extent it ignores or misrepresents available information, and makes hidden assumptions that are unsupported by data …

Econometrics supplies dramatic cautionary examples in which complex modelling has failed miserably in important applications …

Yes, indeed, econometrics fails miserably over and over again.

One reason it does is that the error term in the regression models used is thought of as representing the effect of the variables omitted from the model. The error term is treated as a ‘cover-all’ term, representing omitted content and necessary to include in order to ‘save’ the assumed deterministic relation between the other random variables in the model. Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since they are unobservable, the assumption is impossible to test empirically. And without justification of the orthogonality assumption, there is, as a rule, nothing to ensure identifiability:

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.
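The point about the unverifiable orthogonality assumption can be made concrete with a small simulation. In the sketch below (all numbers are invented for illustration), a confounder `z` is omitted from the regression, so the error term is correlated with the included regressor and OLS converges to the wrong coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: z is an omitted confounder.
z = rng.normal(size=n)
x = z + rng.normal(size=n)            # x is correlated with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

# Regress y on x alone: the effective error term (3z + noise) is NOT
# orthogonal to x, so OLS does not recover the true coefficient 2.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta[1])  # ≈ 3.5, not 2: bias of cov(x, 3z)/var(x) = 1.5
```

Nothing in the observed `(x, y)` data announces that the orthogonality assumption has failed; the bias is only visible here because we built the data ourselves.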

Nowadays it has almost become a self-evident truism among economists that you cannot expect people to take your arguments seriously unless they are based on or backed up by advanced econometric modelling. So legions of mathematical-statistical theorems are proved — and heaps of fiction are produced, masquerading as science. The supposed rigour of the econometric modelling is undermined by the far-reaching assumptions it is built on, which are frequently not supported by data.

Econometrics is basically a deductive method. Given the assumptions, it delivers deductive inferences. The problem, of course, is that we almost never know when the assumptions are right. Conclusions can only be as certain as their premises — and that also applies to econometrics.

Econometrics doesn’t establish the truth value of facts. Never has. Never will.

I once saw a poster saying “GIVE ME AMBIGUITY OR GIVE ME SOMETHING ELSE.” Anxiety derives from the difficulty many humans have in managing ambiguity. We need certainty to get by with little or no anxiety, and we often become “overwhelmed” or “consumed” by not knowing “what is next” or where something is “going.” Predictability reduces anxiety; there is a phrase in psychological counselling, “Structure binds anxiety.” We are also said to “rationalize” our behaviour when things go wrong, often in response to the question “Why did you say/do that?” We frequently do not actually know, but make something up for fear that we are in trouble, or simply to reduce anxiety.

Planning leads to structure, so clients of life coaches are urged to use lists as reminders of what needs doing and to organize their schedules, thereby increasing predictability and certainty. In an economics-determined society, it is believed there is an optimal point at which equilibrium is achieved. But that depends on the “invisible hand” of a free market in which supposedly rational actors make reasonable decisions, though more likely they act under the influence of marketers who know how to play on these anxieties. The paradoxes are numerous and lead to more uncertainty.

The reality is that we all live in a system where “chaos rules,” where we are unable to see the complexity therein or how unknown factors govern the events around us, yet we “sense” that they are there. As a species we are good at identifying patterns, but in a complex world, with brains able to hold in conscious awareness only a limited number of the multitude of variables impacting us, we invent patterns and then reify our theories about them, forming beliefs that provide reassurance and, hopefully, equanimity. We seize readily on apparent answers, weaving a narrative for ourselves.

The ability to manage anxiety in the face of ambiguity is a skill, partly developed through the inquisitive, curious mind attributed to children. If true, how do so many lose it? Are economists any less susceptible to these phenomena, or do they just find comfort in the apparent predictability of econometrics?

“Error terms are usually assumed to be orthogonal (uncorrelated) to the explanatory variables. But since they are unobservable, they are also impossible to empirically test.”

This is incomprehensible. Error terms are more than observable: they are observed, because they are generated from the data. You have a model; it implies values of the dependent variable, given values of the chosen explanatory variables. The errors are the differences between those implied values and the ones actually observed. The orthogonality of those errors and the explanatory variables can therefore be tested, and that is routine good practice. When errors are not orthogonal, there is probably an omitted confounding variable. If that cannot be traced, the model is re-estimated using instrumental variables. Moreover, the normality of the errors and their serial independence can be tested — all important to assessing the success of the specification, i.e. the applicability of the model to the data set.
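The diagnostics mentioned here are easy to sketch. One caveat worth flagging: OLS residuals are orthogonal to the *included* regressors by construction, so the checks that carry real information are those on serial independence and normality (and regressions of residuals on candidate omitted variables). A minimal sketch on simulated data (all numbers invented), computing the Durbin–Watson and Jarque–Bera statistics by hand:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Fit OLS and form residuals (the observable counterpart of the errors).
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin–Watson statistic for serial independence (≈ 2 when none).
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Jarque–Bera statistic for normality (small when residuals are normal).
s = resid / resid.std()
skew, kurt = np.mean(s ** 3), np.mean(s ** 4)
jb = n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

print(round(dw, 2), round(jb, 2))
```

In practice these statistics come from a library such as statsmodels; they are written out here only to show there is nothing mysterious in them.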

I say this with some hesitation but I do begin to wonder whether Lars knows anything about econometrics at all.

Moreover, the statement “you cannot expect people to take your arguments seriously unless they are based on or backed up by advanced econometric modelling” is the opposite of the truth. Many favoured economic propositions are never put to rigorous econometric test. And many fashionable models are over-parameterised and made operational by “calibration” or “literature search” — non-statistical, non-rigorous means of assigning parameter values. Others are casually estimated by statistical techniques without proper hypothesis testing.

Rigorous or advanced econometrics went out of fashion in economics at about the time that the fashion for ersatz “microfoundations” came in. Consistency with a priori axioms came to be regarded as more important than consistency with data.

“Econometrics doesn’t establish the truth value of facts. Never has. Never will.”

Of course not – facts are facts. Econometrics establishes the applicability (or otherwise) of economic models to particular data sets. There is no other means that I know of for assessing that applicability. The doctrinaire don’t care. They support their favoured formulations irrespective of evidence. People who want economics to be an empirical study have the ideological wind in their face.

Gerry, I was taught by the great David Freedman what the error term in econometric regression models is, and it seems to me you too have one or two things to learn, so for the benefit of your mathematical-statistical education …:

‘What the Imagination seizes as Beauty must be truth’ – Keats, Letters.

Sadly, whether this idea is written on a Grecian urn or for the preface of a book, it isn’t true. There is no necessary correlation between Beauty and Truth. Indeed, beauty is usually shown, by bitter experience, to be deceptive. Beauty, evolved or from artifice, attracts… for deceptive reasons of its own. Allure is entrapment.

If beauty was true, ads and brochures would be too! ;)

“Moreover the statement ‘you cannot expect people to take your arguments seriously unless they are based on or backed up by advanced econometric modelling’ is the opposite of the truth.”

This is unfair, Gerald. Lars prefixed this with “Nowadays it has almost become a self-evident truism among economists that …”. In other words it is not him saying this, it is most other economists.

This seems to be the gist of the Freedman article:

“On the other hand, if the Xij are dependent, the matter is problematic. … The standard assumptions fail, and fitting (2) to data for i = 1,…,n will estimate the wrong parameters”.

In his second reference – to Pratt and Schlaifer – it is a pity he has not included the bit about the discovery of structure. That surely involves parameters operating on different timescales?

All of the results you refer to, Lars, are acknowledged in the literature and are not a surprise to me. There are diagnostic tests for independence of regressors as well as tests for correlation of regressors and observed residuals. There are techniques to address these problems if you have enough data. Where the problems cannot be resolved, results may not be invalid but should be treated with caution.

No-one could complain if you inveighed against slipshod practice, which is all too common. But you write as if bad practice is inevitable and therefore seek to undermine one of the very few means we have of imposing some empirical discipline on economics.

Have you not noticed that econometrics has become less fashionable as the tendencies in economics that you deplore have gathered strength? I don’t think you know who your real friends are.

It has been fashionable, nay obligatory, for a long time to impose “rational expectations” on a macroeconomic model. That means the actors in the model are assumed to know how it (their world) works and therefore form model-consistent expectations. This usually means the system quickly establishes an equilibrium after shocks. If you complain that it is not true in reality that everyone has the same beliefs about the system, and it certainly isn’t true that everyone knows how it works, you get patient looks and talk of this being “only a model” and “as if” theorising.

My question then is: does your model forecast better than its own unrestricted reduced form? (For those unfamiliar with the terminology, the reduced form re-arranges the model so the endogenous variables have only exogenous or predetermined variables on the right-hand side of their respective equations. The coefficients should be functions of the structural coefficients in the model, but if the reduced form is unrestricted the implied restrictions are ignored.)

The answer invariably is no; the unrestricted reduced form always wins in practice. Why then is it rational for an actor in the model to use those restrictions when forming expectations if she makes better predictions without them? This challenge sees the patient looks replaced by ones of great irritation. Because there is no answer. There is nothing rational about rational expectations when you try to apply the model to data.
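The forecasting comparison above can be illustrated with a toy example (all numbers invented): the data follow an AR(1) with coefficient 0.8, a “structural” model imposes a wrong theory-driven restriction on that coefficient, and the unrestricted reduced form, estimated freely from the data, forecasts better on a hold-out sample:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1_000

# Hypothetical AR(1) data: y_t = 0.8 * y_{t-1} + shock.
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.normal()

train, test = y[:800], y[800:]

# "Structural" model: a theory-imposed (and wrong) restriction rho = 0.5.
rho_restricted = 0.5

# Unrestricted reduced form: estimate rho freely from the data.
rho_free = np.sum(train[1:] * train[:-1]) / np.sum(train[:-1] ** 2)

# One-step-ahead forecast errors on the hold-out sample.
mse_restricted = np.mean((test[1:] - rho_restricted * test[:-1]) ** 2)
mse_free = np.mean((test[1:] - rho_free * test[:-1]) ** 2)
print(mse_free < mse_restricted)
```

The design choice is the one the comment describes: when a theory-driven restriction is false, imposing it can only degrade out-of-sample forecasts relative to the freely estimated reduced form.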

Exposure to data is the principal means of sorting sense from nonsense in economics. No technique ever yields complete certainty, but careful application of statistical methods with integrity is surely a necessary discipline. What is the alternative – divine revelation? But there are many gods and only one Bayes’ Theorem.