## Econometrics — fictions masquerading as science

from **Lars Syll**

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we equate randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts.

Accepting a domain of probability theory and a sample space of “infinite populations” — which is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.

In his book *Statistical Models and Causal Inference: A Dialogue with the Social Sciences*, David Freedman touches on this fundamental problem, which arises when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.
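Freedman’s point about fitting refinements over and over to the same data set can be made concrete with a small sketch. Everything here — the linear data-generating process, the sample sizes, the polynomial degrees — is invented for illustration and is not Freedman’s own example:

```python
# Hypothetical sketch: least-squares "refinements" fitted repeatedly to
# the same data set drive in-sample error down, while error on a fresh
# data set from the same process exposes the overfit.
import numpy as np

rng = np.random.default_rng(42)

def simulate(n):
    """Draw n points from a process whose true relation is linear."""
    x = rng.uniform(-1, 1, n)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
    return x, y

x_fit, y_fit = simulate(30)   # the one data set the model keeps refitting
x_new, y_new = simulate(30)   # fresh data the model has never seen

in_mse, out_mse = {}, {}
for degree in (1, 4, 9):
    coefs = np.polyfit(x_fit, y_fit, degree)
    in_mse[degree] = np.mean((np.polyval(coefs, x_fit) - y_fit) ** 2)
    out_mse[degree] = np.mean((np.polyval(coefs, x_new) - y_new) ** 2)
    print(f"degree {degree}: in-sample MSE {in_mse[degree]:.3f}, "
          f"out-of-sample MSE {out_mse[degree]:.3f}")
```

In-sample error can only fall as the model is elaborated, since the higher-degree fits nest the lower ones; the out-of-sample error on the fresh draw is what reveals whether the “refinements” capture the process or merely the sample.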

Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science.

As I have been arguing, the first assumption is that the probabilistic system obeys the ergodic hypothesis — so that a probability distribution function calculated from past observations is assumed to equal the probability distribution function that will govern future outcomes.

In other words, economic data are assumed to be ahistorical! And if you believe that, I have a bridge between Brooklyn and Manhattan that I want to sell to you!
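The ergodicity assumption under discussion can be illustrated with a small sketch. The two processes here (a stationary AR(1) and a random walk) and all the numbers are invented for illustration, not taken from the post:

```python
# Hypothetical illustration: for an ergodic process, the time average of
# one observed history converges to the ensemble mean, so past data can
# stand in for the governing distribution. For a non-ergodic process
# such as a random walk, there is no such fixed distribution, and each
# history wanders off on its own.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Ergodic: stationary AR(1), x_t = 0.5 * x_{t-1} + e_t, ensemble mean 0.
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
time_avg_ar1 = x.mean()

# Non-ergodic: random walk, w_t = w_{t-1} + e_t.
w = np.cumsum(rng.normal(size=n))
time_avg_walk = w.mean()

print(f"AR(1) time average:       {time_avg_ar1:+.3f} (ensemble mean: 0)")
print(f"random-walk time average: {time_avg_walk:+.3f} (no fixed ensemble mean)")
```

For the AR(1) series the time average sits close to the ensemble mean of zero, and lengthening the sample tightens the match; for the random walk the time average depends on the particular history drawn, which is exactly why a distribution estimated from the past need not govern the future.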

In your book, the example you use for the ergodic theorem (a pendulum) is not a good one, since a pendulum is a conservative system unless it has friction; see chaos theory.

Brock and Durlauf pointed to the same “ahistoric” hidden assumption of econometrics back in the early 2000s.

We had better distinguish between (a) criticism of econometrics for not contributing to the resolution of problems in the world, and (b) criticism of econometrics on issues of methodology. It is a fallacy to bash econometrics on issue (b) when the real target is (a). It is tragic when the baby is thrown out with the bathwater, and researchers turn away from econometrics instead of improving it. Econometrics is for resolving issues in the real world, not for settling questions about how many angels can dance on the head of a pin.

See my proposal for the “definition & reality methodology”, which uses a stylized reduced-form model to derive two main empirically valid theorems on the state of information and the possibility of full employment. The theoretical implication of this calculus is that democratic nations need constitutional Economic Supreme Courts to warrant the use of correct information in the national budget. A return to full employment would then be feasible.

Hence, econometrics helped in 1990 to resolve a key problem in the real world, under (a). There remains the issue of how the results of econometrics become known to others. Subsequently, we can discuss the issues of methodology under (b) more relaxedly. For example, in response to Paul Davidson: while the real error series would be historical and non-ergodic, the model might assume an ergodic one to keep things tractable without making too large an error for the stated purposes.

For the resolution of mass unemployment and the “definition & reality methodology” see DRGTPE, pdf online: http://thomascool.eu/Papers/Drgtpe/Index.html