
Econom(etr)ic fictions masquerading as rigorous science

from Lars Syll

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness in terms of probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of — and, strictly speaking, do not exist at all — without specifying such system-contexts.

Accepting a domain of probability theory and a sample space of “infinite populations” — a practice that is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So they cannot provide a sound inductive basis for a science that aspires to explain real-world socio-economic processes, structures or events. It’s not tenable.

In his great book Statistical Models and Causal Inference: A Dialogue with the Social Sciences, David Freedman touched on this fundamental problem, which arises when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.
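
The “coefficient d” passage in the quotation refers to a discrimination regression. A rough sketch of the kind of equation involved (the notation here is illustrative, not Freedman's own):

\[
y_i = a + b\,x_i + d\,z_i + \varepsilon_i ,
\]

where \(y_i\) is the outcome of interest, \(x_i\) collects the legitimate covariates, and \(z_i\) indicates membership in the group allegedly discriminated against. The “model for non-discriminatory behavior” built into the equation is then the restriction \(d = 0\); and, as the quotation says, if the company does discriminate, that is precisely the part of the model that cannot be validated from the data.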

  1. February 28, 2015 at 4:18 pm

    Not to mention the big assumption to the effect that the system is either mean-square ergodic in the first moment or auto-covariance-ergodic in the second moment. For example, the process is auto-covariance-ergodic in the second moment if the time average converges in mean square to the ensemble average as T tends to infinity. It takes boldness to assume that economic time series are wide-sense stationary.
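
    In symbols, a minimal sketch using the standard definitions (the notation is introduced here, not taken from the comment): for a wide-sense stationary process \(\{X_t\}\) with mean \(\mu\) and autocovariance \(C(\tau)\), mean-square ergodicity in the first moment requires

    \[
    \hat{\mu}_T = \frac{1}{T}\int_0^T X_t \, dt ,
    \qquad
    \lim_{T \to \infty} \mathbb{E}\!\left[(\hat{\mu}_T - \mu)^2\right] = 0 ,
    \]

    and auto-covariance ergodicity in the second moment requires the analogous time average of \((X_{t+\tau}-\mu)(X_t-\mu)\) to converge in mean square to \(C(\tau)\). Wide-sense stationarity itself (a constant mean and an autocovariance that depends only on the lag) is already a strong assumption for economic time series.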

  2. March 1, 2015 at 5:30 am

    My paper on post-positivist probability shows that many problems with understanding what probability is arose because logical positivism was the dominant philosophy at the time the foundations of probability were being studied rigorously. Since probability is INHERENTLY unobservable (the coin COULD have come up tails, but did not), it could not be handled well by the positivist idea that science must be based on observables. The abstract of my paper and a link are given below:
    Post-Positivist Probability: http://ssrn.com/abstract=1319048
    ABSTRACT: Positivist philosophy provided the foundation for all social sciences as they were developed in the twentieth century. Now that positivism has been abandoned by philosophers, it is essential to replace methodology founded on positivist precepts. We show that the foundations of probability (both Bayesian and Frequentist) are wedded to positivist assumptions, which have been shown not to be workable. We suggest an alternative non-positivist foundation for probability theory, which avoids many of the problems faced by the current methodologies.

  3. blocke
    March 1, 2015 at 12:19 pm

    In 1989 I published a book at Cambridge on Management and Higher Education Since 1940, in which the first chapter was entitled “The New Paradigm” and the second “The New Paradigm Revisited.” Chapter I was about the application of science to the field of management science, Chapter II about serious doubts that had arisen about this new scientific paradigm’s usefulness in the real world. Get the date, 1989. This means that my research was done in the 1980s. Since then the rehashing of the subject has not added much to our knowledge about the deficiencies of the new scientific paradigm, except the fact that people keep on rehashing it, without acknowledging that it is rehashing. By 1989 people knew that the new paradigm was no paradigm with which to gain knowledge about the real world, but continued nonetheless to insist that it was.

    • Lyonwiss
      March 1, 2015 at 3:19 pm

      Exact reference please.

      • blocke
        March 1, 2015 at 6:48 pm

        Robert R Locke (1989). Management and higher education since 1940: The influence of America and Japan on West Germany, Great Britain, and France. Cambridge University Press.

        Robert R Locke, European Institute for Advanced Studies in Management, Brussels; University of Hawaii at Manoa.

