
Put mindless econometrics and regression analysis where they belong – in the garbage can!

from Lars Syll

A popular idea in the quantitative social sciences – nowadays also including economics – is to think of a cause (C) as something that increases the probability of its effect or outcome (O). That is:

P(O|C) > P(O|¬C)

However, as is also well known, a correlation between two variables, say A and B, does not necessarily imply that one is a cause of the other, or the other way around, since they may both be effects of a common cause, C.

In statistics and econometrics this “confounder” problem is usually solved by “controlling for” C, i.e. by holding C fixed. This means that one actually looks at different “populations” – those in which C occurs in every case, and those in which C doesn’t occur at all – so that knowing the value of A no longer influences the probability of C [P(C|A) = P(C)]. If a correlation between A and B still exists within either of these populations, some other cause has to be operating. But if all other possible causes have been “controlled for” too, and there is still a correlation between A and B, we may safely conclude that A is a cause of B, since by “controlling for” all other possible causes, the correlation between the putative cause A and all the other possible causes (D, E, F …) is broken.
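To see what “controlling for” a confounder amounts to in practice, here is a minimal simulation sketch in Python (all numbers and variable names are invented for illustration): A and B are driven only by a common cause C, so they are correlated in the raw data, but the correlation vanishes once we look within the C = 0 and C = 1 sub-populations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A common cause C, plus A and B that each depend on C but not on each other.
C = rng.binomial(1, 0.5, n)
A = 2.0 * C + rng.normal(size=n)
B = 3.0 * C + rng.normal(size=n)

# Raw correlation: clearly positive, driven entirely by C.
print("corr(A, B):", np.corrcoef(A, B)[0, 1])

# "Controlling for" C: look at the two sub-populations separately.
for c in (0, 1):
    mask = C == c
    print(f"corr(A, B | C={c}):", np.corrcoef(A[mask], B[mask])[0, 1])  # roughly 0
```

The sketch only works, of course, because it is built to know that C is the one and only confounder – precisely the knowledge that, as the next paragraph argues, we can never be sure of having.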

This is of course a very demanding prerequisite, since we may never actually be sure of having identified all putative causes. Even in scientific experiments the number of uncontrolled causes may be innumerable. Since nothing less will do, we all understand how hard it actually is to get from correlation to causality. This also means that relying on statistics or econometrics alone is not enough to deduce causes from correlations.

Some people think that randomization may solve the empirical problem. By randomizing we get different “populations” that are homogeneous with regard to all variables except the one we think is a genuine cause. In that way we are supposedly able to do without actually knowing what all these other factors are.

If you succeed in performing an ideal randomization with different treatment and control groups, that is attainable. But it presupposes that you really have been able to establish – and not just assume – that all causes other than the putative one (A) have the same probability distribution in the treatment and control groups, and that the probability of assignment to treatment or control is independent of all other possible causal variables.

Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes, but it takes ideal experiments and ideal randomizations to do that, not real ones.
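As a minimal illustration of the gap between ideal and real randomization (with invented numbers): in a modest-sized trial, purely random assignment regularly leaves the treatment and control groups noticeably unbalanced on an unobserved background cause U.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 40, 10_000          # a small experiment, randomized many times over
imbalanced = 0

for _ in range(trials):
    U = rng.normal(size=n)                     # an unobserved background cause
    treat = rng.permutation(n) < n // 2        # random assignment, half and half
    diff = U[treat].mean() - U[~treat].mean()  # imbalance in U between the groups
    if abs(diff) > 0.25:                       # an arbitrary imbalance threshold
        imbalanced += 1

print("share of randomizations leaving |imbalance| > 0.25:", imbalanced / trials)
```

Only with very large samples (or averaged over many replications) does randomization deliver the balance that the ideal argument relies on.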

This means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge, we can’t get new knowledge – and, no causes in, no causes out.

John Maynard Keynes was – as is yours truly – very critical of the way statistical tools are used in the social sciences. In his critique of the application of inferential statistics and regression analysis in the early development of econometrics – a critical review of the early work of Tinbergen – Keynes writes:

Prof. Tinbergen agrees that the main purpose of his method is to discover, in cases where the economist has correctly analysed beforehand the qualitative character of the causal relations, with what strength each of them operates. If we already know what the causes are, then (provided all the other conditions given below are satisfied) Prof. Tinbergen, given the statistical facts, claims to be able to attribute to the causes their proper quantitative importance. If (anticipating the conditions which follow) we know beforehand that business cycles depend partly on the present rate of interest and partly on the birth-rate twenty years ago, and that these are independent factors in linear correlation with the result, he can discover their relative importance. As regards disproving such a theory, he cannot show that they are not verae causae, and the most he may be able to show is that, if they are verae causae, either the factors are not independent, or the correlations involved are not linear, or there are other relevant respects in which the economic environment is not homogeneous over a period of time (perhaps because non-statistical factors are relevant).

Am I right in thinking that the method of multiple correlation analysis essentially depends on the economist having furnished, not merely a list of the significant causes, which is correct so far as it goes, but a complete list? For example, suppose three factors are taken into account, it is not enough that these should be in fact verae causae; there must be no other significant factor. If there is a further factor, not taken account of, then the method is not able to discover the relative quantitative importance of the first three. If so, this means that the method is only applicable where the economist is able to provide beforehand a correct and indubitably complete analysis of the significant factors. The method is one neither of discovery nor of criticism. It is a means of giving quantitative precision to what, in qualitative terms, we know already as the result of a complete theoretical analysis.

This, of course, is absolutely right. Once you include all actual causes in the original (over)simple model, it may well be that the causes are no longer independent or linear, and that a fortiori the coefficients in the econometric equations are no longer identifiable. And since not all causal factors are included in the original econometric model, it is not an adequate representation of the real causal structure of the economy that it is purportedly meant to capture.

My line of criticism (and Keynes’s) is also shared by e.g. the eminent mathematical statistician David Freedman. In his Statistical Models and Causal Inference (2010) Freedman writes:

If the assumptions of a model are not derived from theory, and if predictions are not tested against reality, then deductions from the model must be quite shaky. However, without the model, the data cannot be used to answer the research question …

In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …

Regression models often seem to be used to compensate for problems in measurement, data collection, and study design. By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian …

Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …

Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.

And in Statistical Models: Theory and Practice (2009) Freedman writes:

The usual point of running regressions is to make causal inferences without doing real experiments. On the other hand, without experiments, the assumptions behind the models are going to be iffy. Inferences get made by ignoring the iffiness of the assumptions. That is the paradox of causal inference …

Path models do not infer causation from association. Instead, path models assume causation through response schedules, and – using additional statistical assumptions – estimate causal effects from observational data … The problems are built into the assumptions behind the statistical models … If the assumptions don’t hold, the conclusions don’t follow from the statistics.

And as Stephen Morgan and Christopher Winship write in their seminal Counterfactuals and Causal Inference (2007):

Regression models have some serious weaknesses. Their ease of estimation tends to suppress attention to features of the data that matching techniques force researchers to consider, such as the potential heterogeneity of the causal effect and the alternative distributions of covariates across those exposed to different levels of the cause. Moreover, the traditional exogeneity assumption of regression … often befuddles applied researchers … As a result, regression practitioners can too easily accept their hope that the specification of plausible control variables generates an as-if randomized experiment.

Econometrics (and regression analysis) is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity etc.) it delivers deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures like regression analysis may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.
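A small numerical sketch of this “closed model” point (toy numbers, invented for illustration): a linear regression fitted to data generated by a nonlinear mechanism returns a perfectly well-defined coefficient, but that coefficient misdescribes the mechanism as soon as we step outside the sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
x = rng.uniform(-2, 2, n)
y = x**3 + rng.normal(scale=0.5, size=n)   # the true mechanism is cubic, not linear

# OLS of y on x (with intercept), via least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated linear 'effect' of x:", beta[1])

# Internally the linear model is fine, but extrapolated to x = 3 it badly
# misstates what the generating mechanism would actually deliver.
print("linear prediction at x = 3:", beta[0] + beta[1] * 3, "vs. true mean:", 3.0**3)
```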

Most advocates of econometrics (and regression analysis) want to have deductively automated answers to fundamental causal questions. Econometricians think – as David Hendry expressed it in Econometrics – alchemy or science? (1980) – they “have found their Philosophers’ Stone; it is called regression analysis and is used for transforming data into ‘significant results!’” But as David Freedman poignantly notes in Statistical Models: “Taking assumptions for granted is what makes statistical techniques into philosophers’ stones.” To apply “thin” methods we have to have “thick” background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics and regression analysis.

  1. sergio
    December 4, 2012 at 3:02 pm

    Econometrics kills logic and reasoning. I have noticed that whenever I ask anyone who presents a sophisticated econometric model to simply explain, using logic, what the causal relations between the variables are, what the mechanism is that makes them work, and why they think that relation is important in real life, they immediately turn from econometric gurus into brainwashed zombies who take the relations of the model as the textbook suggests them. They cannot logically explain anything. Econometrics turns off brains, turns off economic thinking. Many econometricians spend their whole career simply calculating relations, taking textbook dogma as given. What they do is not economics, and they are not economists. Very often the conclusions are so stupid, and always so ideologically biased, that they shouldn’t even have needed to perform econometric analysis to say what they wanted to say before they started thinking. They never conclude that the free market is inefficient and never conclude that government is good. Econometrically sophisticated papers should always be judged not by the complexity of their models, but by their conclusions. Don’t be charmed by the “scientific” econometric part of a paper; judge the authors’ thinking by their conclusions.
    Neoclassical econometricians’ “logic” can be explained simply as: trees are waving because the wind blows. Their models include what can be observed, what can be calculated, and what there is data for. Whatever cannot be calculated and measured is not an economic variable, and therefore not science. The neo-classical-empirical-imperial-econometric empire is at the latest stage of its methodological and moral decay. It can only explain its own illusion. How can it explain reality?
    Why should such obsolete and primitive thinking be taught in universities in the 21st century?

    • sergio
      December 4, 2012 at 6:15 pm

      In the previous comment I put the metaphor wrong.
      Neoclassical “logic” is “trees make wind blow”.

      • December 5, 2012 at 5:33 am

        You know, G K Chesterton was, in the columns of popular newspapers, fighting these battles while Keynes was still studying probability. “The Wind and the Trees” may be found in “Selected Essays of G K Chesterton”, 1939, Collins, along with another, “The Diabolist”, which points back indirectly to Hume, the probabilist who replaced ‘cause’ by ‘correlation’, and thereby took the step after which he could not “know the difference between right and wrong”. John Guest sums him up in the preface to this wonderfully wise and humorous selection of essays: “A very moderate sense and sanity”, he wrote, “is all I have ever claimed to possess”, but it is possible that in saying so, he did not realise the extent to which these precious qualities are far from commonplace.

        Good for you anyway, Lars, Sergio, Bruce and Jeff. For the record I’m attempting to characterise an all-inclusive, non-quantitative model of everything, late on in Geoff Davies’ blog on “Finding a framework for a new economics”.

      • December 5, 2012 at 10:43 am

        The (absolutely correct!) points made herein are some of the reasons why, post-MBA, I pursued a career on Wall St. rather than in Economics.

        As we all know, there are HUGE problems with economic theory: imaginary principles of perpetual equilibrium and balance, the efficient markets hypothesis, information symmetries, a complete ignorance of human behaviors, etc. … a whole HOST of blatant assumptions that are taken as laws, then effectively glossed over using the indecipherable ‘blind ’em with science’ alchemy of ‘regression analysis’.

        As a young, eager B-School student, I expected a lot more of the field of Economics. But once I found it was ‘garbage in’, I rightly concluded it was ‘garbage out’. I daresay the past 5 years have proved my conclusion right. [says she, as she pats herself on the back …]

        So, as entertained as I was by Dr. Bob Kavesh’s delightful lectures, I went to Wall St. instead, where at least you get paid enough to cover your student loans for garbage out. : ).

  2. Bruce E. Woych
    December 4, 2012 at 4:40 pm

    “This is of course a very demanding prerequisite, since we may never actually be sure of having identified all putative causes. Even in scientific experiments the number of uncontrolled causes may be innumerable. Since nothing less will do, we all understand how hard it actually is to get from correlation to causality. This also means that relying on statistics or econometrics alone is not enough to deduce causes from correlations.”

    Frankly I find that the search for “causality” has been “regressed” into oblivion by so-called statistically ‘inferred’ “deductive” processes… amounting to a numerator over a denominator type of persuasion. I have always felt “causality” was proven only by its consistency in actually predicting a consistent outcome, and that variables were parameters to an invariable reference point or foundation.

    This article is at the heart of the problem of “empiricism” being critiqued as weak sense driven perspectives and ultimately subjective veneers to a more profound and intrinsic reality. However, in the social sciences we must work from a measured empiricism and attempt to establish “invariables” that remain constant despite particular variations or at the core of patterns of variations that become predictable.

    This is a great article; I hope to see some greater degree of “empiricism” established AS the foundation FOR a causality-centered, evidence-based methodology in an emergent unified field theory that we can actually call the social sciences without equivocation.

  3. Jeff Z.
    December 4, 2012 at 9:12 pm

    I agree. To make the discussion a little more concrete, we could ask why, in general, the Black unemployment rate in the U.S. is roughly twice that of whites. First, most people agree that this is a fact. I got my data from the BLS (www.bls.gov) by searching unemployment by race. But this is difficult to prove in court because of all the confounding factors at play.

    Manifestations of discrimination in the course of HISTORY in the U.S. include laws that prohibited mixed-race marriages, laws that prohibited slaves from learning to read and write, laws that prohibited people from TEACHING slaves to read and write, poor schools after slavery was abolished, and hostility and discrimination on the part of some unions, notably the AFL. To even list these requires a fair amount of back-pocket knowledge about U.S. history.

    More recently, many argue that such unjustified discrimination still exists. Some studies show discrimination against black-sounding names in hiring decisions (http://www.law.virginia.edu/pdf/workshops/0708/pager.pdf for one example of research after a Google search). There is other information out there regarding the portrayal of blacks and whites in news sources, in TV and entertainment media, and so forth. See http://www.californiatomorrow.org/media/images.pdf for a thought-provoking Power Point in PDF format.

    But you still have to know something before you can begin a decent statistical analysis. You would need to know that poor educational outcomes count against people in the job market. So there is a mixture of justifiable discrimination based on education (or credentials – another problem I won’t discuss here), and unjustified discrimination based simply on race that prevented a decent education in the past.

    Just looking at that list, as limited as it is, reveals the range of previous knowledge that comes into play when assessing any kind of social scientific reasoning.

    People must be careful, though, because it is SO easy to supply a narrative that may not be right. For example, Joe eats at The China Garden Restaurant every Friday, for a year. His wife Sara pays the credit card bill, and notices the pattern of credit card charges, listed as Highway 8 Motel 6 in Camden, New Jersey. She concludes that Joe is having a torrid affair, but the reality is that the Chinese restaurant is using the same credit card machine as the Motel 6 in order to save a few bucks.

    Then, think on the assumed structure of a market capitalist economy, or the U.S. capitalist economy, to make these examples understandable. CAREFULLY USED statistics can prevent poor reasoning. GOOD reasoning can detect poor use of statistics. You may not be able to discuss all salient factors that cause the unemployment rates of blacks to be twice that of whites in the U.S., but we sure do need a way to track what some of the most important factors are. Without the statistics, we do not even have a diagnostic tool to see if there might be a problem. Without the background knowledge, you wouldn’t know to even begin to suspect racial discrimination as a possible cause, nor know that it might lead to the fact that blacks are imprisoned much more than whites for the same criminal offenses, and to ask how this plays back on job market outcome and educational outcomes.

    • Bruce E. Woych
      December 4, 2012 at 10:35 pm

      Jeff Z.: Your “observations” are too dangerous to standard theoretical models, and I am thinking that all Science (real Science) begins with observations and measurements. This, to me, is where true empiricism would take us if we were actually observing society and measuring it in real time. There is no way things could be ignored, simply because the observations are subject to scrutiny.
      Statistics, on the other hand, is a measuring tool only. Unfortunately and all too often “facts” become fabrications and outcome oriented fictions verify subjectivity with a thin veneer of what then becomes “quantitative” (read sacred)….objectivity!

      In the meantime, omissions are the stuff of “subjective” empiricism and the evidential perspective is transcribed as “anecdotal” (see: http://en.wikipedia.org/wiki/Anecdotal_evidence).

      What a conundrum for the history of ideas!

  4. Bruce E. Woych
    December 5, 2012 at 4:53 pm

    @Dave Taylor: “Just for the record…” I am 100% with you! I might add that a unique “fallacy of distinction” exists in the dichotomy (politicized) version of quality vs. quantity. I have seen this resolved and corrected to quality + quantity in the more positive sense. It is “telling”, however, that in the either/or (history of ideas) “science” disputes only the negatives are argued by either side (usually stressing omissions more than methodological “mission” statements for either reality). In my opinion it is the split that is an error at its base.

    I thought these might be of some stirring interest:

    http://en.wikipedia.org/wiki/Framing_%28social_science%29

    http://www.nizkor.org/features/fallacies/index.html#index

    http://en.wikipedia.org/wiki/Money_illusion

    =================================

    @Lucy Honeychurch
    December 5, 2012 at 10:43 am | #4

    You are so ON the money; and I love that phrase… blinded by… (and I can think of several objects for that preposition). Perhaps you might offer a MORE comprehensive expansion
    to the notion of a “VEIL OF MONEY”? Money does hide all sins so completely!

    I think appeals to authority and institutional manipulations go hand in hand as a market modus operandi & norm.

    The categorical notion of “cognitive bias” has recently been brought into this arena and deserves to be expanded by empirical research and documentation. (see: http://en.wikipedia.org/wiki/List_of_biases_in_judgment_and_decision_making).

    Ethnographic methods are beginning to work on empirical studies of Wall Street Trading settings, and some titles might interest you:

    Liquidated: An Ethnography of Wall Street; 2012:(a John Hope Franklin Center Book)
    Karen Ho (Author)

    Wall Street Women; 2012: (Duke University Press)
    Melissa S. Fisher (Author)

    Out of the Pits: Traders and Technology from Chicago to London; 2010:
    (University Of Chicago Press)
    Caitlin Zaloom (Author)

    Is there a “Scottish Enlightenment” syndrome among women (marginalized, minimalized & exploited) at the heart of these Trading Communities? The Scots rebelled against British arrogance and broke evolutionary stages from rank & file protocol to influence all of Western Civilization’s intellectual history.
    (see: http://en.wikipedia.org/wiki/Scottish_Enlightenment)
    and: (http://www.britannica.com/EBchecked/topic/529682/Scottish-Enlightenment)

    Perhaps you are the next in line for some authentic American contribution to this historic publishing trend?

    • December 7, 2012 at 3:17 pm

      Thanks, Bruce. I’m right with you on “quality + quantity”, and “for the record” I’m just about to post a PREFACE in the “Framework” debate, in which quality will be seen pre-quantitatively as DIRECTIONAL, with “rubber band” coordinates sufficing to direct the right things to the right people.

  5. Asad Zaman
    December 6, 2012 at 1:41 am

    This has been a deep and wide ranging discussion. I have recently published an article entitled Methodological Mistakes and Econometric Consequences which makes many of the points being made in the discussion above: the abstract of the article is:

    ABSTRACT:
    Econometric Methodology is based on logical positivist principles. Since logical positivism has collapsed, it is necessary to re-think these foundations. We show that positivist methodology has led econometricians to a meaningless search for patterns in the data. An alternative methodology which relates observed patterns to real causal structures is proposed.

    The full article can be downloaded from the journal website:
    http://www.era.org.tr/sept2012.htm

  6. Jan
    December 6, 2012 at 8:44 am

    The Art of Good Writing
    “The final thing, in economics, is to have one great truth always in mind. That is, that there are no propositions in economics that can’t be stated in clear, plain language. There just aren’t.”
    Professor John Kenneth Galbraith, Harvard-In Conversation with History-at University of California, Berkeley http://globetrotter.berkeley.edu/conversations/Galbraith/galbraith2.html

  7. pete
    December 6, 2012 at 8:13 pm

    Thank you very much for the interesting points stated here about the dubiety of econometrics.

    I am about to decide to do a Master’s program with a focus on econometrics, since this is, in my opinion, the part of economics – as taught in a mainstream Bachelor’s – which is most likely the most feasible one. But again you opened my eyes to its fragile foundation. Yet, as mentioned, statistics are needed as a diagnostic tool. So I won’t give up hope that, while doing the Master’s, I can take a closer look at the assumptions behind it and find better approaches to interpreting the transmission mechanisms behind economic behaviour.

    If there are any seminal (of course heterodox) works on criticism of econometrics you would recommend, I’d be very thankful!

  8. December 7, 2012 at 12:12 pm

    These works might be of interest for anyone who wants to evaluate econometrics critically:

    Cartwright, Nancy (2007), Hunting Causes and Using Them. Cambridge: Cambridge University Press.

    Freedman, David (2010), Statistical Models and Causal Inference. Cambridge: Cambridge University Press.

    Keuzenkamp, Hugo A. (2000), Probability, Econometrics and Truth. Cambridge: Cambridge University Press.

    Pålsson Syll, Lars (2011), Modern probabilistic econometrics and Haavelmo – a critical realist critique. [http://larspsyll.wordpress.com/2011/05/15/modern-probabilistic-econometrics-and-haavelmo-a-critical-realist-critique/]

    • pete
      December 9, 2012 at 1:19 pm

      Thank you again for this selection!

      I’d like to ask one further question: Could you recommend any heterodox Master program which focuses on econometrics? Any English-speaking one in Europe?

      • December 9, 2012 at 1:32 pm

        Sorry, but to my knowledge there isn’t any. Which, of course, is an absolute disgrace! The total dominance of neoclassical economics is devastating. Take for example Cambridge University, where one would have thought the economics of its greatest economist ever, J M Keynes, would be alive and healthy. But no, instead a new generation of freshwater copies has more or less taken over. Unbelievable and sad.

  9. Britonomist
    December 9, 2012 at 12:42 am

    Why don’t you actually address the modern econometric literature? Everything you say here is basic knowledge for all econometricians, and you have seemingly ignored the last 50 years of advances in econometrics ( http://ftp.iza.org/dp4800.pdf ). Of course different regression models require different assumptions, which is why numerous different types of analysis are usually applied – sensitivity tests, panel data analysis, Granger causality, co-integration, models robust to endogeneity & autocorrelation, natural experiments – as well as numerous tests for model mis-specification, omitted variables, unit roots, endogeneity, normality. Keynes might have been accurate about the state of econometrics in the thirties, but as many have pointed out he was criticizing a very limited early form of econometrics which had at the time only very limited data available. Econometrics is scarcely used for inference, rather it is used to add or take away weight from an already established economic model: if a model provides quantitative predictions about a set of variables, and if this doesn’t hold up when testing the model using econometrics, then we know that either the model is wrong about how the variables interact with each other or there are variables exogenous to the model that need to be taken into account for it to be accurate. This is USEFUL and helps us build better models.
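    For readers unfamiliar with the diagnostics mentioned above, a minimal hand-rolled sketch of one of them – a RESET-style misspecification test – is given below (the data-generating process and all names are invented for illustration, not taken from any particular package): powers of the fitted values are added to the regression and tested for joint significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(size=n)   # true model has a quadratic term

def ols(X, y):
    """Least-squares coefficients and residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

# Restricted model: linear in x only.
X0 = np.column_stack([np.ones(n), x])
_, e0 = ols(X0, y)

# RESET-style augmentation: add squared and cubed fitted values.
fitted = y - e0
X1 = np.column_stack([X0, fitted**2, fitted**3])
_, e1 = ols(X1, y)

# F test of the two added regressors; a small p-value flags the missing term.
q, k = 2, X1.shape[1]
F = ((e0 @ e0 - e1 @ e1) / q) / (e1 @ e1 / (n - k))
print("RESET-style F =", F, " p =", stats.f.sf(F, q, n - k))
```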

    • Bruce E. Woych
      December 10, 2012 at 1:57 am

      Brit: Intentionality is a critical factor. I think the intricate (and intimidating) data-processing methodology you list is both a question of complexity and a syndrome of complexity at the same time. Intensive analytical material data micro-manages the perspective but ultimately leads right back up to the presuppositions and presumptions of the original intentions. While the list of technical data tests is indeed impressive in its sophistication and specialization levels, the massive accumulation of data is only as good as a “deconstructive” perspective and cannot recreate emergent scope or “phase changes” for transitional interpretation and translation. With all that “cabbage” up there that you mention, I hesitate to foolishly call it one-dimensional prospecting; but it must be utilized as a tool, and one might wonder how any “tool” (no matter how sophisticated…) can be the meeting ground for an entire school of thought.

      In real time, we shall see if “models” are (in fact) tested for accuracy and predictive value, and at what level. Keep in mind, however, that standard business “survey data” collection for markets is also only tentatively accurate and predicated upon some degree of market prediction and probability. In effect, I think you have it all backwards. Econometrics will not challenge standing causal models of probability as if the econometrics were the measure of final judgement and retort. In that arrangement I would suspect the “intentionality” behind the econometrics and its methodology or data of bias. However, I do believe that there is a place for econometrics to establish credibility (or lack thereof…) for claims made routinely in the name of science or authority that are not based upon credible evidence-based models.

      Unfortunately, polls, not econometrics, lead the national consciousness toward opinion-oriented thinking that is skewed intrinsically with mind-numbing ideological self-service and class-oriented presuppositions. Ultimately the reductive mechanics of tautological methodology is subject to pre-suppositional and pre-dispositional expectation.

      Finally, all the fanciful “…blind-’em-with-formulas” are not going to convince everyone that money and influence do not have uses for econometrics as econome-tricks… and that all comes back down to the “intentionality” of processing an outcome within a controlled frame of reference… such as fracking?

    • Asad Zaman
      December 10, 2012 at 2:04 am

      @Britonomist: My article on “Methodological Mistakes and Econometric Consequences” does address modern econometrics, although the Angrist–Pischke work is only peripherally discussed. But this is also because this work is currently of marginal importance in mainstream econometrics. My article is available from:

      http://www.era.org.tr/sept2012.htm

    • December 10, 2012 at 6:44 am

      At least since the time of Keynes’ famous critique of Tinbergen’s econometric methods, those of us in the social science community who have been impolite enough to dare question the preferred methods and models applied in quantitative research in general, and econometrics more specifically, are as a rule met with disapproval. Although people seem to get very agitated and upset by the critique – just read the commentaries on this blog if you don’t believe me – defenders of “received theory” always say that the critique is “nothing new”, that they have always been “well aware” of the problems, and so on, and so on. So, for the benefit of all you mindless practitioners of econometrics and statistics – who don’t want to be disturbed in your doings – eminent mathematical statistician David Freedman has put together a very practical list of vacuous responses to criticism that you can freely use to save your peace of mind:

      “We know all that. Nothing is perfect … The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. The biases will cancel. We can model the biases. We’re only doing what everybody else does. Now we use more sophisticated techniques. If we don’t do it, someone else will. What would you do? The decision-maker has to be better off with us than without us … The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?”

      The modelers’ response

    • December 10, 2012 at 6:51 am

      And re Angrist/Pischke:

      Suppose we want to estimate the average causal effect of a dummy variable (T) on an observed outcome variable (O). In a usual regression context one would apply an ordinary least squares estimator (OLS) in trying to get an unbiased and consistent estimate:

      O = α + βT + ε,

      where α is a constant intercept, β a constant “structural” causal effect and ε an error term.

      The problem here is that although we may get an estimate of the “true” average causal effect, this may “mask” important heterogeneous effects of a causal nature. Although we get the right answer of the average causal effect being 0, those who are “treated” (T=1) may have causal effects equal to –100 and those “not treated” (T=0) may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
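      To make the point concrete, here is a minimal simulation sketch (the observable subgroup indicator D and all numbers are invented for illustration): regressing O on T alone returns an average effect close to zero even though the effect is +100 in one subgroup and −100 in the other, while an interaction term recovers the two subgroup effects when D happens to be observed.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
D = rng.binomial(1, 0.5, n)        # an observable subgroup indicator (illustrative)
T = rng.binomial(1, 0.5, n)        # randomized treatment dummy
effect = np.where(D == 1, 100.0, -100.0)
O = effect * T + rng.normal(scale=5.0, size=n)

# OLS of O on T alone: the "average" causal effect, close to zero here.
X = np.column_stack([np.ones(n), T])
beta, *_ = np.linalg.lstsq(X, O, rcond=None)
print("average effect of T:", beta[1])

# With the interaction T*D included, the two subgroup effects reappear.
Xi = np.column_stack([np.ones(n), T, T * D])
bi, *_ = np.linalg.lstsq(Xi, O, rcond=None)
print("effect when D=0:", bi[1], "  effect when D=1:", bi[1] + bi[2])
```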

      So when e.g. Joshua Angrist and Jörn-Steffen Pischke in “The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics” (Journal of Economic Perspectives, 2010, 24) say that (p. 23)

      “anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future”

      I really think they underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to “export” regression results to different times or different target populations. It is also often an internal problem to the millions of OLS estimates that economists produce every year.

      Heterogeneity problems (wonkish)

      • no name
        December 10, 2012 at 9:25 pm

        Completely wrong. In your treatment-versus-non-treatment scenario, the result for β would be 200, not 0, because β represents the difference between T=1 and T=0.

        If you were capable of stringing a coherent thought together, you would probably say that you meant that if there was a difference in the treatment effect along dimension D, where T=1 and D=1 had a causal effect of 100, and T=1 and D=0 had a causal effect of -100, AND D was distributed evenly between 0 and 1, then yes, you would end up with a β of 0. But:

        1. You can easily estimate the regression equation O = α + β1T + β2(T x D) + ε and capture this effect, if D is observable;

        2. If D is unobservable, then we have no way of accounting for it anyway, and so the average effect of 0 is as informative as we’re gonna get;

        and 3. Either way, you’ve taken that quote well out of context. Here’s the full quote:

        “Perhaps it’s worth restating an obvious point. Empirical evidence on any given causal effect is always local, derived from a particular time, place, and research design. Invocation of a superficially general structural framework does not make the underlying variation or setting more representative. Economic theory often suggests general principles, but extrapolation of causal effects to new settings is always speculative. Nevertheless, anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future.”

        A & P’s use of ‘heterogeneity’ here is not about intra-sample heterogeneity, it’s about differences across scenarios, the ability to extrapolate from an analysis to make predictions about the same treatment on another sample. As usual, an RWER regular misrepresents modern work to score points. I guess that’s better than the usual policy here, which is to not engage modern work at all.

  10. Jeff Z.
    December 10, 2012 at 4:29 am

    @Britonomist: Thank you for your thoughtful response. Almost anything I could say has been better said by Bruce Woych and Asad Zaman. There are other critiques that take up the issue of advances in econometric reasoning and practice, and a fair few appear in articles published here (http://www.paecon.net/PAEReview/).

    Another wrinkle in this is that in my post above, discussion of the Black unemployment rate in the U.S. depends on a whole host of commonly understood terms, which is an ontological question. What is Black and how do we know? Is there a common marker, genetic or otherwise, that gives that idea (race) scientific meaning? (See http://en.wikipedia.org/wiki/Lewontin%27s_Fallacy for a brief discussion.) What about the problem that in answers to surveys from the U.S. Census Bureau race is largely selected by the participant, not classified by the observer? How does this overlap with the obvious social meaning that being black has in the United States because of the trajectory of U.S. history? Whether or not the idea of race is scientifically valid, people still ascribe meaning to it, and it affects their behavior as a result.

    From Britonomist: “Econometrics is scarcely used for inference, rather it is used to add or take away weight from an already established economic model: if a model provides quantitative predictions about a set of variables, and if this doesn’t hold up when testing the model using econometrics, then we know that either the model is wrong about how the variables interact with each other or there are variables exogenous to the model that need to be taken into account for it to be accurate. This is USEFUL and helps us build better models.” But this misses the third logical possibility and another bugaboo of science in general – that the data is WRONG. This is different from the omission problem, because it means that the data that IS included is somehow inaccurate. It does not tell you what you think it does. Even so, this kind of a result would be tremendously USEFUL if it is more widely known. How many negative results, where a hypothesis does not work the way you expected it to, are actually published and disseminated? Thus, a researcher’s judgment is involved, and this could be flawed because of personal biases. The check is supposedly peer review, but if your peers all have the same biases, this does nothing to weed out biases that are SOCIALLY accepted in a group. Amartya Sen made this point about Rawls’ idea of the Original Position, in an article titled “Open and Closed Impartiality” appearing in the Journal of Philosophy in 2002. If you have access to JSTOR, you can find it here: http://www.jstor.org/stable/pdfplus/3655683.pdf?acceptTC=true.

    There is the further question of statistical significance versus theoretical significance. On this see any number of essays by Deirdre McCloskey.

  11. Asad Zaman
    December 23, 2012 at 4:08 am

    I recently came across the following two articles, which seem closely related to this discussion:

    De Long, J. Bradford, and Kevin Lang. “Are all economic hypotheses false?.” Journal of Political Economy (1992): 1257-1272.

    We develop an estimator that allows us to calculate an upper bound to the fraction of unrejected null hypotheses tested in economics journal articles that are in fact true. Our point estimate is that none of the unrejected nulls in our sample is true. We reject the hypothesis that more than one-third are true.

    ADDITIONALLY — directly related to the Angrist-style optimism — evaluations by top-ranked econometricians showing that these methods are also unlikely to succeed:

    Heckman, James J., Sergio Urzua, and Edward J. Vytlacil. Understanding instrumental variables in models with essential heterogeneity. No. w12574. National Bureau of Economic Research, 2006.

    This paper examines the properties of instrumental variables (IV) applied to models with essential heterogeneity, that is, models where responses to interventions are heterogeneous and agents adopt treatments (participate in programs) with at least partial knowledge of their idiosyncratic response. We analyze two-outcome and multiple-outcome models including ordered and unordered choice models. We allow for transition-specific and general instruments. We generalize previous analyses by developing weights for treatment effects for general instruments. We develop a simple test for the presence of essential heterogeneity. We note the asymmetry of the model of essential heterogeneity: outcomes of choices are heterogeneous in a general way; choices are not. When both choices and outcomes are permitted to be symmetrically heterogeneous, the method of IV breaks down for estimating treatment parameters.

    Deaton, Angus S. Instruments of development: Randomization in the tropics, and the search for the elusive keys to economic development. No. w14690. National Bureau of Economic Research, 2009.

    Abstract:
    There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues or of development agencies to learn from their own experience. In response, there is increasing use in development economics of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without overreliance on questionable theory or statistical methods. When RCTs are not possible, the proponents of these methods advocate quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces similar problems as does quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development and elsewhere. As with IV methods, RCT-based evaluation of projects, without guidance from an understanding of underlying mechanisms, is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and toward the evaluation of theoretical mechanisms.

    • Bruce E. Woych
      December 23, 2012 at 3:49 pm

      Asad Zaman: Methodology is often either critically neglected or utilized for obfuscation and for deriving a plausible framework for some desired, outcome-based conclusion. As such, the simple caveat of garbage-in, garbage-out can be phrased in many sophisticated ways. It is still the real problem between inference and intentionality (marketed or otherwise skewed) that is often served up as a politicized compromise, ‘relative perspectives’ or agreements to disagree in the polite academic circles of entrenchment and intellectual tolerances. A lot of rhetoric at the core of further presuppositions and presumptions lends itself to distortions that appear to mutate real “disinformation”, morphed through consensus, into honest “misinformation” processed as “talking points” (to reduce it to common-ground acceptance). Some reach heights of certainty without merit, but some validities are also abused (“correlations do not imply causality” is often utilized to dismiss correlations altogether when, in fact, correlations are steps towards a realistic search for causality). It is interesting to note that perspectives in finance often flux between views that have validity and utility but no “proof” in measures. (The finance perspective of “a random walk in the park” is a good example of what might be considered a qualitative check against the pure reality of rational numbers: see http://www.investopedia.com/university/concepts/concepts5.asp#axzz2FsjH8Znq)
      Questions of information asymmetry; static rationalization; asymmetrical rational choice; signal-to-noise ratios; cognitive dissonance; game theory and a host of ‘quantifiable data’ take the place of empirically demonstrated levels of real predictability. The real problem is that in the process the baseline of Quality and Quantity has been confused with “pure” rational numbers as proof (concentration & circularity). The real question includes intentionality and utility. In my mind I have to wonder why these parameters are funded in the first place and what direct purpose they serve subjectively. That does not mean they cannot be valuable for objective assessments; but it does present the challenge that the outcome must be scrutinized for legitimacy in its own right… before it “legitimizes” some subjectively, adversely selected appeal to authority in turn.

      Your statement: “without guidance from an understanding of underlying mechanisms, is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and toward the evaluation of theoretical mechanisms.”

      …should be the theoretical mantra in the march towards an open field science of culture (universal field theory) for the advancement of knowledge and evidence based methods.

      Methods that methodically move from observation > patterns > correlations > hypothesis > interdependent and independent variables > theorems & “dynamic” qualitative theory must not only be empirically testable in real time, but also such that (with reverse engineering) we can prove that the observable is actually still capable of being isolated deductively (back to observation…), so that probability & predictability might be adduced in the full process of change, contextual transition and transformative contests (potentially) mediated by aggressive self-service, environments, distortions, and pure corruptions at the base.

      Meanwhile, in regard to your bringing attention to the critical assessment of foreign-aid selection and the objective credibility of RCTs from econometrics to produce a foundation for just allocations, I would like to remind us all of the “intentionality” that drives those consensus-seeking RCTs:

      A little history based on assessing contemporary theoretical precepts should begin with one of the perspectives of a founding father: Milton Friedman. It speaks, I think, for itself.

      Friedman's Foreign Economic Aid: Means and Objectives

      http://books.google.com/books?hl=en&lr=&id=uTQiUNs5IzgC&oi=fnd&pg=PA1&dq=freidman+foreign+economic+aid+means+and+objectives&ots=m6qNXAS9pz&sig=dtel4YXmEPsl002kD8NXJpW4ugM#v=onepage&q=&f=false

    • December 23, 2012 at 6:46 pm

      Asad: Deaton has some interesting stuff on RCT on this video (that may be of interest also to a wider audience): http://www.youtube.com/watch?v=2Js-AxZcmr8
      And I have a go at it myself here:

      Let’s take the con out of randomization

      • Bruce E. Woych
        December 23, 2012 at 10:31 pm

        Lars P Syll: The YouTube video was somewhat long but worth the sitting; definitely a support reference with “real” evidence on which to base further considerations.

        Your own article is right to the point and worth quoting:

        ” Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

        But just as econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity etc) these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right.”

        Let’s take the con out of randomization


        ————————————————–
        Briefly, I may suggest to you that the definition of terms is a major problem in economics, as it is, very deliberately, in political (and politicized) discourse. The use of “evidence” is one of these that really matters. You may well be aware that “evidence-based” practice comes out of epidemiology. It originally (abbreviated) derived from the fact that epidemiologists in the field had their lives on the line. You may also know that a good deal of research that is fairly new or brand-new working knowledge does not make it into the mainstream practice of medicine for some 5 years or more.
        The inside scoop here is that epidemiologists did not want to hear that there was knowledge “pending” and worked to get ALL the “evidence” that they could get to protect themselves.
        A Canadian named Cochrane formalized this as a gold standard for general practice. The trend goes back to the 70s, and the pioneering efforts by C.B. Stetler, termed Models of Research Utilization, were clearly emerging as the foundation for standardizing the process; but while causal science was clearly the targeted baseline for the researchers, and there was clearly a split between quantitative and qualitative “science”, issues over “practice variances” (including error and the reliability of resource material) became a critical core problem in real-time practice.

        “Best practice” and “outcome-based consensus” began to influence the criteria for “evidence” in EB-P, and eventually authoritative opinion was recognized, since only those doing the research had any insights – and the “science” was still lab-bound or at medical university centers. (But this later becomes professionalized grounding for peer review and the obfuscation of the causal-science base towards the so-called “gold standard” of what amounts to collective consensus (… routinely stated with total confidence and authority …) of peer-reviewed papers and randomized controlled trials (RCTs).) Ironically, these may actually LEAVE OUT a serious independent causal paper when it does not fulfill a certain set of technically standardizing rules. Methods chosen became “protocol evidence”, and potentially real evidence could be blocked for technical reasons!

        The years between 1980 and 1995 are critical. After 1995 the idea of “evidence” is no longer reliable. However, this should not stop us from engaging the original idea of a strict, science-based, evidence-based practice (specifying, whenever EB-P is claimed, precisely what hierarchy of evidence has been permitted or excluded).

        (This is from my own unpublished research, but everyone and anyone is welcome to adopt the basics.)
        Precise definition of terms and clarity of methods (along with strengths and weaknesses evaluated) are hallmarks of good science. But time and time again the human element will degrade and obscure standards of excellence towards distortion and corruption… while the road to hell gets paved with good intentions!

        Best Wishes: Happy new Year!


