
The methods economists bring to their research

from Lars Syll

There are other sleights of hand that cause economists problems. In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.

Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time: what happens when a region is hit with a bad harvest. The particular exogenous shock used in the research may not be representative; for example, income shortfalls not caused by water scarcity can have different effects on conflict than rainfall-related shocks.

So, economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

Dani Rodrik / Project Syndicate

Nowadays it is widely believed among mainstream economists that the scientific value of randomisation, unlike that of other methods, is totally uncontroversial and that randomised experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘experimental turn’ in economics. Strictly speaking, randomisation does not guarantee anything.

As Rodrik notes, ‘ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is never easy. Causes deduced in an experimental setting still have to be shown to come with an export warrant to their target populations.

The almost religious belief with which its propagators — like 2019’s ‘Nobel prize’ winners Duflo, Banerjee and Kremer — portray it cannot hide the fact that randomised controlled trials (RCTs) cannot be taken for granted to give generalisable results. That something works somewhere is no warrant for believing it will work for us here, or even that it works generally.

The present RCT idolatry is dangerous. Believing there is only one really good evidence-based method on the market — and that randomisation is the only way to achieve scientific validity — blinds people to searching for and using other methods that in many contexts are better. RCTs are simply not the best method for all questions and in all circumstances. Insisting on using only one tool often means using the wrong tool.

‘Nobel prize’ winners like Duflo et consortes think that economics should be based on evidence from randomised experiments and field studies. They want to give up on ‘big ideas’ like political economy and institutional reform and instead solve more manageable problems, the way plumbers do. But that modern-day ‘marginalist’ approach can hardly be the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing RCTs.

The point of running a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and its effect indicates a causal relation. This is believed to hold because randomization (allegedly) ensures that the supposed causal variable does not correlate with other variables that may influence the effect.
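
To make this rationale concrete, here is a minimal simulation sketch. It is not from the post, and the trial size and covariate are invented for illustration: averaged over many hypothetical randomizations the assignment is indeed uncorrelated with a background covariate, yet any single randomization (the only one an actual trial gets) can still leave a noticeable imbalance between the groups.

```python
# Minimal, hypothetical simulation (names and numbers invented for
# illustration).  Randomization balances a background covariate across
# treatment and control *on average over many randomizations*, but a
# single randomization can still produce a noticeable imbalance.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                  # a small trial
covariate = rng.normal(size=n)          # some background characteristic

def covariate_imbalance():
    """Randomly assign half the sample to treatment and return the
    difference in covariate means between treated and control units."""
    treated = rng.permutation(n) < n // 2
    return covariate[treated].mean() - covariate[~treated].mean()

imbalances = np.array([covariate_imbalance() for _ in range(10_000)])
print("mean imbalance over 10,000 randomizations:", round(imbalances.mean(), 3))
print("imbalance in one particular randomization:", round(covariate_imbalance(), 3))
print("share of single randomizations with |imbalance| > 0.25:",
      round(float((np.abs(imbalances) > 0.25).mean()), 2))
```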

The problem with this simplistic view of randomization is that the claims made for it are both exaggerated and false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is — except in extremely rare cases — not random. Even with a proper randomized assignment, applying the results to a biased sample always carries the risk that the experimental findings will not hold: what works ‘there’ does not necessarily work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideally random way, a standard randomized experiment only gives you averages. The problem is that although we may get an estimate of the ‘true’ average causal effect, this average may ‘mask’ important heterogeneous causal effects: the average causal effect may be correctly estimated as 0 even though some individuals experience causal effects of -100 and others causal effects of +100 (the simulation sketch after this list illustrates the point). Contemplating being treated or not, most people would probably want to know about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings a little bias is often a price worth paying for greater precision. And — most importantly — if the population exhibits sizeable heterogeneity, the average treatment effect in the trial sample may differ substantially from the average treatment effect in the population (see the sketch after this list). If so, the value of any inference extrapolated from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials are built on a single randomization, appeals to what would happen if you kept on randomizing forever do not ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do should reassure you about what you actually do.
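
A small, purely illustrative simulation can make two of the points above concrete; the sub-groups, effect sizes and sampling scheme are all invented for the example. The population-average effect is exactly zero even though every individual effect is plus or minus 100, and an idealised RCT run inside a non-randomly recruited sample recovers that sample’s average effect rather than the population’s.

```python
# Hypothetical simulation (all numbers invented for illustration).  Two points
# from the list above: (1) an average treatment effect of zero can mask large
# individual-level effects, and (2) an ideal RCT inside a non-randomly
# recruited sample estimates that sample's average effect, which can be far
# from the population-average effect it is often taken to stand for.
import numpy as np

rng = np.random.default_rng(1)

# A population split into two equally large sub-groups whose individual
# causal effects are +100 and -100, so the population-average effect is 0.
n_pop = 100_000
subgroup = np.repeat([0, 1], n_pop // 2)
individual_effect = np.where(subgroup == 1, 100.0, -100.0)
print("population-average effect:", individual_effect.mean())

# A biased trial sample that heavily over-represents sub-group 1, e.g.
# because recruitment happened 'there' rather than 'here'.
weights = np.where(subgroup == 1, 0.8, 0.2)
sample_idx = rng.choice(n_pop, size=1_000, replace=False,
                        p=weights / weights.sum())
sample_effect = individual_effect[sample_idx]

# An idealised RCT within that sample: random assignment, additive effects.
treated = rng.permutation(len(sample_idx)) < len(sample_idx) // 2
outcome = rng.normal(size=len(sample_idx)) + sample_effect * treated
rct_estimate = outcome[treated].mean() - outcome[~treated].mean()
print("RCT estimate within the biased sample:", round(rct_estimate, 1))
```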

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be skeptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not. So, as Rodrik has it:

Economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.

  1. bruceolsen
    April 1, 2021 at 7:47 pm

    As a non-economist trying to understand the problems with neoclassical economics well enough to explain them to the general public, I find that most ordinary citizens are quite surprised at how poorly the average value characterizes a set of observations.

    • April 1, 2021 at 8:19 pm

      According to https://en.wikipedia.org/wiki/Neoclassical_economics, some advocates rely on counter-intuitive assumptions. I have always found game theory a good antidote. I have some quotes at https://djmarsay.wordpress.com/mathematics/maths-subjects/game-theory/von-neumann-morgenstern/ which seem to me adequate.

      I’d appreciate any comments on my blog, or directly, but there is also a debate at https://rwer.wordpress.com/2021/03/14/on-the-use-of-logic-and-mathematics-in-economics-2/ which illuminates some of the issues. But maybe someone can put it more plainly?

      • bruceolsen
        April 1, 2021 at 11:05 pm

        Steve Keen’s “Debunking Economics” is good for a reader who wants to put the effort into following his arguments, but it’s not really for the general public (which he clearly states in the edition I have).

        He can be a bit polemical at times, which tends to discredit his arguments in the eyes of the general public; that is unfortunate given the value of his message. But I don’t blame him, given the way the economics profession seems to be organized.

      • April 2, 2021 at 12:36 pm

        Thanks, Bruce. I think I understand this of his:

        “Look! Up there in the sky! What is it? Is it a plane? Is it a bird? No, it’s a distraction from the robbery that is taking place in broad daylight on the ground.”

        From a game theory point of view, one problem we have is that neoclassical ‘rational’ approaches to things like pandemics don’t work (as the UK has been finding); one does better to ‘muddle through’. But generations of us have been taught to distrust muddle, and – at least in the UK – we have an education system that almost seems designed to discourage muddled thinkers, so no wonder they tend not to get into positions of influence.

        There have been attempts to educate people on the difference between ‘good’ and ‘bad’ muddled thinking, but more recently ‘they’ have fallen back on denigrating all muddlers.

        But maybe (as with the Spanish flu) Covid will lead to a greater appreciation of these issues. And maybe that would help? But who explains this better than Keynes, Von Neumann, Turing et al?

  2. April 1, 2021 at 8:28 pm

    Lars. Mathematically, the obvious objection to ‘randomisation’ is that it will be biased unless you have a uniform distribution, which presupposes that you have an appropriate measure, which often you don’t. This is similar to the objection to neoclassical science, which is that it sometimes applies https://en.wikipedia.org/wiki/Occam%27s_razor in circumstances where the notion of ‘simplicity’ seems dubious.

    But this is not to say that even a biased randomization might not be usefully informative, provided that one pays attention to the appropriate mathematics (e.g., as in my comment above).

  3. Gerald Holtham
    April 2, 2021 at 4:08 pm

    Of course controlled experiments cannot guarantee that the results would hold in a different context. But do they improve our understanding and our chances of getting policies right? It would be hard to argue that a well-conducted trial is not informative, at least about the situation to which it is applied and similar ones. Also, why set up an opposition between trials to determine specific policies and more general institutional research? They are not contradictory. Duflo was arguing against the generality and vacuity of much economic theory, not ruling out considering big issues. If I cannot cure cancer, am I forbidden to clear up someone’s tuberculosis?
    I am intrigued by Lars’s remark that RCT idolatry “blinds people to searching for and using other methods that in many contexts are better.” Lars doesn’t like field trials and he doesn’t like statistical analysis, but he’s all in favour of economics becoming more empirical and grounded in reality. So what are the “other methods” that in many contexts are better? Come on Lars, we agree on the objective, so don’t be a tease. What are these better methods?

  4. Gerald Holtham
    April 5, 2021 at 5:26 pm

    No answer from Lars, as usual. What a nihilist!


