
Experiments in social sciences

from Lars Syll

How, then, can social scientists best make inferences about causal effects? One option is true experimentation … Random assignment ensures that any differences in outcomes between the groups are due either to chance error or to the causal effect … If the experiment were to be repeated over and over, the groups would not differ, on average, in the values of potential confounders. Thus, the average of the average difference of group outcomes, across these many experiments, would equal the true difference in outcomes … The key point is that randomization is powerful because it obviates confounding …

Thad Dunning’s book is a very useful guide for social scientists interested in research methodology in general and natural experiments in particular. Dunning argues that since random or as-if random assignment in natural experiments obviates the need for controlling potential confounders, this kind of “simple and transparent” design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.
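To see what the ‘on average, across many experiments’ claim in the quotation amounts to, here is a minimal simulation sketch (the effect size, the sample size and the unobserved confounder are all hypothetical, not taken from Dunning):

import numpy as np

rng = np.random.default_rng(0)
true_effect = 2.0                 # hypothetical treatment effect
n, n_experiments = 50, 10_000

estimates = []
for _ in range(n_experiments):
    confounder = rng.normal(size=n)          # unobserved influence on the outcome
    treated = rng.permutation(n) < n // 2    # random assignment, independent of the confounder
    outcome = true_effect * treated + confounder + rng.normal(size=n)
    estimates.append(outcome[treated].mean() - outcome[~treated].mean())

print(np.mean(estimates))   # close to 2.0: the difference in means is unbiased across repetitions
print(np.std(estimates))    # but individual estimates scatter widely around the truth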

But — there is always a but …

The point of running a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold because randomization (allegedly) ensures that the supposed causal variable does not correlate with other variables that may influence the effect.

The problem with this simplistic view of randomization is that the claims made for it are exaggerated and sometimes even false.

Since most real-world experiments and trials rest on a finite amount of randomization, what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do should reassure you about what you actually do.
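A small sketch of the point, with made-up numbers: in any single finite randomization the treatment and control groups can easily differ, just by chance, on an unobserved confounder.

import numpy as np

rng = np.random.default_rng(1)
n = 40                                   # a realistically small trial
confounder = rng.normal(size=n)          # e.g. unobserved baseline ability
treated = rng.permutation(n) < n // 2    # the one randomization you actually perform

# Chance imbalance on the confounder in this single assignment,
# the very thing that only washes out 'on average' over hypothetical repetitions.
print(confounder[treated].mean() - confounder[~treated].mean())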

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ‘super populations’ is one of the many dubious assumptions used in modern econometrics.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness in terms of probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of — and, strictly speaking, do not exist at all — without specifying such system-contexts. Accepting a domain of probability theory and a sample space of infinite populations also implies that judgments are made on the basis of observations that are actually never made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable. Why should we as social scientists — and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems — unquestioningly accept models based on concepts like the ‘infinite super populations’ used in e.g. the ‘potential outcome’ framework that has become so popular lately in social sciences?

One could, of course, treat observational or experimental data as random samples from real populations. I have no problem with that (although it has to be noted that most ‘natural experiments’ are not based on random sampling from some underlying population — which, of course, means that the effect estimators, strictly speaking, are unbiased only for the specific groups studied). But probabilistic econometrics does not content itself with that kind of population. Instead, it creates imaginary populations of ‘parallel universes’ and assumes that our data are random samples from that kind of ‘infinite super population.’
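For readers who do want design-based inference that refers only to the specific groups studied, with no appeal to an imaginary super population, randomization (permutation) inference is one way to get it. A minimal sketch, with a hypothetical function name and hypothetical data:

import numpy as np

def permutation_p_value(outcome, treated, n_perm=10_000, seed=0):
    """Randomization test: the reference distribution comes from re-randomizing
    the units actually studied, not from sampling an infinite population."""
    rng = np.random.default_rng(seed)
    observed = outcome[treated].mean() - outcome[~treated].mean()
    hits = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(treated)
        diff = outcome[shuffled].mean() - outcome[~shuffled].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm

The p-value this returns refers to the assignment actually made among these particular units, and says nothing about any wider, unobserved population.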

In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

  1. ghholtham
    January 23, 2020 at 6:29 pm

    Lars continues to speak the language of “ensure” and “guarantee”. No scientific experiment guarantees anything. He talks of a “sound inductive basis”, whatever on earth that is. In practice you observe a regularity and think about it. That leads to a hypothesis: e.g. this regularity occurs because X causes Y. If you do not believe that X is the only influence on Y you have to specify a relation involving the other variables of primary relevance. The hypothesis may be such as to suggest a functional form for the relationship. Quite often it will not, in which case, in the interests of simplicity, you adopt the simplest plausible form. You must specify the domain of your hypothesis in time or space. The more general your claims, the easier it will be to refute the hypothesis. You must then find or generate a data set covering as much of the domain as possible. This is not a random sample, whatever that is, but it is supposed to be representative of the domain of the hypothesis. It might not be; you might hit sample bias, and the smaller the data set or the bigger the claim of generality, the greater the risk. Nothing to be done about that – it’s life. You can test the hypothesis on the data, and if it passes, nothing is guaranteed, but you are allowed to maintain your hypothesis until its limitations are exposed and a better one comes along. You can test for functional form if you think the simplest form may be inappropriate.

    The assignment of precise “degrees of confidence” to the hypothesis is based on assumptions about the reasons for and nature of observed deviations from the hypothesis that you may be unable to test. But you don’t rely on precise degrees; you set a level of confidence below which you consider the hypothesis uncorroborated and another level at which you consider the hypothesis decisively rejected. Data do not generate hypotheses; people do. But data are indispensable for testing them. The point is not that this sort of procedure is airtight but that it is the best we can realistically do. If someone comes up with a better method of testing hypotheses I’m sure it will be adopted. If you reject statistical methods, does that mean we can all believe what we like?
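    A minimal sketch of that procedure with made-up data: adopt the simplest plausible (linear) form, test it against a richer form, and work with two thresholds rather than a single magic number (all numbers here are hypothetical):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, size=200)
    y = 1.0 + 0.5 * x + rng.normal(size=200)                # hypothetical data set

    def rss(design, y):
        """Residual sum of squares from an OLS fit of y on the design matrix."""
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.sum((y - design @ beta) ** 2)

    X_lin = np.column_stack([np.ones_like(x), x])           # simplest plausible form
    X_quad = np.column_stack([np.ones_like(x), x, x ** 2])  # richer functional form

    # F-test of the restriction that the extra (quadratic) term is unnecessary
    rss_r, rss_u = rss(X_lin, y), rss(X_quad, y)
    df_num, df_den = 1, len(y) - X_quad.shape[1]
    F = (rss_r - rss_u) / df_num / (rss_u / df_den)
    p = stats.f.sf(F, df_num, df_den)

    # Two working thresholds: decisive rejection of the simple form below one,
    # the simple form maintained above the other until something better comes along.
    if p < 0.01:
        print("reject the linear form decisively")
    elif p > 0.10:
        print("maintain the linear form for now")
    else:
        print("inconclusive; think harder or wait for more data")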

    Economic data are generally poor and noisy and econometric tests can have low power to discriminate. This means that different hypotheses can co-exist, though not all of them can be right. Nothing to do but think of more implications that might enable an inferential test or wait for more data. Reading Lars I make the guess that he has never done empirical work and lives in a world of Platonic idealism.

    One area where I suspect Lars and I might nevertheless agree is in the rejection of Milton Friedman’s methodological instrumentalism. Friedman maintained that the value of a hypothesis did not depend on the “realism” of its assumptions but on its ability to predict data. Every hypothesis depends on a degree of abstraction, but flagrantly counterfactual assumptions mean that even if you predict some data you are ipso facto failing other empirical tests. Ignoring elements that might be relevant is one thing; after all, they might not be very relevant. But if you assume e.g. that people have perfect foresight about future income and their life expectancy and that they are all identical, we know your hypothesis is false from common experience. Whatever the reason for the continuing correspondence between your X and Y, you have not explained it.

    Excessive abstraction or counter-factual assumptions have had their uses in generating decision rules that can be used for asset allocation in specific circumstances. Linear and dynamic programming have their uses, after all. But they have no explanatory power over macroeconomic phenomena.

    • Yoshinori Shiozawa
      January 23, 2020 at 7:45 pm

      >> Reading Lars I make the guess that he has never done empirical work.

      I have a similar impression. I guess he has never done any theoretical work either, apart from methodological arguments. Perhaps he has done some econometric work but no theoretical work in economics.

  2. Frank Salter
    January 23, 2020 at 7:01 pm

    Only relationships conforming to the quantity calculus can be considered as possible candidates for theoretical validity. That means there are no candidates in orthodox or heterodox analysis. This is the simple truth. Recognising this reduces what should be considered to a handful of papers. I do not understand why economists are apparently incapable of understanding this fundamental aspect of how quantities must be manipulated. Can anyone tell me?

  3. Econoclast
    January 23, 2020 at 11:37 pm

    I just encountered this today:
    “… four new fields called public policy engineering, computational public policy, political engineering, and computational politics. Public policy engineering is the application of engineering, computer science, mathematics, or natural science to solving problems in public policy. Computational public policy is the application of computer science or mathematics to solving problems in public policy. Political engineering is the application of engineering, computer science, mathematics, or natural science to solving problems in politics. Computational politics is the application of computer science or mathematics to solving problems in politics. Public policy engineering and computational public policy include, but are not limited to, principles and methods for public policy formulation, decision making, analysis, modeling, optimization, forecasting, and simulation. Political engineering and computational politics include, but are not limited to, principles and methods for political decision making, analysis, modeling, optimization, forecasting, simulation, and expression. The definition of these four new fields will greatly increase the pace of research and development in these important fields.”

    As one trained in economics, who enhanced that training with some study in management science and operations research, who practiced environmental policy for more than three decades, and who now is involved in climate justice, I am rendered breathless by the quote above. I cannot wait to see the results of all this rational wonderfulness.

    • Yoshinori Shiozawa
      January 24, 2020 at 3:26 pm

      It is only extravagant advertising for a pretended new discipline. There are many similar ones, like Computable General Equilibrium Theory. See, for example, the working paper by Mitra-Kahn, Debunking the Myths of Computable General Equilibrium Models. I do not deny the possibility of using computer simulation (agent-based simulation, or ABS) in economics, but it requires a well-conceived theory behind it, and that theory should be supported by wide and multi-dimensional observations. Simulation must go in tandem with theory and observations. Excessive stress on computation alone is itself a symptom of misleading advertising. See my chapter A Guided Tour of the Backside of Agent-Based Simulations.

  4. davetaylor1
    January 24, 2020 at 12:08 pm

    Reading this and the comments on it with my latest response to BinLi on “What Went Wrong With Economics” in mind, I am wondering whether this randomised research still has social scientists trying to establish facts rather than (after Popper) trying to dig out failures in our assumptions. If we want to reduce our ignorance we should look more closely at the facts, but if there is a problem in what we think and are taking for granted, we need to be looking much more randomly across a wide range of analogous facts.

    • Yoshinori Shiozawa
      January 28, 2020 at 6:03 am

      Dave,

      you have a good idea. Please join us in the discussion on Lars Syll’s post
      On causality and economics.

