
Causal inference from observational data

from Lars Syll

Distinguished Professor of social psychology Richard E. Nisbett takes on the idea of intelligence and IQ testing in his Intelligence and How to Get It (Norton 2011). He also has some interesting thoughts on multiple-regression analysis and writes:

Researchers often determine the individual’s contemporary IQ or IQ earlier in life, socioeconomic status of the family of origin, living circumstances when the individual was a child, number of siblings, whether the family had a library card, educational attainment of the individual, and other variables, and put all of them into a multiple-regression equation predicting adult socioeconomic status or income or social pathology or whatever. Researchers then report the magnitude of the contribution of each of the variables in the regression equation, net of all the others (that is, holding constant all the others). It always turns out that IQ, net of all the other variables, is important to outcomes. But … the independent variables pose a tangle of causality – with some causing others in goodness-knows-what ways and some being caused by unknown variables that have not even been measured. Higher socioeconomic status of parents is related to educational attainment of the child, but higher-socioeconomic-status parents have higher IQs, and this affects both the genes that the child has and the emphasis that the parents are likely to place on education and the quality of the parenting with respect to encouragement of intellectual skills and so on. So statements such as “IQ accounts for X percent of the variation in occupational attainment” are built on the shakiest of statistical foundations. What nature hath joined together, multiple regressions cannot put asunder.
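Nisbett's point about tangled causality is easy to demonstrate with a toy simulation (a hypothetical sketch with made-up numbers, not drawn from the book): if an unmeasured parental factor drives both IQ and adult income, a regression of income on IQ will report a sizeable "effect" of IQ even when its true direct effect is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured parental factor drives both IQ and adult income.
parent = rng.normal(size=n)
iq = 0.8 * parent + rng.normal(size=n)

# In this toy world the true direct effect of IQ on income is zero:
# income depends only on the parental factor plus noise.
income = 1.0 * parent + rng.normal(size=n)

# "Holding constant" only what was measured: regress income on IQ alone,
# because the parental factor was never observed.
beta_iq = np.cov(iq, income)[0, 1] / np.var(iq)
print(f"estimated IQ effect: {beta_iq:.2f} (true direct effect: 0)")
```

The regression dutifully reports a coefficient of roughly 0.5 for IQ, which is entirely the confounder's effect in disguise.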


Now, I think this is right as far as it goes, although it would certainly have strengthened Nisbett's argument if he had elaborated more on the methodological questions around causality, or at least given some mathematical-statistical-econometric references. Unfortunately, his alternative approach is no more convincing than regression analysis. Like so many other contemporary social scientists, Nisbett seems to think that randomization may solve the empirical problem. By randomizing we get different “populations” that are homogeneous with regard to all variables except the one we think is a genuine cause. In that way we are supposedly able to avoid having to actually know what all these other factors are.

If you succeed in performing an ideal randomization with different treatment and control groups, that is attainable. But it presupposes that you really have been able to establish – and not just assume – that all causes other than the putative one have the same probability distribution in the treatment and control groups, and that assignment to treatment or control is independent of all other possible causal variables.

Unfortunately, real experiments and real randomizations seldom or never achieve this. So, yes, we may do without knowing all causes, but it takes ideal experiments and ideal randomizations to do that, not real ones.
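The difference between ideal and real randomization can be made concrete with a small simulation (hypothetical numbers): even with genuinely random assignment, a finite experiment leaves the treatment and control arms imbalanced on unmeasured causes.

```python
import numpy as np

rng = np.random.default_rng(1)

# One hypothetical unmeasured cause; randomize 20 subjects per arm,
# and repeat the experiment many times to see the typical imbalance.
imbalances = []
for _ in range(10_000):
    other_cause = rng.normal(size=40)
    arm = rng.permutation(40) < 20          # random assignment to treatment
    imbalances.append(other_cause[arm].mean() - other_cause[~arm].mean())

# A real finite randomization rarely balances the arms exactly.
print(f"typical |imbalance| in the unmeasured cause: "
      f"{np.mean(np.abs(imbalances)):.2f} (ideal randomization: 0)")
```

The typical absolute imbalance here is around a quarter of a standard deviation: randomization balances the arms only in expectation, not in any particular real experiment.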

As I have argued, this means that in practice we do have to have sufficient background knowledge to deduce causal knowledge. Without old knowledge we can’t get new knowledge – and, no causes in, no causes out.

On the issue of the shortcomings of multiple regression analysis, no one sums it up better than David Freedman:


If the assumptions of a model are not derived from theory, and if predictions are not tested against reality, then deductions from the model must be quite shaky. However, without the model, the data cannot be used to answer the research question …

In my view, regression models are not a particularly good way of doing empirical work in the social sciences today, because the technique depends on knowledge that we do not have. Investigators who use the technique are not paying adequate attention to the connection – if any – between the models and the phenomena they are studying. Their conclusions may be valid for the computer code they have created, but the claims are hard to transfer from that microcosm to the larger world …

Regression models often seem to be used to compensate for problems in measurement, data collection, and study design. By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian …

Given the limits to present knowledge, I doubt that models can be rescued by technical fixes. Arguments about the theoretical merit of regression or the asymptotic behavior of specification tests for picking one version of a model over another seem like the arguments about how to build desalination plants with cold fusion as the energy source. The concept may be admirable, the technical details may be fascinating, but thirsty people should look elsewhere …

Causal inference from observational data presents many difficulties, especially when underlying mechanisms are poorly understood. There is a natural desire to substitute intellectual capital for labor, and an equally natural preference for system and rigor over methods that seem more haphazard. These are possible explanations for the current popularity of statistical models.

Indeed, far-reaching claims have been made for the superiority of a quantitative template that depends on modeling – by those who manage to ignore the far-reaching assumptions behind the models. However, the assumptions often turn out to be unsupported by the data. If so, the rigor of advanced quantitative methods is a matter of appearance rather than substance.

  1. October 30, 2013 at 1:10 pm

    How true! Very few social science investigations stand up to scrutiny. However, I have found one that does (and it contradicts Nisbett’s hypothesis about schooling):

    Does school choice work? (Short answer: No!)

    Long answer: Read on for the results of a ‘Natural Scientific Experiment’

    There has been a plethora of books which popularise economics and the economists who write them. Perhaps the most conspicuous is Freakonomics by Levitt & Dubner. What is surprising, to me at any rate, is not the conclusions they draw but the fact that these esteemed economists have largely relied on statistical analysis – the tools of our trade – to establish their results. In the same spirit of inter-disciplinary plagiarism, I would like to examine one of the most fundamental beliefs of economists, and see whether it stands up to the rigour of statistical analysis. Advocates of free-market economics hold firm to the belief that customers who are free to choose any supplier will automatically drive up the quality of the goods on offer as well as holding down prices.
    Looking at a particular example, namely school choice, where parents have an opportunity to choose which school their child attends, we can ask: Does it really work in raising school performance? Normally, as with most social policy, this can never be tested scientifically, just subjected to endless arguments with claims and counter-claims.
    But it was the allocation of school places by lottery which provided an opportunity to put this idea to the test in the only way that we can trust. Allocating students randomly to different schools should never be part of a deliberate experiment just to test the ‘choice’ theory; rather, it is the fortuitous intrusion of lottery allocation that has allowed what is claimed to be a ‘natural scientific experiment’.
    In Chicago the school board operates a parental choice scheme, with a lottery used in cases where more parents choose a given school than places (seats in US jargon) available. This scheme was investigated by Cullen, Jacob & Levitt (yes, the very same Levitt of Freakonomics fame). Previous U.S. studies on the use of school-vouchers appeared to show that ‘choice works’, that levels of attainment are raised by this policy. But Levitt & Co reject these studies as lacking scientific rigour — they did not amount to a proper experiment. Instead they chose to examine what happened in Chicago.
    First they had to establish that lottery-allocation in Chicago really did amount to a proper scientific experiment, that the ‘subjects’ really were distributed randomly. Having satisfied themselves that it did (details are given in the paper referred to below) they then went on to test what is claimed to be the central ambition of allowing and encouraging parental choice of schools – raising school standards.
    They were “surprised” to find that there was little evidence that winning a place at a sought-after school provided any benefit. There was no benefit in improved test scores. Attendance rates, course-taking, and credit accumulation were not improved either. What surprised them was that this result came about despite the fact that the students who had won in this lottery had attended schools which were better in almost every dimension. These chosen schools had higher peer achievement levels, higher peer graduation rates, and lower levels of poverty.
    Here’s what they concluded: “If the primary goal is to improve measures of academic achievement and attainment, then it does not appear that this mechanism (choice) is effective. The findings are consistent with an even stronger conclusion that attending ‘better’ schools as measured by a variety of level measures of student performance does not systematically improve short-term academic outcomes”.
    So the economists who claim that choice will improve educational standards, because economic theory says it should, are simply wrong, as the statistical analyses of the ‘natural experiments’ of school-place allocation by lottery have shown. There may be many other reasons, such as getting into a socially segregated school, that will drive parents to seek out ‘better’ schools. But the incidental use of a lottery for allocating students to schools in Chicago shows that parental choice cannot be expected to raise educational standards.
    One might imagine that such a startling and counter-theoretical result would be just the sort of ‘freaky economics’ worth including in a book on the subject. However, this result does not appear in Freakonomics nor in its follow-up Superfreakonomics. Perhaps it is simply too much for Chicago economists to admit that statistical analysis, applied to the mechanism of lottery allocation, can show that their cherished belief that choice will raise standards is a delusion.

    Cullen, Julie Berry; Jacob, Brian A & Levitt, Steven (Nov 2003) The effect of school choice on student outcomes: Evidence from randomized lotteries. NBER Working Paper 10113
    ————–
    Having taught Statistics and Economics for many years at Birmingham City University, Conall Boyle is now an independent researcher (or ‘retired’ if you prefer!). You can read all about lotteries to allocate places in schools, and universities in his recently published Lotteries for Education: Origins, Experiences, Lessons (Imprint Academic, Exeter).

  2. November 4, 2013 at 10:47 am

    Reblogged this on Shane O'Mara's Blog.

  3. bruceedmonds
    November 8, 2013 at 10:53 am

    In particular, the kind of regression analysis that is run over a complete data set makes a fundamental confusion: it conflates context-dependency with noise (which is usually represented by some kind of randomness). There is now a huge body of evidence about the context-dependency of human cognition, but social psychology often ignores this and just concentrates on theories that it hopes will be true ‘across contexts’.

    *IF* one has a situation where different mechanisms come into play (in some complex manner) in different kinds of circumstances (or when subjects are considering the same circumstance in a different way), then a regression will indeed show *some* correlation between all these aspects – IQ and these different outcomes. The regressions will be significant (if there is enough data) but will only account for a small proportion of the variance. This is typical of regressions in this area.

    Other techniques, such as data-mining and simulation modelling, ARE able to incorporate context-dependency into analysis and theory-making. Further, it might be that BIG data is sufficient to distinguish these separate contexts.
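    A minimal sketch of this point (hypothetical numbers, not from any study): when a mechanism operates only in some contexts, a pooled regression over the complete data set still finds a “significant” slope, but it accounts for only a small share of the variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000

    # Two hypothetical contexts in which different mechanisms operate:
    # IQ helps the outcome in context 0 but is irrelevant in context 1.
    context = rng.integers(0, 2, size=n)
    iq = rng.normal(size=n)
    outcome = np.where(context == 0, 0.5 * iq, 0.0) + rng.normal(size=n)

    # Pooled regression over the complete data set, ignoring context.
    beta = np.cov(iq, outcome)[0, 1] / np.var(iq)
    r2 = beta**2 * np.var(iq) / np.var(outcome)
    print(f"pooled slope: {beta:.2f}, pooled R^2: {r2:.2f}")
    ```

    With this much data the pooled slope (about half the within-context effect) is highly significant, yet the R-squared stays down around a few percent – the context-dependent mechanism has been smeared into what looks like noise.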

