
The Keynes-Tinbergen debate on econometrics

from Lars Syll

It is widely recognized but often tacitly neglected that all statistical approaches have intrinsic limitations that affect the degree to which they are applicable to particular contexts … John Maynard Keynes was perhaps the first to provide a concise and comprehensive summation of the key issues in his critique of Jan Tinbergen’s book Statistical Testing of Business Cycle Theories …

Keynes’s intervention has, of course, become the basis of the “Tinbergen debate” and is a touchstone whenever historically or philosophically informed methodological discussion of econometrics is undertaken. It has remained the case, however, that Keynes’s concerns with the “logical issues” regarding the “conditions which the economic material must satisfy” still gain little attention in theory and practice.

Muhammad Ali Nasir & Jamie Morgan

Mainstream economists often hold the view that Keynes’ criticism of econometrics came from a sadly misinformed and misguided critic who disliked econometrics and understood little of it.

This is, however, as Nasir and Morgan convincingly argue, nothing but a gross misapprehension.

To be careful and cautious is not the same as to dislike. Keynes did not misunderstand the crucial issues at stake in the development of econometrics. Quite the contrary. He knew them all too well — and was not satisfied with the validity and philosophical underpinning of the assumptions made for applying its methods.

Keynes’ critique of the “logical issues” regarding the conditions that must be satisfied before econometric methods can be applied is still valid and unanswered, in the sense that the problems he pointed at are still with us today and largely unsolved. Ignoring them, the most common practice among applied econometricians, does not solve them.

To apply statistical and mathematical methods to the real-world economy, the econometrician has to make some quite strong assumptions. In a review of Tinbergen’s econometric work — published in The Economic Journal in 1939 — Keynes gave a comprehensive critique of Tinbergen’s work, focusing on the limiting and unreal character of the assumptions that econometric analyses build on:

Completeness: Where Tinbergen attempts to specify and quantify which factors influence the business cycle, Keynes maintains there must be a complete list of all the relevant factors to avoid misspecification and spurious causal claims. Usually, this problem is ‘solved’ by econometricians assuming that they somehow have a ‘correct’ model specification. Keynes is, to put it mildly, unconvinced:

It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.
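Keynes’s misspecification worry is easy to illustrate. The sketch below (plain Python, hand-rolled OLS, all numbers invented for illustration) shows what happens when a relevant factor z is omitted from the regression and happens to be correlated with the included regressor x: the estimated coefficient on x silently absorbs z’s effect.

```python
import random

random.seed(0)
n = 500
# True model: y = 2*x + 3*z + noise, but the econometrician omits z.
# x and z are correlated, so the estimate on x picks up part of z's effect.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]          # corr(x, z) > 0 by construction
y = [2 * xi + 3 * zi + random.gauss(0, 0.5) for xi, zi in zip(x, z)]

def ols_slope(xs, ys):
    """Slope of the least-squares line of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

b = ols_slope(x, y)   # the misspecified regression: y on x alone
print(f"estimated slope on x: {b:.2f} (true direct effect is 2)")
```

Here the true direct effect of x is 2, but the misspecified regression reliably reports roughly 3.5, and nothing in the output warns the investigator that a factor is missing. Without Keynes’s “complete list” of relevant factors, the coefficients have no straightforward causal reading.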

Homogeneity: To make inductive inferences possible — and be able to apply econometrics — the system we try to analyse has to have a large degree of ‘homogeneity.’ According to Keynes most social and economic systems — especially from the perspective of real historical time — lack that ‘homogeneity.’ As he had argued already in Treatise on Probability (ch. 22), it wasn’t always possible to take repeated samples from a fixed population when we were analysing real-world economies. In many cases, there simply are no reasons at all to assume the samples to be homogeneous. Lack of ‘homogeneity’ makes the principle of ‘limited independent variety’ non-applicable, and hence makes inductive inferences, strictly seen, impossible since one of its fundamental logical premises is not satisfied. Without “much repetition and uniformity in our experience” there is no justification for placing “great confidence” in our inductions (TP ch. 8).

And then, of course, there is also the ‘reverse’ variability problem of non-excitation: factors that do not change significantly during the period analysed can still very well be extremely important causal factors.

Stability: Tinbergen assumes there is a stable spatio-temporal relationship between the variables his econometric models analyze. But as Keynes had argued already in his Treatise on Probability it was not really possible to make inductive generalisations based on correlations in one sample. As later studies of ‘regime shifts’ and ‘structural breaks’ have shown us, it is exceedingly difficult to find and establish the existence of stable econometric parameters for anything but rather short time series.
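The instability point can be made concrete with a toy simulation (plain Python, invented numbers): suppose the slope of a relationship flips sign halfway through the ‘sample period’, a crude stand-in for a regime shift. Whole-sample estimation quietly averages the two regimes away.

```python
import random

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
# Structural break: the slope is +1 in the first half of the sample
# and -1 in the second half.
y = [xi + random.gauss(0, 0.3) if i < n // 2 else -xi + random.gauss(0, 0.3)
     for i, xi in enumerate(x)]

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

full = ols_slope(x, y)                     # pooled over both regimes
first = ols_slope(x[:n // 2], y[:n // 2])  # first regime only
second = ols_slope(x[n // 2:], y[n // 2:]) # second regime only
print(f"full sample: {full:+.2f}, first half: {first:+.2f}, second half: {second:+.2f}")
```

The full-sample estimate hovers around zero even though the relationship is strong in both sub-periods; without explicitly testing for the break, the econometrician would conclude there is no relationship at all.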

Measurability: Tinbergen’s model assumes that all relevant factors are measurable. Keynes questions if it is possible to adequately quantify and measure things like expectations and political and psychological factors. And more than anything, he questioned — both on epistemological and ontological grounds — that it was always and everywhere possible to measure real-world uncertainty with the help of probabilistic risk measures. Thinking otherwise can, as Keynes wrote, “only lead to error and delusion.”

Independence: Tinbergen assumes that the variables he treats are independent (still a standard assumption in econometrics). Keynes argues that in such a complex, organic and evolutionary system as an economy, independence is a deeply unrealistic assumption to make. Building econometric models on such simplistic and unrealistic assumptions risks producing nothing but spurious correlations and causalities. Real-world economies are organic systems for which the statistical methods used in econometrics are ill-suited, or even, strictly seen, inapplicable. Mechanical probabilistic models have little leverage when applied to non-atomic evolving organic systems — such as economies.

It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis … that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish.

Building econometric models can’t be a goal in itself. Good econometric models are means that make it possible for us to infer things about the real-world systems they ‘represent.’ If we can’t show that the mechanisms or causes that we isolate and handle in our econometric models are ‘exportable’ to the real world, they are of limited value to our understanding, explanations or predictions of real-world economic systems.

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …

The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts.
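The spurious-correlation risk raised in the independence discussion above is easy to reproduce with stdlib Python: two random walks driven by completely independent shocks routinely exhibit sizeable sample correlations. The sketch below (invented parameters, chosen only for illustration) averages over many replications so the result does not hinge on one lucky draw.

```python
import random

def corr(xs, ys):
    """Sample correlation coefficient of two equal-length series."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((p - mx) * (q - my) for p, q in zip(xs, ys))
    sx = sum((p - mx) ** 2 for p in xs) ** 0.5
    sy = sum((q - my) ** 2 for q in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(n, rng):
    w = [0.0]
    for _ in range(n - 1):
        w.append(w[-1] + rng.gauss(0, 1))
    return w

rng = random.Random(3)
n, reps = 200, 100
# Each replication: two walks with fully independent shocks.
abs_corrs = [abs(corr(random_walk(n, rng), random_walk(n, rng)))
             for _ in range(reps)]
mean_abs = sum(abs_corrs) / reps
frac_big = sum(c > 0.5 for c in abs_corrs) / reps
print(f"mean |corr|: {mean_abs:.2f}; share with |corr| > 0.5: {frac_big:.0%}")
```

Despite the complete absence of any connection between the two series, the average absolute correlation comes out far from zero, and a substantial share of replications would pass for a ‘strong relationship.’ This is the ‘spurious regression’ phenomenon later documented systematically by Granger and Newbold.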

Linearity: To make his models tractable, Tinbergen assumes the relationships between the variables he studies to be linear. This is still standard procedure today, but as Keynes writes:

It is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous.
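A small illustration (hypothetical data, plain-Python OLS) of how badly a linear specification can mislead: let the true relation be y = x² plus a little noise, with x symmetric around zero. The best-fitting line then has slope near zero, so a linear model reports ‘no effect’ even though x fully determines y.

```python
import random

random.seed(4)
n = 400
x = [random.uniform(-2, 2) for _ in range(n)]
# True relation is strongly non-linear (quadratic), with small noise.
y = [xi ** 2 + random.gauss(0, 0.1) for xi in x]

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

slope = ols_slope(x, y)
# Because x is symmetric about zero, cov(x, x**2) is near zero and the
# fitted line is nearly flat: the linear model sees 'no effect'.
print(f"fitted linear slope: {slope:.2f}")
```

Nothing here is exotic; it only shows that linearity is a substantive assumption about the data-generating process, not an innocent simplification.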

To Keynes, it was a ‘fallacy of reification’ to assume that all quantities are additive (an assumption closely linked to independence and linearity).

The unpopularity of the principle of organic unities shows very clearly how great is the danger of the assumption of unproved additive formulas. The fallacy, of which ignorance of organic unity is a particular instance, may perhaps be mathematically represented thus: suppose f(x) is the goodness of x and f(y) is the goodness of y. It is then assumed that the goodness of x and y together is f(x) + f(y) when it is clearly f(x + y) and only in special cases will it be true that f(x + y) = f(x) + f(y). It is plain that it is never legitimate to assume this property in the case of any given function without proof.

J. M. Keynes “Ethics in Relation to Conduct” (1903)
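Keynes’s additivity point is purely mathematical and takes two lines to check. With a concave (diminishing-returns) ‘goodness’ function, chosen here only for illustration, f(x + y) and f(x) + f(y) come apart immediately:

```python
import math

def f(x):
    # A hypothetical concave 'goodness' function: diminishing marginal value.
    return math.sqrt(x)

x, y = 4.0, 9.0
print(f(x) + f(y))   # 2 + 3 = 5
print(f(x + y))      # sqrt(13) is about 3.61, not 5
```

Only functions linear through the origin satisfy f(x + y) = f(x) + f(y) in general; assuming additivity for an arbitrary f is, as Keynes says, never legitimate without proof.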

And as even one of the founding fathers of modern econometrics — Trygve Haavelmo — wrote:

What is the use of testing, say, the significance of regression coefficients, when maybe, the whole assumption of the linear regression equation is wrong?

Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established, are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationship between them — being linear, additive, homogenous, stable, invariant and atomistic. But — when causal mechanisms operate in the real world they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. Since statisticians and econometricians — as far as I can see — haven’t been able to convincingly warrant their assumptions of homogeneity, stability, invariance, independence, and additivity as being ontologically isomorphic to real-world economic systems, Keynes’ critique is still valid. As long as — as Keynes writes in a letter to Frisch in 1935 — “nothing emerges at the end which has not been introduced expressly or tacitly at the beginning,” I remain doubtful of the scientific aspirations of econometrics. Especially when it comes to using econometrics for making causal inferences, it is still often based on counterfactual assumptions that have outrageously weak grounds.

In his critique of Tinbergen, Keynes points us to the fundamental logical, epistemological and ontological problems of applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.

Econometric modelling should never be a substitute for thinking. From that perspective, it is really depressing to see how much of Keynes’ critique of the pioneering econometrics in the 1930s-1940s is still relevant today.

The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.

J. M. Keynes, letter to E.J. Broster, December 19, 1939

  1. Gerald Holtham
    February 27, 2023 at 5:10 pm

    According to Lars: “In his critique of Tinbergen, Keynes points us to the fundamental logical, epistemological and ontological problems of applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality.”
    Who would guess then that Keynes had written a book called The GENERAL Theory of Employment, Interest and Money. Presumably he must have believed there were characteristics of economies of sufficient persistence across space and time for him to be generally theorising about them.
    “Keynes maintains there must be a complete list of all the relevant factors to avoid misspecification and spurious causal claims. “
    Yet in his General Theory Keynes does not hesitate to abstract from important influences. You will scour his book in vain, for example, to find any sustained discussion of the role of political processes in the evolution of an economy or the effect of technological change.
    Why did Keynes make demands of Tinbergen that his own work could not meet? Was he being obtuse or hypocritical? Probably neither. He is best read as warning against excessive expectations or ambitions. But Keynes was a noted controversialist. He loved debate and was not always scrupulous in his debating tactics. He was probably having a bit of fun with Tinbergen and couldn’t know that Lars was going to take him so literally.
    That said, it is clear Keynes did not really understand the point of econometrics. That is partly because he had an archaic philosophy of science. He was writing before Popper’s “Logic of Scientific Discovery” had appeared in English and he was not familiar with it. Keynes still talks of “induction”, a purely psychological process that establishes no intrinsic validity for propositions. Post-Popper no-one sensible thinks that econometrics, or anything else, can “induce” theory from data. The theory has to come first and econometrics can only establish whether it is consistent with data or not. If it is, econometrics can say no more; if not, the econometrician can declare the theory defective or at least requiring amendment.
    Keynes was writing not only pre-Popper but pre-computer and before national and international data agencies. He worried, reasonably, about Tinbergen’s use of linear models. Poor old Tinbergen would have been working out linear regressions by hand, helped at most by a mechanical desk calculator. Of course he couldn’t cope with non-linearities. Modern computer algorithms can deal with any degree of non-linearity, given sufficient data. Of course people usually try the simplest possible specification of any theory but if the theory requires non-linearities, like interaction effects, or the data suggest it needs them, there is no difficulty in estimating and testing them. (I am currently working on a model where the forcing variables only have joint effects – no “atomism”).
    By the 1960s techniques had advanced enough for elements of Keynes’s own theory to become testable. Guess what: amendments were found to be necessary. The “marginal efficiency of capital”, for example, turned out to be a useless theory of investment. Non-property business investment is rather insensitive to interest rates and a new theory was required that explained why it was correlated with the recent growth of demand in the economy. Similar amendments were required to Keynes’s consumption function. Indeed nearly all the refinements to Keynesian theory, by people like Lawrence Klein, that improved its applicability to real economies were owing to econometrics.
    There was, however, a catch – hubris. People thought that what they had learned enabled them to fine tune economic activity. That was a mistake that Keynes himself, almost surely, would not have made. You can offset crises and substantial or prolonged recessions but you cannot smooth routine fluctuations in economic activity and you cannot, it turns out, stimulate long-term growth by maintaining excess demand. If we interpret Keynes, not as a luddite, but as warning against over-reaching, here would have been his vindication.
    That hubris facilitated the rise of the New Classical school that put an end to the empirical exploration of Keynesian macroeconomics. The New Classical approach declared that all macroeconomic theorising was illegitimate unless it posited a “representative agent” who was successfully optimising something or other. These suppositions were not up for empirical refutation; they were the sine qua non of theorising. Research that did not comply was not published in the best journals. The premises even crept into the “empirical” models maintained by central banks and other institutions. Macroeconomics disappeared down a rabbit hole from which it has not fully emerged 40 years later.
    The sadness is that everyone knew the assumptions were utterly counter-factual. But econometrics could show they weren’t even adequate “as if” theory because their predictions were usually dominated by models that did not make the assumptions. If econometrics had enjoyed higher status in economics the “rational expectations revolution” would have been confined to a few special cases and not become ubiquitous.
    That’s the point. Tinbergen and Haavelmo would not have fallen for it any more than Keynes. Lars doesn’t know who his real friends are. Anyone who tries systematically to confront economic theorising with facts and data, by whatever reasonable method is to hand, deserves some support. Anyone who tries to limit or circumscribe how economics theory proceeds should be opposed. And anyone who resists or defies empirical refutation should not be taken seriously.

  2. February 28, 2023 at 1:04 am

    I am so pleased to have this succinct critique of the basis of econometrics. Many economists with a more practical policy bent may be suspicious of its usefulness for many policy decisions. One example: assuming away speculative booms, or treating them as exogenous in empirical housing models, renders such models unhelpful as a guide to the drivers of house prices or rents.

  3. Gerald Holtham
    February 28, 2023 at 3:30 pm

    I fear Susan St John is confusing the practice of forecasting using defective models with econometrics. Nothing in the “basis of econometrics” obliges anyone to assume speculative booms are exogenous. In fact econometrics provides tests for whether so-called explanatory variables are exogenous or not. If someone builds a silly model and does a slipshod regression without proper testing they deserve criticism. But let us distinguish misuse of a tool from the tool itself. Hypodermic needles sometimes take lives; they also save them.
