
## Econometrics — a crooked path from cause to effect

from Lars Syll

In their book *Mastering ‘Metrics: The Path from Cause to Effect*, Joshua Angrist and Jörn-Steffen Pischke write:

Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest … for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we want to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain … ‘on average’ is usually good enough.

Angrist and Pischke may “dream of the trials we’d like to do” and consider “the notion of an ideal experiment” something that “disciplines our approach to econometric research,” but to maintain that ‘on average’ is “usually good enough” is an allegation that in my view is rather unwarranted, and for many reasons.

First of all, it amounts to nothing but hand-waving simply to assume, without argument, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization basically allows the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.

In a usual regression context one would apply an ordinary least squares (OLS) estimator in trying to get an unbiased and consistent estimate:

Y = α + βX + ε,

where α is a constant intercept, β a constant “structural” causal effect and ε an error term.

The problem here is that although we may get an estimate of the “true” average causal effect, this may “mask” important heterogeneous causal effects. We may get the right answer that the average causal effect is 0, while those who are treated (X = 1) have causal effects equal to −100 and those not treated (X = 0) have causal effects equal to 100. Contemplating being treated or not, most people would probably want to know about this underlying heterogeneity and would not consider the OLS average effect particularly enlightening.
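A minimal simulation makes this point concrete. The ±100 figures are the hypothetical ones from above, and everything else (sample size, noise level) is an arbitrary choice for illustration: under coin-toss assignment, the OLS slope dutifully recovers the zero average effect while the ±100 subgroup effects stay invisible unless we look for them.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Individual causal effects: half the population +100, half -100,
# so the average treatment effect is exactly 0 (the numbers above).
tau = rng.choice([-100.0, 100.0], size=n)

x = rng.integers(0, 2, size=n)                 # coin-toss treatment assignment
y = rng.normal(0.0, 1.0, size=n) + tau * x     # observed outcome

# OLS slope of Y on X: beta_hat = cov(X, Y) / var(X)
beta_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"OLS average effect: {beta_hat:.2f}")   # close to 0

# The heterogeneity the average masks:
for label, grp in [("tau = +100", tau > 0), ("tau = -100", tau < 0)]:
    effect = y[grp & (x == 1)].mean() - y[grp & (x == 0)].mean()
    print(f"subgroup {label}: estimated effect {effect:+.1f}")
```

The point of the sketch is not that OLS is computed wrongly; it is that the single number it returns is the average, and nothing in the regression output flags that the two subgroups experience opposite effects of enormous size.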

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable, in the sense that they do not change when we “export” them to our “target systems”, we have to be able to show that they hold not only under ceteris paribus conditions; otherwise they are a fortiori of only limited value for our understanding, explanation or prediction of real economic systems.

Real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of “laws” and relations that econometrics has established, are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – as most of the contemporary endeavours of mainstream economic theoretical modelling – rather useless.

Remember that a model is not the truth. It is a lie to help you get your point across. And in the case of modeling economic risk, your model is a lie about others, who are probably lying themselves. And what’s worse than a simple lie? A complicated lie.

Sam L. Savage, *The Flaw of Averages*

When Joshua Angrist and Jörn-Steffen Pischke in an earlier article of theirs [“The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics,” Journal of Economic Perspectives, 2010] say that

anyone who makes a living out of data analysis probably believes that heterogeneity is limited enough that the well-understood past can be informative about the future

I really think they underestimate the heterogeneity problem. It does not just turn up as an external validity problem when trying to “export” regression results to different times or different target populations. It is also often an internal problem to the millions of regression estimates that economists produce every year.

But when the randomization is purposeful, a whole new set of issues arises — experimental contamination — which is much more serious with human subjects in a social system than with chemicals mixed in beakers … Anyone who designs an experiment in economics would do well to anticipate the inevitable barrage of questions regarding the valid transference of things learned in the lab (one value of z) into the real world (a different value of z) …

Absent observation of the interactive compounding effects z, what is estimated is some kind of average treatment effect which is called by Imbens and Angrist (1994) a “Local Average Treatment Effect,” which is a little like the lawyer who explained that when he was a young man he lost many cases he should have won but as he grew older he won many that he should have lost, so that on the average justice was done. In other words, if you act as if the treatment effect is a random variable by substituting β_t for β_0 + β′z_t, the notation inappropriately relieves you of the heavy burden of considering what are the interactive confounders and finding some way to measure them …

If little thought has gone into identifying these possible confounders, it seems probable that little thought will be given to the limited applicability of the results in other settings.

Ed Leamer

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion, therefore, is that evidence based on randomized experiments is the best.

More and more economists have also lately come to advocate randomization as the principal method for securing valid causal inferences.

I would, however, rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of “gold standard”. Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in “closed” models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Angrist’s and Pischke’s “ideally controlled experiments” tell us with certainty what causes what effects — but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here”. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licences to export, the value of “rigorous” and “precise” methods — and ‘on-average knowledge’ — is despairingly small.

1. February 28, 2020 at 11:29 am

There is so much that is incomplete and contestable in this that I don’t know where to start – so I won’t. I’ll confine myself to one question.
Lars, how would you propose to test empirically any economic proposition, generalisation or policy being advanced – or are you content to say you don’t know and can never know?

• February 28, 2020 at 12:28 pm

Why don’t you provide a specific case to be tested, Gerald? Why not be specific?

• February 28, 2020 at 2:10 pm

Well, Gerry, to be honest, I think you have an attitude here that is both a little lazy and arrogant. When I was a student of mathematical statistics and econometrics in California back in the 80s I was — and still am — very much influenced by the critique of econometrics and regression analysis put forward by David Freedman and Ed Leamer. Most of the things I say in the post really emanate from them. So why don’t you “start” with telling us where the Freedman/Leamer/Syll critique is “incomplete” and “contestable”?

• March 4, 2020 at 11:19 am

Gerald, you’ve got this back to front. Econometrics is not about testing propositions, generalisations or policies; it is about deriving them from historical evidence. The disproof of the policy comes when you try it in the real world and it doesn’t produce the expected outcomes.

2. February 29, 2020 at 5:26 pm

Well, I have no wish to appear arrogant, so you answer my question, Lars, and I’ll try to answer yours.
A lot of the Freedman and Leamer criticisms are entirely justified criticisms of bad practice that is all too common. You have a tendency to elevate the criticisms to the point of saying that there is no good practice at all; all econometric testing is wrong or pointless. That is where we part company. If you criticised econometricians rather than econometrics I couldn’t complain. It is easy to find examples of sloppy analysis, but in most cases there is a way to proceed that corrects the errors. Sometimes available data do not allow us to discriminate between competing hypotheses, and then we have to own up.
But I am still curious. You don’t believe in attempts at controlled experiment and you don’t believe in (any?) econometrics. So how do you test any generalisation in economics? As for being specific, I am happy to play away – cite any proposition you like and tell me how you would evaluate it. So far your position appears to be entirely nihilistic when it comes to empirical economics.

• February 29, 2020 at 6:03 pm

When econometricians, again and again, come up with more or less unconvincing results, it is difficult not to think that maybe there is something fundamentally wrong with their tools.

Gerry, you write “but in most cases there is a way to proceed that corrects the errors” and point to the testing ability of econometrics. I find this optimism almost amusing when we consider the absolutely unpersuasive track record of econometric hypothesis testing. Not a single substantive economic theory has ever been abandoned after being econometrically tested! And — for the record — that is not only my opinion. People like D. McCloskey, Aris Spanos, and Larry Summers have expressed similar views on the lack of econometric testing/corroboration/falsification success.

As Keynes once wrote:

“It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.”

• March 1, 2020 at 1:00 pm

This is a repeat of my comment which is still showing in Opera:
February 29, 2020 at 5:47 pm

The critical criticism is the quantity calculus. Any arbitrary equation will fit any set of data, but that is all that is achieved. The arbitrary equation will have no theoretical significance whatsoever. Only equations conforming to the quantity calculus are able to have theoretical significance. This is the acid test. Your comments should reflect this inescapable fact.

3. February 29, 2020 at 6:17 pm

Lars, we have had this conversation before. I have given examples of refutation of theories. It is not the fault of the tester if the test results are ignored. The problem, as you know, is that economic theories are generally so abstract that when they are rejected in a particular instance it is possible to think of innumerable qualifications and suggest that when these are taken into account the theory will work. So if people are devoted to a theory it can be hard to kill it off. The more qualifications are attached to a theory the more data you need to test it. But what are you saying – we shouldn’t try? Surely we should keep chasing the “degenerating research strategy” to the point of terminal embarrassment.

And of course the a priori makes a difference to the outcome. You can’t test a hypothesis without a hypothesis. I thought you approved of causal modelling rather than curve-fitting.

By the way. You haven’t answered my question.

• February 29, 2020 at 7:42 pm

Gerry, you write “the more qualifications are attached to a theory the more data you need to test it.” But that sure doesn’t solve the basic problem. As one of the founding fathers of modern econometrics — Trygve Haavelmo — himself wrote:

“What is the use of testing, say, the significance of regression coefficients, when maybe, the whole assumption of the linear regression equation is wrong?”

Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationship between them — being linear, additive, homogeneous, stable, invariant and atomistic. But — when causal mechanisms operate in the real world they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. Since statisticians and econometricians — as far as I can see — haven’t been able to convincingly warrant their assumptions as being ontologically isomorphic to real-world economic systems, I remain deeply sceptical of the whole enterprise. You write “shouldn’t we try?” My answer to that question is the same as Keynes’ (re Tinbergen): “Newton, Boyle and Locke all played with alchemy. So let him continue.”
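Haavelmo’s worry is easy to illustrate with a small simulation sketch. Everything here is hypothetical: the data-generating process is chosen by me to be quadratic rather than linear. A linear regression of Y on X then produces a perfectly “significant” slope coefficient, even though the linear regression equation itself is wrong, and a Ramsey-RESET-style check (adding the squared fitted values as an extra regressor and testing the restriction) rejects the linear form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical truth: quadratic, not linear.
x = rng.normal(1.0, 1.0, size=n)
y = x**2 + rng.normal(0.0, 1.0, size=n)

def ols(X, y):
    """OLS coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid @ resid

# Misspecified linear model: y = a + b*x + e
X_lin = np.column_stack([np.ones(n), x])
beta, rss_lin = ols(X_lin, y)

# t-statistic of the slope: looks highly 'significant'
sigma2 = rss_lin / (n - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X_lin.T @ X_lin)[1, 1])
t_slope = beta[1] / se
print(f"linear slope t-stat: {t_slope:.1f}")

# RESET-style check: add squared fitted values, F-test the restriction
yhat = X_lin @ beta
X_aug = np.column_stack([X_lin, yhat**2])
_, rss_aug = ols(X_aug, y)
f_stat = (rss_lin - rss_aug) / (rss_aug / (n - 3))
print(f"RESET F-stat: {f_stat:.1f}")   # far above ~3.9: linear form rejected
```

The significance test on the coefficient answers a question inside the assumed linear model; it is the second test, of the functional form itself, that addresses Haavelmo’s question — and in applied work it is routinely the one left out.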

• March 1, 2020 at 9:35 pm

“Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms and variables — and the relationship between them — being linear, additive, homogeneous, stable, invariant and atomistic. But — when causal mechanisms operate in the real world they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. Since statisticians and econometricians — as far as I can see — haven’t been able to convincingly warrant their assumptions as being ontologically isomorphic to real-world economic systems, I remain deeply sceptical of the whole enterprise. You write ‘shouldn’t we try?’ My answer to that question is the same as Keynes’ (re Tinbergen): ‘Newton, Boyle and Locke all played with alchemy. So let him continue.’”

The point of retail sale is where production becomes consumption, and hence is a terminal ending, summing and economic factor expression point. It is also hence a potentially paradigm-changing price and monetary policy point for both the economy and the monetary system. The point of exchange is such an integral warp-and-woof part of the economic process that, like breathing, most are unconscious of its power and significance.

It’s true that econometrics, like virtually all studies that precede paradigm changes, is largely an alchemical pursuit, but then the single concept of the new pattern sweeps all of the suspended complexities, perplexities and erudition aside.

• March 1, 2020 at 10:38 pm

In essence retail sale is probably the single aggregative point of the entire micro-economy and hence an important integrative point between micro and macro-economics.

4. March 1, 2020 at 7:28 pm

You conflate a number of points. Some I accept but others I contest. Take an economic theory or proposition, e.g. people never suffer from money illusion. If that is true, savings ratios, for example, should not alter durably with the rate of inflation. So I take the most approved theory I have for what does determine savings and I add inflation to the equation. The question is: will that have a non-zero coefficient? If it does, what appears to be money illusion exists. Now, of course, this test depends on my having an adequate characterisation of the consumption or savings function. If everything people think they know about that is wrong, then my test is not conclusive. Even then, however, I have tested the joint hypothesis: I can say it is not possible to believe your theory of saving AND the absence of money illusion.

Stable: does such a test conducted in 1935 hold for 2015? Well, of course one can have no confidence in that. But if it applies consistently for 1990–2015 we can be pretty sure it holds for 2018. Things change, but not that fast; otherwise no systematic study would be possible.
Linearity, additivity – no, not necessary. I can put in inflation additively or multiplicatively with the other variables. If you insist I can put in the log of the square root and multiply it by everything in sight, or add power terms. I can test the resultant non-linear equation against one without the inflation terms in a battery of tests, nested or non-nested. I’m not sure what you mean by homogeneous, but whatever it is I don’t think I have to assume it. Evidently there is no percentage in adding complexities for fun, but if there is reason to think they are necessary they can be accommodated – provided I have enough data. I also don’t know what you mean by atomistic. This kind of exercise is conducted on aggregates – which won’t be perfectly stable, but see above. What if we missed some conditioning variable? Always possible, but if no-one has proposed one, no use worrying about it until our results break down. If they propose a confounder, fair enough. Let’s test that too.
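The money-illusion test described above can be sketched as a standard nested-model F-test. Everything below is simulated for illustration only: the savings equation, its coefficients and the data are hypothetical, with a money-illusion effect deliberately built in so the test has something to find.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated data (all numbers hypothetical): a savings ratio driven by
# income growth, plus an inflation effect that a world without money
# illusion would not show.
income_growth = rng.normal(2.0, 1.0, size=n)
inflation = rng.normal(5.0, 2.0, size=n)
savings = (0.10 + 0.02 * income_growth
           - 0.01 * inflation                 # the built-in money illusion
           + rng.normal(0.0, 0.02, size=n))

def ols_rss(X, y):
    """OLS residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Restricted model: no money illusion (inflation excluded)
X_r = np.column_stack([np.ones(n), income_growth])
# Unrestricted model: inflation added to the savings equation
X_u = np.column_stack([np.ones(n), income_growth, inflation])

rss_r = ols_rss(X_r, savings)
rss_u = ols_rss(X_u, savings)

# F-test of the single restriction (inflation coefficient = 0)
f_stat = (rss_r - rss_u) / (rss_u / (n - 3))
print(f"F-stat for the inflation term: {f_stat:.1f}")  # above ~3.9 rejects
```

Note that this only delivers the joint-hypothesis verdict described above: a large F says the restricted model fits worse, conditional on the maintained savings specification being adequate in the first place.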

But, Lars, your persistent failure to say how you would conduct empirical studies in economics obliges me to think you take a nihilistic position: no empirical testing is perfect; no proposition can be tested without maintaining some other proposition; therefore it is not worth bothering. Economics must remain a form of theology on which no evidence can be brought to bear and no progress made. Do you really think that? If you insist on impossible standards, empirical testing becomes impossible in most disciplines, not just economics.

5. March 1, 2020 at 9:36 pm

For example, physicists “discovered” the Higgs boson. They looked at a mass of data from the CERN collider and determined by statistical analysis that the patterns were consistent with the Higgs hypothesis and the null hypothesis of no Higgs boson could be rejected with a high degree of confidence. You could step up and say but what if the whole standard theory of particle physics is wrong. Then the results are meaningless. Absolutely true. So what? Where do you go from there? You can’t test everything at once. Newton stood on the shoulders of giants, he said. The real point is, he had to stand somewhere. You don’t like my starting point? Pick your own. Then we can proceed – econometrically.

• March 2, 2020 at 10:38 am

I think this comparison to physics describes the fundamental difference between “standard” econometrics and Lars’ point very well. The CERN people could (and can) with considerable confidence assume that the world they investigate is fundamentally stable and independent of what humans think about it. That is not the case in social sciences. We investigate a world that is created by humans and hence constantly changing, partly in response to how we judge this world. Therefore, the constancy of the object under investigation that physicists and science in general can plausibly rely on just does not exist in our case, although all the asymptotic arguments that we employ assume it does.
Anyway, I am wondering what economic hypotheses or theories you have in mind that have been abandoned following adverse econometric evidence.

• March 2, 2020 at 11:19 am

I almost replied the same but have concluded this indeed is dialogut of the deaf. You are spot on Christian.

• March 2, 2020 at 11:20 am

Dialogue.

• March 4, 2020 at 11:38 am

Re my comment on Gerald’s understanding of economics being back to front, so is CERN’s understanding. Smashing sub-atomic particles does not explain how they were formed in the first place: arguably as circuital processes rather than from building bricks. Is “not knowing” just about “jobs for the boys”?

• March 4, 2020 at 11:43 am

When we learn the nature of sub-atomic particles, we learn something about the nature of our material reality. CERN is a good investment in my view. I cannot follow your point, I am afraid.

• March 4, 2020 at 12:13 pm

What we have learned is that sub-sub-atomic “particles” are not particles but extremely transient energy transfers. I agree that “jobs for the boys” is a good investment – and not just here. It is a necessary part of education.

• March 4, 2020 at 12:30 pm

I see now. Have a great quote but don’t think appropriate for this blog.

• March 4, 2020 at 1:52 pm

I don’t see why not. The difference between particles and energy flows is the root difference between money as a particulate “thing” and the energy released by a cyclic (minimally three-stage) transfer of credit. (C.f. the example in Geoff Davies’ book). See the one and you may see the other.

6. March 2, 2020 at 1:39 am

Mr. Holtham, I rather like your “Newton stood on the shoulders of giants … He had to stand somewhere”.
My version of that is at the bottom of page 2 of the 197-page edition of TELOS & TECHNOS: “The arguments presented {herein} owe a debt of gratitude to the insights of the great economic thinkers of the past. Especially, some of their more philosophic & intuitive probings.” Thus, this writer emphatically connects his material to the history of economic thought. On the same issues that were always there. We are all “time path dependent”.
Norman L. Roth

7. March 2, 2020 at 12:01 pm

Christian, I think you’re spot on the nodal point here in this debate.
Modern macroeconomics obviously builds on the myth of us knowing the ‘data-generating process’ and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances. Uncertainty is reduced to risk. But as Keynes convincingly argued, this is not always possible. Usually, in real-world contexts, there are no given probability distributions that we can appeal to. In the end, this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty type. Neither the econometrician nor the deciding individuals can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.
Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheels cannot be the right way to proceed. And still, econometrics fundamentally builds on the assumption that a world permeated by genuine uncertainty can be described and analysed with a nomological machine that portrays that genuine uncertainty as calculable risk. Since I sincerely consider this an absolute non-starter for doing relevant and realist economic analysis, I remain sceptical of the whole econometric project.

• March 2, 2020 at 12:08 pm

Thank you Lars.

• March 2, 2020 at 3:29 pm

Lars, thanks! We obviously agree here.
I also hope that Gerald will rejoin us and provide us with the example(s).
What I’d like to add here is the perspective that econometrics, as well as DSGE (and relatives), may well be nothing else but crutches for coping with uncertainty. Their use would be justified, as we (or at least I) really do not know what superior means of investigation to apply instead. (I have got some ideas, but as far as I can see there is no established alternative yet.) Econometrics and DSGE, therefore, are merely sophisticated versions of historicism, the school the DSGE guys in particular heartily despise.
While it is perfectly ok to use crutches when the legs won’t do, it is not ok at all to pretend to do a physicist’s job when you know that physics’ ontology is so badly suited to economics. Rather, what we should be able to expect from econometricians and DSGE people alike is to acknowledge the limits of their tools. Without that, I am afraid, no constructive exchange can arise, let alone useful insights be gained.

• March 2, 2020 at 4:51 pm

As you write Christian: “what we should be able to expect from econometricians … is to acknowledge the limits of their tools.” If their articles and (text)books came with warning labels — like “the reader should be aware that we are building our analysis (partly) on assumptions that we can’t always really justify or test and may not really apply to economic data” — then, fine with me, let them go on doing what they do. The pretence of doing anything close to what physicists do would then easily be exposed as more or less pseudo-scientific fraud!

8. March 2, 2020 at 5:01 pm

Sorry to have to repeat myself. Refutations: Hendry’s demonstration that Friedman and Schwartz’s treatment of data in their monetary history was inadequate, and his tests of the stability of the velocity of circulation, undermined the empirical basis for monetarism. No-one now pretends there is a stable “demand for money”, though it was common to believe so in the 1970s. DSGE models consistently fail econometric tests. Their RMSE out of sample is worse than that of their unrestricted reduced form. That has not led to their abandonment, yet.
The physics point was just an analogy. I am not asserting equivalence. I am making the point that Lars’s complaint — that we can’t believe any conclusion of any empirical test because our whole model might be wrong — is an empty one. Of course it might. One can only proceed one hypothesis at a time.
I am impressed by Lars’s chutzpah. He has replied three times now in this discussion without even attempting to answer my original question: how are we supposed to do empirical economics? If you reject controlled experiment and reject econometrics, how do we sort hypotheses and propositions? At least Christian acknowledges the problem.
As the poet Pope said: “she who spurns a man, must die a maid”.
Lars maintains his purity but the price is sterility.

9. March 2, 2020 at 5:19 pm

By the way, I am very aware of the limitations of model-building and statistical analysis. Economics involves complex evolving relationships. Models are based on what has happened and embody hypotheses drawn from that experience. The spotlight illuminates what’s there. It can never tell you what will come out of the dark.
I now wish I had not mentioned CERN. People have focused on the analogy at the expense of what I wrote before. I now expect to be told that economists can’t compare with spotlight builders.

• March 3, 2020 at 10:57 am

Gerald, have you read Christian Mueller-Kademann’s book? I just gave in and ordered the ebook on Amazon (hell to pay with my wife, I think). I am going to read it. Hope you do too.

• March 4, 2020 at 8:53 am

Thanks for recommending my book. I hope you enjoy reading it, too.
I am sorry for the price – the publisher’s choice but as an eBook it’s almost affordable. Regards to your wife :-)!

• March 4, 2020 at 9:03 am

• March 4, 2020 at 9:36 am

Hum, what the heck did I write? I’ll be reading it while cycling across Kyushu! That’s better 😊

10. March 3, 2020 at 9:51 am

Dear Gerald, thanks for the example and the clarification of the CERN analogy. It seems that we three are on the same page on many points. I am afraid that the answer to how to do empirical analysis will remain open for the time being. My understanding is that we are still in the mode of getting the fundamental problems across and that we need this phase in order to avoid mistakes when eventually moving towards developing alternatives. It will be trial and error anyway but so far we haven’t yet succeeded with the first step.

Regarding the monetarist example, I cannot agree with the view that monetarism is completely dead by now, never mind that only a small minority still considers “money demand” to be stable. That’s unfortunate. Some time ago, I had the privilege to talk to David about his attempt to deconstruct Friedman and Schwartz econometrically. In his eyes he was largely not successful. And I think he is right in that, if you measure success in terms of acceptance by the mainstream. In fact, David had offered the AER more convincing arguments back then, but the journal rejected them wholesale (though the AER published a strange account of Friedman’s WWII activities, which I use in class to demonstrate econometric fallacies). After David’s article, several system cointegration papers seemed to support stability of money demand (I have authored a couple of studies myself on this topic), which means that econometrics also worked the other way (I do not count my work here). Nevertheless, I agree with the opinion that David’s analysis was an important econometric attack on monetarism that deserves much more credit.
Instead of econometrics, I’d rather argue that we owe something like the end of monetarism to the BoE’s resurrection of the true (fiat) money-creation process, because it proved (as if that had been necessary :-) ) that the monetarist fantasy of exogenous money is, well, just a fantasy. Following the BoE, even the world’s most monetarist (former) central bank, the Bundesbank, published a mea culpa (of a kind), and so did the Swiss National Bank, for example. Therefore, Lars’s objection to the role of econometrics as a judge for or against theories seems to be largely justified in this example too, although the example has some merits. While not being a good judge, econometrics may well help to uncover interesting data features that deserve explanation, which leads me back to my earlier point that we must insist on acknowledging the limitations of this tool. Again, I kind of sense agreement here on your part.

There is yet another argument that one should bear in mind when discussing the role of econometrics. Suppose we live in an uncertain world (I think we agree on that); then, whatever rational expectations you may have, there cannot be an objective probability distribution function underlying your expectations. It follows that your econometric exercise, however sophisticated, cannot be appropriate. Consequently, failure to prove or disprove any hypothesis involving rational expectations (such as in monetarism or FX determination) is usually met with “explainawaytions” (Thaler), i.e. attacks on the econometric method (just mind the money-demand literature, or FX studies), data choice, data transformations, sample selection, etc. As a result, pet hypotheses systematically survive all econometrics unless sober thinking and communication by authoritative economists (BoE) set the record straight. It’s nice if that happens (occasionally), but this approach is certainly not systematically scientific.

• March 3, 2020 at 3:55 pm

Well, I cannot disagree with what you say, but “explainawaytions” are possible in any discipline. All one can do is inflict repeated embarrassments as new qualifications are advanced. A problem in economics is that even when the preponderance of evidence is against a theory, it tends to survive – Hendry’s complaint, and the nature of his “failure”. I believe that the prestige of empirical testing versus armchair theorising needs to rise to counteract the dominance of apriorism. That is why I find Lars’ attacks on empirical methods so misguided. They encourage and perpetuate the very tendencies in economics that he most dislikes.
There is a gulf between casual and best-practice econometrics, and the former is worthy of criticism. Attacking the best we can do on, frankly, rarefied grounds is to worry about the cat when the tiger is in the vicinity.

• March 3, 2020 at 4:06 pm

It is true that monetarism as a guide to policy went out of fashion for operational reasons to do with the difficulties of base-money control. That is a different point from whether there is a stable demand for money and a predictable velocity of circulation – a majority view 40 years ago, very much a minority view now.

• March 4, 2020 at 11:59 am

Why should anyone wonder if there is a stable demand for money when the seasons, market saturations and coronavirus plagues so obviously vary it? Only if their aim is not to spend money but to make money out of making it!

• March 4, 2020 at 8:18 pm

The failure of DSGE and Monetarism is the impersonal indirectness of their policies, similar to the failure of the church in forcing everyone to receive absolution ONLY through its sacraments instead of cultivating a direct personal relationship with God. A debt jubilee, a universal dividend, a 50% discount/rebate price policy, a monetary policy or two, and tax and boycotting regulations to keep any anti-social corporate bad actors from trying to game such policies would make the economy sing for everyone and enable us to get off our asses and tackle climate change at the same time.

Erudition and scientific cautions are all well and good, but they pale into insignificance in contrast to the new insights and new applications that always accompany genuine paradigm changes. Let us learn by doing, as Aristotle said, but let us ACTUALLY DO WHAT WILL EFFECT the new paradigm.

Otherwise, erudition becomes a stumbling block instead of a bulwark of science.

11. March 5, 2020 at 12:32 am

Dave, when someone (wrongly) posits a stable demand for money, they are saying it is stably related to certain key forcing variables. They don’t have to say “unless there is a pandemic of coronavirus”. That’s understood. No theory can include every conceivable event that might affect outcomes. All theories in social studies make ceteris paribus assumptions. Seasons and market excesses are different – they are sure to happen, so they should be accommodated by the theory if it’s any good. Things outside economics that cannot be predicted, like plagues or life-changing inventions, we just have to accept will result in outcomes we didn’t predict. If the theory works again when the shock passes, it’s good enough. If it doesn’t, it should be ditched.

• March 5, 2020 at 10:44 am

Gerry, your Humean empiricism manifested itself the moment you said “it is”. Real science is about recognising possibilities, their significance and probability. So this coronavirus didn’t exist, though things like it have done: “out of the same mold”, as Sophie realises in Gaarder’s book. So empirical social theory – characterised as “The Maniac” in G K Chesterton’s orthogonal “Orthodoxy” – is “quite as complete as the sane one, but not so large”. C E Shannon focussed on “information capacity” and demonstrated what some of us learned as children: that “you can’t fit a quart into a pint pot”. The mold provided by mainstream economic theories has not been big enough to accommodate the significance of possibilities now becoming evident before our eyes: new viruses against which our immune systems have no defence; fires and floods exceeding our defensive capabilities; a future in which, like plagues of locusts and birds, we humans die out because we have devoured everything and fouled our own nests. “But who cares?” ask the locust-brained. “We have plenty of money, and we are all going to die anyway”. That’s true; but what is its significance for our children?

• March 5, 2020 at 11:36 am

Despite its being frequently used and with seemingly apparent confidence in its common understanding, surprisingly little effort is usually spent on properly defining the term uncertainty. This is in stark contrast to concepts like randomness or determinism, for which an undisputed common ground exists. This section, therefore, reviews key terminology. In particular, we will consider definitions for institutions, events, risk, ambiguity and uncertainty. Careful scrutiny of these terms will simplify the later analysis. (Müller-Kademann 2019, 3)

Christian then goes on to list a few “definitions other than uncertainty in order to operationalise the definition of uncertainty.” The first is institution: “An institution is a set of established and prevalent social rules that structure social interactions.” He notes that,

Since institutions have to be established, they are amenable. This distinguishes them from the laws of physics or purely biological or genetic rules which humans also obey. Non-adjustable rules are by definition beyond the reach of humans. (Müller-Kademann 2019, 3)

Changes in institutions cause “events”; in order to distinguish human-caused events from all other events, human-caused “events” are called “actions”: an event triggered by a human or a group of humans. (Müller-Kademann 2019, 3)

Müller-Kademann, Christian. Uncertainty and Economics (Routledge Frontiers of Political Economy), p. 4. Taylor and Francis, Kindle edition. This raises a question: the current pandemic, it seems, originated in a human-caused action, namely live-animal markets run under such unsafe food-safety conditions that viruses can easily spread among closely packed animals and eventually from animal to human. This was most definitely a foreseeable human action, as it has been discussed very widely in the scientific literature for a long time.

12. March 5, 2020 at 2:26 am

“Things outside economics that cannot be predicted like plagues or life-changing inventions, we just have to accept will result in outcomes we didn’t predict.”

Yes, like the “outside” (as in entirely parasitical, and one that never actually “worked”) 5,000-year-old monetary and financial paradigm of Debt Only, and the new paradigm of Direct and Reciprocal Monetary Gifting, which will finally stabilize economics and make it work for every agent. You just have to be willing to look at the new invention/insight that accompanies all historical paradigm changes and be open enough to accept it.

13. March 5, 2020 at 12:52 pm

Dave, if your point is that economics now has to be expanded to take account of the impact of economic activity on the physical environment, you’ll get no argument from me. I entirely agree. Economics has treated the economy as an open system whose transactions with its environment, the physical substructure, can be ignored. The assumption is warranted only if the economy is insignificant for the environment. That is long past being a tenable position. The economy is now disturbing the environment to the point where deadly feedbacks on human society, not excepting its economic relations, are foreseeable. An economics that ignores that is indeed useless. You can’t blame economists of the past who made an abstraction that seemed warranted at the time. You can most certainly blame contemporary economists who make that abstraction now.

• March 5, 2020 at 7:25 pm

Thanks for this, Gerry, but my point is that scientific truth is (in the mathematical sense) complex, i.e. it does not run along a line from observable effects to a posited cause; it runs from a cause to an area or “field” of effects. Herbert Marcuse’s critique of modern society is called “One-Dimensional Man”. Chesterton talks of academics tending to be one-eyed where normal people have stereoscopic sight. Reasoning back from observable effects, a detective can deduce a cause, but seeking the effects of a cause, the scientist is confronted with a field of possibilities, often specifiable only two-dimensionally, as a space not yet observed. (Hence CERN physicists looking for Higgs bosons.) In other words, my critique is not about people getting things wrong; it is about social science being based on Hume’s inappropriate logic.

14. March 5, 2020 at 8:51 pm

There are scientists and there are those few scientists of sufficient integrative mindset and bravery that they accomplish a breakthrough in an area of their study. Finally there are those people of such integrative philosophical inclination, openness and unclutteredness of mind, desire for pragmatic application and just pure luck who are able to perceive and willing to confront a new paradigm.

“Only the free-wheeling artist-explorer, non-academic, scientist-philosopher, mechanic, economist-poet who has never waited for patron-starting and accrediting of his co-ordinate capabilities holds the prime initiative today.” R. Buckminster Fuller

• March 8, 2020 at 7:47 pm

Craig, thanks for this association of ideas. I’ve got Stimson’s “Scientists and Amateurs: A History of the Royal Society” (1949) saying much the same thing. The Catch-22 position is when one has to be retired to be free to speak, and then few want to listen to you! A couple who bucked the trend were E F Schumacher (“Small is Beautiful”) and J E Lovelock (“Gaia”).

15. March 6, 2020 at 4:39 pm

Dave,
you’ve lost me, I’m afraid. I understand that observing an event and working back to a cause may be easier than guessing what will follow from a given event, given all the other variables in play. When something has happened, after all, all the counterfactuals are ruled out, but looking forward there is a universe of possibility. But I’m not sure that’s what you meant by a field. Economics does not have to deal with quantum field effects, thank goodness. Indeterminacy in our case is epistemological rather than ontological.
Hume didn’t introduce any innovations in formal logic, so I don’t know what you mean by his inappropriate logic either. I gather you don’t like him, though!

• March 8, 2020 at 8:35 pm

Gerald, it is Hume’s philosophy I don’t like, and that not so much because it was misguided as because it was plausible enough to be misguiding. The man himself was a creature of his times: a younger son who c.1740 had to seek his own fortune in an atmosphere poisoned by a French Catholic’s revocation of the Edict of Nantes (1685), the Glorious Revolution (1688) against his cousin James II, and the Protestant massacre of Catholic Highlanders at Glencoe (1692). In that context representative government sounds like a good idea compared to “religious” kings. At a Critical Realist conference in 2001, I said:

“[W]e are agreed, the problem is Hume’s ontology (or logic, or theory of truth). Roy Bhaskar, following Kant, has shown there is more to reality than Hume supposed, and I agree with him. What I haven’t seen is a direct attempt to show that Hume was simply wrong (because ‘cause’ implies transformation, not existence), that information can pass the sensory barrier (the method not being what Hume supposed), that you can derive an ‘ought’ from an ‘is’ (the logic not being what Hume presumed), that it is not a mere assumption that there are persistent objects as well as sensory events (the physics of Hume’s time just not being up to showing it). All this follows naturally from information science, but I have to say the argument and any failings in it are entirely my own.

“Secondly, I come to what in my experience is a major issue, the reality that people are different, have different types of interest, motivation, sensitivity, imagination, judgement etc quite apart from differences in background and education, so that even people who live or work together can fail to really understand where the other is coming from.

“Just a few years ago I was introduced to the Jung/Myers-Briggs personality analysis, and suddenly, after forty years of confusion, my wife’s conduct and others’ difficulties with mathematical concepts became almost totally intelligible. I had already worked out the brain functions involved and could see immediately how their combinations operated. Reflecting on my life in scientific research, I could see why different types of people ended up in different jobs, and thus that the jobs themselves were different. From the points of view of environmental politics and changing the world, it seemed highly significant that a majority of us are sensory extroverts like my practical wife, living life as it comes, and only 1% intuitive introverts like myself, really concerned about the future”.

Of Hume himself I wrote:

“Let me say at once that my criticism is not of Hume but of what he taught. He was a man of his time, informed by Bacon and Newton, reacting on the one hand to the loose philosophy of Locke (critic of Descartes) and the strange one of Berkeley, and on his other theme (his atheist morality) to the savagery of religion in a Scotland of the puritan Covenant. Cardwell, discussing eighteenth century steam, set the scene thus.
> “According to Francis Bacon there are two different types of invention; there are those which, like the mariner’s compass and firearms, depend on some sort of prior scientific knowledge; and there are those which, like the printing press, are substantially independent of science. Nowadays we could, of course, extend the lists enormously, adding radar, television, synthetic dyestuffs, plastics etc., to the first two inventions, and barbed wire, zip fasteners, bicycles, sewing-machines etc., to the third”.

“The examples show that only in the second type can one see how the inventions work, so we can adopt as a working definition of science that it is concerned with making evident what cannot be seen. The information processing of the brain is thus surely a matter for science.
In 1744 Hume was “unsuccessful candidate for the Chair of Ethics and Pneumatic Philosophy at Edinburgh”. [Italics mine]. Was this Hume likening thought to the invisible wind, trying to envisage effects, motion abstracted from things, what we now call process? Or was this a still Christian university envisaging the workings of Love, the Spirit of God? (For ‘spirit’ means breath). Certainly, much that is scientifically evident now was not evident then, and conversely with religion.

“What, then, motivated Hume’s arguments against causality? In the background is Newton’s overturning of Aristotle’s assumption that continuous movement has to be continuously caused. (In space there is no friction.) In the distant background are Plato’s belief in “eternal ideas” which our spirits already have, ready for “education” to draw them out, and Aristotle’s more physical view that knowledge, “science”, is acquired, given “training” to go and look for it. The rationalist philosophy of Descartes had been Platonic, Locke’s rejoinder an update on Aristotelian training, and the brilliant Bishop Berkeley, realising Locke’s theory could lead to a godless evolutionism, had “irrefutably” relocated Plato’s eternal ideas in the mind of God. Echoes of all these themes may be found in Hume, but his arguments, focussed as they eventually are on “first causes”, seem less a misunderstanding of Newton than a rejection of Berkeley’s theology. Hume had a personal need to justify his own [at the time dangerous] rejection of God as “first cause”, his admitted atheism.

“This said, his denial that we can know anything beyond our own experience undermines a correspondence theory of truth, and led ultimately to the belief that truth and morality are redundant”.

On logic, then, the point was that Hume DIDN’T “introduce any innovations in formal logic” even though Newton had introduced his four-level equations of motion. Aristotle’s logic deals with words (names of sets) rather than processes like managing an economy or governing a society; hence his decision-making processes based on counting heads.

16. March 15, 2020 at 2:58 pm

You all miss what happened between capitalism and science. From the 19th century onward to the current time there has been a union of capitalism and science. The union was not and is not an alliance of equals, however. It has always been a master-servant relationship, with capitalism as the dominant partner. The many successes of modern science have created the illusion that it is an autonomous factor driving the process of historical change, but as Andrew Ure correctly observed, science has long been “at the call of capitalism” and “in her service!”

The Laws of commerce are the laws of Nature, and therefore the laws of God.
-Edmund Burke, Thoughts and Details on Scarcity (1800)
The Growth of a large business is merely the survival of the fittest.
-John D. Rockefeller (c. 1900)

Today, the production of knowledge occurs on an industrial scale in science factories, also known as research laboratories. Nearly all scientific research is the work of professional scientists either directly employed or indirectly funded by capitalist corporations and governments. Consequently, knowledge and even nature itself have become increasingly “commodified”, converted into things that can be bought and sold. Scientific knowledge production in the 19th and 20th centuries was shaped not by considerations of human needs but by the profit motive, and this continues in the 21st century. It tends to be masked by the dogma of scientific neutrality: that the objectivity of scientists shields their findings from external influences and guarantees that the content of modern science is an ever-expanding ocean of objective truth. But this simply is not the case. Is it not notorious that the medical research performed by pharmaceutical companies routinely subordinates human well-being to narrow proprietary interests? Are the tobacco industry’s “scientific” studies suggesting smoking is neither cancer-causing nor addictive not laughable? Is not the entire American medical establishment now focused without apology on extracting as much wealth as possible out of every patient and patient service? Even in the few areas such as climate change where some scientists break free of their capitalist masters, they pay a high price for those acts of rebellion.

A favorite of mine in this history of capitalism’s control of science, and the various forms of elitism that went with it, is Francis Galton, a first cousin of Charles Darwin. Galton urged the “gifted class” to produce more offspring of their own and to take measures to limit the procreation of children “inferior in moral, intellectual and physical qualities.” Among other things, Galton attempted to provide a biological justification for scientific elitism. The primary thesis of his books “Hereditary Genius” and “English Men of Science” is that great men, including creative scientists, tend to be related, and that therefore a series of elite families contributed perhaps the majority of the distinguished statesmen, scientists, poets, judges, military commanders, and business leaders of his day and of the past. A wealthy polymath, Galton is usually awarded a place in the pantheon of ‘Great Minds’ for pioneering methods of applying mathematics to the study of human behavior. He has been called the father of intelligence testing, is frequently credited with the invention of fingerprinting as a means of identifying individuals, and modern statistical analysis owes an immense debt to his undeniably brilliant innovations. But his eugenics, as well as his other attempts to apply scientific methodology to the solution of social problems, was constructed on the false foundation of social intolerance and bias. He held “axiomatically” that “certain marked types of character” can be “justly associated” with the different races of humankind. Galton maintained, for example, that the typical “West African Negro” has “strong impulsive passions, and neither patience, reticence, nor dignity…. He is eminently gregarious, for he is always jabbering, quarrelling, tom-tom-ing, or dancing … and he is endowed with such constitutional vigour, and is so prolific, that his race is irrepressible.” Eugenics was, among other things, the supposedly scientific answer to the demographic threat posed by the sexually hyperactive black race and its infernal drumming.

Because it seemed to offer a scientific explanation of why privileged social groups deserved their privileges, Galton’s eugenics gained and maintains an influence far exceeding what it would have achieved if science really were a disinterested, objective quest for truth. That influence was strong and growing stronger at the end of the 19th century and persisted through the 20th and into the 21st century. And the capitalists adore it, as it justifies their treatment of workers and their control of national and international affairs.