Mainstream economics — slipping from the model of reality to the reality of the model
from Lars Syll
A couple of years ago, Paul Krugman had a piece up on his blog arguing that the ‘discipline of modelling’ is a sine qua non for tackling politically and emotionally charged economic issues:
In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.
So when ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’
Yours truly is, to say the least, far from convinced. The alarm in my brain tells me that this, rather than being helpful for understanding real-world economic issues, is more of an ill-advised plaidoyer for voluntarily donning a methodological straitjacket of unsubstantiated assumptions known to be false.
Let me just give one example to illustrate my point.
In 1817 David Ricardo presented — in the Principles — a theory meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of exports and imports is governed by comparative advantage: countries export goods in which they have a comparative advantage and import goods in which they have a comparative disadvantage.
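To fix ideas, here are the stylized labour costs from Ricardo's own example in the Principles, set out in a standard textbook rendering (the tabulation and the arithmetic are added here for illustration):

```latex
\begin{array}{lccc}
 & \text{wine} & \text{cloth} & \text{opportunity cost of wine (in cloth)} \\
\text{Portugal} & 80 & 90 & 80/90 \approx 0.89 \\
\text{England} & 120 & 100 & 120/100 = 1.20
\end{array}
```

Portugal is absolutely more productive in both goods, yet its opportunity cost of wine is lower, so both countries gain when Portugal exports wine and imports cloth.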
Ricardo’s theory of comparative advantage, however, didn’t explain why the comparative advantage was the way it was. At the beginning of the 20th century, two Swedish economists — Eli Heckscher and Bertil Ohlin — presented a theory/model/theorem according to which comparative advantages arise from differences in factor endowments between countries. Countries have a comparative advantage in producing goods whose production intensively uses the factors that are most abundant there. They would accordingly mostly export goods that use their abundant factors of production and import goods that mostly use their scarce ones.
The Heckscher-Ohlin theorem — like the elaborations on it by e.g. Vanek, Stolper and Samuelson — builds on a series of restrictive and unrealistic assumptions. The most critically important — besides the standard market-clearing equilibrium assumptions — are:
(1) Countries use identical production technologies.
(2) Production takes place with constant returns to scale technology.
(3) Within countries, factor substitutability is more or less infinite.
(4) Factor prices are equalised (the Stolper-Samuelson extension of the theorem).
These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false. Building theories and models on unjustified, patently ridiculous assumptions we know to be false does not deliver real science. Science fiction is not science.
That said, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.
From a methodological point of view, one can, of course, also wonder how we are supposed to evaluate tests of a theorem built on assumptions known to be false. What is the point of such tests? What can those tests possibly teach us? From falsehoods, anything logically follows.
Some people have trouble with the fact that by allowing false assumptions mainstream economists can generate whatever conclusions they want in their models. But that’s really nothing very deep or controversial. What I’m referring to is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.
Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever.
Using false assumptions, mainstream modellers can derive whatever conclusions they want. Want to show that ‘all economists consider austerity to be the right policy’? Just assume, e.g., that ‘all economists are from Chicago’ and that ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but it is, of course, factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.
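For readers who want the logical point pinned down, this is the textbook derivation behind the ‘principle of explosion’ invoked above, a standard natural-deduction sketch added here for illustration: from a contradiction, any proposition Q whatsoever follows.

```latex
\begin{align*}
1.\;& P \land \lnot P && \text{premise (the contradiction)} \\
2.\;& P               && \text{from 1, conjunction elimination} \\
3.\;& P \lor Q        && \text{from 2, disjunction introduction} \\
4.\;& \lnot P         && \text{from 1, conjunction elimination} \\
5.\;& Q               && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```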
“… to say the least, far from convinced”
Much too polite. Krugman’s claim is deluded bullshit.
I believe that Karl Popper and Lars Syll are misusing the logical argument “from falsehoods, anything logically follows.” It is valid for an ideally constructed mathematical system; it is reductio ad absurdum. At the least, Lars Syll is using Popper in a misleading way.
The logic must be applied within a well-specified theory or system of theories, i.e. a set of many concepts and assumptions. The point of logical inference (and of practically all effort in mathematics) lies in knowing which propositions can be derived from a set of assumptions (postulates, or axioms) and which cannot. If someone adds an arbitrary assumption that contradicts the assumptions of the theory, the theory collapses (because, as Lars contends, any proposition is then valid and the theory tells you nothing).
Lars Syll is entitled to point out that this or that assumption of a theory is unrealistic and to argue that we should construct a more plausible theory. But if he does, he is required to present an alternative theory (or at least a tentative one) that can replace the theory he rejects. I do not press this point in this comment, however, because this is a rare occasion on which Lars discusses international trade theory per se.
Of course, in my opinion, Heckscher-Ohlin theory is wrong. I do not dispute that. But I must point out that my reasons for rejecting Heckscher-Ohlin theory (HO theory hereafter) are not only that it makes the four “unrealistic assumptions” that Lars deems “most critically important” (I reject three of the four, but I defend assumption (2)), but also that it assumes a kind of aggregate production function. On the latter point, HO theory shares the defect of almost all mainstream macroeconomic theories. Whether an aggregate production function is valid is a difficult empirical question, but we have ample reasons to reject the concept. It is something “not even wrong” (Wolfgang Pauli’s phrase), as Felipe and McCombie argue in the book they wrote on this theme.
I do not know why Lars thinks that assumption (2) is unrealistic. The meaning of “constant returns to scale” is ambiguous. If it means proportionality between direct inputs and the product, it is a rather widely observed “law.” Some mainstream economists wanted to defend “decreasing returns to scale,” because without it it was difficult to define supply functions. This is an old story between the Marshallians and Sraffa. As far as variations in capacity utilization are concerned, constant returns to scale is an exceptionally good law compared with other economic “laws.” A classical but not well-known result is Joel Dean’s Statistical Cost Estimation (1976, 1977; the research was done in the 1940s).
The capacity utilization rate changes day by day or week by week, with a given set of machines and installations; it is a short-period question. When new capacity is planned, a different problem emerges. In that case we normally observe increasing returns to scale. For example, a rough “0.6 rule” holds: the total cost of constructing a capacity x is proportional to x to the power 0.6. This is only an empirical law, and the scale coefficient can range from 0.2 to 1.2 according to Tribe and Alpine (1986). (If the coefficient is greater than 1, returns are decreasing.)
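A minimal sketch of the rule’s arithmetic (the base cost of 1 and the capacities are hypothetical; only the 0.6 exponent and the 0.2–1.2 range come from the sources cited above):

```python
# Illustrative arithmetic for the "0.6 rule": total construction cost of a
# plant of capacity x scales roughly as x ** 0.6, so cost per unit of
# capacity falls as plants get bigger (increasing returns to scale).
# Tribe and Alpine (1986) report scale coefficients from about 0.2 to 1.2;
# a coefficient above 1 would mean decreasing returns.

def construction_cost(capacity: float, exponent: float = 0.6) -> float:
    """Total construction cost for a given capacity (base cost normalized to 1)."""
    return capacity ** exponent

for x in (1, 2, 4, 8):
    total = construction_cost(x)
    print(f"capacity {x}: total cost {total:.2f}, cost per unit {total / x:.2f}")
# An 8x larger plant costs only about 3.5x as much to build.
```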
So assumption (2) is not so “unrealistic” as long as the analysis concerns short or intermediate periods. By the way, Paul Krugman’s trade theory (often called New Trade Theory) draws on two characteristic assumptions: (1) increasing returns to scale, and (2) consumers’ love of variety (the Dixit-Stiglitz utility function). On the basis of these two assumptions, Krugman claimed to have explained intra-industry trade. However, he explained only a small part of it, because he could explain the international division of labor (the pattern of specialization) only for final consumer goods. It is now pointed out that about 60% of world trade is input trade (raw materials, parts, and intermediate products). This category of international trade is simply assumed away in Krugman’s theory.
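For reference, the Dixit-Stiglitz “love of variety” utility mentioned above is standardly written as a CES aggregator over n varieties (this rendering is added here for readers unfamiliar with it):

```latex
% CES utility over n varieties, with 0 < \rho < 1:
U = \Bigl( \sum_{i=1}^{n} x_i^{\rho} \Bigr)^{1/\rho}
% "Love of variety": spreading a fixed total X evenly over n varieties
% gives U = n^{(1-\rho)/\rho} X, which is increasing in n.
```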
Now, if mainstream trade theory is no good, what do we have as an alternative theory of international trade? Lars Syll may not know it, but a fairly good theory has now been constructed. See my paper:
The new theory of international values: an overview (2017).
This is essentially a generalization of Ricardo’s theory of trade. It assumes a world economy in which:
(0) There are M countries and N goods (M and N can be any integers greater than or equal to 2).
(1) Each country has its own set of production techniques.
(2) Production takes place with constant-returns-to-scale technology.
(3) Each production technique has fixed input coefficients, but there is a choice among production techniques.
(4) All goods have the same price everywhere, but workers in different countries have different wage rates.
(5) Input trade is freely admitted.
(6) There are no transportation costs, no tariffs, and no non-tariff restrictions.
Assumption (6) is manifestly unrealistic, but it is necessary in order to clarify the logic of international trade. Assumption (4) is a consequence of (6). It is not impossible to build a theory without assumptions (4) and (6), but we would only get a complicated theory that is hardly understandable even for economists. It is silly to exclude all “unrealistic” assumptions from a theory. On many occasions we should admit some unrealistic but plausible assumptions for the purpose of analysis.
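As a toy illustration of the logic that assumptions (4) and (6) buy, here is a sketch of cost-minimizing specialization (the countries, goods, wages, and labour coefficients are hypothetical, and intermediate inputs are ignored, so this shows only the simplest Ricardian core of the theory):

```python
# Toy sketch: with one world price per good (assumptions (4) and (6)),
# each good is produced wherever wage x labour coefficient is lowest.
# In the full theory, wage rates and prices adjust so that every country
# is competitive in at least one good; here the wages are simply given.

wages = {"A": 1.0, "B": 0.6, "C": 0.4}  # wage per unit of labour, by country

labour = {  # labour needed per unit of output: labour[good][country]
    "steel":   {"A": 2.0, "B": 5.0, "C": 9.0},
    "textile": {"A": 4.0, "B": 5.0, "C": 6.0},
    "grain":   {"A": 3.0, "B": 4.0, "C": 4.5},
}

for good, coeffs in labour.items():
    costs = {country: wages[country] * a for country, a in coeffs.items()}
    producer = min(costs, key=costs.get)
    print(f"{good}: cheapest producer {producer}, unit costs {costs}")
```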
The new theory still has many defects. For example, there is a clear gap between assuming a relatively stable international value (a set of wage rates for countries and prices for products) and the existence of free exchange markets for currencies. However, the theory is still developing. The most important result since 2017 is the discovery of a new definition of (international) regular value. With this new definition, it is now possible to analyze unemployment in international trade situations.
There are four generations of trade theories in mainstream economics (international micro): (1) Ricardian theory, (2) HO theory, (3) New Trade Theory, and (4) New New Trade Theory. However, none of them possesses a general theory of input trade, and all of them (except perhaps (4)) exclude unemployment by assumption. Although the new theory of international values has various defects, it is much superior to all four generations of mainstream trade theories.
There is no need to criticize mainstream economics eternally. We have our own economics that may supersede mainstream economics, at least in specific fields.
The carpenter makes a bad chair that is uncomfortable and falls over. Do we blame his saw? Lars criticizes all-too-common assumptions in economic theorising and provides an example of a model where strong conclusions follow only from counterfactual assumptions. One cannot disagree. However, that does not bear on Krugman’s point: that when thinking through a complicated problem it is often instructive to write down your assumptions, or stylizations of the facts as you understand them, and see what follows. Are the implications what you expected, or are there implications you did not foresee? This is frequently a useful thing to do. You are not obliged to make excessively strong or counterfactual assumptions. The fact that people do so is not in itself an argument against taking a systematic approach.
That a lot of contemporary economics is nonsense does not mean that all the methods economists (mis)apply are useless. In fact, they are widely applied in other disciplines. Modelling is part of the armoury of all sciences.
Lars Syll published a post with a similar title one and a half years ago:
https://rwer.wordpress.com/2021/12/01/mainstream-economics-a-harmful-fantasy/
There were more than 20 comments there. Holtham and I joined in, and Tony Lawson posted three comments, which is remarkable, because it is rare to read his comments on social networks like this forum.
Rereading those comments on Lars’s December 1 post, I am a bit sad, because there is no sign of improvement in our discussion. Lars Syll keeps repeating his refrains. Is it simply that we are forgetful? Or do we lack some fundamental points of argument?
Geoff Davies is a geophysicist who wrote a book on economics:
Economy, Society, Nature: An introduction to the new system-based, life-friendly economics (World Economics Association Books, 2019)
His career as a geophysicist gives him a fresh picture of economics as a scientific effort. Chapter 9, “Scientific economics?”, teaches us much and should be useful in enriching our discussion.
He also wrote an article for the Real-World Economics Review: “A modest proposal for generating useful analyses of economies: a brief note.”
Part of this article is reproduced as a post on the RWER Blog, but it is necessary to read the whole original article to think through questions of science and scientific method.
Davies’s main recommendation is this:
I find many interesting comments like these:
I believe it is sufficient to read the following two paragraphs (in the book) to situate the role and use of mathematics in the sciences. Why do we need the excessive arguments that Lars Syll repeats again and again?
May I post a slightly edited version of what ChatGPT turned my original comment into?
In response to yoshinorishiozawa’s comments, I have some questions. With regard to the nature of mathematical proof, I’m curious about whether the Gödel incompleteness theorem undermines the certainty of mathematical truths. Additionally, given that some mathematical truths may be unprovable, can we still confidently apply logical inference within a well-specified theory? Does this approach not suffer from the same limitations as Hilbert’s program, which Gödel famously critiqued? Furthermore, if Gödel’s point was that contradictory statements can emerge from within any given axiom set, what guarantees do we have that any given set of mathematical assumptions themselves don’t undermine the integrity of any given theory? Finally, is it reasonable to embrace inconsistency and completeness (i.e. be a Trivialist) over logic’s preference for incompleteness and consistency, or does this trivialize the value of taking a systematic approach?
I do not know ChatGPT, but in general generative AI reproduces what is most often spoken among people. When you ask about “the nature of mathematical proof,” it produces an opinion closely related to what is most often said among philosophy-oriented people. A natural theme would be Gödel’s theorem. Philosophy-oriented people talk much about the theorem, but mathematicians today are not shocked by it and do not think it matters much for the future of mathematics.
The wonder of Gödel’s theorem was that the axiomatic system of arithmetic is “incomplete” in the sense that there always exist propositions that can be written in the language of the system but whose truth value is not decidable within the system. This was a shock at the time the theorem was discovered, but it is a very ordinary and common fact for many other axiomatic systems. Take, for example, the theory of groups (or rings, fields, or any algebraic system): it is “incomplete” in the same sense, i.e., there always exist propositions written in the language of the system whose truth value is not decidable within the system.
A group (as an algebraic system) is not determined by the (standard) group axioms. A group may have a deep, complex, and diversified structure, and each group may differ from every other. The merit of viewing them all as groups lies in the fact that we can use the many theorems of group theory. So mathematicians do not worry about the significance of Gödel’s theorem.
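A standard textbook example (added here) makes the point concrete: the commutativity sentence is true in the additive group of integers but false in the symmetric group S_3, so it is undecidable from the group axioms alone.

```latex
% True in (\mathbb{Z}, +), false in S_3, hence independent of the axioms:
\forall a\, \forall b\; (a \cdot b = b \cdot a)
```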
yoshinorishiozawa, are there not undecidable statements about groups (the Whitehead problem, the word problem, Hilbert’s tenth problem) in the meta-language, such that if you actually try to operationalize groups, engineers will invariably have to use kludges that can represent mathematical statements unreachable from your model language, and from all meta-languages too?
In the General Theory, much revered by Lars, Keynes makes a number of assertions about how “the economy” works. If you assemble these assertions you have a model of the economy, or rather a model economy, since Keynes is not always explicit about whether he is talking about the UK, the US or all monetary economies everywhere. I am not sure why he escapes Lars’s criticisms of the “deductive” approach because, like most people, he draws conclusions from his arguments. And he abstracts massively from important aspects of reality, like political interests and motivations.
Largely as a result of Keynes’s work, a system of national accounts was devised that attempted to make concrete certain concepts he had freely employed: consumption, saving, investment, exports etc.
Once those data existed, it became possible to see whether Keynes’s generalisations applied, always supposing the statistical definitions were consistent with his.
I wonder at what point Lars considers that primordial virtue was lost. Did writing Keynes’s model down symbolically rather than in words lose a magic ingredient? Did subjecting it to vulgar testing against quantitative data constitute sacrilege? Must theory remain vague? Are all attempts at specification and testing misguided? Is it illegitimate to explore a theory’s limits?
I am genuinely puzzled as to why Keynes and his “general theory” escape the criticisms that Lars applies to other economists. I wonder whether he thinks the theory is unimprovable and, if not, I have no idea what he would regard as a legitimate attempt at improvement.
Analytics can also be important. A short theory or model is better than nothing: you can talk about it, and everyone can give a positive or negative opinion of it. But if something does not exist, if it was never created, it is nothing, it is zero, and you cannot talk about it either. Theories and models are not about the people who created them, but about the hundreds or thousands of experts and eminent economists all over the world who take them up and use them in their own projects. It is not a simple matter. This is a difficult path.
László Kulin
social expert
Hungary