## Do ‘small-world’ models help us understand ‘large-world’ problems?

from **Lars Syll**

In L. J. Savage’s seminal *The Foundations of Statistics* the reader is invited to tackle the problem of an uncertain future using the concept of ‘lottery tickets’ and the principle of ‘look before you leap.’

> Carried to its logical extreme, the ‘Look before you leap’ principle demands that one envisage every conceivable policy for the government of his whole life (at least from now on) in its most minute details, in the light of the vast number of unknown states of the world, and decide here and now on one policy. This is utterly ridiculous … because the task implied in making such a decision is not even remotely resembled by human possibility. It is even utterly beyond our power to plan a picnic or to play a game of chess in accordance with the principle, even when the world of states and the set of available acts to be envisaged are artificially reduced to the narrowest reasonable limits.

Savage was very explicit on the restrictions one had to put on decision analysis based on subjective (‘personal’) probabilities. Outside ‘small worlds’ it would be “utterly ridiculous” to claim applicability for a theory based on its restricted assumptions.

In *The World in the Model*, Mary Morgan characterizes the modelling tradition of economics as one concerned with “thin men acting in small worlds” and writes:

> Strangely perhaps, the most obvious element in the inference gap for models … lies in the validity of any inference between two such different media – forward from the real world to the artificial world of the mathematical model and back again from the model experiment to the real material of the economic world. The model is at most a parallel world. The parallel quality does not seem to bother economists. But materials do matter: it matters that economic models are only representations of things in the economy, not the things themselves.

Now, a salient feature of modern mainstream economics is the idea of science advancing through the use of “successive approximations” whereby ‘small-world’ models become more and more relevant and applicable to the ‘large world’ in which we live. Is this really a feasible methodology? I think not.

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something of the target system. But purpose-built assumptions made solely to secure a way of reaching deductively validated results in mathematical models — like “rational expectations” or “representative actors” — are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue — *as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons*.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are *representations* that are *directly* examined and manipulated to *indirectly* say something about the target systems.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or *similar* to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in *any* way. The falsehood or unrealisticness has to be *qualified*.

One could of course also ask for *robustness*, but the “credible world,” even after having been tested for robustness, can still be far from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not *per se* give a warrant for exporting the claims to real-world target systems.

Anyway, robust theorems are exceedingly rare or non-existent in economics. Explanation, understanding and prediction of real world phenomena, relations and mechanisms therefore cannot be grounded (solely) on robustness analysis. Some of the standard assumptions made in neoclassical economic theory – on rationality, information handling and types of uncertainty – are not possible to make more realistic by “de-idealization” or “successive approximations” without altering the theory and its models fundamentally.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable – in the sense that when we export them from our models to our target systems they do not change from one situation to another – then they only hold under *ceteris paribus* conditions and are *a fortiori* of limited value for our understanding, explanation and prediction of our real-world target systems.

The obvious ontological shortcoming of a basically epistemic – rather than ontological – approach such as “successive approximations” is that “similarity” or “resemblance” *tout court* do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the “successive approximations” do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a *substitute* system that does not bridge to the world but rather misses its target.

So, I have to conclude that constructing “minimal economic models” — or using microfounded macroeconomic models as “stylized facts” or “stylized pictures” somehow “successively approximating” macroeconomic reality — is a rather unimpressive attempt at legitimizing the use of ‘small-world’ models and fictitious idealizations for reasons that have more to do with mathematical tractability than with a genuine interest in understanding and explaining features of real economies.

Many of the ‘small-world’ model assumptions standardly made by mainstream economics are *restrictive* rather than *harmless* and can *a fortiori* not in any sensible meaning be considered approximations at all.

The problem posed by Lars Syll is quite right. Economics has been ignoring questions of scale. A mechanism that is valid in a small (or toy) model is not necessarily the mechanism that drives a large economy. Just as different institutions were required for the working of larger economies, we must ask whether a theory of the working of an economy remains valid for a larger economy.

In the history of humankind, the standard range of transactions became larger and larger: from a community of several families, to a village of dozens of families, a town of thousands of families, a region covering a million people, a country of tens of millions of people, an international region, and finally to the whole world that contains billions of people. If we want to consider a modern economy, it is evident that the economy comprises at least millions of people and thousands of kinds of traded commodities. A relevant theory must show how this big system works through transactions between its members. This is a very simple test for judging whether a theory is a real theory that reflects the real economy in an essential way or a toy model that can satisfy only those economists who have no sense of scale.

There are few theories that can pass this simple test. One in mainstream economics is the Arrow-Debreu theory of a competitive economy. Its theoretical structure is, in a sense, scale-free: whatever the number of agents (households and firms) and the number of goods, the theory proves the existence of a competitive equilibrium. However, this is only an appearance. The theory must assume tremendous capabilities for each agent: (1) the agent must know all the prices of all products in all places almost at once (infinite sight), and (2) the agent can calculate the optimal solution of a maximization problem within a short time span (unbounded rationality). These two assumptions are too stringent for humans to satisfy. In addition, the existence of an equilibrium in the Arrow-Debreu model does not mean that the system works. To understand how the economy works in the Arrow-Debreu model, we must normally assume an imaginary auctioneer who collects all necessary information, calculates whether this gives an equilibrium state (within a small range of admissible errors), and announces a tentative price system. Further, the cycle of price announcement and response from all agents (households and firms) must converge rapidly. The relaxation time depends on the number of products. In view of all these requirements, the Arrow-Debreu model is too rough a parable, one which works only in an imaginary world. We know of no system that works in the Arrow-Debreu way even for a very small economy. The Bourse de Paris worked on a different principle from the one Walras imagined.
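The auctioneer's announce-and-respond cycle can be sketched in a few lines. This is only a toy illustration of tâtonnement for a single good with a hypothetical linear excess-demand function; the function, starting price and step size are all illustrative assumptions, not part of the Arrow-Debreu theory itself:

```python
def excess_demand(p):
    # Hypothetical linear market: demand 10 - p, supply p,
    # so excess demand is zero at the equilibrium price p = 5
    return (10 - p) - p

def tatonnement(p=1.0, step=0.1, iters=200):
    # The imaginary auctioneer announces a tentative price, observes
    # the agents' excess demand, and revises the price accordingly;
    # no trade takes place until the process has converged
    for _ in range(iters):
        p += step * excess_demand(p)
    return p
```

Even in this one-good toy the process converges only because the price update is a contraction; with many goods and interdependent demands, convergence (and its speed) is exactly the open question raised above.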

What about the situation in heterodox economics? As far as I know, there is no theory that clarifies how a modern economy works, except the theory developed in our book *Microfoundations of Evolutionary Economics*.

In Chapter 1, which bears the same title as the book itself, I explained how to formulate the economic behaviors of humans. A human intentional behavior is organized as a programmed series of if-then directives. The relevance of this kind of behavior is assured only when the environment (i.e. economic processes) shows a certain stationarity (for the concept of “stationarity,” see my old paper “The Primacy of Stationarity,” 1989). Each if-then behavior, or C-D transformation, detects a sign from a small set of local information. Chapter 2 is a new theory of prices based on the understanding that human economic behaviors are routine behaviors relying on highly selected information about the environment. The price setting of a firm is based on a calculation of normal unit cost plus markup (normal pricing, or a refined version of full-cost pricing). There are a substantial number of exceptions, but this assumption (Postulate 3) makes clear the range of validity of our price theory. It is evident that this theory cannot apply to the financial economy. Section 2.7 and Chapter 3, by Morioka, are introductions to the analysis of the quantity adjustment process as a whole. The quantity adjustment of each firm is simple: it sets the production volume of a product based on the past demand expressed for that product. This simplicity, however, does not make the analysis of the total interaction process for the economic system as a whole easy.
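The two firm-level routines described above — markup (full-cost) pricing and output planning based on past demand — can be sketched as simple if-then-style rules. The markup rate, averaging window and buffer parameter below are hypothetical illustrations of mine, not values taken from the book:

```python
def set_price(unit_cost, markup=0.2):
    # Normal (full-cost) pricing: normal unit cost plus a markup
    return unit_cost * (1.0 + markup)

def plan_output(past_demands, buffer_ratio=0.1):
    # Routine quantity adjustment: produce roughly the average of
    # recent demand for the product, plus a small inventory buffer
    average_demand = sum(past_demands) / len(past_demands)
    return average_demand * (1.0 + buffer_ratio)
```

Each rule uses only local information (the firm's own costs and the demand it has observed), which is the point of the routine-behavior formulation: no firm needs the global knowledge the Arrow-Debreu agent is assumed to have.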

If the total process is divergent, the supposed process cannot be the core mechanism that drives quantity adjustment, because in such a case the process is self-destructive and must shift to another mode of adjustment. Fortunately, Morioka proved that the process as a whole converges irrespective of the number of products. Proving this requires estimating the absolute values of all eigenvalues of a large matrix. I also tried to do the same thing, but soon abandoned it because it seemed impossible: the matrix was so big and its form so irregular that no known properties of matrices seemed usable. Astonishingly, Morioka first examined an associated simpler matrix by computer (using Mathematica), the eigenvalues of which are arranged like a polynomial function. He could then evaluate the absolute values of all other eigenvalues of the large matrix on the basis of this examination. This difficult proof is given in Chapter 4. If any readers of this blog doubt whether the proof is really valid, I am ready to send them a PDF of the book so that they can check for themselves. Please send me an e-mail at y@shiozawa.net.
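The stability criterion at stake — a linear adjustment process converges when every eigenvalue of its reaction matrix lies strictly inside the unit circle — can be illustrated on a toy case. The 2×2 matrix and the pure-Python power iteration below are my own hypothetical sketch, nothing like Morioka's actual large, irregular matrix or his proof:

```python
def mat_vec(A, v):
    # Multiply matrix A (given as a list of rows) by vector v
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def spectral_radius(A, iters=500):
    # Power iteration with max-norm normalization; for a nonnegative
    # matrix this converges to the dominant (Perron) eigenvalue,
    # which equals the spectral radius
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(A, v)
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam

# Hypothetical reaction matrix of a two-product quantity adjustment,
# x_{t+1} = A x_t + demand terms: the process is stable because the
# spectral radius of A is below one
A = [[0.5, 0.2],
     [0.1, 0.4]]
```

For a 2×2 example the eigenvalues can be checked by hand (here 0.6 and 0.3); the difficulty Shiozawa describes is precisely that no such direct check is available once the matrix has thousands of products' worth of rows.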

Our theory, we believe, is really a starting point for all of heterodox economics. It opens a new theory of value and a vision, totally different from that of mainstream economics, of how the modern industrial economy works. To add a word, this new vision would not have been possible without a detailed formulation of the whole process and the solving of delicate mathematical questions.

Video: “The Toxic Power of Monopolies (David Dayen Interview)” Executive Editor at the American Prospect and author of the new book “Monopolized,” Sept 8, The David Pakman Show

https://youtu.be/7g99lKLk898 via @YouTube

Did Keynes’ income-expenditure model help us understand the complex real world? It was certainly a small model: just a handful of variables.

Minsky’s model was expressed verbally though Steve Keen and others have formalized something like it. Did it help in understanding the last financial crisis, which certainly had multiple, complex causes?

Did Ricardo’s comparative advantage not advance at all our understanding of gains from trade?

Each of those models has been applied in situations where it was inappropriate, as well as illuminating other situations.

What makes one model useful or potentially useful while others are barren? Now that’s an interesting question.

1) Its identification of the actual and deepest problems of the current model being critiqued,

2) Its resolution of those problems by its policies, and

3) Whether or not it aligns with the ethic expressed in the notion that “Systems were made for Man, not Man for systems.” In other words, whether it aligns with freedom NOW or is willing to tolerate some lesser ethic and time frame for change…especially when freedom is imminently and observably possible.

Gerald, you are measuring smallness quantitatively by the number of variables, when the discussion is of small-world models made small by their reduction to quantification. Keynes’s model of a larger world had space to include the unknowable future and the accumulation of unemployment. That may not be what industrialists want to look at, but it is highly significant now that we are having to look at the consequences of industrialism.

See evidencebase’s three Replies (September 12, 2020 at 5:00 pm to 5:14 pm) posted as comments on Peter Radford’s *More on what’s missing*.

Economics is in the replacement mode now. Let us get out of the reactive mode.