The model of all economic models (wonkish)
from Lars Syll
Economics is, perhaps more than any other social science, model-oriented. There are many reasons for this — the history of the discipline, ideals imported from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.
Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.
The procedure is analytical. The whole is broken down into its constituent parts so as to explain (reduce) the aggregate (macro) as the result of the interaction of its parts (micro).
When building their economic models, modern mainstream (neoclassical) economists ground them on a set of core assumptions (CA) — describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what I will call the ur-model (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).
The core assumptions typically consist of:
CA1 Completeness — the rational actor is able to compare different alternatives and decide which one(s) he prefers.
CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.
CA3 Non-satiation — more is preferred to less.
CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.
CA5 Consistent efficiency equilibria — the actions of the different individuals are consistent, and the interaction between them results in an equilibrium.
When the actors in these models are described as rational, the concept of rationality used is instrumental rationality – consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given his wishes/interests/goals, which are exogenously given in the model. How these preferences/wishes/interests/goals are formed is not considered to be within the realm of rationality, and a fortiori not part of economics proper.
The picture given by this set of core assumptions (rational choice) is of a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one — given his preferences — that he believes has the best consequences.
Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice, and acts accordingly.
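As a minimal sketch of what this rational-choice core amounts to formally — with lottery names, probabilities and payoffs that are purely illustrative, not taken from the text — the agent can be written down in a few lines:

```python
from itertools import permutations

# Toy choice situation under risk (CA4's 'calculable uncertainty'):
# each alternative is a lottery, a list of (probability, payoff) pairs.
# All names and numbers here are illustrative assumptions.
lotteries = {
    "A": [(0.5, 100), (0.5, 0)],
    "B": [(1.0, 40)],
    "C": [(0.2, 300), (0.8, 0)],
}

def expected_utility(lottery, utility=lambda x: x):
    """CA4: evaluate a risky prospect by its expected utility."""
    return sum(p * utility(x) for p, x in lottery)

def prefers(a, b):
    """CA1 (completeness): any two alternatives can be compared."""
    return expected_utility(lotteries[a]) >= expected_utility(lotteries[b])

# CA2 (transitivity) holds automatically here, because preferences are
# represented by a single real-valued index.
for a, b, c in permutations(lotteries, 3):
    if prefers(a, b) and prefers(b, c):
        assert prefers(a, c)

# The optimizing, instrumentally rational choice: the alternative
# with the highest expected utility.
choice = max(lotteries, key=lambda k: expected_utility(lotteries[k]))
```

Note how much the sketch presupposes: a complete menu of alternatives, known probabilities, and a single real-valued index that makes completeness and transitivity true by construction — which is precisely the ‘thinness’ at issue below.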
Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model. These assumptions can be seen as giving answers to questions such as:
AA1 who are the actors and where and when do they act
AA2 which specific goals do they have
AA3 what are their interests
AA4 what kind of expectations do they have
AA5 what are their feasible actions
AA6 what kind of agreements (contracts) can they enter into
AA7 how much and what kind of information do they possess
AA8 how do the actions of the different individuals/agents interact with each other.
So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act. The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often silent) omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions shall be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.
These economic models are not primarily constructed to analyze individuals and their aspirations, motivations, interests, etc., but typically to analyze social phenomena as a kind of equilibrium that emerges through the interaction between individuals. Employing a reductionist-individualist methodological approach, macroeconomic phenomena are, analytically, given microfoundations.
Now, of course, no one takes the ur-model (or the models that build on it) as a good (or, even less, true) representation of economic reality. That would demand a high degree of conformity with the essential characteristics of the real phenomena — something which, even when weighing in pragmatic aspects such as ‘purpose’ and ‘adequacy’, it is hard to see that this ‘thin’ model could deliver. The model is typically seen as a kind of ‘thought-experimental’ benchmark device, enabling a rigorous, mathematically tractable illustration of how an ideal market economy functions, and a comparison of that ‘ideal’ with reality. The model is supposed to supply us with analytical and explanatory power, enabling us to detect, describe and understand mechanisms and tendencies in what happens around us in real economies.
Based on the model — and on interpreting it as something more than a deductive-axiomatic system — predictions and explanations can be made and confronted with empirical data and what we think we know. If the discrepancy between model and reality is too large — ‘falsifying’ the hypotheses generated by the model — the thought is that the modeler through ‘successive approximations’ improves on the explanatory and predictive capacity of the model.
When applying their preferred deductivist thinking in economics, mainstream neoclassical economists usually use this ur-model and its more or less tightly knit axiomatic core assumptions to set up further “as if” models from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply don’t hold.
If the real world is fuzzy, vague and indeterminate, then why should our models be built upon a desire to describe it as precise and predictable? The logic of idealization that permeates the ur-model is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.
Being told that the model is rigorous and amenable to ‘successive approximations’ to reality is of little avail, especially when the law-like (nomological) core assumptions are highly questionable and extremely difficult to test. Being able to construct “thought-experiments” depicting logical possibilities doesn’t — really — take us very far. An obvious problem with the mainstream neoclassical ur-model is that it is formulated in such a way that it realiter is extremely difficult to empirically test and decisively evaluate whether it is ‘corroborated’ or ‘falsified.’ Such models are, from a scientific-explanatory point of view, unsatisfying. The ‘thinness’ is bought at too high a price, unless you decide to leave the intended area of application unspecified or immunize your model by interpreting it as nothing more than two sets of core and auxiliary assumptions making up a content-less theoretical system with no connection whatsoever to reality.
Seen from a deductive-nomological perspective, the ur-model (M) consists of, as we have seen, a set of more or less general (typically universal) law-like hypotheses (CA) and a set of (typically spatio-temporal) auxiliary conditions (AA). The auxiliary assumptions give “boundary” descriptions such that it is possible to deduce logically (meeting the standard of validity) a conclusion (explanandum) from the premises CA and AA. Using this kind of model, economists can be portrayed as trying to explain/predict facts by subsuming them under CA given AA.
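Schematically — using generic predicate symbols of my own choosing, not notation from the text — this deductive-nomological structure can be rendered as:

$$
\underbrace{\forall x\,[C(x) \rightarrow O(x)]}_{\text{CA: law-like hypotheses}}\,,\qquad
\underbrace{C(a)}_{\text{AA: auxiliary/boundary conditions}}
\;\vdash\;
\underbrace{O(a)}_{\text{explanandum}}
$$

The deduction is valid whatever the content of $C$ and $O$; the whole explanatory burden therefore falls on whether CA and AA are true of the target system.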
This account of theories, models, explanations and predictions does not — of course — give a realistic account of actual scientific practices, but rather aspires to give an idealized account of them.
An obvious problem with the formal-logical requirements of what counts as CA is the often severely restricted reach of the ‘law.’ In the worst case it may not be applicable to any real, empirical, relevant situation at all. And if AA is not ‘true,’ then M doesn’t really explain (although it may predict) at all. Deductive arguments should be sound — valid and with true premises — so that we are assured of having true conclusions. A model assuming ‘rational’ expectations, for example, says nothing about situations where expectations are ‘non-rational.’
Most mainstream economic models — elaborations on the ur-model — are abstract, unrealistic and present mostly non-testable hypotheses. How then are they supposed to tell us anything about the world we live in?
And where does the drive to build those kinds of models come from?
I think one important rationale behind this kind of model building is the quest for rigour, and more precisely, logical rigour. The formalization of economics has been going on for more than a century, and with time it has become obvious that the preferred kind of formalization is the one that rigorously follows the rules of formal logic. As in mathematics, this has gone hand in hand with a growing emphasis on axiomatics. Instead of trying to establish a connection between empirical data and assumptions, ‘truth’ has come to be reduced to a question of fulfilling internal consistency demands between conclusions and premises, rather than of showing a ‘congruence’ between model assumptions and reality. This has, of course, severely restricted the applicability of economic theory and models.
Not all mainstream economists subscribe to this rather outré deductive-axiomatic view of modeling. When confronted with the massive empirical refutations of almost every theory and model they have set up, many mainstream economists react by saying that these refutations only hit AA (the Lakatosian ‘protective belt’), and that by ‘successive approximations’ it is possible to make the theories and models less abstract, more realistic, and — eventually — more readily testable and predictively accurate. Even if CA & AA1 doesn’t have much empirical content, we are to believe that if by successive approximation we reach, say, CA & AA25, we can finally arrive at robust and true predictions and explanations.
But there are grave problems with this modeling view, too. The tendency for modelers to use the method of successive approximations as a kind of ‘immunization’ implies that it is taken for granted that there can never be any fault with CA; explanatory and predictive failures hinge solely on AA. That the CA used by mainstream economics should all be held non-defeasibly corroborated seems, however — to say the least — rather unwarranted.
Confronted with the empirical failures of their models and theories, even these mainstream economists often retreat into looking upon their models and theories as a kind of ‘conceptual exploration,’ and give up any hopes or pretenses whatsoever of relating their theories and models to the real world. Instead of trying to bridge the gap between models and the world, one decides to look the other way. But restricting the analytical activity to examining and making inferences in the models is tantamount to treating the models as self-contained substitute systems, rather than as surrogate systems that the modeler uses to indirectly understand or explain the real target system.
If we are trying to develop a science that makes us better equipped to explain and understand real societies and economies, it sure can’t be enough to prove or deduce things in model worlds. If theories and models do not — directly or indirectly — tell us anything about the world we live in, then why should we waste time on them?