
Do models make economics a science?

from Lars Syll

Well, if we are to believe most mainstream economists, models are what make economics a science.

In a Journal of Economic Literature review of Dani Rodrik’s Economics Rules, renowned game theorist Ariel Rubinstein discusses Rodrik’s justifications for the view that “models make economics a science.” Although Rubinstein has some doubts about those justifications — models are not indispensable for telling good stories or clarifying things in general; logical consistency does not determine whether economic models are right or wrong; and being able to expand our set of ‘plausible explanations’ doesn’t make economics more of a science than good fiction does — he still largely subscribes to the scientific image of economics as a result of using formal models that help us achieve ‘clarity and consistency’.

There’s much in the review I like — Rubinstein shows a commendable scepticism about the prevailing excessive mathematization of economics, and he is much more in favour of a pluralist teaching of economics than most other mainstream economists — but on the core question, “the model is the message,” I beg to differ with the view put forward by both Rodrik and Rubinstein.

Economics is, more than any other social science, model-oriented. There are many reasons for this: the history of the discipline, ideals imported from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.

Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro).

Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what might be called the ‘ur-model’ (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — the rational actor is able to compare different alternatives and decide which one(s) he prefers.

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA3 Non-satiation — more is preferred to less.

CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.

CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.
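The first two axioms above are, formally, properties of a binary preference relation, and for a finite set of alternatives they can be checked mechanically. The following sketch is purely illustrative (the alternatives and the relation are hypothetical, not from the text): it verifies completeness (CA1) and transitivity (CA2) on an explicitly listed weak-preference relation.

```python
from itertools import combinations

# Hypothetical example: three alternatives and a weak-preference relation,
# where (a, b) in `prefers` means "a is weakly preferred to b".
alternatives = {"A", "B", "C"}
prefers = {("A", "B"), ("B", "C"), ("A", "C"),
           ("A", "A"), ("B", "B"), ("C", "C")}

def complete(alts, rel):
    # CA1: every pair of alternatives is comparable one way or the other.
    return all((a, b) in rel or (b, a) in rel for a, b in combinations(alts, 2))

def transitive(alts, rel):
    # CA2: if a is preferred to b, and b to c, then a must be preferred to c.
    return all((a, c) in rel
               for a, b1 in rel for b2, c in rel if b1 == b2)

print(complete(alternatives, prefers), transitive(alternatives, prefers))
```

On this relation both checks pass; deleting, say, ("A", "C") from the set would make `transitive` fail, which is exactly the kind of inconsistency the axioms rule out.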

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality — consistently choosing the preferred alternative, judged to have the best consequences for the actor given his (in the model exogenously given) wishes/interests/goals. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not constituting part of economics proper.

The picture given by this set of core assumptions (rational choice) is a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one — given his preferences — that he believes has the best consequences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice and acts accordingly.
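The maximization step described above (CA4) can be sketched in a few lines. The choice situation below is hypothetical and the log utility function is an assumption made for illustration: both lotteries have the same expected monetary value, so the concave (risk-averse) utility is what drives the choice.

```python
import math

def expected_utility(lottery, utility):
    # A lottery is a list of (probability, outcome) pairs; CA4 says the
    # actor evaluates it by its probability-weighted utility.
    return sum(p * utility(x) for p, x in lottery)

def choose(lotteries, utility):
    # The optimizing step: pick the alternative with highest expected utility.
    return max(lotteries, key=lambda name: expected_utility(lotteries[name], utility))

# Hypothetical alternatives, both with expected monetary value 100.
alternatives = {
    "safe":  [(1.0, 100)],
    "risky": [(0.5, 150), (0.5, 50)],
}
print(choose(alternatives, math.log))  # concave utility favours "safe"
```

With a linear utility the two alternatives would tie; the point of the sketch is only that, once preferences and probabilities are fixed exogenously, the ‘choice’ is a purely mechanical maximization.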

Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model. These assumptions can be seen as giving answers to questions such as

AA1 who are the actors and where and when do they act

AA2 which specific goals do they have

AA3 what are their interests

AA4 what kind of expectations do they have

AA5 what are their feasible actions

AA6 what kind of agreements (contracts) can they enter into

AA7 how much and what kind of information do they possess

AA8 how do the actions of the different individuals/agents interact with each other

So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (AA serves as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often) silent omissions (like closure, transaction costs, etc., regularly based on some negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions shall be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

In some (textbook) model depictions, we are essentially given the following structure:

A1, A2, … An
—————————
Theorem

where a set of undifferentiated assumptions is used to infer a theorem.

This is, however, too vague and imprecise to be helpful, and does not give a true picture of the usual mainstream modelling strategy, where there’s a differentiation between a set of law-like hypotheses (CA) and a set of auxiliary assumptions (AA), giving the more adequate structure

CA1, CA2, … CAn & AA1, AA2, … AAn
—————————
Theorem

or, in a more explicit form,

CA1, CA2, … CAn
—————————
(AA1, AA2, … AAn) → Theorem

more clearly underlining the function of AA as a set of (empirical, spatio-temporal) restrictions on the applicability of the deduced theorems.
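The two presentations of the structure — deducing the theorem from CA and AA jointly, or deducing the conditional “AA → Theorem” from CA alone — are logically equivalent; this is the exportation law of propositional logic. A purely illustrative brute-force check over all truth assignments:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q.
    return (not p) or q

# Exportation: (CA & AA) -> Theorem is equivalent to CA -> (AA -> Theorem).
for ca, aa, t in product([False, True], repeat=3):
    joint = implies(ca and aa, t)          # (CA & AA) -> Theorem
    curried = implies(ca, implies(aa, t))  # CA -> (AA -> Theorem)
    assert joint == curried
print("equivalent under every truth assignment")
```

The equivalence is what makes the second presentation the more transparent one: it isolates AA as an explicit antecedent restricting where the theorem applies.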

This underlines the fact that the specification of AA restricts the range of applicability of the deduced theorems. In the extreme cases we get

CA1, CA2, … CAn
—————————
Theorem

where the deduced theorems are analytical entities with universal and totally unrestricted applicability, or

AA1, AA2, … AAn
—————————
Theorem

where the deduced theorem is transformed into an untestable tautological thought experiment without any empirical commitment whatsoever beyond telling a coherent fictitious as-if story.

Not clearly differentiating between CA and AA means that we can’t make this all-important interpretative distinction, and it opens the door to unwarrantedly ‘saving’ or ‘immunizing’ models from almost any kind of critique by simple equivocation between interpreting models as empirically empty, purely deductive-axiomatic analytical systems, or as models with explicit empirical aspirations. Flexibility is usually something people deem positive, but in this methodological context it is more a sign of trouble than of real strength. Models that are compatible with everything, or come with unspecified domains of application, are worthless from a scientific point of view.

Economics — in contradistinction to logic and mathematics — ought to be an empirical science, and empirical testing of ‘axioms’ ought to be self-evidently relevant for such a discipline. For although the mainstream economist himself (implicitly) claims that his axiom is universally accepted as true and in no need of proof, that is in no way a justified reason for the rest of us to simpliciter accept the claim.

When applying deductivist thinking to economics, mainstream (neoclassical) economists usually set up ‘as if’ models based on the logic of idealization and a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. But — although the procedure is a marvellous tool in mathematics and axiomatic-deductivist systems, it is a poor guide for real-world systems. As Hans Albert has it on the neoclassical style of thought:

Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

Most mainstream economic models are abstract and unrealistic, and present mostly non-testable hypotheses. How then are they supposed to tell us anything about the world we live in?

Confronted with the massive empirical failures of their models and theories, mainstream economists often retreat into looking upon their models and theories as some kind of ‘conceptual exploration,’ and give up any hopes whatsoever of relating their theories and models to the real world. Instead of trying to bridge the gap between models and the world, one decides to look the other way.

This kind of scientific defeatism is equivalent to surrendering our search for understanding the world we live in. It can’t be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything of the world we live in – then why should we waste any of our precious time on them?

The way axioms and theorems are formulated in mainstream (neoclassical) economics standardly leaves their specification almost without any restrictions whatsoever, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may, of course, be very ‘handy’, but it is totally void of any empirical value.

Mainstream economic models are nothing but broken-piece models. That kind of model can never make economics a science.

  1. Helge Nome
    September 23, 2019 at 3:32 am

    Mainstream economics is designed to hide what is going on.

  2. Nancy E. Sutton
    September 23, 2019 at 5:35 am

    Amen, Helge.

  3. Robert Locke
    September 23, 2019 at 10:21 am

There is nothing new about attacking model-building in economics as science. I have been doing it for years, along with many business and enterprise historians, and along with business scientists outside Anglosaxonia. So why is this not acknowledged in the literature? It’s all there to see, the attacks on mathematical model-building as a guide to a science of economics.

    • September 24, 2019 at 9:20 am

Robert, you ask why it is not acknowledged in the literature. I have no answer but some ideas: 1. Who should be interested in knowing / acknowledging? Those who build careers on those models, certainly not. Those who love counting publications and thus hire those career modellers, certainly not either. 2. Contrary to Romer’s (and Lars’ correct) claim that an alternative is not necessary for valid criticism, I think that the availability of an alternative would be rather helpful. And finally, 3. when facing true uncertainty, mainstream economists use their fake models as a straw to cling to, even while knowing that this something does not hold water. It is their strategy to cope with uncertainty. Others pray.

      • Robert Locke
        September 24, 2019 at 10:14 am

The problem is the institutionalization of neoclassical economics in the discipline after WWII. It excluded institutional and historical economists from economics, as well as (and much more importantly) firm-centered business economists, in favor of an MBA-centered academic effort to research and teach general management as science in academic institutions (business schools). Consider economics as an object of study, not a science or hoped-for science, and non-economists’ voices would be heard.

  4. lobdillj
    September 23, 2019 at 1:02 pm

    I agree with Helge. The reason is that it acknowledges which side of the bread is buttered.

  5. Ken Zimmerman
    September 24, 2019 at 11:32 am

    To answer the question asked, no, models do not make economics a science. They can help economics more fully reveal and describe its supposed subject matter, economic judgments and actions by humans, but only if the models are drawn properly. And there’s the rub with current mainstream economics. Lars has revealed, at least to me for the first time just how f**ked up economics is. A few examples from the paper, and not the obvious ones. First, logic, in the sense of a rule-based (axiomatic) system for finding truth has little use in social science. Most of the objects of study by social scientists don’t know these rules and don’t follow them. Most human actions and decisions are not logical. Nor do they need to be.

Second, according to Lars, “Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.” Why would anyone who’s ever been involved in economic transactions make such assumptions? They’re nonsense, and disrespectful nonsense at that. Such assumptions result in the economist missing all the economic actions and judgments of humans, since none are optimal, none are rational, and none look to satisfy given, stable, and well-defined goals.

    Third, if they are to be effective, the procedures to study human economic actions and judgments are not analytical. It is pointless to break down these actions and judgments into their constituent parts to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro). It’s pointless since until the work of investigating and describing these actions and judgments is completed, we have no notion of what they are or even if they can be “broken down.”

    So, what ought to be the goal of economics science? That goal is to investigate and describe what’s happening in any situation we believe is economic. That is simply seeking to investigate, describe, and to the extent we can, understand social phenomena (beliefs, judgments, actions, etc.) created by others, including the explanations of these phenomena these others create, and then writing that up and sharing it. This requires no a priori assumptions. Only the willingness to listen to and read the words and actions of these others. Whether in the present or the past. And to respect the work of these others. That economists assume these others are utility optimizers does not make that assumption correct. In fact, the assumption is a major impediment to good economic research.

Economists devote all their energy and spend all their time on one side of the divide: the side of economists and what they want and believe, while entirely neglecting the other side, the creators and users of economics in daily life.

    In their paper, “Propinquity drives the emergence of network structure and density,” sociologists Lazaros K. Gallos, Shlomo Havlin, H. Eugene Stanley, and Nina H. Fefferman show us how models can be a useful part of social science research.

    The lack of large-scale, continuously evolving empirical data usually limits the study of networks to the analysis of snapshots in time. This approach has been used for verification of network evolution mechanisms, such as preferential attachment. However, these studies are mostly restricted to the analysis of the first links established by a new node in the network and typically ignore connections made after each node’s initial introduction. Here, we show that the subsequent actions of individuals, such as their second network link, are not random and can be decoupled from the mechanism behind the first network link. We show that this feature has strong influence on the network topology. Moreover, snapshots in time can now provide information on the mechanism used to establish the second connection. We interpret these empirical results by introducing the “propinquity model,” in which we control and vary the distance of the second link established by a new node and find that this can lead to networks with tunable density scaling, as found in real networks. Our work shows that sociologically meaningful mechanisms are influencing network evolution and provides indications of the importance of measuring the distance between successive connections.

