On the consistency of microfounded macromodels

from Lars Syll

“New Keynesian” macroeconomist Simon Wren-Lewis has a post up on his blog, trying to answer a question posed by Brad DeLong, on why microfounded models dominate modern macro:

Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies …

Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency …

I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there … That makes discussion difficult, but I’m not sure it makes it impossible.

Indeed, this is certainly a question of methodology. And it shows the danger of neglecting methodological issues, issues that mainstream economists regularly seem almost to take pride in neglecting.

Being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality, and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity are important more specifically also when it comes to microfounded macromodels. It can never be enough that these models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

Yours truly and people like Tony Lawson have for many years been urging economists to pay attention to the ontological foundations of their assumptions and models. Sad to say, economists have not paid much attention — and so modern economics has become increasingly irrelevant to the understanding of the real world.

Within mainstream economics internal validity is still everything and external validity nothing. Why anyone should be interested in those kinds of theories and models is beyond imagination. As long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live, they really should not be surprised if people say that this is not science, but autism!

Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence plays only a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully, humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability.

To have valid evidence is not enough. What economics needs is sound evidence. Why? Simply because the premises of a valid argument do not have to be true, whereas a sound argument is not only valid but also builds on premises that are true. Aiming only for validity, without soundness, sets economics' aspiration level too low for developing a realist and relevant science.

Studying mathematics and logic is interesting and fun. It sharpens the mind. In pure mathematics and logic we do not have to worry about external validity. But economics is not pure mathematics or logic. It's about society. The real world. Forgetting that, economics is really in dire straits.

  1. Larry Motuz
    April 9, 2015 at 4:32 pm

    Scientific micro-theoretic foundations of economics can only be based upon the notion that consumption is use for direct or indirect benefits from that use. Demand is a function of those direct and indirect benefits. Use-value is not value-in-exchange, and value-in-exchange is a very poor proxy for use-value, since the former depends upon ‘ability to pay’ and the ‘distribution of income’.

    If anyone is interested, please comment.

  2. Larry Motuz
    April 9, 2015 at 4:35 pm

    Please pardon the ‘typos’ in my earlier post. Thank you.

  3. April 10, 2015 at 1:38 pm

    L. Motuz—I basically agree with your comment; it seems to be one particularly clear formulation of standard economic ideas (supply and demand, marginal utility and maximization, labor and other theories of value, transaction costs).

    The problem is that people often do not really know what the use-value of something is (especially evident when people are urged to buy some new ‘cure for cancer’ or even some new technology, e.g. should one wait for the 2.0 version or buy the first model on the market?).

    Egalitarian types think the economy should be organized so that everyone has ‘the ability to pay’ for things they (or society) say they need, i.e. things which have use-value for them.

    But who decides? Does a 100-year-old need expensive hospital care if they are very sick? Do all children need, or have a use-value for, an education, and of what sort: for a trade, or to be an economist? And what are the needs of society? What jobs have a use-value for society as a whole? Weapons development? Art and music? Wildlife and plant identification? And how much does society as a whole have ‘the ability to pay’ for? Do I need an iPod or a car? Or a music studio or a book contract? Or do I need to ‘get out of this place’ (an old song by The Animals, possibly about the Vietnam War), e.g. find cheaper rent?

    The transaction costs arise from the search or decision-making procedures and bounded rationalities people use to identify needs, use-values, ‘abilities to pay’, and costs of production: ‘You can’t always get what you want, but if you try sometimes, you can get what you need’ (a song by the Rolling Stones).

    It seems Demand = (ability to pay) - (benefits - costs) - transaction costs, aggregated over all people and all possible needs. (Or something like that, since the identity is somewhat redundant if you parse the definitions of the terms; but so are the Hamiltonian and Lagrangian formulations of the Euler-Lagrange equations, which commonly end up with a lot of zero terms.)

    If there is an equilibrium, that should equal zero (Say’s or Walras’s law of (no) excess demand). Arrow-Hahn general equilibrium theory says such an equilibrium point exists (a fixed point, e.g. via the Perron-Frobenius theorem), but SMD etc. say finding it may be like finding a needle in a haystack. And the KAM theorem also says the system may not be ergodic, so if you start your search at the wrong place you’ll never find the equilibrium, just end up going around in circles in nonoptimal states.
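    The contrast drawn above, between a search that settles on an equilibrium and one that just goes around in circles, can be illustrated with a toy fixed-point iteration. This is a hedged sketch, not an economic model: the two maps below (a linear contraction and the logistic map) are illustrative choices, not anything from the literature cited.

```python
def iterate(f, x0, steps=200):
    """Apply f repeatedly, returning the final iterate."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

# A contraction: f(x) = 0.5*x + 1 has the unique fixed point x = 2,
# and iteration from any start converges to it.
converged = iterate(lambda x: 0.5 * x + 1, x0=10.0)

# The logistic map with r = 3.2 also has a fixed point (at 1 - 1/r),
# but it is unstable: iteration from a generic start is repelled and
# ends up circling forever on a 2-cycle instead.
r = 3.2
a = iterate(lambda x: r * x * (1 - x), x0=0.3)
b = r * a * (1 - a)  # the other point of the 2-cycle

print(f"contraction settles at {converged:.6f}")
print(f"logistic map oscillates between {a:.4f} and {b:.4f}")
```

    The same search procedure succeeds or fails depending only on the properties of the map, which is roughly the point: existence of a fixed point does not mean a simple search will find it.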

    The ‘scientific microfoundations’ seem similar to chaos theory (as has apparently been known in econ for 30 to 40 years) or to the algorithms used to try to find solutions to NP-complete problems. An additional difficulty is ‘biology’ or ‘psychology’ (though it may not really be different from physics, where people often think of the entities as unchanging): organisms ‘mutate’, and human preferences and cognitions continually co-evolve with their environments, so people don’t even know what they are looking for (the fountain of youth? eternal life? oil? cheap solar energy?).

    But chaotic systems can sometimes be ‘tamed’ by adding some noise (e.g. stochastic resonance), and effective or practical solutions to NP-complete problems can be found (e.g. via simulated annealing). A traveling salesman may be able to find a reasonably short route through N cities, though maybe not the shortest one. People can sometimes, or fairly commonly, more or less balance their budgets (though equally common may be cases where they can’t or don’t, e.g. the financial crisis of 2008, or possibly situations like Yemen or Syria). Some reasonable treatments or cures for diseases will be found, some won’t, and how people decide which cures to search for is a somewhat ad hoc process.

    It seems scientific microfoundations might have to rest on empirical data: the ‘micro’ may refer to the properties of the people in the system, as well as their environment. It’s like voting systems. It’s a big-data problem, which must be analyzed with complicated algorithms or dynamical systems theory; at a common-sense level it’s just heuristics, or ‘satisficing’ (H. Simon). A lot of potential data will just be ignored (politicians may make decisions based on their own values, and will be given decision-making power in a system where many don’t vote, and those who vote for the ‘loser’ don’t get what they want). Much of the data in the environment will also be ignored (you can’t search every nook and cranny for every elementary particle, planet, or inhabitant of the biosphere). A lot of dynamical systems and algorithms are not going to be used by decision-makers.

    So you just play it by ear: no score, no formal system of notes and scales. Scientific microfoundations may exist in theory and in part, but in daily life they are often out of the picture …
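    The simulated-annealing idea mentioned above, that a traveling salesman can find a reasonably short route through N cities though maybe not the shortest one, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not a canonical implementation: the random city coordinates, the geometric cooling schedule, and the segment-reversal move are all illustrative choices.

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour visiting each city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(cities, steps=20000, t0=10.0, cooling=0.9995, seed=0):
    """Simulated annealing: accept worse tours with a probability that
    shrinks as the 'temperature' t cools, to escape local optima."""
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, cities)
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        # 2-opt style move: reverse a random segment of the tour
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, cities) - tour_length(tour, cities)
        # Always accept improvements; accept worsenings with prob e^(-delta/t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
            cur = tour_length(tour, cities)
            if cur < best_len:
                best, best_len = tour[:], cur
        t *= cooling  # geometric cooling
    return best, best_len

rng = random.Random(42)
cities = [(rng.random(), rng.random()) for _ in range(15)]
tour, length = anneal(cities)
print(f"best tour length found: {length:.3f}")
```

    The point of the sketch is the one the comment makes: the procedure reliably produces a reasonably short route, with no guarantee it is the shortest one.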
