## Two reasons DSGE models are such spectacular failures

from **Lars Syll**

The unsellability of DSGE models — private-sector firms *do not* pay lots of money to use DSGE models — *is* one strong argument against DSGE.

But it is *not* the most damning critique of these models.

To me the most damning critiques that can be levelled against DSGE models are the following two:

**(1) DSGE models are unable to explain involuntary unemployment**

In the basic DSGE models the labour market always clears – responding to a changing interest rate, expected lifetime income, or real wages, the representative agent maximizes her utility function by varying her labour supply, money holdings and consumption over time. Most importantly – if the real wage somehow deviates from its ‘equilibrium value,’ the representative agent adjusts her labour supply, so that when the real wage is higher than its ‘equilibrium value,’ labour supply is increased, and when the real wage is below its ‘equilibrium value,’ labour supply is decreased.

In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.

Although this picture of unemployment as a kind of self-chosen optimality strikes most people as utterly ridiculous, there are also, unfortunately, a lot of neoclassical economists out there who still think that price and wage rigidities are the prime movers behind unemployment. DSGE models basically explain variations in employment (and *a fortiori* output) by assuming that nominal wages are more flexible than prices – disregarding the lack of empirical evidence for this rather counterintuitive assumption.

Lowering nominal wages would not clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. But it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen as a general substitute for an expansionary monetary or fiscal policy. And even if potentially positive impacts of lowering wages exist, there are also negative impacts that weigh more heavily – deteriorating management-union relations, expectations of ongoing wage cuts causing investment to be delayed, debt deflation *et cetera*.

The classical proposition that lowering wages would lower unemployment and ultimately take economies out of depressions was ill-founded and basically wrong. Flexible wages would probably only make things worse by leading to erratic price fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined *outside* the labour market.

Obviously it’s rather embarrassing that the kind of DSGE models ‘modern’ macroeconomists use cannot incorporate such a basic fact of reality as involuntary unemployment. Of course, working with representative-agent models, this should come as no surprise. The only kind of unemployment that can occur is voluntary, since it is only the hours of work that these optimizing agents adjust in order to maximize their utility.

**(2) In DSGE models increases in government spending lead to a drop in private consumption**

In the most basic mainstream proto-DSGE models one often assumes that governments finance current expenditures with current tax revenues. This has a negative income effect on households, leading — rather counterintuitively — to a drop in private consumption although both employment and production expand. This mechanism also holds when the (in)famous Ricardian equivalence is added to the models.

Ricardian equivalence basically means that financing government expenditures through taxes or debt is equivalent, since debt financing must be repaid with interest, and agents — equipped with rational expectations — would simply increase their savings in order to be able to pay the higher taxes in the future, thus leaving total expenditure unchanged.

Why?

In the standard neoclassical consumption model — used in DSGE macroeconomic modeling — people are basically portrayed as treating time as a dichotomous phenomenon – *today* and the *future* – when contemplating decisions and acting. How much should one consume today and how much in the future? Facing an intertemporal budget constraint of the form

*c_t + c_f/(1+r) = f_t + y_t + y_f/(1+r),*

where *c_t* is consumption today, *c_f* is consumption in the future, *f_t* is holdings of financial assets today, *y_t* is labour income today, *y_f* is labour income in the future, and *r* is the real interest rate, and having a lifetime utility function of the form

*U = u(c_t) + au(c_f),*

where a is the time discounting parameter, the representative agent (consumer) maximizes his utility when

*u'(c_t) = a(1+r)u'(c_f)*.

This expression – the Euler equation – implies that the representative agent (consumer) is indifferent between consuming one more unit today and consuming it tomorrow. Typically using a logarithmic functional form – u(c) = log c – which gives u'(c) = 1/c, the Euler equation can be rewritten as

*1/c_t = a(1+r)(1/c_f),*

or

*c_f/c_t = a(1+r).*
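To make the algebra concrete, here is a minimal Python sketch (the function name and the parameter values are my own, purely illustrative). Substituting the Euler condition *c_f = a(1+r)c_t* into the budget constraint gives the closed-form optimum *c_t = W/(1+a)*, where *W* is total wealth:

```python
# Two-period consumption choice with log utility: a minimal sketch.
# Names and parameter values are illustrative, not from the post.
def solve_consumption(wealth, a, r):
    """Maximize u = log(c_t) + a*log(c_f)
    subject to c_t + c_f/(1+r) = wealth.
    Substituting the Euler condition c_f = a*(1+r)*c_t into the
    budget constraint gives c_t = wealth/(1+a)."""
    c_t = wealth / (1 + a)
    c_f = a * (1 + r) * c_t
    return c_t, c_f

c_t, c_f = solve_consumption(wealth=100.0, a=0.95, r=0.03)
assert abs(c_t + c_f / 1.03 - 100.0) < 1e-9   # budget constraint holds
assert abs(c_f / c_t - 0.95 * 1.03) < 1e-9    # Euler equation holds
```

Note that with these illustrative numbers a(1+r) < 1, so the agent consumes slightly more today than tomorrow, exactly as the Euler equation dictates.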

This importantly implies that, according to the neoclassical consumption model, changes in the (real) interest rate and consumption move in the same direction. It also follows that consumption is invariant to the timing of taxes, since wealth — *f_t + y_t + y_f/(1+r)* — has to be interpreted as present discounted value *net* of taxes. And so, according to the assumption of Ricardian equivalence, the timing of taxes does not affect consumption, simply because the maximization problem as specified in the model is unchanged. As a result, households *cut down* on their consumption when governments increase their spending.
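The timing-invariance claim can be checked in a few lines. This is a hypothetical sketch with made-up numbers: wealth is interpreted net of taxes, and *c_t = W/(1+a)* follows from combining the Euler equation with the budget constraint under log utility:

```python
# Ricardian equivalence in the two-period model: a sketch.
# The helper name and all numbers below are illustrative.
def consumption_today(f_t, y_t, y_f, tax_t, tax_f, a, r):
    # Wealth is the present discounted value of resources NET of taxes.
    wealth = f_t + (y_t - tax_t) + (y_f - tax_f) / (1 + r)
    return wealth / (1 + a)   # optimal c_t under log utility

a, r = 0.95, 0.03
# Raise 10 units of revenue today, or 10*(1+r) tomorrow: same present value.
tax_now   = consumption_today(f_t=20, y_t=50, y_f=60, tax_t=10, tax_f=0, a=a, r=r)
tax_later = consumption_today(f_t=20, y_t=50, y_f=60, tax_t=0, tax_f=10 * (1 + r), a=a, r=r)
assert abs(tax_now - tax_later) < 1e-9   # consumption today is unchanged
```

Because only the present value of taxes enters the maximization problem, shifting the tax burden across periods leaves consumption untouched, which is exactly what the text means by the timing of taxes being irrelevant in the model.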

*Mirabile dictu!*

Benchmark DSGE models have paid little attention to the role of fiscal policy, thereby minimising any possible interaction of fiscal policy with monetary policy. This has been partly because of the assumption of Ricardian equivalence. As a result, the distribution of taxes across time becomes irrelevant, and aggregate financial wealth does not matter for the behaviour of agents or for the dynamics of the economy, because bonds do not represent net real wealth for households.

Incorporating the role of fiscal policy more meaningfully requires abandoning frameworks built on Ricardian equivalence. The question is how to break it. Two possibilities are available: the first is to move to an overlapping-generations framework, and the second (which has been the most common way of handling the problem) is to rely on an infinite-horizon model with some type of liquidity-constrained agents (e.g. ‘rule-of-thumb’ agents).
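A hypothetical sketch of the ‘rule-of-thumb’ fix, with illustrative names and numbers: a share *lam* of households simply consumes current disposable income, while the rest optimize as in the two-period model above (*c_t = net wealth/(1+a)* under log utility). With *lam* > 0, the timing of taxes matters again:

```python
# "Rule-of-thumb" agents break Ricardian equivalence: illustrative sketch.
def aggregate_c_today(lam, f_t, y_t, y_f, tax_t, tax_f, a, r):
    c_rot = y_t - tax_t                                    # hand-to-mouth households
    wealth = f_t + (y_t - tax_t) + (y_f - tax_f) / (1 + r)
    c_opt = wealth / (1 + a)                               # optimizing households
    return lam * c_rot + (1 - lam) * c_opt

a, r = 0.95, 0.03
# Same present value of taxes, collected today vs. tomorrow:
taxed_now   = aggregate_c_today(0.3, 20, 50, 60, tax_t=10, tax_f=0, a=a, r=r)
taxed_later = aggregate_c_today(0.3, 20, 50, 60, tax_t=0, tax_f=10 * (1 + r), a=a, r=r)
assert taxed_later > taxed_now   # deferring taxes now raises consumption today
```

The optimizing households are unmoved (same net wealth), but the hand-to-mouth households spend whatever disposable income they have today, so the equivalence result collapses.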

Yours truly totally agrees that macroeconomic models have to abandon the Ricardian equivalence nonsense. But replacing it with ‘overlapping generations’ and ‘infinite-horizon’ models — isn’t that, in terms of realism and relevance, just getting out of the frying pan into the fire? All unemployment is still voluntary. Intertemporal substitution between labour and leisure is still ubiquitous. And the specification of the utility function is still hopelessly off the mark from an empirical point of view.

As one Nobel laureate had it:

Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.

Joseph E. Stiglitz, twitter

And as one economics blogger has it:

DSGE modeling is taught in every graduate school in the country. It is also sheer nonsense.

Lars Syll, twitter

FYI… https://mostlyeconomics.wordpress.com/2010/08/02/dsge-models-explained-in-english/

“This expression – the Euler equation – implies that the representative agent (consumer) is indifferent between consuming one more unit today or instead consuming it tomorrow.”

Much of human consumption activity cannot ever be postponed. One cannot be indifferent to how much one gets to eat today (or not) relative to how much one might get to eat tomorrow (or not). One cannot be indifferent between having the existential necessities of life (food, clothing, shelter, utilities, and various technological et ceteras) today and having them tomorrow. Indifference is irrational, as is all of the mathematics of indifference analysis.

It is the fact that we cannot be indifferent over much of what we consume/use that underlies ‘involuntary’ unemployment and underemployment.

The DSGE model fails on many different grounds, but the most important of these is the ‘utility’ framework it is framed within. That framework is not based on persons with existential needs but upon an algorithm of a ‘consumer’ without any existential needs except money to consume with — existential because the ‘consumer’ as a concept vanishes from the analysis if ‘it’ has no ability to pay. [The consumer in economics is simply a vehicle that spends money it has or can borrow. The ‘consumer’ ceases to exist in economics at the moment it is no longer able to ‘consume’.]

That is the central problem with all DSGE models. [The problem of the ‘representative’ consumer is trivial by comparison.]

There seems to be an assumption that governments lack that fundamental tool of modern monetary policy, the *printing press*, which turns government spending into a non-zero-sum game. Ricardian equivalence was not a reasonable assumption even when money was gold, because more gold continued to be mined and distributed. It’s an even less reasonable assumption now that we do not even need paper to create money: we just transfer ones and zeros on the Federal Reserve’s balance sheet and voilà, the Fed owns more U.S. Treasuries and the U.S. Treasury owns more dollars to spend.

At which point the DSGE fanatics start throwing holy water while holding a cross and shouting “Inflation! Inflation! Inflation!” — despite the oodles of real-world evidence showing that when there is substantial unemployment and deflationary pressure on salaries and prices, no inflation results from this kind of fiscal stimulus: spending freshly printed money to buy “stuff” that requires putting people to work, like, say, infrastructure. Yet the notion that the Great Depression was actually the Great Vacation, in which 30% of Americans voluntarily chose to take a vacation on the beach, remains orthodox truth taught in economics textbooks. Which to this economic historian is a what… the… *uck… moment.

I admit I have not really read many DSGE models. A few readings of examples were enough for me.

My problem with them has two parts. The first is the same as what is stated above — they use unrealistic assumptions basically from the get-go. I think this is done for mathematical tractability or convenience — I see it as analogous to ‘linearization’ in other fields, or to the linear Gaussian model in statistical contexts, though in the last 20 years ‘power laws’ have also become popular. (There is a whole literature which basically consists of papers aiming to refute claims in other papers that a power law has been discovered in some context. The problem is that most of the time the data are insufficient to distinguish a power law from some other distribution — e.g. a lognormal. Also, some people transform variables so as to end up with a power law, but this is an artifact of their transformation. It’s like proving the world is flat by cutting up a globe and laying the map out flat — a Mercator projection. Some people do the same thing in general relativistic cosmology — modelling it without Einstein’s idea of ‘curved space-time’. But they know what they are doing — it’s just a mathematical exercise, one that goes back to Élie Cartan and others. It’s a different ‘convention’, but equivalent to the standard one.)

Much of statistical mechanics in the 100 years after Boltzmann made similarly unrealistic assumptions, mostly because people could not solve the more realistic equations — they worked with approximations, a very slow process: e.g. start with a linear equation, then add a second-order quadratic term, then a third-order term, etc. The last ones I saw had from seven to nine terms. This is not dissimilar from the history of Diophantine equations. It took over 350 years to prove Fermat’s Last Theorem, and required algebraic geometry — for a long time only low-order cases could be handled, apart from a few examples up to the fifth order via Galois theory. (Galois is famous for inventing group theory; he was a political radical from a non-wealthy background in France, and died in his early 20s in a duel over a girl. One US president, Andrew Jackson, killed a man in a duel — a different form of conflict resolution.)

The second problem I have with DSGE models is that they are ‘too complex’. This is like the difference in physics between very theoretical models and applied ones.

A purely theoretical model in physics (e.g. see Feynman’s book *QED*, or the Hartle–Hawking ‘wave function of the universe’) can be written in five pages. All the fundamental constants — the speed of light c, Boltzmann’s k, Planck’s ħ, Newton’s G, etc. — are set to 1. Other approximations are made — e.g. rather than use the Dirac or Schrödinger equation, you use Klein–Gordon (which is basically a classical, non-quantum wave equation — they call this a ‘neoclassical approximation’, which reminds me of economics, though here ‘classical’ means classical as opposed to quantum mechanics).

If you look at applied physics (e.g. papers in Physical Review D, or ones on atomic physics) you might need 100 pages to calculate something about the hydrogen atom — perhaps the simplest quantum system. It will involve all sorts of advanced math — all sorts of polynomial expansions (see the *Handbook of Mathematical Functions* by Abramowitz and Stegun — see Wikipedia — I used it for a while) and also algorithms.

Most DSGE stuff is basically along these lines. Even those very unrealistic models are intractable.

I have a hard time maximizing my utility or ‘bliss’ just with a wallet and 20 dollars — allocate $5 for vegetables, $5 for fruit, $5 for tea and creamer, etc. Try to put the US economy into that formalism and you end up with hundreds of equations.

This is why Arrow, Hahn, etc. assumed ‘there is no money, all individuals are the same, and there is no government and no corporations’. Leontief did something similar — more like DSGE — with all the details.

Physics changed course in the ’60s and ’70s. Computers were developed, so physicists no longer had to solve the equations by hand — they let the computer do it. There was also the renormalization group — Kadanoff and Wilson. They noticed that after a while, adding new equations doesn’t change anything. (A similar result was found in the solution of Hilbert’s 10th problem — the ‘universal Diophantine equation’. S. Smale also showed in the ’70s (J. Mathematical Biology) that essentially every system of equations can be written as a generalized Lotka–Volterra model (competition, cooperation, etc.). Richard Goodwin, of course (and Steve Keen), used this type of model.)
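The Lotka–Volterra structure the comment alludes to takes only a few lines to simulate. This is a generic predator-prey sketch with illustrative parameters (in Goodwin's growth cycle the same equations appear with the employment rate as 'prey' and the wage share as 'predator'); the simple Euler integration is my own choice:

```python
# Generic Lotka-Volterra predator-prey system, Euler-integrated.
# All parameter values are illustrative.
def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        dx = (alpha - beta * y) * x     # prey growth, checked by predators
        dy = (delta * x - gamma) * y    # predator growth, fed by prey
        x, y = x + dx * dt, y + dy * dt
    return x, y

x, y = lotka_volterra(x0=1.0, y0=0.5, alpha=1.0, beta=1.0, delta=1.0, gamma=1.0)
# The orbit cycles endlessly around the fixed point (gamma/delta, alpha/beta).
```

The point of the comment survives in miniature: two coupled nonlinear equations already generate persistent endogenous cycles, with no shocks needed.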

DSGE models could be ‘fixed’ by adding nonlinearities — interdependent utility functions, all sorts of ‘imperfect information’ or ‘bounded rationality’, frictions, spatial effects, finite speeds of information transfer or communication, money illusion, endogeneity, etc. But that is complex. There are quite a few very theoretical models from the ’80s which do this — but they are not ‘applied’. They do ‘dimensional reduction’ — get rid of everything seen as superfluous. In 10, or maybe 30, pages they can describe the qualitative features of just about any possible economic trajectory. (J. B. Rosser of JMU has some of these and is familiar with many of them.)

(The famous Lorenz equations in meteorology reduced the Navier–Stokes model, which has many variables, to a system of just three equations in a few variables — not that different from a predator-prey or Lotka–Volterra system.)
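For reference, here are those three Lorenz equations, stepped with a simple Euler scheme (the step size is my own choice; sigma, rho and beta are Lorenz's classic values):

```python
# The three Lorenz equations, integrated with a basic Euler step.
def lorenz_step(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.005):
    dx = sigma * (y - x)           # convection rate
    dy = x * (rho - z) - y         # horizontal temperature variation
    dz = x * y - beta * z          # vertical temperature variation
    return x + dx * dt, y + dy * dt, z + dz * dt

state = (1.0, 1.0, 1.0)
for _ in range(5000):
    state = lorenz_step(*state)
# The trajectory wanders chaotically but stays on the bounded attractor.
```

Three variables are enough for deterministic chaos — which is part of why the comment treats dimensional reduction as a feature rather than a loss.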