
DSGE — models built on shaky ground

from Lars Syll

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.

Moreover, all such views are predicated on there being no un-anticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.

The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models … 
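The forecast failure caused by such a shift can be seen in a minimal simulation. This is a hypothetical toy, not taken from Hendry and Mizon's paper: the break date, the size of the shift and the simple mean forecast are all my own illustrative assumptions.

```python
import random

random.seed(0)

# A series whose mean shifts unexpectedly at t = 100: the "pre-break"
# distribution no longer describes the future.
series = [random.gauss(0.0, 1.0) for _ in range(100)] \
       + [random.gauss(5.0, 1.0) for _ in range(100)]

# A forecaster who assumes stationarity keeps using the pre-break mean...
pre_break_mean = sum(series[:100]) / 100

# ...so after the location shift every forecast is systematically wrong.
post_break_errors = [x - pre_break_mean for x in series[100:]]
mean_error = sum(post_break_errors) / 100
print(round(mean_error, 1))  # roughly the size of the unanticipated shift
```

The average forecast error after the break is close to the full size of the shift: no amount of pre-break data helps, because the pre-break distribution is simply no longer the relevant one.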

Many of the theoretical equations in DSGE models take a form in which a variable today, say income (denoted y_t), depends inter alia on its ‘expected future value’… For example, y_t may be the log-difference between a de-trended level and its steady-state value. Implicitly, such a formulation assumes some form of stationarity is achieved by de-trending.
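A generic linearised equation of the kind described can be sketched as follows (the coefficients $\alpha$, $\beta$ and the driving variable $x_t$ are generic placeholders, not Hendry and Mizon's specification):

$$y_t = \alpha\,\mathbb{E}_t\!\left[y_{t+1}\right] + \beta x_t + \varepsilon_t, \qquad 0 < \alpha < 1,$$

where $\mathbb{E}_t[\,\cdot\,]$ denotes the expectation conditional on information available at time $t$. Solving the equation forward gives

$$y_t = \beta\,\mathbb{E}_t\!\left[\sum_{i=0}^{\infty} \alpha^{i}\, x_{t+i}\right],$$

so the solution requires conditional expectations to be well defined arbitrarily far into the future, which is exactly where the stationarity assumption bites.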

Unfortunately, in most economies, the underlying distributions can shift unexpectedly. This vitiates any assumption of stationarity. The consequences for DSGEs are profound. As we explain below, the mathematical basis of a DSGE model fails when distributions shift … This would be like a fire station automatically burning down at every outbreak of a fire. Economic agents are affected by, and notice such shifts. They consequently change their plans, and perhaps the way they form their expectations. When they do so, they violate the key assumptions on which DSGEs are built.

David Hendry & Grayham Mizon

A great article, not only showing on what shaky mathematical basis DSGE models are built but also confirming much of Keynes’s critique of econometrics, underlining that to understand real-world ‘non-routine’ decisions and unforeseeable changes in behaviour, stationary probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future.

Advocates of DSGE modelling want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealized models. Conclusions can only be as certain as their premises — and that also applies to the quest for causality and forecasting predictability in DSGE models.

  1. James Beckman
    December 8, 2018 at 8:21 am

    Our predecessors knew the world is unpredictable, given technology, nationalism, nature & whatever. Unfortunately this made their math conditional, which then yields many options. That’s where statistics enters, but as something that speaks of, say, four “likely” futures as seen by a few persons at one point in time and place. Things become mathematically messy, as I realized during my first year of grad study at Berkeley.

  2. December 11, 2018 at 7:16 am

    As a criticism of DSGE models, the paper of 18 June 2014 by David Hendry and Grayham Mizon is really insightful. They explain why the economy goes well in normal running, while in some cases the system goes off the rails and bursts.

    However, I cannot agree with Lars Syll’s comment. He emphasizes genuine uncertainty. For example, he puts it:

    “In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future.”

    If we only emphasize genuine uncertainty, we face two difficulties: (1) We cannot explain why the economy works sufficiently well in its normal (or ordinary) situation; (2) We cannot explain how agents with bounded rationality can behave effectively in a genuinely uncertain world.

    Lars Syll is too busy criticizing neoclassical economics and DSGE macroeconomics, and is not considering how to reconstruct economics. The new economics should explain the above two points plus one more: (1) How are the stationarity and (moving) stability of economic processes obtained in a normal state? (2) How do agents behave effectively enough to assure the normal running of the economy, given their very limited capability in rationality and information collection? (3) How, in some rare cases, are this stationarity and stability violated, and how does the economic process deviate from the stable process and enter a process driven by positive feedbacks (e.g. booms and busts)?

    Of course, these are not tasks for macroeconomics. All these processes occur at the microeconomic level, i.e. through interactions between economic agents. The Arrow-Debreu model succeeded in explaining point (1), but only by assuming infinitely rational agents. As H. A. Simon and many others emphasized, however, human agents have only bounded rationality. We must refute the Arrow-Debreu model, which is in fact at the base of DSGE-model thinking. Emphasizing genuine uncertainty as Lars Syll does is in practice equivalent to assuming super-rational agents who can behave in such situations. It is more unrealistic than simply assuming rational agents.

    David Hendry and Grayham Mizon are wise, because they distinguish two states: where stationarity reigns and where it is violated. We need two theories, at least in the first phase of the reconstruction of economics, before an integrated (or synthesized) theory is obtained.

    For further details, see two of my papers:

    The Primacy of Stationarity: A Case against General Equilibrium Theory, 1989.
    https://www.researchgate.net/publication/233943723_The_Primacy_of_Stationarity_A_Case_against_General_Equilibrium_Theory

    Microfoundations of Evolutionary Economics, 2016 (to be published next year by Springer).
    https://www.researchgate.net/publication/301766363_Microfoundations_of_Evolutionary_Economics

    • Frank Salter
      December 11, 2018 at 9:52 am

      “[T]hey distinguish two states” is a false dichotomy. There is only continuous change from some initial condition, decaying exponentially towards a limiting value. This is a transient change, which is best understood by solving the appropriate differential equations.

      Transient analysis is all that is needed to provide a complete description of this process.

    • December 11, 2018 at 1:13 pm

      Frank Salter
      Of course, the two states are parts of one total unified process, but we should also recognize the limits of our capability to understand complex things. This is why I added point (3) in my previous post. In the last sentence, I added that “we need two theories, at least in the first phase of the reconstruction of economics, before an integrated (or synthesized) theory is obtained.” That makes sufficiently clear that I am asking for a synthetic theory which combines (1) and (2).

      You claim that you have a theory that provides a complete description of the process. Then can you show us how your transient process diverges from the stable steady state? How does your theory relate to our human incapacity to foretell the future? Is productivity progress an automatic process that your system assumes? How can you explain the interactive network of input-output relations?

      You may be thinking that the economic process is something like a physical process which has no relevance to economic agents, the workers and entrepreneurs. Lars Syll emphasizes genuine uncertainty, and I am opposed to overemphasizing it. In this controversy, the question is how to characterize our capability. Your system has nothing to do with this question. You simply repeat that your theory is complete, but few people believe it. The burden of proof therefore lies on your part. Please submit an article that can persuade the referees of a journal, and at least dozens of people, that your theory has something relevant to say about the real-world economy.

      • Frank Salter
        December 11, 2018 at 3:56 pm

        There are two orthogonal dimensions in the economy.

        One is how production tools are created and used. The capability of the tools in use is limited by the knowledge available at any time, and it is improved by technical progress. My paper on “Transient Development” deals with actions in this dimension. At any level of technical progress, it is clear how production is maximised with the least expenditure of effort. Solow’s 1909−1949 data show that output was being maximised by U.S. manufacturing industries and that my analysis is in accord with what actually transpired. There is NO conventional theoretical analysis which predicts the empirical nature of industries. All their hypotheses are invalidated by the empirical evidence.

        The other orthogonal dimension is the direction in which the economy is to be steered. This can be a desirable or an undesirable direction. Hitler initially created economic improvement by increasing military expenditure before the disaster of the Second World War. I have no method of predicting along this dimension. I am not Hari Seldon with a theory of psychohistory, and even that was undone by the Mule.

        My analysis deals with the mathematics of the physical world. There is not, nor will there be, a steady-state condition until technical progress is no longer possible. The extent of the technical progress actually achieved depends on the effort applied to developing existing and new technologies.

        I am working on further papers relevant to maximising output more generally; the analysis is appropriate for the case of genuinely competitive market-clearing economies.

  3. December 12, 2018 at 9:09 am

    I have read Lars Syll’s paper indicated in the post above: ‘Modern econometrics – a critical realist critique (wonkish)’, 7 May 2013.

    I do not have the capability to understand everything Syll says in this paper. It seemed to me that he is repeating a similar argument. For example, he repeats this:

    “we have to be able to describe the world in terms of risk rather than genuine uncertainty.” (Lars Syll)

    But I found a comment of 8 May 2013 by somebody named Jed Harris interesting. In the second of his observations, he commented like this:

    “Stability in social or economic systems is by no means guaranteed by the underlying ontology — and (if I understand you correctly) economic modeling seems to be assuming or demanding such guarantees. In social and economic systems all the (heterogeneous) participants are modeling each other. Networks of agents modeling each other may collectively be stable for a while, but also may drift over an edge into a phase transition to a different stable regime, or into chaos. Stability is constructed (through what we call institutions), not given a priori by ‘microfoundations’.” (Jed Harris)

    Syll is always talking about macroeconomic econometric models. Most readers, I believe, agree that there is a serious error in them and that we must restart our investigation from a deeper inquiry into the workings of our economy. The macro econometric model was a flashy flower in the boom, but it will soon be placed in the museum of past economics’ failed attempts. Finding a new research program is much more important than the reasons for econometric models’ failure. Let us simply abandon them, or simply forget them.

    As Jed Harris hinted, there are stable and unstable processes in the economy (and stable and unstable periods). We should study which is stable and which is not (and when a process is stable and when it becomes unstable). We should find for each process a mechanism that guarantees its stability. We should also find when and by what mechanism a process turns divergent. This must be our new research program.

    • December 12, 2018 at 3:22 pm

      Yoshinori, thanks for a really helpful comment. All along I have been following your proposed research program, though conflating stability with equilibrium and only comparatively recently connecting it with chaos theory. The “mechanism” which CAN ensure stability (if understood and used correctly) is an information-based PID control system.

      A physical homeostat like a room-temperature thermostat will keep one room’s temperature stable by varying heat input from radiators, but only insofar as other heat does not make it hotter, and certainly not in the chaos of a house fire. As a method, PID navigation dynamically stabilises a direction of motion, requiring not only a physical steersman and captain (aim-setter) but also the institution of feedback channels from instruments (compass, sextant and radar/lookout) sensing the present error (directional), the past error (positional) and future errors (positional changes deliberately created to avoid approaching obstacles). The exponent in the chaos-theory logistic equation counts control of these three degrees of freedom (where the homeostat has only one). Negative directional and positional feedbacks stabilise the direction, but positive directional change destabilises it until offset by negative feedback.

      What the 2.5 threshold in chaos theory is warning us about in economics, therefore, is that changes of business to avoid money loss will lead to chaos in the real business of the economy (supplying human needs without destabilising the wider PID system of the earth’s ecology) insofar as these happen more quickly than reorganisation of work and recycling. We went into chaos with the automation of a stock market feeding back false information in the form of junk bonds, thereby turning feedback on the need to correct low incomes into positive feedback suggesting foreclosures (often with the consequence of family home and job loss).

      That’s the paradigm. Your research program needs to study the functions of economies and their ordering in time; the existence of communication/distribution paths between them, nature’s ecology and the bankers’ monetary system; and how any three functions provide PID feedback on the aims of the other (one being a D feedback that can leave that “other” function in chaos). The simple outcome of this, following Shannon, is that economics has no need to maximise efficiency but should use redundant capacity to maximise reliability. How to do that in economies? The obvious way (with large impacts on the reduction of traffic and pollution) is to timeshare mass production and develop local facilities and communities to “grow” ourselves. Less obvious is the need to value ourselves rather than the goods we purchase. An agreed credit limit for automatic repayment of what (from time to time) we need to maintain ourselves would eliminate chaos due to loss of wages/incomes, and a vast amount of unnecessary work.
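      The thermostat analogy can be sketched as a minimal discrete PID loop. This is an illustrative toy only: the gains kp, ki, kd and the simple heat-leakage model are my assumptions for the sketch, not taken from the comment above.

      ```python
      def room_pid(setpoint=20.0, steps=200, kp=0.8, ki=0.05, kd=0.2):
          """Drive a leaky 'room' towards the setpoint with P, I and D terms."""
          temp = 10.0                      # initial room temperature
          integral = 0.0                   # accumulated past error (I term)
          prev_error = setpoint - temp
          for _ in range(steps):
              error = setpoint - temp          # present error      (P term)
              integral += error                # sum of past errors (I term)
              derivative = error - prev_error  # trend of the error (D term)
              heat = kp * error + ki * integral + kd * derivative
              temp += heat - 0.1 * (temp - 5.0)  # heating minus leakage to 5 °C outside
              prev_error = error
          return temp

      print(round(room_pid(), 1))  # settles at the 20.0 setpoint
      ```

      The integral term plays the role of the “past (positional)” feedback and the derivative term the anticipatory one; without the integral term the room settles below the setpoint, because the constant leakage is never fully offset.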

  4. December 13, 2018 at 3:22 am

    Thank you for your comment, Dave.

    As I am a theoretical economist, I am developing my theory in a more concrete form than the rough, suggestive way I have proposed here. I have sent you by e-mail Chapter 2 of our book, which has the title Microfoundations of Evolutionary Economics. The paper I suggested in the first comment on this page forms Chapter 1 of the book. The first chapter is only an introduction, and you can see how different the second chapter is from Chapter 1. We are treating a most concrete economic problem, that is, the price-formation and quantity-adjustment process between firms in a network of input-output relations.

    • December 13, 2018 at 3:39 am

      Let me add for my general readers that Chapter 2 of the book indicated above has the title:

        A Large Economic System with Minimally Rational Agents.

      By minimally rational agents we mean that we suppose only agents (mainly manufacturing firms) which know only the past series of sales volumes of their product. Even with such most myopic agents, we could prove that the whole input-output network can adjust itself to changes in final demand. (The main result is not mine. It is due to two of my colleagues, Kazuhisa Taniguchi and Masashi Morioka.)

      What is most interesting to me is that this system diverges when agents behave as predictive agents of their future demand. This is what most DSGE models assume. Prospective agents are more inclined to engender positive feedbacks inside the system. The base process of DSGE is not stable; it is generally divergent. In this sense, the DSGE model lacks its microfoundations. We need not emphasize genuine uncertainty to show that DSGE is a model based on no confirmed assumptions, if we examine it closely.
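      The contrast can be illustrated with a toy closed loop. This is my own illustrative model and parameters, not the Taniguchi-Morioka system: output follows forecast demand, and demand partly follows output through a multiplier m. A myopic forecast converges, while a trend-extrapolating (“predictive”) forecast with sufficient gain turns the same loop into explosive oscillation.

      ```python
      def demand_path(g, m=0.8, c=1.0, steps=60):
          """Path of demand when firms forecast with trend-gain g."""
          d_prev, d = c, c                     # demand two periods back and last period
          path = []
          for _ in range(steps):
              forecast = d + g * (d - d_prev)  # g = 0: myopic; g > 0: trend-extrapolating
              output = forecast                # produce to meet forecast demand
              d_prev, d = d, c + m * output    # demand partly follows output (multiplier m)
              path.append(d)
          return path

      myopic = demand_path(g=0.0)
      extrapolative = demand_path(g=1.5)
      print(round(myopic[-1], 2))                           # settles near c / (1 - m) = 5.0
      print(max(abs(x - 5.0) for x in extrapolative) > 50)  # oscillations explode: True
      ```

      With g = 0 the recurrence is d(t) = c + m·d(t−1), which converges; with g = 1.5 the characteristic roots of the loop have modulus greater than one, so the positive feedback between forecast and realized demand makes the deviations grow without bound.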
