
Time

from Lars Syll


Time is what prevents everything from happening at once. Simply assuming that economic processes are ergodic — and hence, in any relevant sense, timeless — and concentrating on ensemble averages is not a sensible way of dealing with the kind of genuine uncertainty that permeates real-world economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts — so let me try to explain the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where, on a roll of a fair die, you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught is the only thing consistent with being rational. You would arrest the arrow of time by imagining six different “parallel universes” where the independent outcomes are the numbers from one to six, and then weight them by their probabilities. Calculating the expected value of the gamble – the ensemble average – by averaging over all these weighted outcomes, you would actually be a moron not to take the gamble (the expected value being 1/6 × €10 billion − 5/6 × €1 billion ≈ €0.83 billion).

If you’re not an economist, you would probably trust your common sense and decline the offer, knowing that a large risk of going bankrupt is not a rosy prospect for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs more heavily than the 17% chance of becoming enormously rich. By computing the time average – imagining one real universe where the six different but dependent outcomes occur consecutively – we would soon see our assets disappearing, and a fortiori that it would be irrational to accept the gamble.
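The two perspectives can be sketched in a quick Monte Carlo simulation (a minimal illustration with made-up round counts, not part of the original argument): the ensemble average of a single roll is comfortably positive, yet a single gambler with €1 billion who plays round after round almost always goes bust early and is then out of the game.

```python
import random

random.seed(42)

WIN, LOSS = 10.0, 1.0   # € billions: +10 on a six, -1 otherwise

def roll():
    return WIN if random.randint(1, 6) == 6 else -LOSS

# Ensemble view: many "parallel" gamblers each roll once; the average
# payoff is positive (analytically 1/6 * 10 - 5/6 * 1 ≈ €0.83bn)
ensemble = sum(roll() for _ in range(100_000)) / 100_000

# Time view: one gambler with €1bn plays round after round, and is out
# of the game the moment wealth falls below the stake
def play_through_time(start=1.0, rounds=20):
    wealth = start
    for _ in range(rounds):
        if wealth < LOSS:        # bankrupt: no parallel self to bail you out
            return 0.0
        wealth += roll()
    return wealth

busts = sum(play_through_time() == 0.0 for _ in range(10_000))
print(f"ensemble average per roll: €{ensemble:.2f}bn")
print(f"single players bankrupt within 20 rounds: {busts / 10_000:.0%}")
```

The ensemble average is blind to the fact that for the individual gambler the most likely single outcome – losing the first roll – ends the game.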

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when the processes are assumed to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because here we envision two parallel universes (markets): in one the asset price falls by 50% to €50, and in the other it rises by 50% to €150, giving an average of €100 ((150 + 50)/2). The time average for this asset would be €75 – because here we envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5 × 150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.
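The asset example can be simulated directly (an illustrative sketch with arbitrary numbers – 16 periods of ±50% moves – not from the post itself): averaged across many parallel markets the mean price stays near €100, because the mean factor per period is (1.5 + 0.5)/2 = 1; but the typical single market followed through time decays, because the per-period growth factor along one path is √(1.5 × 0.5) ≈ 0.87 < 1.

```python
import random
import statistics

random.seed(1)

START, STEPS, MARKETS = 100.0, 16, 200_000

def final_price(start=START, steps=STEPS):
    # one market followed through time: each period ±50% with equal chance
    p = start
    for _ in range(steps):
        p *= 1.5 if random.random() < 0.5 else 0.5
    return p

finals = [final_price() for _ in range(MARKETS)]

# ensemble view: the mean across parallel markets stays near the start price
ensemble_mean = statistics.mean(finals)

# time view: the typical (median) single market has shrunk substantially
typical = statistics.median(finals)

print(f"ensemble mean ≈ {ensemble_mean:.1f}, typical market ≈ {typical:.1f}")
```

The gap between the two numbers is exactly the ergodicity failure: the ensemble mean is propped up by a few astronomically lucky parallel markets that no single investor ever experiences.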

On a more economic-theoretical level, the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility.

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over those “parallel universes”. In our dice example, it is as if one supposes that various “I”s are rolling the die and that the losses of many of them will be offset by the huge profits one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible), we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed arrow of time, and are often linked in a multiplicative process (as e.g. investment returns with “compound interest”), which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When your assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Our dice example can be applied to more traditional economic issues. Think of an investor who is about to make a large number of repeated investments. What fraction of his assets should he bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater the leverage – but also the greater the risk. Letting p be the probability that his investment valuation is correct and (1 – p) the probability that the market’s valuation is correct, he maximizes the growth rate of his investments by betting a fraction of his assets equal to the difference between the probabilities that he will “win” and “lose”. According to the so-called Kelly criterion, at each investment opportunity he should thus invest the fraction 0.6 – (1 – 0.6), i.e. 20% of his assets (and the optimal average growth rate of investment can be shown to be about 2% per bet: 0.6 log(1.2) + 0.4 log(0.8) ≈ 0.02, using natural logarithms).
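The Kelly arithmetic above can be checked in a few lines (a minimal sketch for even-money payoffs, the case the post describes): the expected log-growth per bet is maximized at the 20% fraction, and over-leveraging – say, betting half one’s wealth each time – actually makes the long-run growth rate negative.

```python
import math

p = 0.6               # probability the investor's valuation is right
q = 1 - p             # probability the market is right

def growth_rate(f):
    # expected log-growth per even-money bet of a fraction f of wealth
    return p * math.log(1 + f) + q * math.log(1 - f)

f_kelly = p - q       # Kelly fraction for even-money payoffs: 0.6 - 0.4 = 0.2

print(f"Kelly fraction: {f_kelly:.0%}")
print(f"growth rate at Kelly fraction: {growth_rate(f_kelly):.4f}")  # ≈ 0.02
print(f"growth rate at 50% leverage: {growth_rate(0.5):.4f}")        # negative
```

That the growth rate turns negative beyond a certain leverage is the time-average point in miniature: a bet with positive ensemble expectation can still ruin the individual who repeats it.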

Time-average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risk – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in any policy that aims to curb excessive risk-taking.

To understand real-world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

Irreversibility can no longer be identified with a mere appearance that would disappear if we had perfect knowledge … Figuratively speaking, matter at equilibrium, with no arrow of time, is ‘blind,’ but with the arrow of time, it begins to ‘see’ … The claim that the arrow of time is ‘only phenomenological,’ or subjective, is therefore absurd. We are actually the children of the arrow of time, of evolution, not its progenitors.

Ilya Prigogine

  1. July 8, 2019 at 9:32 pm

    Profound!

  2. Frank Salter
    July 9, 2019 at 10:33 am

    True but irrelevant. Only by solving appropriate differential equations over time can economic reality be quantified. Kaldor looked to the Verdoorn relationship to numerically integrate his equations. Keen (2011) sets up simultaneous equations and solves them numerically. Time is introduced as time constants. The only analytical solution published so far is my Transient Development (RWER-81) which describes abstract production theory.

    Reference:
    Keen, S., A monetary Minsky model of the Great Moderation and the Great Recession.
    J. Econ. Behav. Organ. (2011), doi:10.1016/j.jebo.2011.01.010

  3. Mike Ryan
    July 9, 2019 at 4:27 pm

    True but in a different way.

    1. Physical events/science are deterministic, predictable and continuous. This allows the application of calculus/differential equations due to the continuous nature of the real world.
    2. Human events/economics are not deterministic, predictable and continuous. All economic events are step functions – the rules of calculus cannot be applied.

    Time in economic systems is properly perceived using financial analysis and present-value evaluations. No one will ever get a 100% accurate answer, as there are many possible assumptions and human events are not deterministic.

    This doesn’t stop people from estimating the value of a firm’s stock based upon future flows of profits or cash. This is where economists should spend their time, answering questions like these:

    1. When firm A buys firm B, what is the impact on the economy?
    A. Firm A borrows tons of money, creating a less stable economic system (firms A and B).
    B. Firm A/B pays less tax, reducing its contribution to society’s common wealth.
    C. Firm A/B lays off staff to cut costs, lowering employment rates.
    D. Firm A/B raises prices to cover the costs of the acquisition, contributing to inflation.

    2. Why does a democratic government for the people create tax policies that encourage acquisitions given the negative impacts?

    3. What should a democratic government do to fix this problem?

    This is real world – raise the bar!!!

    • Frank Salter
      July 10, 2019 at 8:50 am

      Mike Ryan asserts: “All economic events are step functions – the rules of calculus cannot be applied.”

      The above assertion is false to fact. My paper, Transient Development (RWER-81), uses both algebra and calculus to describe production theory. Its description conforms to the economic reality of manufacturing. This is a counterexample to the above assertion, thus proving it wrong.

      I agree that certain human decisions are totally unpredictable. Others are not. In my analysis, it is clear that manufacturing follows the maximising of output quantity for minimum effort. When the runes can be read clearly, decision makers follow their indications. So in some decisions people react logically.

      Using money as a proxy for output quantities does work relatively successfully.

      To answer the questions posed in question 1 requires a detailed model of the economy. “Transient Development” provides the basis for the manufacturing models. The remainder needs to be provided in some other way.

  4. Norman Roth
    July 9, 2019 at 8:29 pm

    Tsk, tsk. All those futile ramblings about the role of TIME in economic thought and practice arise from the amateurish dismissal of the great minds who have tackled the subject over the centuries. Which leads to the presumptuous proclamation that some newly inspired “innovator” has to reinvent the wheel by starting from his or her version of “basic principles”. Why not consult TELOS & TECHNOS, especially the bibliography, for all the meaningful thought that has been put into this problem? It is what led to the conclusions therein: that real, event-laden time must be separated from chronological time, and that any assumption with the built-in fallacy that there is some automatic correspondence between clock time and real time (as in P = f(t), i.e. progress as a function of clock time) completely ignores all relevant ontology and epistemology. All the “solving” of differential equations you can think of teaches away from the role of TIME in economic thought.

    Please GOOGLE: Norman L. Roth, Technological Time.

    • Frank Salter
      July 10, 2019 at 9:00 am

      If the “great minds” had been successful in their tackling of the subject, then time would not be the continuing problem it is.

      I agree there is a difference between elapsed and sidereal time. However, no physical scientist will agree that solving differential equations is inappropriate for studying how reality will evolve over time.

  5. Ken Zimmerman
    July 18, 2019 at 10:40 am

    A story about systems that combine chaos and stability, together. Robustness and strangeness coexisting in the same system. Like societies and the economic systems with them.

    During the 1960s Edward Lorenz found unpredictability, but he also found pattern. He found dynamical systems. Others, too, discovered suggestions of structure amid seemingly random behavior. The example of the pendulum was simple enough to disregard, but those who chose not to disregard it found a provocative message. In some sense, they realized, physics understood perfectly the fundamental mechanisms of pendulum motion but could not extend that understanding to the long term. The microscopic pieces were perfectly clear; the macroscopic behavior remained a mystery. The tradition of looking at systems locally—isolating the mechanisms and then adding them together—was beginning to break down. For pendulums, for fluids, for electronic circuits, for lasers, knowledge of the fundamental equations no longer seemed to be the right kind of knowledge at all.

    Differential equations describe the way systems change continuously over time. The tradition was to look at such things locally, meaning that engineers or physicists would consider one set of possibilities at a time. But Poincaré and Stephen Smale wanted to understand them globally, meaning that they wanted to understand the entire realm of possibilities at once. Any set of equations describing a dynamical system—Edward Lorenz’s, for example—allows certain parameters to be set at the start. In the case of thermal convection, one parameter concerns the viscosity of the fluid. Large changes in parameters can make large differences in a system—for example, the difference between arriving at a steady state and oscillating periodically. But physicists assumed that very small changes would cause only very small differences in the numbers, not qualitative changes in behavior. Linking topology and dynamical systems created the possibility of using a shape to help visualize the whole range of behaviors of a system. For a simple system, the shape might be a curved surface; for a complicated system, a manifold of many dimensions. A single point on such a surface represents the state of a system at an instant frozen in time. As a system progresses through time, the point moves, tracing an orbit across this surface. Bending the shape a little corresponds to changing the system’s parameters, making a fluid more viscous or driving a pendulum a little harder. Shapes that look roughly the same give roughly the same kinds of behavior. If you can visualize the shape, you can understand the system.

    When Smale turned to consider dynamical systems, topology was just one more part of pure mathematics. And, like most pure mathematics, it was carried out with an explicit and elitist disdain for real-world applications. Mathematicians studied the shapes for their own sake, period. Smale fully believed in that ethos, yet he had an idea that abstract and esoteric topology might now have something to contribute to physics, just as Poincaré had intended at the turn of the century. One of Smale’s first contributions, quite by accident, was his own faulty conjecture. In physical terms, he was proposing a law of nature something like this: A system can behave erratically, but the erratic behavior cannot be stable. Stability—“stability in the sense of Smale,” as mathematicians often say—is a crucial property. Stable behavior in a system is behavior that would not disappear just because some number was changed a tiny bit. Any system could have both stable and unstable behaviors within it. The equations governing a pencil standing on its point have a good mathematical solution with the center of gravity directly above the point—but you cannot stand a pencil on its point because the solution is unstable. The slightest perturbation draws the system away from that solution. On the other hand, a marble lying at the bottom of a bowl stays there, because if the marble is perturbed slightly it rolls back. Physicists assumed that any behavior they could observe regularly would have to be stable, since in real systems tiny disturbances and uncertainties are unavoidable. You never know the parameters exactly. If you want a model that will be both physically realistic and robust in the face of small perturbations, physicists reasoned that you must surely want a stable model.

    Smale defined a class of differential equations, all structurally stable. Any chaotic system, he claimed, could be approximated as closely as you liked by a system in his class. It was not so. To Smale’s surprise many systems were not so well-behaved as he had imagined. Even more to his surprise, Smale saw a class of systems he never imagined could exist, systems with chaos and stability, together. Such a system was robust. If you perturbed it slightly, as any natural system is constantly perturbed by noise, the strangeness would not go away. Robust and strange. Chaos and instability, concepts only beginning to acquire formal definitions, were not the same at all. A chaotic system could be stable if its brand of irregularity persisted in the face of small disturbances. Lorenz’s system was an example, as were those Smale examined. The chaos Lorenz discovered, with all its unpredictability, was as stable as a marble in a bowl. You could add noise to this system, jiggle it, stir it up, interfere with its motion, and when everything settled down the system would return to the same peculiar pattern of irregularity as before. It was locally unpredictable, globally stable. Real dynamical systems play by a more complicated set of rules than anyone imagined.
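[The locally-unpredictable-but-globally-stable behaviour described in this comment can be sketched numerically. A minimal illustration, not part of the comment: two copies of the Lorenz system, started one part in a billion apart, are integrated with a simple forward-Euler scheme; the tiny perturbation is amplified to the scale of the attractor itself, yet both trajectories remain bounded on it.]

```python
# Forward-Euler integration of the Lorenz system (standard parameters).
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)    # identical but for one part in a billion

for _ in range(40_000):       # integrate both copies for 40 time units
    a, b = lorenz_step(a), lorenz_step(b)

# Locally unpredictable: the perturbation has grown by many orders of
# magnitude; globally stable: both trajectories stay on the attractor.
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(f"separation after 40 time units: {separation:.3g}")
```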
