
The arrow of time in a non-ergodic world

from Lars Syll

For the vast majority of scientists, thermodynamics had to be limited strictly to equilibrium. That was the opinion of J. Willard Gibbs, as well as of Gilbert N. Lewis. For them, irreversibility associated with unidirectional time was anathema …

I myself experienced this type of hostility in 1946 … After I had presented my own lecture on irreversible thermodynamics, the greatest expert in the field of thermodynamics made the following comment: ‘I am astonished that this young man is so interested in nonequilibrium physics. Irreversible processes are transient. Why not wait and study equilibrium as everyone else does?’ I was so amazed at this response that I did not have the presence of mind to answer: ‘But we are all transient. Is it not natural to be interested in our common human condition?’

Time is what prevents everything from happening at once. Simply to assume that economic processes are ergodic — and hence, in any relevant sense, timeless — and to concentrate on ensemble averages is not a sensible way of dealing with the kind of genuine uncertainty that permeates real-world economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts — so let me try to explain what they mean by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’ve been taught is the only thing consistent with being rational. You would arrest the arrow of time by imagining six different “parallel universes” where the independent outcomes are the numbers from one to six, and then weight them using their probabilities. Calculating the expected value of the gamble – the ensemble average – by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 1/6 × €10 billion − 5/6 × €1 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting your economy is not a very rosy prospect for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs more heavily than the 17% chance of becoming enormously rich. By computing the time average – imagining one real universe where the different but dependent outcomes occur consecutively – we would soon see our assets disappearing, and conclude that it would be irrational to accept the gamble.
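
To see the two perspectives side by side, here is a minimal simulation sketch (not from the original post; the €1 billion starting wealth, the 1,000-round horizon and the player counts are illustrative assumptions): the ensemble average of a single gamble is comfortably positive, yet a lone player who repeats the gamble through time is very likely to be ruined.

```python
import random

# A sketch of the gamble in the text, in billions of euros:
# win 10 on a six, pay 1 on anything else.
def play_once(rng):
    return 10.0 if rng.randint(1, 6) == 6 else -1.0

rng = random.Random(42)

# Ensemble perspective: average the outcome over many "parallel" players.
n_players = 100_000
ensemble_avg = sum(play_once(rng) for _ in range(n_players)) / n_players
print(f"ensemble average per gamble: {ensemble_avg:+.2f} bn")  # about +0.83

# Time perspective: one player with 1 bn keeps playing; ruin is absorbing.
def ruined(rng, wealth=1.0, rounds=1_000):
    for _ in range(rounds):
        wealth += play_once(rng)
        if wealth <= 0:
            return True
    return False

ruin_rate = sum(ruined(rng) for _ in range(10_000)) / 10_000
print(f"share of lone players ruined: {ruin_rate:.0%}")  # well over 80%
```

The expected value says “take the bet”; the time path of a single mortal player, who can go bust on the very first roll, says otherwise.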

Why is the difference between ensemble and time averages of such importance in economics? Well, basically because if the processes are assumed to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because we here envision two parallel universes (markets): in one universe (market) the asset price falls by 50% to €50, and in the other it rises by 50% to €150, giving an average of €100 ((150 + 50)/2). The time average for this asset would be €75 – because we here envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5 × 150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.
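
As a quick check on the arithmetic (a sketch using the €100 starting price and the ±50% moves from the example above), the ensemble average is the arithmetic mean of the two parallel outcomes, while the time path is governed by the geometric mean of the growth factors:

```python
import math

# The +/-50% example: in one "universe" the price is multiplied by 1.5,
# in the other by 0.5; through time both factors are applied in sequence.
p0 = 100.0
up, down = 1.5, 0.5

ensemble_avg = p0 * (up + down) / 2    # 100: arithmetic mean across universes
time_path = p0 * up * down             # 75: what one investor actually ends with
per_period = math.sqrt(up * down) - 1  # about -13.4%: geometric-mean growth per period

print(ensemble_avg, time_path, f"{per_period:+.1%}")
```

The two perspectives agree only if the process is ergodic; for a multiplicative process with any volatility at all they come apart.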

On a more theoretical level, the difference between ensemble and time averages also highlights the problems with the neoclassical theory of expected utility.

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks: what is the expected return of an investment, calculated as an average over these “parallel universes”? In our dice example, it is as if one supposes that various “I”s are rolling the die and that the losses of many of them will be offset by the huge profit one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events unfold in time, and they are often linked in a multiplicative process (as, e.g., investment returns with compound interest) which is basically non-ergodic.
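
A small simulation sketch makes the non-ergodicity of such multiplicative processes visible (the +50%/-40% payoffs, the 20-period horizon and the 20,000 simulated players are illustrative assumptions in the spirit of Ole Peters’ coin-toss example, not figures from the post): the ensemble mean grows, while the typical individual trajectory shrinks.

```python
import random

# Multiplicative coin toss: each period wealth is multiplied by 1.5 (heads)
# or 0.6 (tails) with equal probability. The expected factor per period is
# 1.05, but the geometric mean is sqrt(1.5 * 0.6) ~ 0.95, so typical paths shrink.
rng = random.Random(0)
periods, players = 20, 20_000

finals = []
for _ in range(players):
    wealth = 1.0
    for _ in range(periods):
        wealth *= 1.5 if rng.random() < 0.5 else 0.6
    finals.append(wealth)

finals.sort()
ensemble_mean = sum(finals) / players  # close to 1.05**20, roughly 2.65: "on average" you gain
median_player = finals[players // 2]   # close to 0.9**10, roughly 0.35: the typical player loses
print(f"{ensemble_mean:.2f}  {median_player:.2f}")
```

The ensemble average is pulled up by a handful of extremely lucky paths that no individual should count on experiencing.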

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

This kind of gamble can also be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation as a repeated coin toss: what fraction of his assets should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater the leverage – but also the greater the risk. Letting p be the probability that his valuation is correct and (1 − p) the probability that the market’s valuation is correct, he maximizes the growth rate of his investments by investing a fraction of his assets equal to the difference between the probability that he will “win” and the probability that he will “lose”. This means that at each investment opportunity he should (according to the so-called Kelly criterion) invest the fraction 0.6 − (1 − 0.6), i.e. 20% of his assets (and the optimal average growth rate of investment can then be shown to be about 2% per bet: 0.6 log(1.2) + 0.4 log(0.8)).
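
The numbers quoted above can be checked directly; the following is a sketch of the standard Kelly formula for an even-odds bet (nothing is assumed here beyond the p = 0.6 of the example):

```python
import math

# Kelly criterion for an even-odds bet: the investor is right with
# probability p, the market with probability 1 - p.
p = 0.6
kelly_fraction = p - (1 - p)  # 0.6 - 0.4 = 0.2: stake 20% of assets each time
growth_rate = p * math.log(1 + kelly_fraction) + (1 - p) * math.log(1 - kelly_fraction)
print(round(kelly_fraction, 3), round(growth_rate, 3))  # 0.2 and about 0.02, i.e. roughly 2% per bet
```

Betting a larger fraction than this would raise the ensemble-average return of a single bet but lower the long-run growth rate actually experienced through time.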

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for ever greater leverage – and risk – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient of any policy that aims to curb excessive leverage.

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

Irreversibility can no longer be identified with a mere appearance that would disappear if we had perfect knowledge … Figuratively speaking, matter at equilibrium, with no arrow of time, is ‘blind,’ but with the arrow of time, it begins to ‘see’ … The claim that the arrow of time is ‘only phenomenological,’ or subjective, is therefore absurd. We are actually the children of the arrow of time, of evolution, not its progenitors.

Ilya Prigogine

  1. February 16, 2018 at 1:35 am

    Here is a video showing the reversibility of a process that looks irreversible yet reverses quite precisely. If the crank were turned more quickly, the nonlinearity of the governing equations would render this system irreversible as the nonlinear momentum term comes into play. There’s a strong link between irreversibility and nonlinearity simply because substituting -t for t in nonlinear governing equations evidently doesn’t result in the system retracing its steps. Consider terms containing t squared. There’s a Nobel Prize waiting for the guy who fleshes this out for thermodynamics, for physics in general, and for economics.

    • February 16, 2018 at 7:07 am

      Wonderful comment!

    • Frank Salter
      February 16, 2018 at 9:05 am

      Quote at end of Peterblogdanovich’s comment:
      “There’s a Nobel Prize waiting for the guy who fleshes this out … for economics.”

      I hesitate to make such a claim, but, at least for production theory, this (irreversible time) has been fleshed out in Transient Development (RWER-81). This analysis appears, to me, from recent blog posts, to be the elephant in the room.

      A rigorous analysis from first principles, which predicts the nature of the empirical evidence, is available, but it is not mentioned by anyone other than me and one or two others who have said the mathematics used is outside their current experience.

      Has anyone with the mathematical expertise read my paper?
      If so, no one has challenged my analysis.
      Surely, if they had, the tenor of the debate here would be very different.

      If I am right then the discussion needs to move on — if not, please explain why!

  2. February 16, 2018 at 7:02 am

    When we talk about Ilya Prigogine and his ideas and research on non-equilibrium thermodynamics, it is not sufficient to argue only about the irreversibility of time. We should also think about why living bodies and economic systems can maintain their activities in a relatively stable state. The secret lies in the fact that they are dissipative structures. This is the most important message from Prigogine for economics. There lies a hint for constructing a new economics that does not rely on an equilibrium framework.

    A dissipative structure can exist far from thermodynamic equilibrium and is a system which violates the ergodic hypothesis.

    See my comment of February 10, 2018 at 4:16 am on Lars Syll’s post “Economath — a device designed to fool the feebleminded”.

    I hope Lars will in the future explain dissipative structures.

  3. February 16, 2018 at 11:13 am

    Looks like you have finally understood the basic ideas of ergodic theory.

  4. February 16, 2018 at 12:27 pm

    Lars:
    When I saw “1946”, I was happy that a man of approximately my age was so good.
    No wonder the thermodynamics man was amazed at your lecture, which you gave at the age of -11!

    Time ensembles are nonsense; the technology ensembles described by Senge (1990) made me (partially) doubt evolutionary theory.

  5. February 16, 2018 at 12:59 pm

    In response to Frank Salter, the debate cannot move on if the one side doesn’t understand [the mathematics of] what the other side is saying. In my opinion, this has been one of Lars’ best blogs, because it makes clear what I’ve struggled to get across about standard Gibbsian hypergeometry. Likewise, Peter’s comment is brilliant precisely because it expresses the answer in terms people already understand, whereas most people’s eyes glaze over when I point out that x squared is literally, in its original Greek Pythagorean geometrical form, not x times itself but x times ix, where i represents a right angle; and that the minus sign is i times i.

    As for Frank’s claim that his critics (which in a very mild sense I have been) have not understood his maths, that is untrue. I may be a tortoise rather than a hare when it comes to following it in detail, but that is another matter. What Frank hasn’t understood is that, while I approve of his principle of couching the argument in terms of dimensional analysis, he is thinking in terms of physical units and not of how the unmeasurable can be turned into units of information (hence the difference between phenomena and graphs of them); so his first principles are only telling half the story. My first principles tell a story of emergent dynamic logic based on the dimensional language of Cartesian coordinates, which can be traced from the Big Bang right into the logical failings of contemporary economic practices. Like Frank, I feel frustrated that people don’t understand me, but I do try to learn from the shortcomings in my presentation: explaining myself in different ways that may someday “click” in someone else’s mind; “sowing seeds” that with luck may someday germinate.

    It is interesting, if you look back to my comment at February 13, 2018 at 8:10 pm on “the semantics of mathematical equilibrium theory”, how I have said both what Peter and Yoshinori are saying in ways more directly related to economics.

    • Frank Salter
      February 16, 2018 at 4:39 pm

      I am not claiming that my critic (so far you seem to be the only one) fails to understand the mathematics. I think you have not recognised that the significance lies in the forms of solution. By corresponding to the empirical evidence in all respects, they demonstrate that the emergent properties of production arise quite naturally from both the differential equations and the algebraic equations — in essence they describe production theory from opposite directions — but they traverse the same hypersurface.

      The algebraic solutions describe individual manufacturing plants. The solutions by calculus describe the emergence of whole industries — the introduction of more manufacturing plant produces the development of the industry over time.

      The solutions, in the natural units of production, are aggregative. Therefore, the analysis is relevant at both the micro and macro levels. Thus it is demonstrated that the so-called micro-foundational analyses which conventional economics follows lead yet again into another blind alley.

      I am unsure what you mean by “the unmeasurable can be turned into units of information”. I interpret it as referring to the decisions being made to obtain a maximum of output for a minimum of effort. As, at any level of technical progress, there is a single maximum, it is the application of best engineering practice which takes plants and industries towards this maximum. Essentially, there is only the appropriate level of engineering which achieves the maximum. The fact that the Verdoorn coefficient is industry-specific and tends towards one half confirms this.

      The many explanations of the empirical evidence provided by my analysis should, I believe, be the universe of discourse which allows economics to move forward.

      Please give your opinions.
