## Ergodicity and the wrong way to calculate expectations (wonkish)

from **Lars Syll**

If there was one thing I believed was a reasonable implicit assumption of economics, it was determining the expectation value upon which agents base their decisions as the “ensemble mean” of a large number of draws from a distribution …

But now I’m not so sure … Rolling a die is a good example. The expected distribution of outcomes from rolling a single die in a 10,000-roll sequence is the same as the expected distribution of rolling 10,000 dice once each. That process is ergodic.

But many processes are not like this. You cannot just keep playing over time and expect to converge to the mean …

You start with a $100 balance. You flip a coin. Heads means you win 50% of your current balance. Tails means you lose 40%. Then repeat.

Taking the ensemble mean entails reasoning by way of imagining a large number of coin flips at each time period and taking the mean of these fictitious flips. That means the expectation value based on the ensemble mean of the first coin toss is (0.5 × $50 + 0.5 × (−$40)) = $5, or a 5% gain. Using this reasoning, the expectation for the second sequential coin toss is (0.5 × $52.50 + 0.5 × (−$42)) = $5.25 (another 5% gain).

The ensemble expectation is that this process will generate a 5% compound growth rate over time.

But if I start this process and keep playing long enough over time, I will never converge to that 5% expectation. The process is non-ergodic …

In fact, out of the 20,000 runs in my simulation, 17,000 lost money over the 100 time periods, having a final balance less than their $100 starting balance. Even more starkly, more than half the runs had less than $1 after 100 time periods …
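Murray's simulation is easy to reproduce. Here is a minimal sketch under the stated assumptions (a fair coin, 20,000 runs of 100 flips, a $100 starting balance); the seed is arbitrary and the exact percentages will vary slightly from run to run:

```python
import numpy as np

rng = np.random.default_rng(0)

RUNS, PERIODS, START = 20_000, 100, 100.0

# Each flip multiplies the balance by 1.5 (heads, +50%) or 0.6 (tails, -40%).
heads = rng.random((RUNS, PERIODS)) < 0.5
factors = np.where(heads, 1.5, 0.6)
balances = START * factors.prod(axis=1)

print(f"ensemble mean of one flip: {0.5 * 1.5 + 0.5 * 0.6:.2f}x")  # 1.05x, the +5%
print(f"runs that lost money: {(balances < START).mean():.0%}")    # roughly 85%
print(f"median final balance: ${np.median(balances):.2f}")         # well under $1
```

The per-flip multiplier that governs a single path is the geometric mean √(1.5 × 0.6) ≈ 0.95, which is why the typical run shrinks by about 5% per period even though the ensemble mean grows by 5%.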

So if almost everybody loses from this process, how can the ensemble mean of 5% compound growth be a reasonable expectation value? It cannot. For someone who is only going to experience a single path through a non-ergodic process, basing your behaviour on an expectation using the ensemble mean probably won’t be an effective way to navigate economic variations.

Cameron Murray is absolutely right, and having a clear view on the issue of ergodicity is of fundamental importance in economics.

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average of a single output function of the stationary process may not converge to the expectation of the corresponding variables, and so the long-run time average may not equal the probabilistic (expectational) average. Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with probability 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average (H1 + … + Hn)/n converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, and so their expectational average is 1/2 × 1/2 + 1/2 × 1/4 = 3/8, which obviously equals neither 1/2 nor 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
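The two-coin process can be sketched in a few lines. This is an illustrative simulation, not anything from Murray's post; the seed and toss count are arbitrary:

```python
import random

random.seed(1)

def time_average(p_heads: float, n: int = 100_000) -> float:
    """Long-run fraction of heads for one chosen coin, tossed n times."""
    return sum(random.random() < p_heads for _ in range(n)) / n

avg_a = time_average(0.5)    # converges to 1/2 if coin A was picked
avg_b = time_average(0.25)   # converges to 1/4 if coin B was picked

# The expectational (ensemble) average is taken over the whole two-coin system
# and equals neither time average:
ensemble = 0.5 * 0.5 + 0.5 * 0.25   # = 3/8

print(avg_a, avg_b, ensemble)
```

Whichever coin fate hands you, your own long-run average will never be 3/8, which is exactly the stationary-but-not-ergodic point.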

Time is what prevents everything from happening at once. Simply assuming that economic processes are ergodic and concentrating on ensemble averages, which are *a fortiori* in any relevant sense timeless, is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts that many students of economics have trouble understanding. So let me try to give yet another explanation of the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught is the only thing consistent with being **rational**. You would arrest the arrow of time by imagining six different “parallel universes” where the independent outcomes are the numbers from one to six, and then weight them using their probability distribution. Calculating the expected value of the gamble, **the ensemble average**, by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 5/6 × (−€1 billion) + 1/6 × €10 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting yourself is not a very rosy prospect for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs heavier than the 17% chance of becoming enormously rich. By computing **the time average**, imagining one real universe where the six different but dependent outcomes occur consecutively, we would soon see our assets disappearing, and *a fortiori* realize that it would be **irrational** to accept the gamble.

[From a mathematical point of view you can (somewhat non-rigorously) describe the difference between ensemble averages and time averages as a difference between **arithmetic averages** and **geometric averages**. Tossing a fair coin and gaining 20% on the stake (S) if winning (heads) and paying 20% on the stake (S) if losing (tails), the arithmetic average of the return on the stake, assuming the outcomes of the coin toss to be independent, would be [(0.5 × 1.2S + 0.5 × 0.8S) − S]/S = 0%. Considering the two outcomes of the toss as occurring sequentially instead, the relevant time average would be a geometric average return of √(1.2 × 0.8) − 1 ≈ −2%.]
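The arithmetic-versus-geometric contrast in the bracketed aside is a two-line computation. A quick sketch, using the same ±20% coin toss:

```python
from math import sqrt

S = 1.0  # the stake; the result is independent of S

up, down = 1.2, 0.8   # +20% on heads, -20% on tails

# Ensemble view: average the two outcomes of a single toss.
arithmetic = (0.5 * up * S + 0.5 * down * S - S) / S

# Time view: the two outcomes happen one after the other on one path.
geometric = sqrt(up * down) - 1

print(f"arithmetic average return: {arithmetic:+.1%}")  # +0.0%
print(f"geometric average return:  {geometric:+.1%}")   # -2.0%
```

The geometric average is always at or below the arithmetic one, which is why a single path through a multiplicative process does worse than the ensemble mean suggests.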

Why is the difference between ensemble and time averages of such importance in economics? Well, basically because, when the processes are assumed to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The **ensemble average** for this asset would be €100: here we envision two parallel universes (markets), where in one the asset price falls by 50% to €50, and in the other it rises by 50% to €150, giving an average of €100 ((150 + 50)/2). The **time average** for this asset would be €75: here we envision one universe (market) where the asset price first rises by 50% to €150 and then falls by 50% to €75 (0.5 × 150).
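A two-line check of the asset-price arithmetic, assuming the same €100 asset and ±50% moves:

```python
price = 100.0

# Ensemble view: two parallel markets, one up 50%, one down 50%.
ensemble_average = (price * 1.5 + price * 0.5) / 2    # -> 100.0

# Time view: the same market first rises 50%, then falls 50%.
time_path = price * 1.5 * 0.5                         # -> 75.0

print(ensemble_average, time_path)
```

The order of the up and down moves does not matter for the time path; any sequence of one +50% and one −50% step lands at €75.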

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.

The difference between ensemble and time averages also highlights, as Murray’s post shows, the problems with the neoclassical theory of expected utility (something I have touched upon e.g. in *Why expected utility theory is wrong*).

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over those “parallel universes”. In our coin-tossing example, it is as if one supposes that various “I”s are tossing a coin and that the losses of many of them will be offset by the huge profits one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e.g. investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function, as in the neoclassical theory, time-average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When your assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Our coin-toss example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of our coin toss. What fraction of his assets should an investor, who is about to make a large number of repeated investments, bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater the leverage, but also the greater the risk. Letting p be the probability that his investment valuation is correct and (1 − p) the probability that the market’s valuation is correct, he optimizes the growth rate of his investments by staking a fraction of his assets equal to the difference between the probability that he will “win” and the probability that he will “lose”. This means that at each investment opportunity he should (according to the so-called Kelly criterion) invest the fraction 0.6 − (1 − 0.6), i.e. about 20% of his assets (and the optimal average growth rate of investment can be shown to be about 2% (0.6 log(1.2) + 0.4 log(0.8))).
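The Kelly arithmetic in the paragraph above can be written out directly. A sketch, assuming the even-odds bet used in the example (win or lose the staked fraction):

```python
from math import log

p = 0.6   # probability the investor's valuation is correct

# Kelly fraction for an even-odds bet: f* = p - (1 - p)
kelly_fraction = p - (1 - p)          # 0.2, i.e. stake 20% of assets

# Expected log-growth per bet when staking that fraction:
# win with probability p (assets x 1.2), lose with probability 1-p (assets x 0.8)
growth = p * log(1 + kelly_fraction) + (1 - p) * log(1 - kelly_fraction)

print(f"Kelly fraction: {kelly_fraction:.0%}")   # 20%
print(f"growth rate per bet: {growth:.2%}")      # about 2%
```

Staking more than this fraction raises the ensemble mean but lowers the time-average growth rate, which is exactly the over-leverage trap the post warns about.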

Time-average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage, and hence greater risks, creates extensive and recurrent systemic crises. Encouraging a more appropriate level of risk-taking is a necessary ingredient of any policy that aims to curb such excess.

The geometric random walk (as in your win → +50%, lose → −50% example) is a tame non-ergodic process which underlies the Black-Scholes option-value calculation and Dixit’s hurdle-rate calculation. They don’t pretend, at least in Dixit’s hands, to make exact predictions, but they do help planners make prudent judgments about investment under uncertainty.

The simplest and most direct take on Dixit’s JEP article is that it is NOT rational to make an uncertain investment if the promised return is only marginally better than the statistically expected return: hurdle rates of three or more times the “risk-free” rate are quite rational and quite common in the real world.

Perhaps economists should be required to state (as investment advisers do):

“Past performance cannot guarantee future performance.”

When will economists concede that “if one cannot guarantee the exact conditions of the changes performed to produce a result, one cannot guarantee its future result”?

One is Justaluckyfool, if by chance correct, albeit still a fool.