
Probability and economics

from Lars Syll

Modern mainstream (neoclassical) economics relies to a large degree on the notion of probability.

For economic observations to be amenable to applied economic analysis at all, they allegedly have to be conceived of as random events that can be analyzed within a probabilistic framework.

But is it really necessary to model the economic system as one in which randomness can only be analyzed and understood on the basis of an a priori notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness is obviously a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and a fortiori is only a fact of a probability-generating (nomological) machine or a well-constructed experimental arrangement or ‘chance set-up.’

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’

To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events – in statistics, any process in which you observe or measure is called an experiment (rolling a die), and the results obtained are called the outcomes or events of the experiment (the number of points rolled, e.g. 3 or 5) – then, strictly speaking, there is no event at all.

Probability is a relational element. It must always come with a specification of the model from which it is calculated. And to be of any empirical scientific value, it then has to be shown to coincide with (or at least converge to) real data-generating processes or structures – something seldom or never done.
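As a concrete, admittedly toy illustration of what ‘specifying the model first’ amounts to, here is a minimal sketch (in Python; the die, the sample size and all names in it are illustrative choices of mine, not anything taken from the post) of a chance set-up of the kind described above:

```python
import random
from collections import Counter

# The chance set-up ('nomological machine') is specified up front:
# a fair six-sided die, with the model assigning probability 1/6 to each face.
MODEL_PROBS = {face: 1 / 6 for face in range(1, 7)}

def roll_die(n_rolls, rng=random):
    """Simulate n_rolls of the specified chance set-up."""
    return [rng.randint(1, 6) for _ in range(n_rolls)]

# Only because the model has been written down can observed relative
# frequencies be compared with 'the' probabilities at all.
outcomes = roll_die(10_000)
counts = Counter(outcomes)

for face in sorted(MODEL_PROBS):
    observed = counts[face] / len(outcomes)
    print(f"face {face}: model {MODEL_PROBS[face]:.3f}, observed {observed:.3f}")
```

The comparison in the final loop is precisely the step that, according to the post, is seldom or never carried out for economic data: checking that the model’s probabilities actually coincide with (or at least converge to) the frequencies produced by the real data-generating process.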

And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions.

We simply have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in economics – are not amenable to analysis in terms of probabilities, simply because in real-world open systems there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And so it cannot be maintained that it should even be mandatory to treat observations and data – whether cross-section, time-series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data-generating processes – at least outside of nomological machines like dice and roulette-wheels – are not self-evidently best modeled with probability measures.

If we agree on this, we also have to admit that much of modern neoclassical economics lacks sound foundations.

When economists and econometricians – uncritically and without argument – simply assume that the probability distributions of statistical theory can be applied to their own field of research, they are really skating on thin ice.

Mathematics (by which I shall mean pure mathematics) has no grip on the real world; if probability is to deal with the real world it must contain elements outside mathematics; the meaning of ‘probability’ must relate to the real world, and there must be one or more ‘primitive’ propositions about the real world, from which we can then proceed deductively (i.e. mathematically). We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the ‘probability axiom’, and we will call it A for short. Although it has got to be true, A is by the nature of the case incapable of deductive proof, for the sufficient reason that it is about the real world …

We will begin with the … school which I will call philosophical. This attacks directly the ‘real’ probability problem; what are the axiom A and the meaning of ‘probability’ to be, and how can we justify A? It will be instructive to consider the attempt called the ‘frequency theory’. It is natural to believe that if (with the natural reservations) an act like throwing a die is repeated n times the proportion of 6’s will, with certainty, tend to a limit, p say, as n goes to infinity … If we take this proposition as ‘A’ we can at least settle off-hand the other problem, of the meaning of probability; we define its measure for the event in question to be the number p. But for the rest this A takes us nowhere. Suppose we throw 1000 times and wish to know what to expect. Is 1000 large enough for the convergence to have got under way, and how far? A does not say. We have, then, to add to it something about the rate of convergence. Now an A cannot assert a certainty about a particular number n of throws, such as ‘the proportion of 6’s will certainly be within p ± ε for large enough n (the largeness depending on ε)’. It can only say ‘the proportion will lie between p ± ε with at least such and such probability (depending on ε and n*) whenever n > n*’. The vicious circle is apparent. We have not merely failed to justify a workable A; we have failed even to state one which would work if its truth were granted. It is generally agreed that the frequency theory won’t work. But whatever the theory it is clear that the vicious circle is very deep-seated: certainty being impossible, whatever A is made to state can only be in terms of ‘probability’.

John Edensor Littlewood 
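To make the circle explicit (a formalization of the passage above, not part of Littlewood’s text): write p̂_n for the observed proportion of 6’s in n throws and p for its supposed limit. The frequency theory cannot assert a certainty of the form ‘|p̂_n − p| ≤ ε for all n beyond some n*’; the most it can state is

```latex
\[
  P\bigl(\, \lvert \hat{p}_n - p \rvert \le \varepsilon \,\bigr) \;\ge\; 1 - \delta
  \qquad \text{whenever } n > n^{*}(\varepsilon, \delta),
\]
```

and the axiom that was supposed to give ‘probability’ its meaning thus already invokes a probability P on its left-hand side – which is exactly the vicious circle.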

Importantly, this also means that if you cannot show that the data satisfy all the conditions of the probabilistic nomological machine, then the statistical inferences made in mainstream economics lack sound foundations!

  1. April 4, 2017 at 11:12 pm

    I would as soon believe that hunger, homelessness, and health insurance are randomly distributed as I would believe that the ‘market’ always provides to everyone irrespective of income.

    The common weal refers to the well-being of the community of persons within society, and that is not merely an aggregate of ‘market choice’ decisions given personal ‘endowed’ incomes. It is about how communities provide opportunities for the well-being of all of their members by working together to do so. It is tethered to the well-being of everyone, not merely to the well-being of those with the ability to pay for ‘well-being’ as individuals.

  2. April 6, 2017 at 7:48 pm

    I don’t think it is ‘necessary’ or mandatory to model economies with model theory; one can use a deterministic model – I view optimization models as deterministic, as is the R. Goodwin predator-prey / Marxism model. (Offhand I can’t think of any other deterministic models, though other business cycle models likely are.)

    Are there any other kinds of models? One might be ‘words’ or narratives, which have a place – qualitative. (And qualitative models can often be semi-formalized using graphs and such, so they are semi-mathematical.)

    Also, probabilistic and deterministic models often overlap – e.g. ‘stochastic processes’, which have deterministic terms and ‘noise’ (random) terms drawn from a probability distribution. I actually often think of deterministic models as probabilistic ones in the limit where probabilities of events approach unity. (E.g. in Duncan Foley’s model of statistical rather than general equilibrium theory, ‘general equilibrium’ is recovered as the deterministic limit.)
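    A small, purely illustrative sketch of that last point (a generic drift-plus-noise process, not Foley’s actual model; all parameter names are arbitrary): a stochastic process with a deterministic term plus a ‘noise’ term drawn from a probability distribution collapses onto the purely deterministic path as the noise scale is sent to zero.

    ```python
    import random

    def simulate(x0, steps, a=0.9, b=1.0, sigma=0.5, rng=random):
        """One path of x[t+1] = a*x[t] + b + sigma*noise, with noise ~ N(0, 1)."""
        x, path = x0, [x0]
        for _ in range(steps):
            x = a * x + b + sigma * rng.gauss(0.0, 1.0)
            path.append(x)
        return path

    # As sigma shrinks to 0, the stochastic paths collapse onto the purely
    # deterministic path x[t+1] = a*x[t] + b (the 'deterministic limit').
    deterministic = simulate(x0=0.0, steps=20, sigma=0.0)
    for sigma in (1.0, 0.1, 0.0):
        path = simulate(x0=0.0, steps=20, sigma=sigma)
        gap = max(abs(p - d) for p, d in zip(path, deterministic))
        print(f"sigma = {sigma}: max deviation from deterministic path = {gap:.3f}")
    ```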

  3. April 6, 2017 at 7:54 pm

    Correction – the second use of the word ‘model’ above should have been ‘probability’.

  4. April 7, 2017 at 9:36 am

    From the dozen definitions of ‘model’ in the Merriam-Webster dictionary, these seem the most relevant here.

    1) a description or analogy used to help visualize something (as an atom) that cannot be directly observed
    2) a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs; also: a computer simulation based on such a system (e.g., climate models)
    3) an example for imitation or emulation

    The point being that models are not mysterious or unknown. They are merely representations of something directly or indirectly observable (e.g., atom, army, society, economy, climate). Representations are not judged by their beauty, compactness, clarity, etc., but rather by their usefulness. Even a hated, or poorly designed, or biased model can be useful. One of the jobs of social scientists who employ models is to demonstrate how and why those models are useful for the work they do.

    As to statistical models, I concur with all your comments. But even with all these limitations and flaws, such models can still be useful – for example, by forcing us to examine relationships more closely, or, going out on a limb, by examining relationships generally considered irrelevant.


