## Probability and economics (wonkish)

from **Lars Syll**

Modern neoclassical economics relies to a large degree on the notion of probability.

To be amenable at all to applied economic analysis, economic observations allegedly have to be conceived of as random events that are analyzable within a probabilistic framework.

But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an *a priori* notion of probability?

When attempting to convince us of the necessity of founding empirical economic analysis on probability models, neoclassical economics actually forces us to (implicitly) interpret events as random variables generated by an underlying probability density function.

This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches (if at all) to the world via intellectually constructed models, and *a fortiori* is only a fact of a probability-generating (nomological) machine or a well-constructed experimental arrangement or “chance set-up”.

In probabilistic econometrics randomness is often defined with the help of independent trials – two events are said to be independent if the occurrence or non-occurrence of either one has no effect on the probability of the occurrence of the other – such as drawing cards from a deck, picking balls from an urn, spinning a roulette wheel or tossing coins – trials which are only definable if somehow set in a probabilistic context.
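The independence condition just described can be made concrete with one of the chance set-ups mentioned, a deck of cards. A minimal sketch (illustrative only, not part of the original text): drawing with replacement satisfies the definition, drawing without replacement does not.

```python
from fractions import Fraction

# Chance set-up: a standard 52-card deck containing 4 aces.
p_ace = Fraction(4, 52)  # P(ace on a single draw from a full deck)

# With replacement: the first draw has no effect on the second,
# so P(ace on 2nd draw | ace on 1st draw) equals P(ace).
p_ace_given_ace_with = Fraction(4, 52)

# Without replacement: the first draw changes the deck,
# so the conditional probability differs from the unconditional one.
p_ace_given_ace_without = Fraction(3, 51)

assert p_ace_given_ace_with == p_ace      # independent trials
assert p_ace_given_ace_without != p_ace   # dependent trials
```

Note that these probabilities are only well defined because the chance set-up (the deck) is fully specified in advance, which is exactly the point the text is making.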

But if we pick a sequence of prices – say 2, 4, 3, 8, 5, 6, 6 – that we want to use in an econometric regression analysis, how do we know the sequence of prices is random and, *a fortiori*, that we are entitled to treat it as generated by an underlying probability density function? How can we argue that the sequence is a sequence of probabilistically independent random prices? And are they really random in the sense most often applied in probabilistic econometrics – where X is called a random variable only if there is a sample space S with a probability measure and X is a real-valued function over the elements of S?
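The measure-theoretic definition invoked here can be written out explicitly. A minimal sketch with a die as the sample space (the names S, P and X are simply those of the definition; nothing here is drawn from any real data):

```python
from fractions import Fraction

# The textbook set-up the text describes: a sample space S, a
# probability measure P on S, and a random variable X defined as a
# real-valued function over the elements of S.
S = [1, 2, 3, 4, 5, 6]                 # sample space: faces of a fair die
P = {s: Fraction(1, 6) for s in S}     # probability measure on S
X = lambda s: float(s)                 # real-valued function on S

# Once the chance set-up (S, P) is specified, probabilities of
# events involving X are well defined:
p_X_at_least_5 = sum(P[s] for s in S if X(s) >= 5)
assert p_X_at_least_5 == Fraction(1, 3)

# For the price sequence 2, 4, 3, 8, 5, 6, 6 no such (S, P) is given,
# which is precisely the author's objection.
```

The sketch shows what has to be supplied before the word “random variable” means anything: absent a specified (S, P), the assertion that observed prices are realizations of one is exactly the unargued assumption the text criticizes.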

Bypassing the scientific challenge of going from describable randomness to calculable probability by just assuming it is, of course, not an acceptable procedure. Since a probability density function is a “Gedanken” object that does not exist in a natural sense, it has to come with an export license to our real target system if it is to be considered usable. We still have to show that the real sequence somehow coincides with the ideal sequence that defines independence and randomness within our “nomological machine,” our “probabilistic model.”

**Just as there is no such thing as a “free lunch,” there is no such thing as a “free probability.”** To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events – in statistics one refers to any process in which you observe or measure something as an experiment (rolling a die) and to the results obtained as the *outcomes* or *events* of the experiment (the number of points rolled with the die, e.g. 3 or 5) – then, strictly speaking, there is no event at all.

Probability is a relational element. It must always come with a specification of the model from which it is calculated. And then, to be of any empirical scientific value, it has to be *shown* to coincide with (or at least converge to) real data-generating processes or structures – something seldom or never done!

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

From a realistic point of view we really have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in economics – are not amenable to analysis in terms of probabilities, simply because in the real-world open systems that the social sciences – including economics – analyze, there are no probabilities to be had!

The processes that generate socio-economic data in the real world cannot just be assumed always to be adequately captured by a probability measure. And so it cannot really be maintained that it should even be mandatory to treat observations and data – whether cross-section, time-series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette wheels. Data-generating processes – at least outside of nomological machines like dice and roulette wheels – are not self-evidently best modeled with probability measures.

**If we agree on this, we also have to admit that much of modern neoclassical economics lacks a sound justification.** I would even go further and argue that there really is no justifiable rationale at all for this belief that all economically relevant data can be adequately captured by a probability measure. In most real world contexts one has to *argue* and *justify* one’s case. And that is obviously something seldom or never done by practitioners of neoclassical economics.

As **David Salsburg** (2001:146) notes on probability theory:

[W]e assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify [this] abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.

Just as **John Maynard Keynes** (1921) and **Nicholas Georgescu-Roegen** (1971), Salsburg (2001:301f) is very critical of the way social scientists – including economists and econometricians – have come, uncritically and without argument, simply to assume that one can apply probability distributions from statistical theory to their own areas of research:

Probability is a measure of sets in an abstract space of events. All the mathematical properties of probability can be derived from this definition. When we wish to apply probability to real life, we need to identify that abstract space of events for the particular problem at hand … It is not well established when statistical methods are used for observational studies … If we cannot identify the space of events that generate the probabilities being calculated, then one model is no more valid than another … As statistical models are used more and more for observational studies to assist in social decisions by government and advocacy groups, this fundamental failure to be able to derive probabilities without ambiguity will cast doubt on the usefulness of these methods.

Or as the great British mathematician **John Edensor Littlewood** says in his *A Mathematician’s Miscellany:*

Mathematics (by which I shall mean pure mathematics) has no grip on the real world; if probability is to deal with the real world it must contain elements outside mathematics; the meaning of ‘probability’ must relate to the real world, and there must be one or more ‘primitive’ propositions about the real world, from which we can then proceed deductively (i.e. mathematically). We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the ‘probability axiom’, and we will call it A for short. Although it has got to be true, A is by the nature of the case incapable of deductive proof, for the sufficient reason that it is about the real world …

We will begin with the … school which I will call philosophical. This attacks directly the ‘real’ probability problem; what are the axiom A and the meaning of ‘probability’ to be, and how can we justify A? It will be instructive to consider the attempt called the ‘frequency theory’. It is natural to believe that if (with the natural reservations) an act like throwing a die is repeated n times the proportion of 6’s will, with certainty, tend to a limit, p say, as n goes to infinity … If we take this proposition as ‘A’ we can at least settle off-hand the other problem, of the meaning of probability; we define its measure for the event in question to be the number p. But for the rest this A takes us nowhere. Suppose we throw 1000 times and wish to know what to expect. Is 1000 large enough for the convergence to have got under way, and how far? A does not say. We have, then, to add to it something about the rate of convergence. Now an A cannot assert a certainty about a particular number n of throws, such as ‘the proportion of 6’s will certainly be within p ± e for large enough n (the largeness depending on e)’. It can only say ‘the proportion will lie between p ± e with at least such and such probability (depending on e and n*) whenever n > n*’. The vicious circle is apparent. We have not merely failed to justify a workable A; we have failed even to state one which would work if its truth were granted. It is generally agreed that the frequency theory won’t work. But whatever the theory it is clear that the vicious circle is very deep-seated: certainty being impossible, whatever A is made to state can only be in terms of ‘probability’.
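Littlewood's point about the rate of convergence can be illustrated numerically. A hedged sketch (a simulation, so it presupposes exactly the kind of pseudo-random chance set-up whose justification is in question): even at n = 1000, the proportion of 6's is not certain to lie within a given band around 1/6; only a probabilistic statement about it can be made.

```python
import random

# Littlewood's 1000-throw experiment: after n throws of a fair die,
# is the proportion of 6's certainly within epsilon of 1/6? No - it
# lands in the band only "with such-and-such probability".
random.seed(0)  # fixed seed so the run is reproducible

def proportion_of_sixes(n):
    """Proportion of 6's in n simulated throws of a fair die."""
    return sum(random.randint(1, 6) == 6 for _ in range(n)) / n

# Repeat the whole 1000-throw experiment many times and count how
# often the observed proportion falls within eps of 1/6.
eps, trials = 0.01, 200
hits = sum(abs(proportion_of_sixes(1000) - 1/6) < eps
           for _ in range(trials))

# Some experiments miss the band: the convergence claim can only be
# stated probabilistically, which is the vicious circle in the text.
print(f"{hits}/{trials} experiments within {eps} of 1/6")
```

That the guarantee offered by the simulation is itself only a frequency over repeated experiments is, of course, exactly the circularity Littlewood identifies.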

**This importantly also means that if you cannot show that the data satisfy all the conditions of the probabilistic nomological machine, then the statistical inferences used – and, a fortiori, neoclassical economics – lack sound foundations!**

*References*

Georgescu-Roegen, Nicholas (1971), *The Entropy Law and the Economic Process*. Harvard University Press.

Keynes, John Maynard (1973 (1921)), *A Treatise on Probability*. Volume VIII of *The Collected Writings of John Maynard Keynes*, London: Macmillan.

Littlewood, John Edensor (1953), *A Mathematician’s Miscellany*. London: Methuen & Co.

Salsburg, David (2001), *The Lady Tasting Tea*. Henry Holt.

The question now becomes, given the severity of the problems of probabilistic analysis, what do we use in its place? How else do we deal with the complexity and uncertainty of social systems?

I think the major force that drives us into the world of probabilities in the first place is laissez-faire, wherein we are much more passive observers than active interveners. If we instead permitted ourselves more aggressive interventions we would be expending less effort trying to figure out the chance of what we want happening and more on setting things up to actually make it happen.

We would spend less time wringing our hands over all the untoward future possibilities (which we can’t accurately know anyway), and more pushing things in a positive direction, dealing with the consequences as they arise.

If we take the aggregate of all similar kinds of economic activity, we have raised the probability of this occurrence as high as possible. Since we can already assume that there are a finite number of kinds of these activities, this approach is a reasonable way of representing the system theoretically and even of explaining how it works. So where lies the difficulty?

Oct. 07, 2015

Great stuff, Lars Syll! You actually succeeded in making the thrust of your argument fairly intelligible to us non-mathematicians.

I strongly suspect that the lesson to be learned from your exposition is related to the well-known aphorism of Ludwig von Mises: “Case probability is not class probability” … which undoubtedly contained some contribution from his brother, the aerodynamic theorist and statistical mathematician Richard von Mises.


@ Rhonda

Look at the relationship between total costs/prices and the total individual incomes simultaneously produced with which to liquidate those total costs/prices. Then proactively craft policies that are the anatomy of equilibrium itself, i.e. that apply both a plus and a minus to the disequilibrium discovered in that analysis, and that do not add an additional cost to any enterprise, and so none to the system.

Redefining economics

Comment on Lars Syll on ‘Probability and economics’

You sum up: “… neoclassical economics lacks sound foundations!” (See intro)

This, of course, is true, and just because of this we have heterodox economics as a superior alternative, don’t we? Unfortunately not, because Heterodoxy, too, lacks sound foundations.* And this leads one quite naturally to the conclusion: “… then one model is no more valid than another …” (See intro)

This conclusion is (i) false and (ii) self-defeating. Clearly, if there is no way to discriminate between a true and a false model any further discussion is no better than medieval word play about dancing-angels-on-a-pinpoint. And how can Heterodoxy assert that Orthodoxy is unacceptable, if no model is more valid than another? This is a blatant self-contradiction.

It is pretty obvious that Heterodoxy is in the same dark methodological woods as Orthodoxy and this in turn explains why economics is caught in secular stagnation.

How to get out of the woods? Heterodoxy has, first of all, to stick to the scientific method, which is well-defined: “Research is in fact a continuous discussion of the consistency of theories: formal consistency insofar as the discussion relates to the logical cohesion of what is asserted in joint theories; material consistency insofar as the agreement of observations with theories is concerned.” (Klant, 1994, p. 31)

Sticking to the scientific method implies that the heterodox economist avoids at all costs getting involved in questions that have no answer to begin with, such as whether God is male or female, or whether greed is good. Or, more generally, Heterodoxy avoids applying concepts and tools where they are not applicable. Randomness is a case in point.

“From a realistic point of view we really have to admit that the socioeconomic states of nature that we talk of in most social sciences — and certainly in economics — are not amenable to analyze as probabilities, simply because in the real world open systems that social sciences — including economics — analyze, there are no probabilities to be had!” (See intro)

Yes, indeed, and from this it follows that Heterodoxy has to get out of the so-called social sciences. Why? Because, as a matter of principle, we can have neither deterministic nor probabilistic knowledge about human behavior.

“By having a vague theory it is possible to get either result. … It is usually said when this is pointed out, ‘When you are dealing with psychological matters things can’t be defined so precisely’. Yes, but then you cannot claim to know anything about it.” (Feynman, 1992, p. 159)

From this it follows that economics cannot be built upon an assumption about human behavior. Behavior is neither deterministic nor purely random. To be sure, there is nothing wrong with the concept of randomness, only with its application in economics. That is old hat.

“Alexander Rosenberg lays great emphasis on the role of intentionality in the social sciences, for in his view this role explains the nomological failures of the social sciences and supports the view that the social sciences (in anything like their current form) can never succeed in formulating real laws of human behavior.” (Hausman, 1992, p. 326)

The subject matter of economics is not homo oeconomicus but the economic system. Because of this, economics has to be redefined.

— Old definition, subjective-behavioral: “Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses.”

— New definition, objective-structural: “Economics is the science which studies how the monetary economy works.”

The original methodological blunder of Orthodoxy has been that it attempted to axiomatize human behavior. This is a fine example of what Feynman called cargo cult science, i.e. ‘The form is perfect. But it doesn’t work.’ The correct approach consists in axiomatizing the objective structural relationships of the monetary economy (2014). Since Jevons, Walras, and Menger, neoclassical economists have not got the salient point of methodology.

In sum: Heterodoxy has to free itself from the social science illusion, which it has hitherto shared with Orthodoxy.** The joint failure speaks for itself.

Egmont Kakarot-Handtke

References

Feynman, R. P. (1992). The Character of Physical Law. London: Penguin.

Hausman, D. M. (1992). The Inexact and Separate Science of Economics. Cambridge: Cambridge University Press.

Kakarot-Handtke, E. (2014). Objective Principles of Economics. SSRN Working Paper Series, 2418851: 1–19. URL http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2418851

Klant, J. J. (1994). The Nature of Economic Thought. Aldershot, Brookfield, VT: Edward Elgar.

* With regard to profit theory see the proof ‘Heterodoxy, too, is scientific junk’

http://axecorg.blogspot.de/2015/09/heterodoxy-too-is-scientific-junk_85.html

** See also ‘PsySoc— the scourge of economics’

http://axecorg.blogspot.de/2015/09/psysoc-scourge-of-economics.html