
## The limits of probabilistic reasoning

from Lars Syll

Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics books that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.

The standard view in statistics – and the axiomatic probability theory underlying it – is to a large extent based on the rather simplistic idea that “more is better.” But as Keynes argues – “more of the same” is not what is important when making inductive inferences. It’s rather a question of “more but different.”

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w “irrelevant.” Knowing that the probability is unchanged when w is present gives p(x|y & w) a different evidential weight (“weight of argument”). Running 10 replicative experiments does not make you as “sure” of your inductions as running 10 000 varied experiments – even if the probability values happen to be the same.
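A rough numerical illustration of the point – not Keynes’s own formalism, and “weight of argument” is not reducible to an interval width – is that two bodies of evidence can yield the *same* point probability while supporting it very differently. In the sketch below (all numbers are hypothetical), a normal-approximation interval stands in for how much evidence backs the estimate:

```python
import math

def estimate_with_weight(successes, trials):
    """Return (point estimate, approximate 95% interval half-width).

    The half-width is used here only as a crude stand-in for how much
    evidence backs the estimate; it is NOT Keynes's "weight of argument".
    """
    p = successes / trials
    half_width = 1.96 * math.sqrt(p * (1 - p) / trials)
    return p, half_width

# Two bodies of evidence with the SAME probability estimate ...
p_small, hw_small = estimate_with_weight(7, 10)        # 10 experiments
p_large, hw_large = estimate_with_weight(7000, 10000)  # 10 000 experiments

print(p_small == p_large)  # identical point probabilities: True
print(hw_small > hw_large) # but the larger body of evidence pins it down far more tightly: True
```

The same 0.7 is printed twice, yet the interval around it shrinks by a factor of about 30 – a calculable echo of the claim that identical probabilities can carry very different evidential weight.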

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by “modern” social sciences. And often we “simply do not know.” As Keynes writes in the Treatise:

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history onto the future. Consequently we cannot presuppose that what has worked before will continue to do so in the future. That statistical models can get hold of correlations between different “variables” is not enough. If they cannot get at the causal structure that generated the data, they are not really “identified.”

How strange that economists and other social scientists as a rule do not even touch upon these aspects of scientific methodology that seem to be so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess on why this is so would be that Keynes’s concepts cannot be squeezed into a single calculable numerical “probability.” In the quest for quantities one turns a blind eye to qualities and looks the other way – but Keynes’s ideas keep creeping out from under the statistics carpet.

1. December 4, 2015 at 3:57 pm

Keynes was famously unable to make a formal theory of probability of any kind, and his own student Ramsey did better. Read up on the developments in modern decision theory, is what I suggest. Itzhak Gilboa has a couple of good books. Economists, yes even mainstream ones, have learned much more about such things as ambiguity and many varieties of non-“rational” decision making. But in particular it seems that you are seeking ambiguity theory. It has been found and developed.

2. December 4, 2015 at 6:55 pm

I like this book of JMK most of all. Please forward the attached paper to Lars Syll.

And to whoever else may appreciate it.


3. December 5, 2015 at 1:09 am

Keynes’s argument that it was inadmissible to project history into the future is, as I have always argued, a rejection of the ergodic axiom of stochastic theory.

4. December 16, 2015 at 2:15 pm

What accounts like that of JMK do is attempt to formulate an entirely macro-theory of events, without passing through the micro-causation level. One result of this kind of approach is a conflation of the “unknown” and the “random” – just because something is not predictable does not make it random. For example, especially with human behaviour, unexplained variation is often due to the behaviour being context-dependent. When data is collected the context of each event is often lost, making the variation *look* random when it will actually behave differently (e.g. when scaling the system or sample size).

5. December 16, 2015 at 6:06 pm

Bruce, try reading Keynes’s “Treatise on Probability” (freely available as a Gutenberg text) and I think you will find that he is relating it to propositions, not events. His second para begins:

“The terms certain and probable describe the various degrees of rational belief about a proposition which different amounts of knowledge authorise us to entertain. All propositions are true or false, but the knowledge we have of them depends on our circumstances;
and while it is often convenient to speak of propositions as certain or probable, this expresses strictly a relationship in which they stand to a corpus of knowledge, actual or hypothetical, and not a characteristic of the propositions in themselves … The Theory of Probability is logical, [as against subjective], because it is concerned with the degree of belief which it is rational to entertain in given conditions, and not merely with the actual beliefs of particular individuals, which may or may not be rational”.

If you want to understand the origins of the formal scientific method attempting to bypass micro-causation, you need to study the three volumes of Hume’s “Treatise of Human Nature” (1739–40), which disposed of belief in God as Aristotle’s “First Cause” by emphasising consciousness and denying the possibility of communication and knowledge of causation, substituting agreement among scientists/lawyers/politicians on probable correlation of independent snapshots of events/mores (as against ethos)/policies. As Kant tried to point out, that was merely Hume’s interpretation. Causality was another: necessary, in practice.

I don’t know who Cogido has been reading, but I very much doubt it was Keynes: more likely his neo-liberal enemies. He might try reading a more balanced discussion of Ramsey’s controversial assertions in Passmore’s “A Hundred Years of Philosophy”.