## The Keynes-Ramsey-Savage debate on probability

from Lars Syll

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules, axiomatized by Ramsey (1931) and Savage (1954) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.
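
The updating rule invoked here can be made concrete with a small sketch (illustrative only; the function name and the coin example are mine, not the post’s):

```python
# Minimal sketch of Bayesian updating: a subjective prior over competing
# hypotheses is revised by Bayes' theorem as evidence arrives:
#     P(H | E) = P(E | H) * P(H) / P(E)

def bayes_update(prior, likelihood):
    """Return the posterior over hypotheses given one piece of evidence.

    prior      -- dict hypothesis -> subjective prior probability
    likelihood -- dict hypothesis -> P(evidence | hypothesis)
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnormalized.values())  # P(E), the normalizer
    return {h: p / evidence for h, p in unnormalized.items()}

# A 50/50 prior over "fair coin" vs "biased coin" (P(heads) = 0.9),
# updated on observing a single head:
prior = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.9}
posterior = bayes_update(prior, likelihood)
```

One observation shifts belief toward the biased coin, and the posterior becomes the next round’s prior – the “continual updating” the paragraph refers to.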

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granting this questionable reductionism – do rational agents really have to be Bayesian? As I have argued elsewhere (e.g. here, here and here), there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of your becoming unemployed in Sweden is 10%. Having moved to another country (where you have neither experience nor data) you have no information on unemployment and a fortiori nothing on which to ground any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities that merely reflect an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
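
The point can be put in code: a probability assignment backed by solid symmetry information (a coin known to be fair) and one conjured from sheer ignorance are numerically identical, so nothing inside the Bayesian machinery can tell them apart. A minimal sketch, with illustrative names and payoffs of my own choosing:

```python
# Two 50/50 assignments: one grounded in evidence, one in pure ignorance
# (the principle of indifference). The formalism treats them identically.

def expected_value(probs, payoffs):
    """Expected payoff of a gamble under a probability assignment."""
    return sum(probs[o] * payoffs[o] for o in probs)

# Grounded in data: a coin tested over a million flips.
p_from_data = {"heads": 0.5, "tails": 0.5}
# Grounded in nothing: equal split over two outcomes we know nothing about.
p_from_ignorance = {"heads": 0.5, "tails": 0.5}

payoffs = {"heads": -1000, "tails": 200}
# Both "beliefs" dictate exactly the same choices:
same = expected_value(p_from_data, payoffs) == expected_value(p_from_ignorance, payoffs)
```

The numbers carry no trace of their evidential pedigree – which is precisely the complaint against collapsing ignorance into a sharp probability measure.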

I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes’s A Treatise on Probability (1921) and “The General Theory of Employment” (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief” – beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.

Stressing the importance of Keynes’s view on uncertainty, John Kay writes in the Financial Times:

Keynes believed that the financial and business environment was characterised by “radical uncertainty”. The only reasonable response to the question “what will interest rates be in 20 years’ time?” is “we simply do not know” …

For Keynes, probability was about believability, not frequency. He denied that our thinking could be described by a probability distribution over all possible future events, a statistical distribution that could be teased out by shrewd questioning – or discovered by presenting a menu of trading opportunities. In the 1920s he became engaged in an intellectual battle on this issue, in which the leading protagonists on one side were Keynes and the Chicago economist Frank Knight, opposed by a Cambridge philosopher, Frank Ramsey, and later by Jimmie Savage, another Chicagoan.

Keynes and Knight lost that debate, and Ramsey and Savage won, and the probabilistic approach has maintained academic primacy ever since. A principal reason was Ramsey’s demonstration that anyone who did not follow his precepts – anyone who did not act on the basis of a subjective assessment of probabilities of future events – would be “Dutch booked” … A Dutch book is a set of choices such that a seemingly attractive selection from it is certain to lose money for the person who makes the selection.

I used to tell students who queried the premise of “rational” behaviour in financial markets – where “rational” means based on Bayesian subjective probabilities – that people had to behave in this way because if they did not, others would devise schemes that made money at their expense. I now believe that observation is correct but does not have the implication I sought. People do not behave in line with this theory, with the result that others in financial markets do devise schemes that make money at their expense.
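
The “Dutch book” Kay describes can be illustrated with a toy sketch (the scenario and numbers are my own, assuming the textbook case of beliefs over exhaustive, mutually exclusive outcomes):

```python
# A minimal Dutch book: an agent whose "probabilities" for mutually
# exclusive, exhaustive outcomes sum to more than 1 will accept a set of
# bets that loses money in every possible state of the world.

def dutch_book_loss(beliefs):
    """Sell the agent, for each outcome, a $1 ticket at their own price.

    beliefs -- dict outcome -> the agent's subjective probability.
    The agent regards each ticket as fairly priced, yet exactly one
    outcome occurs, paying out $1 in total. Returns the agent's
    guaranteed profit (negative = sure loss, the Dutch book).
    """
    total_price = sum(beliefs.values())  # paid up front for all tickets
    payout = 1.0                         # exactly one outcome pays off
    return payout - total_price

# Incoherent beliefs: "60% rain, 60% no rain" – a sure loss of about 0.2
# whatever the weather does.
loss = dutch_book_loss({"rain": 0.6, "no rain": 0.6})
```

With coherent beliefs (summing to 1) the guaranteed loss vanishes, which is exactly the content of the Ramsey–de Finetti coherence theorem the quoted passage alludes to.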

Although this on the whole gives a succinct and correct picture of Keynes’s view on probability, I think it’s necessary to qualify somewhat in what way and to what extent Keynes “lost” the debate with the Bayesians Frank Ramsey and Jim Savage.

In economics it’s an indubitable fact that few mainstream neoclassical economists work within the Keynesian paradigm. All more or less subscribe to some variant of Bayesianism. And some even say that Keynes acknowledged he was wrong when presented with Ramsey’s theory. This is a view that has unfortunately also been promulgated by Robert Skidelsky in his otherwise masterly biography of Keynes. But I think it’s fundamentally wrong. Let me elaborate on this point (the argumentation is more fully presented in my book John Maynard Keynes (SNS, 2007)).

It’s a debated issue in newer research on Keynes whether he, as some researchers maintain, fundamentally changed his view on probability after the critique levelled against his A Treatise on Probability by Frank Ramsey. It has, however, been exceedingly difficult to present evidence that this was the case.

Ramsey’s critique was mainly that the kind of probability relations that Keynes was speaking of in the Treatise actually didn’t exist, and that Ramsey’s own procedure (betting) made it much easier to find out the “degrees of belief” people actually have. I question this from both a descriptive and a normative point of view.

What Keynes is saying in his response to Ramsey is only that Ramsey “is right” in that people’s “degrees of belief” basically emanate from human nature rather than from formal logic.

Patrick Maher, former professor of philosophy at the University of Illinois, even suggests that Ramsey’s critique of Keynes’s probability theory in some regards is invalid:

Keynes’s book was sharply criticized by Ramsey. In a passage that continues to be quoted approvingly, Ramsey wrote:

“But let us now return to a more fundamental criticism of Mr. Keynes’ views, which is the obvious one that there really do not seem to be any such things as the probability relations he describes. He supposes that, at any rate in certain cases, they can be perceived; but speaking for myself I feel confident that this is not true. I do not perceive them, and if I am to be persuaded that they exist it must be by argument; moreover, I shrewdly suspect that others do not perceive them either, because they are able to come to so very little agreement as to which of them relates any two given propositions.” (Ramsey 1926, 161)

I agree with Keynes that inductive probabilities exist and we sometimes know their values. The passage I have just quoted from Ramsey suggests the following argument against the existence of inductive probabilities. (Here P is a premise and C is the conclusion.)

P: People are able to come to very little agreement about inductive probabilities.
C: Inductive probabilities do not exist.

P is vague (what counts as “very little agreement”?) but its truth is still questionable. Ramsey himself acknowledged that “about some particular cases there is agreement” (28) … In any case, whether complicated or not, there is more agreement about inductive probabilities than P suggests.

Ramsey continued:

“If … we take the simplest possible pairs of propositions such as “This is red” and “That is blue” or “This is red” and “That is red,” whose logical relations should surely be easiest to see, no one, I think, pretends to be sure what is the probability relation which connects them.” (162)

I agree that nobody would pretend to be sure of a numeric value for these probabilities, but there are inequalities that most people on reflection would agree with. For example, the probability of “This is red” given “That is red” is greater than the probability of “This is red” given “That is blue.” This illustrates the point that inductive probabilities often lack numeric values. It doesn’t show disagreement; it rather shows agreement, since nobody pretends to know numeric values here and practically everyone will agree on the inequalities.

Ramsey continued:

“Or, perhaps, they may claim to see the relation but they will not be able to say anything about it with certainty, to state if it is more or less than 1/3, or so on. They may, of course, say that it is incomparable with any numerical relation, but a relation about which so little can be truly said will be of little scientific use and it will be hard to convince a sceptic of its existence.” (162)

Although the probabilities that Ramsey is discussing lack numeric values, they are not “incomparable with any numerical relation.” Since there are more than three different colors, the a priori probability of “This is red” must be less than 1/3, and so its probability given “That is blue” must likewise be less than 1/3. In any case, the “scientific use” of something is not relevant to whether it exists. And the question is not whether it is “hard to convince a sceptic of its existence” but whether the sceptic has any good argument to support his position …

Ramsey concluded the paragraph I have been quoting as follows:

“Besides this view is really rather paradoxical; for any believer in induction must admit that between “This is red” as conclusion and “This is round” together with a billion propositions of the form “a is round and red” as evidence, there is a finite probability relation; and it is hard to suppose that as we accumulate instances there is suddenly a point, say after 233 instances, at which the probability relation becomes finite and so comparable with some numerical relations.” (162)

Ramsey is here attacking the view that the probability of “This is red” given “This is round” cannot be compared with any number, but Keynes didn’t say that and it isn’t my view either. The probability of “This is red” given only “This is round” is the same as the a priori probability of “This is red” and hence less than 1/3. Given the additional billion propositions that Ramsey mentions, the probability of “This is red” is high (greater than 1/2, for example) but it still lacks a precise numeric value. Thus the probability is always both comparable with some numbers and lacking a precise numeric value; there is no paradox here.

I have been evaluating Ramsey’s apparent argument from P to C. So far I have been arguing that P is false and responding to Ramsey’s objections to unmeasurable probabilities. Now I want to note that the argument is also invalid. Even if P were true, it could be that inductive probabilities exist in the (few) cases that people generally agree about. It could also be that the disagreement is due to some people misapplying the concept of inductive probability in cases where inductive probabilities do exist. Hence it is possible for P to be true and C false …

I conclude that Ramsey gave no good reason to doubt that inductive probabilities exist.
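
Maher’s claim that probabilities can lack precise numeric values while remaining comparable with some numbers can be sketched with interval-valued probabilities (an illustrative representation of my own; neither Keynes nor Maher commits to intervals):

```python
# A degree of belief represented as an interval rather than a point:
# it has no precise numeric value, yet it can still be compared with
# numbers, as in Maher's red/blue examples.

class IntervalProb:
    """A probability known only to lie within [lo, hi]."""

    def __init__(self, lo, hi):
        assert 0 <= lo <= hi <= 1
        self.lo, self.hi = lo, hi

    def less_than(self, x):
        """True/False when decidable, None when the interval straddles x."""
        if self.hi < x:
            return True
        if self.lo >= x:
            return False
        return None  # genuinely indeterminate, not irrational

# A priori "This is red": below 1/3, since there are more than 3 colors.
p_apriori = IntervalProb(0.0, 1 / 3)
# "This is red" given a billion red-and-round instances: high but vague.
p_red = IntervalProb(0.5, 1.0)
```

The three possible answers – yes, no, undecided – mirror Maher’s point exactly: comparability with some numbers coexists with the absence of any precise value.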

Ramsey’s critique made Keynes emphasize more strongly the individual’s own views as the basis for probability judgements, and put less stress on the claim that those beliefs are rational. But Keynes’s theory doesn’t stand or fall with his view that the basis for our “degrees of belief” is logical. The core of his theory – when and how we are able to measure and compare different probabilities – he didn’t change. Unlike Ramsey, he wasn’t at all sure that probabilities are always one-dimensional, measurable, quantifiable or even comparable entities.

1. July 25, 2015 at 11:45 am

At a time when I suspect most of us are mourning the Greek tragedy, it is hardly surprising this article has attracted little interest.

The challenge being thrown at me as a Christian is how, with liar-led zombies oppressing the innocent by blatant monetary theft and the fuelling of yet more wars within and between nations, I can suppose there is any real alternative to bloody French-style revolution to ensure the guilty are brought to justice. Given the precedent of the Nuremberg trials of German Nazis, why is Tony Blair not being tried at the Hague for the same offences (having led Britain into war under false pretences), instead of being allowed to interfere in Labour party deliberations as to whether it should follow his covert Conservative or Corbyn’s openly Socialist principles? How can I continue to believe in local self-government when local government has been so effectively reduced to an executive arm of zombie-banker-controlled central government’s imposition of fraudulent austerity?

My choices seem to involve Hobbes’s Leviathan (accepting oppression as less costly than war), going for the liars and not the zombies (remembering how post-Nazi Germans regained their senses with the help of Marshall aid and debt write-offs), and being still prepared to believe – i.e. to hope – that truth will set us free (remembering Chinese water drips, “softly softly catchee monkey” and Augustine’s eventual conversion). On this last, Ramsey’s misrepresentation of Keynes and Lars’s swallowing a red-herring version of Bayesianism certainly won’t help.

Here’s an alternative version of what Keynes was on about, from Passmore’s “A Hundred Years of Philosophy”, p. 346.

“Russell had deduced arithmetic from logic; Keynes set out to do the same for probability theory. … Keynes begins from the proposition, not, as Venn had done, from a ‘happening’ or an ‘event’. On Venn’s version of the frequency theory, the statement ‘the next ball from the urn will probably be black’ is an assertion about the percentage of draws from the urn in which a black ball appears; for Keynes, on the other hand, the problem is to ascribe a probability to the proposition ‘the next ball will be black’. Unless probability theory is prepared to surrender all claims to be useful in everyday determinations of probability, Keynes argues, it must extend its interest into areas where the frequency theory, which looks plausible enough in the case of ball-drawing, would be obviously inapplicable.

“To assign a degree of probability to a proposition, on Keynes’s theory, is to relate it to a body of knowledge. Probability is not a property of the proposition-in-itself; it expresses the degree to which it would be rational, on the evidence at our disposal, to regard the proposition as true. This probability is always relative; it is nevertheless ‘objective’ in the sense that a proposition HAS a certain probability relative to the evidence, whether or not we recognise that probability. What precisely, we may ask, is this relation of ‘making probable’ which holds between evidence and conclusion? A unique logical relation, Keynes thought, not reducible to any other; we apprehend it intuitively, as we apprehend implication. [The difference between the urn’s black or red choices being that background knowledge is not necessarily conscious but is in any case diverse rather than random].”

My own reading of Bayes is different again. His probability is not just updated by increasing information but oscillates between two different dimensions: increasing experience of each updating the other. Basically, it involves a complex rather than linear number or ordering.

2. July 27, 2015 at 9:55 am

My paper entitled “Post-Positivist Probability” explains why the Savage–Ramsey argument (de Finetti was also influential) based on money-pump arguments is wrong. Basically, the problem arose due to Logical Positivism, a wrong philosophy widely popular at the time. According to this philosophy, only observables matter, so beliefs about probability must be deduced from actions. We can create artificial betting situations which force a person to specify a probability and stick with it throughout a sequence of decisions, or else he can be made into a money pump (losing money in all cases, regardless of the unknown state of nature). According to positivist precepts, acting according to a certain belief is the same as having that belief, and so this argument proves the existence of probabilistic beliefs. This idea – that acting according to a belief and having that belief are the same thing – is the fundamental positivist fallacy. In the paper, I prove this by showing that everyone KNOWS the exchange rate between Russian Rubles and Sudanese Piasters. This can be proven by constructing a sequence of bets which require you to fix a rate and act according to this rate throughout the sequence. If you deviate, you can be punished by ensuring that you lose money regardless of the true rate. Once the nature of the Bayesian argument is understood, we can create a short-cut to achieve the same result: I hold a gun to your head and ask you to name your belief about the probability of rain tomorrow – I can similarly force you to act according to this belief in any arbitrary sequence of decisions. The necessity of acting according to a belief in a special situation is NOT the same as having that belief.

This paper has been sitting for many years because I have not had the time and energy to write it up for publication. If anyone is interested in doing this work, I would be happy to offer co-author credit.

3. July 28, 2015 at 11:41 pm

The whole rational choice debate misses the point completely. In fact, the apologetic “people are not rational” disclaimer seems like a clever framing by neoliberals to move progressives away from their strongest guns.

Demand is the inner product of preference with means. With the apologetic straw dummy on the preference side, they have shifted attention away from the far more important means part of the debate. Specifically, a large part of consumer choice is “path dependent” on wage history. Rather than how the middle class divvies up the loot coming from efficiency increases, the biggest variable is how much of that loot they actually have access to and how much they believe they will have access to going forward.

I believe that many of the axioms of neoliberal economic thought can be proven correct in a completely egalitarian society. That does not make these axioms useful.

4. July 29, 2015 at 2:35 pm

Following up Lars’s link to Dutch Books revealed the existence of a converse Dutch Book argument, which may help me clarify why the neo-liberal interpretation of Bayes’ theorem is a red herring. The theorem does not merely repeatedly update probability A given further information justifying B; it conversely updates probability B given further information justifying A. Thus unsymmetrical outcomes in throwing an apparently symmetrical die may lead to the discovery of physical asymmetry in the die, and hence to revised expectations of the probabilities for particular outcomes, which may lead to further refinements of the estimates.
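
The die example can be sketched as a simple two-way update: observed throws revise the estimated bias, and the revised bias revises the probabilities we expect for future throws (a Laplace add-one rule stands in here for the full Bayesian machinery; the counts are illustrative):

```python
# Observed outcomes revise our belief about a die's physical bias;
# the revised bias revises the probabilities expected for future throws.
# Laplace's add-one rule (uniform Dirichlet prior) plays the Bayesian role.

def updated_face_probs(counts, faces=6):
    """Posterior predictive P(face) from observed throw counts."""
    total = sum(counts.values())
    return {f: (counts.get(f, 0) + 1) / (total + faces)
            for f in range(1, faces + 1)}

# A run of throws heavy on sixes shifts expectation away from 1/6,
# suggesting physical asymmetry in an apparently symmetrical die:
probs = updated_face_probs({1: 5, 2: 5, 3: 5, 4: 5, 5: 5, 6: 30})
```

Each new throw feeds back into the counts, so expectation and evidence keep updating one another, as the comment describes.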

The form of the Dutch Book argument also drew my attention to a problem which first puzzled me when I encountered the phrase “accumulated probabilities” in J H Newman’s ‘Apologia’ (more fully developed in his ‘Grammar of Assent’ and pre-Darwin ‘Development of Christian Doctrine’; cf. Hume’s objection to a logic of induction). How could one add independent probabilities without the sum becoming likely to exceed one? It eventually clicked (and became obvious studying failure rates in reliability research) that the probabilities of failure over time due to any one component were in parallel rather than in series, and (in like manner to summing the effect of resistors in parallel electrical circuits) were being summed as 1/P = the sum of the individual 1/p’s. However, the lesson learned from our research was that although the failure rate of an equipment could be predicted from the sum of the failures of the components, the prediction failed where there were design faults in the system. Thus there were two types of failure, and inverting the sum of 1/p to get the total P took the form of the parallel resistor formula, R(total) = r(1)r(2)/(r(1) + r(2)), which is the form of Bayes’ theorem for reliability, given both component and “wiring” faults.
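
The reciprocal-summing arithmetic described above can be checked directly (a sketch of the comment’s own formula, not of any standard reliability library):

```python
# Combining rates "in parallel" by summing reciprocals:
#     1/P = sum of the individual 1/p
# For two components this reduces to the parallel-resistor form
#     P = p1*p2 / (p1 + p2).

def combine_parallel(rates):
    """Invert the sum of reciprocals: 1/P = sum(1/p)."""
    return 1.0 / sum(1.0 / r for r in rates)

# Two components with rates 0.2 and 0.3:
p = combine_parallel([0.2, 0.3])  # = 0.2*0.3/(0.2 + 0.3) = 0.12
```

Note that the combined value is always smaller than the smallest input, exactly as with resistors in parallel, which is why the “accumulated” quantity never races past one.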

For economics, the significance of that is not that people are irrational (we all fail at some time) but that major failures are due to design faults in the “wiring” of the system.

Wonkish or not, I think this is worthy of your attention, Lars. Anyway, thanks for the lead.