
Bayesian overload

from Lars Syll

Although Bayesians think otherwise, to me there’s nothing magical about Bayes’ theorem. The important thing in science is to have strong evidence. If your evidence is strong, then applying Bayesian probability calculus is rather unproblematic. Otherwise — garbage in, garbage out. Applying Bayesian probability calculus to subjective beliefs founded on weak evidence is not a recipe for scientific rigour and progress.

Neoclassical economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem.
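
To make that textbook picture concrete, here is a minimal Python sketch of the model just described: an agent holds a subjective prior over states of the world, updates it by Bayes’ theorem after observing a signal, and then picks the action with the highest expected utility. The states, signal, payoffs and probabilities are all invented for illustration, not taken from any particular model.

```python
# Minimal sketch of Bayesian updating plus expected-utility maximisation.
# All numbers below are illustrative assumptions.

prior = {"boom": 0.5, "bust": 0.5}     # subjective prior over states of the world
likelihood = {"boom": 0.8, "bust": 0.3}  # P(signal = "good news" | state)

# Bayes' theorem: posterior(state) is proportional to prior(state) * likelihood(state)
unnormalised = {s: prior[s] * likelihood[s] for s in prior}
evidence = sum(unnormalised.values())
posterior = {s: p / evidence for s, p in unnormalised.items()}

# Expected-utility maximisation over two stylised actions.
utility = {
    "invest": {"boom": 10.0, "bust": -5.0},
    "hold":   {"boom": 1.0,  "bust": 1.0},
}
expected = {a: sum(posterior[s] * u for s, u in payoffs.items())
            for a, payoffs in utility.items()}
best_action = max(expected, key=expected.get)

print(posterior)    # {'boom': 0.727..., 'bust': 0.272...}
print(best_action)  # 'invest' under these illustrative numbers
```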

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing repeatedly over the years, there is no strong warrant for believing so.  

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Bayesianism cannot distinguish between symmetry-based probabilities grounded in information and symmetry-based probabilities arising from an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution, it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
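
The point can be shown in two lines of Python (a hypothetical coin example, not anything from the literature): the agent who knows a coin is physically symmetric and the agent who knows nothing whatsoever about it end up holding literally the same probability distribution, so the formalism has nowhere to record the weight of the evidence behind it.

```python
# Two agents, same distribution, very different epistemic situations.
p_known_fair_coin = {"heads": 0.5, "tails": 0.5}  # grounded in physical symmetry
p_total_ignorance = {"heads": 0.5, "tails": 0.5}  # "principle of indifference" only

print(p_known_fair_coin == p_total_ignorance)     # True: the weight of evidence is lost
```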

So why, then, are so many scientists nowadays so fond of Bayesianism? I guess one strong reason is that Bayes’ theorem gives them a seemingly fast, simple and rigorous answer to their problems and hypotheses. But, as Popper showed back in the 1950s, the Bayesian probability (likelihood) version of confirmation theory is ‘absurd on both formal and intuitive grounds: it leads to self-contradiction.’

  1. June 7, 2016 at 12:19 pm

    The last paragraph is slightly misleading. If you read Popper’s full sentence, you will see that he actually said “Thus we have proved that the identification of degree of corroboration or confirmation with probability (and even with likelihood) is absurd on both formal and intuitive grounds: it leads to self-contradiction”. This is because the Popperian approach is against “accepting” a hypothesis, but this applies to both paradigms!!!! Moreover, the Bayesian paradigm is not against falsificationism.

    I recommend the following references for a clarification:

    http://projecteuclid.org/download/pdfview_1/euclid.ss/1089808272

    http://andrewgelman.com/2004/10/14/bayes_and_poppe/

  2. June 10, 2016 at 5:14 pm

    I have been forming my own views on this issue, not just latching on to an opinion. In light of the historical contexts of Pascal’s triangle, Bayes’ theorem, Hume’s fork, Russell’s typed logic, Keynes’ treatise on probability, Shannon’s theory of information capacity and dynamic error correction, fundamental computational methods (cyclic algorithms) using analog and digital logics, the actual practice of statistical quality assurance, research into the reliability of reliability theory, and the then-contemporary philosophical interpretations and critiques (e.g. Popper’s) of e.g. Logical Positivism, my conclusion is that the current interpretation of Probability as a one-dimensional number between 0 and 1, and the so-called “Bayesian” way of refining estimates of it with a larger sample of the same type, is simply wrong.

    What Bayes had to play with c.1740 was two different ways of estimating the probability of a die throwing up a given result: the symmetry of the die and sampling the throwing of it, with hypotheses based on either being corrected by conclusions based on the other. The correct form of number (the one providing sufficient information capacity to convey all the relevant information: “differences which make a difference”) is a two-dimensional complex number, where the different types of estimate can be distinguished as the real and imaginary dimensions of a complex number. The customary error can be likened to lay people thinking of electrical currents or voltages as “power”, when actually variations in the voltage or current are physically at right angles and power is defined as their dynamic product: volts x amps, with an “r.m.s.” [root mean square] version of that allowing for phase differences (time delays) in alternating currents/measurements.

    The updating of information with more of the same kind is not the Bayesian method as such but the special case where there are no differences other than quantity. In quality assurance terms this corresponds to the confidence level indicated by the sample size, not the probability of failure to be within tolerance. One could thus describe Probability as the product of confidence in a measure and confidence in the measuring method.
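
For readers curious how the two kinds of estimate mentioned in the comment above are usually combined, here is a short Python sketch of the standard conjugate Beta-Binomial update, in which a symmetry-based prior for one face of a die is corrected by sampled throws. The prior strength and the sample counts are invented numbers; the sketch illustrates the conventional calculus only and does not attempt the commenter’s complex-number proposal.

```python
# Symmetry of the die suggests P(six) = 1/6; encode that as a Beta prior
# whose mean is 1/6. Treating it as worth 6 pseudo-throws is an assumption.
alpha, beta = 1.0, 5.0             # prior mean = alpha / (alpha + beta) = 1/6

# Suppose we then sample 60 real throws and observe 14 sixes (made-up data).
throws, sixes = 60, 14

# Conjugate update: the posterior is again a Beta distribution, so the
# symmetry-based estimate and the sampling-based estimate correct each other.
alpha_post = alpha + sixes
beta_post = beta + (throws - sixes)
posterior_mean = alpha_post / (alpha_post + beta_post)

print(round(posterior_mean, 3))    # ~0.227: pulled from 1/6 towards 14/60
```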
