
The limits of probabilistic reasoning

from Lars Syll

Probabilistic reasoning in science — especially Bayesianism — reduces questions of rationality to questions of the internal consistency (coherence) of beliefs. But even granted this questionable reductionism, it is not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence that we run into huge problems if we let probabilistic reasoning become the dominant method for doing research in the social sciences on problems that involve risk and uncertainty.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of your becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution, it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
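The point can be sketched numerically. In the toy calculation below (all numbers are assumed for illustration, using a conjugate Beta-Binomial set-up), two agents both report P(unemployed) = 0.10 – one on the strength of a large data set, the other from sheer symmetry – and the single number cannot tell them apart; only when new evidence arrives does the difference show up:

```python
# Illustrative Beta-Binomial sketch (all numbers assumed, not from the post).
# Agent A: Beta(100, 900) prior -- mean 0.10, grounded in lots of data.
# Agent B: Beta(0.1, 0.9) prior -- mean 0.10, grounded in almost nothing.

def mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

informed = (100.0, 900.0)
ignorant = (0.1, 0.9)

# As single-number probabilities the two beliefs are indistinguishable:
assert mean(*informed) == mean(*ignorant) == 0.10

# Both agents now observe 5 unemployment spells in 10 new cases.
# Conjugate update: (a, b) -> (a + successes, b + failures).
post_informed = (informed[0] + 5, informed[1] + 5)
post_ignorant = (ignorant[0] + 5, ignorant[1] + 5)

print(round(mean(*post_informed), 3))  # 0.104 -- barely moves
print(round(mean(*post_ignorant), 3))  # 0.464 -- swings toward the data
```

The single probability measure conceals exactly the thing that matters here: how much evidence stands behind it.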

I think this critique of Bayesianism is in accordance with the views John Maynard Keynes expressed in A Treatise on Probability (1921) and ‘The General Theory of Employment’ (1937). According to Keynes, we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thought that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief’ – beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by probabilistically reasoning Bayesian economists.

We always have to remember that economics and statistics are two quite different things, and as long as economists cannot identify their statistical theories with real-world phenomena there is no real warrant for taking their statistical inferences seriously.

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’ To be able to talk about probabilities at all, you have to specify a model. In statistics, any process in which you observe or measure is called an experiment (rolling a die), and the results obtained are called the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, then, strictly speaking, there is no event at all.
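As a minimal sketch of what a chance set-up is, the simulation below makes the model explicit – here assumed to be a fair six-sided die – and only relative to that model does the probability of the event ‘3 or 5’ mean anything:

```python
import random

# Sketch: an explicit chance set-up (assumed here: a fair six-sided die).
# The experiment is a roll; the event of interest is 'a 3 or a 5'.
random.seed(42)

def experiment():
    return random.randint(1, 6)  # the model generating the outcomes

rolls = [experiment() for _ in range(100_000)]
freq = sum(1 for r in rolls if r in (3, 5)) / len(rolls)

# Relative to this model -- and only relative to it -- the relative
# frequency converges on the model probability 2/6:
print(freq)  # close to 0.333
```

Strip away the `experiment` function – the specification of the generating model – and the number `freq` is just a number; it is no longer the probability of anything.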

Probability is a relational element. It must always come with a specification of the model from which it is calculated. And to be of any empirical scientific value, it then has to be shown to coincide with (or at least converge to) real-world data-generating processes or structures – something seldom or never done in economics.

And this is the basic problem!

If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous ‘nomological machines’ for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice in science. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions! Failing that, you simply conflate statistical and economic inferences.

The present ‘machine learning’ and ‘big data’ hype shows that many social scientists — falsely — think that they can get away with analysing real-world phenomena without any (commitment to) theory. But data never speak for themselves. Without a prior statistical set-up, there actually are no data at all to process. And a machine-learning algorithm will only produce what you are looking for. Theory matters.

Causality in the social sciences — and economics — can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.
Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another is aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

And even worse — some economists using statistical methods think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like ‘faithfulness’ or ‘stability’ is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived statistical theorems and the real world, then we haven’t really obtained the causation we are looking for.

  1. February 14, 2018 at 12:31 am

    Interesting:

    It’s not really related to how I use Bayes’ theorem, but then maybe I am unorthodox, or maybe I simply don’t understand it and made up my own version of it, which is quite likely. I do that.

    I tend to focus on the Bayesian idea of how much we should change our minds about rough order-of-magnitude probabilities in the light of new information, rather than obsessing over how precise the probabilities are before and after new information. My priors are rough and ready approximations. And for me a good Bayesian approach is one where we are open to new information and adjust our views appropriately, rather than obsessing over roulette-wheel-like prior and posterior probabilities. Knightian uncertainty insights can still be used to assign rough probabilities that then adjust in the light of new data. Or indeed we can say we don’t know. But sometimes we have to act, and saying “we are from Barcelona and know nothing” is not possible. In a battle you have to shoot or not shoot sometimes…
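This kind of rough updating can be sketched with Bayes’ rule in odds form (the numbers below are made up purely for illustration):

```python
# Bayes' rule in odds form, with rough made-up numbers (not claims
# about any actual forecast).

def update(prior, likelihood_ratio):
    """Posterior probability given P(E|H) / P(E|not-H) = likelihood_ratio."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A rough 35% prior, then evidence judged about twice as likely if the
# hypothesis is true as if it is false:
p = update(0.35, 2.0)
print(round(p, 2))  # 0.52 -- the direction and size of the shift matter
                    # more than the decimals
```

Neither input pretends to be precise; the machinery just disciplines how far a rough prior should move.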

    Philip Tetlock’s work on Expert Political Judgment suggests we are not good at forecasting, experts especially; mainly because most of us use only one lens (we are Hedgehogs rather than Foxes who know many things, in Isaiah Berlin’s terms) and because we don’t admit error and therefore cannot learn from it. Nor, I would add, do we adjust our forecasts very appropriately in the light of new data. We are over-committed to our prior forecasts.

    So to me a healthy use of Bayes’ theorem in economics would be around, say, Brexit – a very Knightian situation in its lumpy political and economic aspects. At the time of the referendum I think I assigned about a 35% probability to a Wrecking Ball Scenario, aka Hard Brexit with no plan. (I had three other possible scenarios, as per Peter Schwartz’s scenario-based futuring.) As time has passed I have raised that to about 50% in the light of government incompetence and the absence of any apparent plan… if they suddenly next week came out with a brilliant and credible plan, my probability of disaster would plummet… Bayesian adjustment…

    Now I am not using these percentages in precise terms, like odds on the roulette wheel – that if we ran Hard Brexit 100 times in parallel universes, 50 would turn out disastrous (and yes, I define disastrous as, say, 10% below trend-line growth in GDP by 2029) – but following Tetlock’s idea that we should test our forecasting by assigning rough probabilities, so that over many forecasts of many different things we can refine our judgment using fine-grained percentages of confidence in our forecasts, rather than the binary Brexit = success vs Brexit = disaster… Such a pity my current Brexit example is 50/50, so OK, make it 55/45 or whatever. :)
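Tetlock’s scoring idea can be sketched with the Brier score (the forecasts below are invented for illustration): over many forecasts, graded probabilities beat confident binary calls when they track what actually happens:

```python
# Brier scoring of probability forecasts (all forecasts invented for
# illustration). Lower is better; a constant 50/50 forecast scores 0.25.

def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

happened = [1, 0, 0, 1]           # what actually occurred
hedgehog = [1.0, 1.0, 0.0, 1.0]   # confident binary calls, one badly wrong
fox = [0.8, 0.4, 0.3, 0.6]        # rough graded probabilities

print(round(brier(hedgehog, happened), 4))  # 0.25
print(round(brier(fox, happened), 4))       # 0.1125 -- better calibrated
```

The point is not the decimals but the discipline: graded forecasts can be scored, compared and improved; binary verdicts mostly cannot.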

    Finally, I like to use what I call Reverse Bayesianism: “What information would make you change your mind?” (rather than how much should you change your mind in the light of new information). This is a good test of what I call Positional Fundamentalists: people whose views/forecasts are so rigid that no information, even in theory if it existed, would make them change their minds. The supporters of a certain US President often fail this test when I apply it. Brexiteers often come close…

    Anyway, as a loose Bayesian, I am open to changing my mind in the light of any argument or evidence that you may offer to show I am full of it. I am often mistaken…how about you? :)

    • Charlie
      February 14, 2018 at 3:12 am

      I have certainly had similar experiences. I started using Stein-rule estimation in forest inventories of the South, moving more and more toward a mathematical and empirical use of the Bayes risk. It made estimates much more reasonable and verifiable in the face of information of mixed temporal and spatial variability. I probably ended my career (forest statistician) as a ‘loose Bayesian’, willing to change my mind as new information was added to our databases.
      Thanks for your reply to this issue, as it has some real problems in my opinion.

      • February 14, 2018 at 3:58 am

        Thanks, I think loose Bayesianism has value in the real world – the subject matter of this blog site… :)

    • Jan Milch
      February 16, 2018 at 3:25 pm

      As I see it, Lars’ main point is not to avoid the use of statistical methods or mathematics per se, but to see the limitations of using those methods in a social science like economics.
      Remember that the first one to use Bayesian theory more properly was Laplace, in his studies of mechanics and planetary astronomy.
      I think that Laplace would be rather stunned if he lived today and saw the heavy misuse of Bayesian methods in all sorts of ‘sciences’, from political science, ethnology and sociology to, foremost, macroeconomics.

      The main point for every scientist in choosing methods, as I see it, should be to first ask oneself: does my method apply? If not, I should of course choose another one.
      As C. Wright Mills once stated: as a social scientist you in many cases have to be your own methodologist, simply because in most of the areas that are of greatest interest there are no developed methodologies that apply.

      For a critique of Bayesianism, see Andrew Gelman, ‘Objections to Bayesian statistics’, Bayesian Analysis 2008, Number 3, pp. 445–450:

      “Bayesian inference is one of the more controversial approaches to statistics. The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience. The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference. This article presents a series of objections to Bayesian inference, written in the voice of a hypothetical anti-Bayesian statistician. The article is intended to elicit elaborations and extensions of these and other arguments from non-Bayesians and responses from Bayesians who might have different perspectives on these issues.

      The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings.

      Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

      The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.
      Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statistics to be ignorant of experimental design and analysis of variance, instead becoming experts on the convergence of the Gibbs sampler.

      In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap.

      I find it clearest to present the objections to Bayesian statistics in the voice of a hypothetical anti-Bayesian statistician. I am imagining someone with experience in theoretical and applied statistics, who understands Bayes’ theorem but might not be aware of recent developments in the field. In presenting such a persona, I am not trying to mock or parody anyone but rather to present a strong, firm statement of attitudes that deserve serious consideration.
      Here follows the list of objections from a hypothetical or paradigmatic non-Bayesian:

      Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence.

      To put it another way, why should I believe your subjective prior? If I really believed it, then I could just feed you some data and ask you for your subjective posterior. That would save me a lot of effort!

      As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16. I’d rather start with tried and true methods, and then generalize using something I can trust, such as statistical theory and minimax principles, that don’t depend on your subjective beliefs.

      Especially when the priors I see in practice are typically just convenient conjugate forms. What a coincidence that, of all the infinite variety of priors that could be chosen, it always seems to be the normal, gamma, beta, etc., that turn out to be the right choices?

      To restate these concerns mathematically: I like unbiased estimates and I like confidence intervals that really have their advertised confidence coverage. I know that these aren’t always going to be possible, but I think the right way forward is to get as close to these goals as possible and to develop robust methods that work with minimal assumptions.

      The Bayesian approach – to give up even trying to approximate unbiasedness and to instead rely on stronger and stronger assumptions – that seems like the wrong way to go.
      In the old days, Bayesian methods at least had the virtue of being mathematically clean.

      Nowadays, they all seem to be computed using Markov chain Monte Carlo, which means that, not only can you not realistically evaluate the statistical properties of the method, you can’t even be sure it’s converged, just adding one more item to the list of unverifiable (and unverified) assumptions. Computations for classical methods aren’t easy – running from nested bootstraps at one extreme to asymptotic theory on the other – but there is a clear goal of designing procedures with proper coverage, in contrast to Bayesian simulation, which seems stuck in an infinite regress of inferential uncertainty.
      People tend to believe results that support their preconceptions and disbelieve results that surprise them. Bayesian methods encourage this undisciplined mode of thinking.

      I’m sure that many individual Bayesian statisticians are acting in good faith, but they’re providing encouragement to sloppy and unethical scientists everywhere.

      And, probably worse, Bayesian techniques motivate even the best-intentioned researchers to get stuck in the rut of prior beliefs.
      As the applied statistician Andrew Ehrenberg wrote in 1986, Bayesianism assumes:
      (a) Either a weak or uniform prior, in which case why bother?, (b) Or a strong prior, in which case why collect new data?, (c) Or, more realistically, something in between, in which case Bayesianism always seems to duck the issue.

      Nowadays people use a lot of empirical Bayes methods. I applaud the Bayesians’ newfound commitment to empiricism but am skeptical of this particular approach, which always seems to rely on an assumption of exchangeability.

      In political science, people are embracing Bayesian statistics as the latest methodological fad. Well, let me tell you something.
      The 50 states aren’t exchangeable. I’ve lived in a few of them and visited nearly all the others, and calling them exchangeable is just silly.

      Calling it a hierarchical or a multilevel model doesn’t change things – it’s an additional level of modeling that I’d rather not do. Call me old-fashioned, but I’d rather let the data speak without applying a probability distribution to something like the 50 states, which are neither random nor a sample.

      So, don’t these empirical and hierarchical Bayes methods use the data twice? If you’re going to be Bayesian, then be Bayesian: it seems like a cop-out and contradictory to the Bayesian philosophy to estimate the prior from the data. If you want to do multilevel modeling, I prefer a method such as generalized estimating equations that makes minimal assumptions.
      And don’t even get me started on what Bayesians say about data collection. The mathematics of Bayesian decision theory lead inexorably to the idea that random sampling and random treatment allocation are inefficient, that the best designs are deterministic.

      I have no quarrel with the mathematics here – the mistake lies deeper, in the philosophical foundations, the idea that the goal of statistics is to make an optimal decision.

      A Bayes estimator is a statistical estimator that minimizes the average risk, but when we do statistics, we’re not trying to ‘minimize the average risk,’ we’re trying to do estimation and hypothesis testing. If the Bayesian philosophy of axiomatic reasoning implies that we shouldn’t be doing random sampling, then that’s a strike against the theory right there.

      Bayesians also believe in the irrelevance of stopping times – that, if you stop an experiment based on the data, it doesn’t change your inference. Unfortunately for the Bayesian theory, the p-value does change when you alter the stopping rule, and no amount of philosophical reasoning will get you around that point.

      I can’t keep track of what all those Bayesians are doing nowadays. Unfortunately, all sorts of people are being seduced by the promises of automatic inference through the magic of MCMC, but I wish they would all just stop already and get back to doing statistics the way it should be done, back in the old days when a p-value stood for something, when a confidence interval meant what it said, and statistical bias was something to eliminate, not something to embrace.”
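The stopping-rule point in the quoted passage is easy to check by simulation (this sketch is not from Gelman’s article; all the numbers are illustrative): if you keep peeking at the data and stop as soon as the z-statistic looks ‘significant’, the nominal 5% false-positive rate is badly inflated.

```python
import math
import random

# Simulation sketch (illustrative only): data generated under the null
# hypothesis, with a test run every 10 observations and the experiment
# stopped as soon as |z| > 1.96.
random.seed(0)

def peeking_trial(max_n=200, check_every=10):
    total = 0.0
    for n in range(1, max_n + 1):
        total += random.gauss(0.0, 1.0)  # null is true: mean really is 0
        if n % check_every == 0 and abs(total / math.sqrt(n)) > 1.96:
            return True  # 'significant' -- stop early and reject
    return False

trials = 2000
rate = sum(peeking_trial() for _ in range(trials)) / trials
print(rate)  # well above the nominal 0.05
```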

  2. February 14, 2018 at 10:54 am

    In my view the prevailing view of Bayesianism is misguided. The extra information needs to come from a different source, as in the number of sides of a die vs. the number of times each side comes up if one throws it. The present Humean or Logical Positivist interpretation applies only to the probability of success or failure of what you have got. Equally important is whether you are getting or looking at the right thing: we can go elsewhere in the world than along the existing train track (or better, rat race). Is the die as symmetrical as it appears, or to what extent is it biased? And yes, accuracy isn’t the issue. The point is deciding what to do: choosing between DIFFERENT options.

    • February 14, 2018 at 4:16 pm

      Well said: seek different sources and look at the data with different lenses, but use Bayesian insights to ensure you are careful in adjusting your views in the light of new data, and new lenses if need be. Someone once pointed out to me that you could only be a good Bayesian if you had some stable core identity you were not afraid to challenge vigorously, and I think that applies to having a stable core of tested ideas that are not brittle but adapt in the light of evidence.

  3. February 14, 2018 at 1:37 pm

    Sometimes we don’t take any action because we ‘simply do not know’. We should learn to take decisions, or explore what lies ahead, with the available information. This will allow us to refine our thoughts and make better future decisions. How do we know anything if we don’t try to explore the unseen landscape? We have to take risks.

  4. Rob Reno
    February 16, 2018 at 7:18 pm

    A very interesting thread indeed, from the initial post to the comments – enjoyable and informative.

