Author Archive

Putting theories to the test

October 19, 2017 2 comments

from Lars Syll

Mainstream neoclassical economists often maintain — usually referring to the methodological instrumentalism of Milton Friedman — that it doesn’t matter if the assumptions of the theories and models they use are realistic or not. What matters is if the predictions are right or not. But, if so, then the only conclusion we can make is — throw away the garbage! Because, oh dear, oh dear, how wrong they have been!

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series economists use to make their predictions and forecasts do not conform with the assumptions made in the applied statistical and econometric models. A fortiori, much less is predictable than is standardly — and uncritically — assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies makes the models, strictly speaking, inapplicable. The future is inherently unknowable — and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better, then, to accept that as a rule “we simply do not know.”

It is of paramount importance that economists be frank with themselves and their audience. The limitations of current practice … must be recognized openly. If we are engaged in ex post facto explanation, we should be quite ready to say so, rather than pretend that our ‘theories’ can pass the same tests that theories of well-developed sciences can pass … Recognition of the character of current methodological practice will go a long way toward substituting fact for myth and thus open the way to new horizons of research.

Axioms — things to be suspicious of

October 18, 2017 6 comments

from Lars Syll

To me, the crucial difference between modelling in physics and in economics lies in how the fields treat the relative role of concepts, equations and empirical data …

An economist once told me, to my bewilderment: “These concepts are so strong that they supersede any empirical observation” …

Physicists, on the other hand, have learned to be suspicious of axioms. If empirical observation is incompatible with a model, the model must be trashed or amended, even if it is conceptually beautiful or mathematically convenient.

Jean-Philippe Bouchaud

Thaler and behavioural economics — some critical perspectives

October 15, 2017 Leave a comment

from Lars Syll

Although discounting empirical evidence cannot be the right way to solve economic issues, there are still, in my opinion, a couple of weighty reasons why we perhaps shouldn’t be too excited about the so-called ‘empirical revolution’ in economics.

Behavioural experiments and laboratory research face the same basic problem as theoretical models — they are built on often rather artificial conditions and struggle with the trade-off between internal and external validity. The more artificial the conditions, the greater the internal validity — but also the lower the external validity. The more we rig experiments to avoid ‘confounding factors’, the less the conditions resemble the real ‘target system.’ The nodal issue is how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. One may have justified doubts about the generalizability of this research strategy, since the probability is high that causal mechanisms differ across contexts and that the lack of homogeneity and invariance doesn’t give us warranted export licenses to the ‘real’ societies or economies.  Read more…

Keynes — the first behavioural economist

October 13, 2017 Leave a comment

from Lars Syll

To-day, in many parts of the world, it is the serious embarrassment of the banks which is the cause of our gravest concern …

[The banks] stand between the real borrower and the real lender. They have given their guarantee to the real lender; and this guarantee is only good if the money value of the asset belonging to the real borrower is worth the money which has been advanced on it.

It is for this reason that a decline in money values so severe as that which we are now experiencing threatens the solidity of the whole financial structure. Banks and bankers are by nature blind. They have not seen what was coming. Some of them … employ so-called “economists” who tell us even to-day that our troubles are due to the fact that the prices of some commodities and some services have not yet fallen enough, regardless of what should be the obvious fact that their cure, if it could be realised, would be a menace to the solvency of their institution. A “sound” banker, alas! is not one who foresees danger and avoids it, but one who, when he is ruined, is ruined in a conventional and orthodox way along with his fellows, so that no one can really blame him.   Read more…

Nobel Committee making a colossal fool of itself

October 10, 2017 3 comments

from Lars Syll

In its ‘scientific background’ description on the 2017 ‘Nobel prize’ in economics, The Royal Swedish Academy of Sciences writes (emphasis added):

In order to build useful models, economists make simplifying assumptions. A common and fruitful simplification is to assume that agents are perfectly rational. This simplification has enabled economists to build powerful models to analyze a multitude of different economic issues and markets.



What absolute nonsense! Writing this while at the same time giving the prize to an economist who has devoted his whole career to showing how utterly wrong this modelling strategy is, is truly an amazing case of having bad luck when thinking …

Richard Thaler gets the 2017 ‘Nobel prize’

October 10, 2017 13 comments

from Lars Syll

Today The Royal Swedish Academy of Sciences announced that it has decided to award The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2017 to Richard Thaler.

A good choice for once!

To yours truly Thaler’s main contribution has been to show that one of the main building blocks of modern mainstream economics — expected utility theory — is fundamentally wrong.

If a friend of yours offered you a gamble on the toss of a coin where you could lose €100 or win €200, would you accept it? Probably not. But if you were offered one hundred such bets, you would probably be willing to accept, since most of us see that the aggregated gamble of one hundred 50–50 lose €100/gain €200 bets has an expected return of €5000 (and our probabilistic calculations show that there is only a 0.04% risk of losing any money).

Unfortunately — at least if you want to adhere to standard neoclassical expected utility maximization theory — you are then considered irrational! A mainstream neoclassical utility maximizer who rejects the single gamble should also reject the aggregate offer.
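The arithmetic behind the aggregated gamble can be checked directly. A minimal sketch, using only the figures given above (100 independent 50–50 bets, lose €100 or gain €200):

```python
from math import comb

n, gain, loss = 100, 200, 100  # 100 fair coin-toss bets: win EUR 200 or lose EUR 100

# Expected value of the aggregate: n * (0.5 * gain - 0.5 * loss)
expected = n * (0.5 * gain - 0.5 * loss)

# The total is negative only if the number of wins w satisfies
# gain * w - loss * (n - w) < 0, i.e. w < n / 3, so w <= 33 here.
p_loss = sum(comb(n, w) for w in range(34)) / 2 ** n

print(expected)  # 5000.0
print(f"{p_loss:.2%} risk of losing any money")
```

This reproduces the €5000 expected return and the roughly 0.04% loss probability quoted in the text.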

In Matthew Rabin’s and Richard Thaler’s modern classic Risk Aversion it is forcefully and convincingly shown that expected utility theory does not explain actual behaviour and choices.  Read more…

Why game theory will be nothing but a footnote in the history of social science

October 6, 2017 15 comments

from Lars Syll

Nash equilibrium has, since it was introduced back in the 1950s, been the standard solution concept used by game theorists. The justification for its use has mainly been built on dubious and contentious assumptions such as ‘common knowledge’ and individuals identified exclusively as instrumentally rational. And as if that were not enough, to ‘save’ the Holy Equilibrium Grail one has had to make the further, wildly unrealistic assumption that those individuals have ‘consistently aligned beliefs’ — effectively treating different individuals as incarnations of the microfoundationalist ‘representative agent.’

In the beginning — in the 1950s and 1960s — hopes were high that game theory would enhance our possibilities of understanding/explaining the behaviour of interacting actors in non-parametric settings. And this is where we ended up! A sad story, indeed, showing the limits of methodological individualism and instrumental rationality.

Why not give up on the Nash concept altogether? Why not give up the vain dream of trying to understand social interaction by reducing it to something that can be analyzed within a grotesquely unreal model of instrumentally interacting identical robot imitations?  Read more…

Rational expectations — the triumph of ideology over science

October 5, 2017 5 comments

from Lars Syll

Research shows not only that individuals sometimes act differently than standard economic theories predict, but that they do so regularly, systematically, and in ways that can be understood and interpreted through alternative hypotheses, competing with those utilised by orthodox economists.

To most market participants — and, indeed, ordinary observers — this does not seem like big news … In fact, this irrationality is no news to the economics profession either. John Maynard Keynes long ago described the stock market as based not on rational individuals struggling to uncover market fundamentals, but as a beauty contest in which the winner is the one who guesses best what the judges will say …

Adam Smith’s invisible hand — the idea that free markets lead to efficiency as if guided by unseen forces — is invisible, at least in part, because it is not there …

For more than 20 years, economists were enthralled by so-called “rational expectations” models which assumed that all participants have the same (if not perfect) information and act perfectly rationally, that markets are perfectly efficient, that unemployment never exists (except when caused by greedy unions or government minimum wages), and where there is never any credit rationing.

That such models prevailed, especially in America’s graduate schools, despite evidence to the contrary, bears testimony to a triumph of ideology over science. Unfortunately, students of these graduate programmes now act as policymakers in many countries, and are trying to implement programmes based on the ideas that have come to be called market fundamentalism … Good science recognises its limitations, but the prophets of rational expectations have usually shown no such modesty.

Joseph Stiglitz

Read more…

Hicks on neoclassical ‘uncertainty laundering’

October 3, 2017 3 comments

from Lars Syll

To understand real world ‘non-routine’ decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, Causality in Economics, 1979:121
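Hicks’s worry can be illustrated with a toy simulation: when the probabilities that ruled the past shift, statistics estimated from history systematically mislead. The two regimes and all numbers below are purely illustrative assumptions:

```python
import random

random.seed(42)

# A 'historical' sample drawn from one regime, and a 'future' drawn
# from another: the structural parameters have shifted.
past = [random.gauss(1.0, 0.5) for _ in range(500)]    # old regime: mean +1
future = [random.gauss(-1.0, 0.5) for _ in range(500)]  # new regime: mean -1

past_mean = sum(past) / len(past)
future_mean = sum(future) / len(future)

# A forecaster extrapolating the historical mean is systematically wrong:
# the sign of the process has flipped, and no amount of past data warned of it.
print(past_mean, future_mean)
```

The point is Hicks’s, not the code’s: if the series is non-stationary in this way, the pre-break observations tell us nothing about the post-break distribution, however many of them we have.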

Time to abandon statistical significance

October 1, 2017 3 comments

from Lars Syll

We recommend dropping the NHST [null hypothesis significance testing] paradigm — and the p-value thresholds associated with it — as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, rather than allowing statistical significance as determined by p < 0.05 (or some other statistical threshold) to serve as a lexicographic decision rule in scientific publication and statistical decision making more broadly as per the status quo, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with the neglected factors [such factors as prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain] as just one among many pieces of evidence.

We make this recommendation for three broad reasons. First, in the biomedical and social sciences, the sharp point null hypothesis of zero effect and zero systematic error used in the overwhelming majority of applications is generally not of interest because it is generally implausible. Second, the standard use of NHST — to take the rejection of this straw man sharp point null hypothesis as positive or even definitive evidence in favor of some preferred alternative hypothesis — is a logical fallacy that routinely results in erroneous scientific reasoning even by experienced scientists and statisticians. Third, p-value and other statistical thresholds encourage researchers to study and report single comparisons rather than focusing on the totality of their data and results.

Andrew Gelman et al. 

Read more…
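Part of the problem with a fixed threshold is mechanical: under a true null hypothesis, p-values are uniformly distributed, so a p < 0.05 cutoff alone will certify about 5% of pure-noise studies as ‘significant’, regardless of the neglected factors Gelman et al. list. A quick sketch (illustrative only, not taken from the paper):

```python
import math
import random

random.seed(0)

def two_sided_p(n=30):
    """p-value of a z-test on a sample drawn under the null (mean 0, sd 1)."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)         # sample mean divided by its standard error
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

pvals = [two_sided_p() for _ in range(10_000)]
false_discoveries = sum(p < 0.05 for p in pvals) / len(pvals)
print(false_discoveries)  # close to 0.05: the threshold alone manufactures 'findings'
```

Treating the p-value continuously, as the quoted passage recommends, at least removes the pretence that crossing 0.05 marks a qualitative change in the evidence.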

Neoliberal ‘ethics’

September 28, 2017 1 comment

from Lars Syll

As we all know, neoliberalism is nothing but a self-serving con endorsing pernicious moral cynicism. But it’s still sickening to read its gobsmacking trash, maintaining that unregulated capitalism is a ‘superlatively moral system’:

The rich man may feast on caviar and champagne, while the poor woman starves at his gate. And she may not even take the crumbs from his table, if that would deprive him of his pleasure in feeding them to his birds.
David Gauthier, Morals by Agreement

Now, compare that unashamed neoliberal apologetics with what two truly great economists and liberals — John Maynard Keynes and Robert Solow — have to say:

The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist to-day.

John Maynard Keynes, General Theory (1936)

Read more…

Seven sins of economics

September 26, 2017 2 comments

from Lars Syll

There has always been some level of scepticism about the ability of economists to offer meaningful predictions and prognoses about economic and social phenomena. That scepticism has heightened in the wake of the global financial crisis, leading to what is arguably the biggest credibility crisis the discipline has faced in the modern era.

Some of the criticisms against economists are misdirected. But the major thrust of the criticisms does have bite.

There are seven key failings, or the ‘seven sins’, as I am going to call them, that have led economists to their current predicament. These include sins of commission as well as sins of omission.

Sin 1: Alice in Wonderland assumptions

The problem with economists is not that they make assumptions. After all, any theory or model will have to rely on simplifying assumptions … But when critical assumptions are made just to circumvent well-identified complexities in the quest to build elegant theories, such theories will simply end up being elegant fantasies.

Sin 2: Abuse of modelling

What compounds the sin of wild assumptions is the sin of careless modelling, and then selling that model as if it were a true depiction of an economy or society …

Sin 3: Intellectual capture

Several post-crisis assessments of the economy and of economics have pointed to intellectual capture as a key reason the profession, as a whole, failed to sound alarm bells about problems in the global economy, and failed to highlight flaws in the modern economic architecture …  Read more…

Missing the point — the quantitative ambitions of DSGE models

September 25, 2017 Leave a comment

from Lars Syll

A typical modern approach to writing a paper in DSGE macroeconomics is as follows:

o to establish “stylized facts” about the quantitative interrelationships of certain macroeconomic variables (e.g. moments of the data such as variances, autocorrelations, covariances, …) that have hitherto not been jointly explained;

o to write down a DSGE model of an economy subject to a defined set of shocks that aims to capture the described interrelationships; and

o to show that the model can “replicate” or “match” the chosen moments when it is fed with stochastic shocks generated by the assumed shock process …

However, the test imposed by matching DSGE models to the data is problematic in at least three respects:

First, the set of moments chosen to evaluate the model is largely arbitrary …

Second, for a given set of moments, there is no well-defined statistic to measure the goodness of fit of a DSGE model or to establish what constitutes an improvement in such a framework …

Third, the evaluation is complicated by the fact that, at some level, all economic models are rejected by the data … In addition, DSGE models frequently impose a number of restrictions that are in direct conflict with micro evidence. If a model has been rejected along some dimensions, then a statistic that measures the goodness-of-fit along other dimensions is meaningless …

Focusing on the quantitative fit of models also creates powerful incentives for researchers (i) to introduce elements that bear little resemblance to reality for the sake of achieving a better fit; (ii) to introduce opaque elements that provide the researcher with free (or almost free) parameters; and (iii) to introduce elements that improve the fit for the reported moments but deteriorate the fit along other unreported dimensions.

Albert Einstein observed that “not everything that counts can be counted, and not everything that can be counted counts.” DSGE models make it easy to offer a wealth of numerical results by following a well-defined set of methods (that requires one or two years of investment in graduate school, but is relatively straightforward to apply thereafter). There is a risk for researchers to focus too much on numerical predictions of questionable reliability and relevance that absorb a lot of time and effort rather than focusing on deeper conceptual questions that are of higher relevance for society.

Anton Korinek

Read more…
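The “moments” in the first step of Korinek’s description are ordinary sample statistics. As a rough illustration of what gets matched, here is how the variance and lag-1 autocorrelation of a series might be computed; the AR(1) “data” and its parameters are entirely hypothetical stand-ins for something like detrended output:

```python
import random

random.seed(1)

def sample_moments(xs):
    """Sample variance and lag-1 autocorrelation: typical targets for moment matching."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    acov1 = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1)) / n
    return var, acov1 / var

# 'Data': a persistent AR(1) process, x_t = 0.9 * x_{t-1} + e_t.
x, data = 0.0, []
for _ in range(5_000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    data.append(x)

var, rho1 = sample_moments(data)
print(rho1)  # estimated persistence, near the true value 0.9
```

A DSGE exercise of the kind described would then ask whether the model, fed with its assumed shocks, reproduces numbers like these — which is precisely the test Korinek argues is arbitrary in its choice of moments and lacking a well-defined goodness-of-fit statistic.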

Putting predictions to the test

September 21, 2017 12 comments

from Lars Syll

It is the somewhat gratifying lesson of Philip Tetlock’s new book that people who make prediction their business — people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables — are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.

The New Yorker

Mainstream neoclassical economists often maintain — usually referring to the methodological instrumentalism of Milton Friedman — that it doesn’t matter if the assumptions of the models they use are realistic or not. What matters is if the predictions are right or not. But, if so, then the only conclusion we can make is — throw away the garbage! Because, oh dear, oh dear, how wrong they have been!  Read more…

Stiglitz and the full force of Sonnenschein-Mantel-Debreu

September 19, 2017 18 comments

from Lars Syll

In his recent article on Where Modern Macroeconomics Went Wrong, Joseph Stiglitz acknowledges that his approach “and that of DSGE models begins with the same starting point: the competitive equilibrium model of Arrow and Debreu.”

This is probably also the reason why Stiglitz’s critique doesn’t go far enough.

It’s strange that mainstream macroeconomists still stick to a general equilibrium paradigm more than forty years after the Sonnenschein-Mantel-Debreu theorem — SMD — devastatingly showed that it is an absolute non-starter for building realistic and relevant macroeconomics:

SMD theory means that assumptions guaranteeing good behavior at the microeconomic level do not carry over to the aggregate level or to qualitative features of the equilibrium …

Given how sweeping the changes wrought by SMD theory seem to be, it is understandable that some very broad statements about the character of general equilibrium theory were made. Fifteen years after General Competitive Analysis, Arrow (1986) stated that the hypothesis of rationality had few implications at the aggregate level. Kirman (1989) held that general equilibrium theory could not generate falsifiable propositions, given that almost any set of data seemed consistent with the theory …

S. Abu Turab Rizvi

Read more…

Where modern macroeconomics went wrong

September 16, 2017 6 comments

from Lars Syll

DSGE models seem to take it as a religious tenet that consumption should be explained by a model of a representative agent maximizing his utility over an infinite lifetime without borrowing constraints. Doing so is called micro-founding the model. But economics is a behavioral science. If Keynes was right that individuals saved a constant fraction of their income, an aggregate model based on that assumption is micro-founded. Of course, the economy consists of individuals who are different, but all of whom have a finite life and most of whom are credit constrained, and who do adjust their consumption behavior, if slowly, in response to changes in their economic environment. Thus, we also know that individuals do not save a constant fraction of their income, come what may. So both stories, the DSGE and the old-fashioned Keynesian, are simplifications. When they are incorporated into a simple macro-model, one is saying the economy acts as if … And then the question is, which provides a better description; a better set of prescriptions; and a better basis for future elaboration of the model. The answer is not obvious. The criticism of DSGE is thus not that it involves simplification: all models do. It is that it has made the wrong modelling choices, choosing complexity in areas where the core story of macroeconomic fluctuations could be told using simpler hypotheses, but simplifying in areas where much of the macroeconomic action takes place.

Joseph Stiglitz

Stiglitz is, of course, absolutely right.   Read more…

Rethinking expectations

September 14, 2017 5 comments

from Lars Syll

The tiny little problem that there is no hard empirical evidence that verifies rational expectations models doesn’t usually bother its protagonists too much. Rational expectations überpriest Thomas Sargent has defended the epistemological status of the rational expectations hypothesis arguing that since it “focuses on outcomes and does not pretend to have behavioral content,” it has proved to be “a powerful tool for making precise statements.”

Precise, yes, but relevant and realistic? I’ll be dipped!

In their attempted rescue operations, rational expectationists try to give the picture that only heterodox economists like yours truly are critical of the rational expectations hypothesis.

But, on this, they are, simply … eh … wrong.

Let’s listen to Nobel laureate Edmund Phelps — hardly a heterodox economist — and what he has to say (emphasis added):   Read more…

What makes economics a science?

September 12, 2017 21 comments

from Lars Syll

Well, if we are to believe most mainstream economists, models are what make economics a science.

In a recent Journal of Economic Literature (1/2017) review of Dani Rodrik’s Economics Rules, renowned game theorist Ariel Rubinstein discusses Rodrik’s justifications for the view that “models make economics a science.” Although Rubinstein has some doubts about those justifications — models are not indispensable for telling good stories or clarifying things in general; logical consistency does not determine whether economic models are right or wrong; and being able to expand our set of ‘plausible explanations’ doesn’t make economics more of a science than good fiction does — he still largely subscribes to the scientific image of economics as a result of using formal models that help us achieve ‘clarity and consistency’.

There’s much in the review I like — Rubinstein shows a commendable scepticism on the prevailing excessive mathematisation of economics, and he is much more in favour of a pluralist teaching of economics than most other mainstream economists — but on the core question, “the model is the message,” I beg to differ with the view put forward by both Rodrik and Rubinstein.

Economics is, more than any other social science, model-oriented. There are many reasons for this — the history of the discipline, having ideals coming from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.  Read more…

Modern society

September 9, 2017 4 comments

from Lars Syll


The history of ‘New Keynesianism’

September 6, 2017 1 comment

from Lars Syll

Stage 0. Late 1960’s. The Phelps volume, and Milton Friedman’s paper (pdf), both thinking about the microfoundations of the Phillips Curve, the difference between actual and expected inflation, and the role of monetary policy. This was the ancestral homeland of both New Keynesian and New Classical macroeconomics, which could not be distinguished at this stage …

Stage 1. Mid 1970’s. Now we see the difference. A distinct New Keynesian approach emerges. New Keynesians assume that prices (and/or wages) are set in advance, at expected market-clearing levels, before the shocks are known. This means that monetary policy can respond to those shocks, and help prevent undesirable fluctuations in output and employment. Even under rational expectations …

Stage 2. Late 1980’s. New Keynesians introduce monopolistic competition. This has two big advantages. First, you can now easily model price-setting firms as choosing a price to maximize profit… Second, because if a positive demand shock hits a perfectly competitive market, where prices are fixed at what was the expected market-clearing level, firms would ration sales, and you get a drop in output and employment, rather than a boom. And the world doesn’t seem to look like that.

Stage 3. Early 2000’s. New Keynesians introduce monetary policy without money. They become Neo-Wicksellians … There were two advantages to doing this. First, it let them model households’ and firms’ choices without needing to model the demand for money and the supply of money. Second, it made it easier to talk to central bankers who already thought of central banks as setting interest rates.

Which brings us to the End of History.

What about microfoundations? Well, it was an underlying theme, but there is nothing distinctively New Keynesian about that theme …

Likewise with rational expectations. New Keynesians just went with the flow.

Nick Rowe

Read more…