Author Archive

Economists saving the world …

January 17, 2020 3 comments

from Lars Syll

Does it — really — take a model to beat a model?

January 16, 2020 16 comments

from Lars Syll

A critique yours truly sometimes encounters is that as long as I cannot come up with an alternative model of my own to replace the failing mainstream models, I shouldn’t expect people to pay attention.

This is, however, to totally and utterly misunderstand the role of philosophy and methodology of economics!

As John Locke wrote in An Essay Concerning Human Understanding:

The Commonwealth of Learning is not at this time without Master-Builders, whose mighty Designs, in advancing the Sciences, will leave lasting Monuments to the Admiration of Posterity; But every one must not hope to be a Boyle, or a Sydenham; and in an Age that produces such Masters, as the Great-Huygenius, and the incomparable Mr. Newton, with some other of that Strain; ’tis Ambition enough to be employed as an Under-Labourer in clearing Ground a little, and removing some of the Rubbish, that lies in the way to Knowledge.

That’s what philosophy and methodology can contribute to economics — clearing obstacles to science by clarifying limits and consequences of choosing specific modelling strategies, assumptions, and ontologies. Read more…

Is economics — really — predictable?

January 13, 2020 11 comments

from Lars Syll

As Oskar Morgenstern noted already in his 1928 classic Wirtschaftsprognose: Eine Untersuchung ihrer Voraussetzungen und Möglichkeiten, economic predictions and forecasts amount to little more than intelligent guessing.

Making forecasts and predictions obviously isn’t a trivial or costless activity, so why then go on with it?

The problems that economists encounter when trying to predict the future really underline how important it is for the social sciences to incorporate Keynes’s far-reaching and incisive analysis of induction and evidential weight in his seminal A Treatise on Probability (1921). Read more…

How to teach econometrics

January 10, 2020 3 comments

from Lars Syll

Professor Swann (2019) seems implicitly to be endorsing the traditional theorem/proof style for teaching econometrics but with a few more theorems to be memorized. This style of teaching prepares students to join the monks in Asymptopia, a small pristine mountain village, where the monks read the tomes, worship the god of Consistency, and pray all day for the coming of the Revelation, when the estimates with an infinite sample will be revealed. Dirty limited real data sets with unknown properties are not allowed in Asymptopia, only hypothetical data with known properties. Not far away in the mountains is the village of Euphoria where celibate priests compose essays regarding human sexuality. Down on the plains is the very large city of Real Data, where applied economists torture dirty data until the data confess, providing the right signs and big t-values. Although Real Data is infinitely far from Asymptopia, these applied econometricians are fond of supporting the “Scientific” character of their work with quotations from the spiritual essays of the Monks of Asymptopia.

Ed Leamer

What went wrong with economics?

January 9, 2020 40 comments

from Lars Syll

To be ‘analytical’ is something most people find recommendable. The word ‘analytical’ has a positive connotation. Scientists think deeper than most other people because they use ‘analytical’ methods. In dictionaries, ‘analysis’ is usually defined as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific. It means to separate a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how these parts operate and work. Built on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together. Read more…

The randomistas revolution

January 8, 2020 Leave a comment

from Lars Syll

In his history of experimental social science — Randomistas: How radical researchers are changing our world (Yale University Press, 2018) — Andrew Leigh gives an introduction to the RCT (randomized controlled trial) method for conducting experiments in medicine, psychology, development economics, and policy evaluation. Although he mentions that there are critiques that can be levelled against the method, the author does not let that overshadow his overwhelmingly enthusiastic view of RCTs.

Among mainstream economists, this uncritical attitude towards RCTs has become standard. Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In their view, they are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’

When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘empirical turn’ in economics. Read more…

‘New Keynesianism’ — the art of making relevance irrelevant

January 7, 2020 3 comments

from Lars Syll

There really is something about the way macroeconomists construct their models nowadays that obviously doesn’t sit right.

Empirical evidence still only plays a minor role in mainstream economic theory, where models largely function as a substitute for empirical evidence. One might have hoped that, humbled by the manifest failure of its theoretical pretences during the latest economic-financial crises, the one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics would give way to methodological pluralism based on ontological considerations rather than formalistic tractability. That has, so far, not happened.

Fortunately — when you’ve got tired of the kind of macroeconomic apologetics produced by ‘New Keynesian’ macroeconomists and other DSGE modellers — there still are some real Keynesian macroeconomists to read. One of them — Axel Leijonhufvud — writes: Read more…

Chicago economics — garbage in, gospel out

January 4, 2020 2 comments

from Lars Syll

Every dollar of increased government spending must correspond to one less dollar of private spending. Jobs created by stimulus spending are offset by jobs lost from the decline in private spending. We can build roads instead of factories, but fiscal stimulus can’t help us to build more of both. This form of “crowding out” is just accounting, and doesn’t rest on any perceptions or behavioral assumptions.

John Cochrane

And the tiny little problem? It’s utterly and completely wrong!

What Cochrane is reiterating here is nothing but Say’s law, basically saying that savings are equal to investments and that if the state increases investments, then private investments have to come down (‘crowding out’). Read more…
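
To make the accounting point concrete (a gloss added here for illustration, not Cochrane’s or the post’s wording): in the national accounts Y = C + I + G, so an extra dollar of government spending G only forces private spending C + I down by a full dollar if total income Y is held fixed. With idle resources Y can rise, which means the dollar-for-dollar ‘crowding out’ follows not from accounting alone but from the additional behavioural assumption that output is already at capacity.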

Some limitations of the experimental approach

January 2, 2020 11 comments

from Lars Syll

Without question, the experimental approach has produced genuine insights … All the same, there are serious limitations to a strategy centered on experimental design:

1. Good experimental design results in internal validity, where measurements actually measure the things they’re supposed to and confounding influences are suppressed. External validity, the extent to which results can be generalized to a wider array of situations beyond the confines of the experiment, is a different matter. There are two specific aspects of experimentalism that raise questions on this front, the tendency for experiments to be small, local and time-bound …

2. The strategy of experimental design virtually requires a reductionist, small-bore approach to social change. A more sweeping, structural approach to poverty and inequality introduces too many variables and defeats experimental control. Thus, without any explicit ideological justification, we end up with incremental reformism when the entire social configuration may be the true culprit …

Using experimental methods to incorporate more learning in program administration should be standard practice; perhaps some day it will be. But the big problems in poverty and oppression are too complex and encompassing to be reduced to experimental bits, and there is no substitute for theoretical analysis and a willingness to take chances with large-scale collective action.

Peter Dorman

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Read more…

Markov’s inequality (wonkish)

December 27, 2019 7 comments

from Lars Syll

One of the most beautiful results of probability theory is Markov’s inequality (after the Russian mathematician Andrei Markov (1856-1922)):

If X is a non-negative stochastic variable (X ≥ 0) with a finite expectation value E(X), then for every a > 0

P{X ≥ a} ≤ E(X)/a

If the production of cars in a factory during a week is assumed to be a stochastic variable with an expectation value (mean) of 50 units, we can – based on nothing else but the inequality – conclude that the probability that the production in a week would be at least 100 units cannot exceed 50% [P(X ≥ 100) ≤ 50/100 = 0.5 = 50%].
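
To see the bound at work numerically, here is a minimal sketch in Python (an illustration only; the choice of a Poisson(50) distribution for weekly production is just an assumption made for the simulation):

```python
# A minimal numerical check of Markov's inequality: P(X >= a) <= E(X)/a.
# The Poisson(50) production distribution is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)
x = rng.poisson(lam=50, size=1_000_000)  # non-negative draws with mean 50

a = 100
empirical_tail = np.mean(x >= a)   # estimated P(X >= 100)
markov_bound = x.mean() / a        # E(X)/a, roughly 50/100 = 0.5

print(f"P(X >= {a}) is about {empirical_tail:.6f}; Markov bound {markov_bound:.3f}")
# The bound always holds but can be very loose: for Poisson(50) the tail
# probability at 100 is tiny, while Markov only guarantees it is at most 0.5.
```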

I still feel humble awe at this immensely powerful result. Without knowing anything else but an expected value (mean) of a probability distribution, we can deduce upper limits for probabilities. The result strikes me as equally surprising today as it did forty years ago, when I first ran into it as a student of mathematical statistics.

[For a derivation of the inequality, see e.g. Sheldon Ross, Introduction to Probability and Statistics for Engineers and Scientists, Academic Press, 2009]

RBC models — willfully silly obscurantism

December 22, 2019 5 comments

from Lars Syll

As a result of the three waves of new classical economics, the field of macroeconomics became increasingly rigorous and increasingly tied to the tools of microeconomics. The real business cycle models were specific, dynamic examples of Arrow–Debreu general equilibrium theory. Indeed, this was one of their main selling points. Over time, proponents of this work have backed away from the assumption that the business cycle is driven by real as opposed to monetary forces, and they have begun to stress the methodological contributions of this work. Today, many macroeconomists coming from the new classical tradition are happy to concede to the Keynesian assumption of sticky prices, as long as this assumption is imbedded in a suitably rigorous model in which economic actors are rational and forward-looking.

Greg Mankiw

Well, rigour may be fine, but how about reality? Read more…

Microfoundations and economic policy choices

December 19, 2019 20 comments

from Lars Syll

Since there will generally be many micro foundations consistent with some given aggregate pattern, empirical support for an aggregate hypothesis does not constitute empirical support for any particular micro foundation … Lucas himself points out that short-term macroeconomic forecasting models work perfectly well without choice-theoretic foundations: “But if one wants to know how behaviour is likely to change under some change in policy, it is necessary to model the way people make choices” (Snowdon and Vane 2005, interview with Robert Lucas). The question, of course, is why on earth would one insist on deriving policy implications from foundations that deliberately misrepresent actual behavior?

Yes, indeed, why would one?

Defenders of microfoundations and its rational-expectations-equipped representative agent’s intertemporal optimization often argue as if sticking with simple representative-agent macroeconomic models doesn’t impart a bias to the analysis. Yours truly unequivocally rejects that unsubstantiated view. Read more…

Can governments ever run out of money?

December 16, 2019 6 comments

from Lars Syll

Whether it’s more nurses, frozen tax promises, free broadband internet or more social housing in the UK; or tax cuts and green energy investments in America, public spending is set to surge. This sudden abandonment of fiscal rectitude comes amid the rise in prominence of a way of thinking about money, spending and the economy – Modern Monetary Theory (MMT).

According to its key architect, US businessman Warren Mosler, it is based on a simple idea – that countries that issue their own currencies can never run out of money in the same way a business or person can. This is important to understand because it means when someone says the government can’t do something for want of money, that’s simply not applicable, says Mr Mosler. A government can no more run out of money than a football stadium can run out of goals scored …

MMTers say they do believe in being fiscally responsible, and that critics misunderstand. The story starts with any government’s desire to fund public services, which it does through taxation. Citizens need money to pay that tax, and they work to get it, Mr Mosler says …

The tax-and-spend orthodoxy embraced by most governments should be rethought. What’s actually going on is tax liabilities come first, then spending, and then payment of taxes, which paints guiding economic principles such as the debt and deficit in entirely different lights.

Howard Mustoe / BBC

In modern times legal currencies are based on fiat. Read more…

2019 ‘Nobel prize’ reveals the poverty of economics

December 13, 2019 8 comments

from Lars Syll

RCTs have delivered intriguing insights into how poor people think and act, but also into how behavioural economists do. For example, when a slew of high-profile RCTs failed to deliver the evidence that researchers expected on the ‘miracle of microfinance’, the researchers paid little heed to the implications of their insignificant and sometimes even negative findings. Instead, they focused attention on some small (but statistically significant) behavioural changes in their data. These included microfinance services encouraging slightly higher propensities to engage in entrepreneurship and reduced purchasing of ‘temptation goods’ (a category in which Banerjee and Duflo included, for Indian slum-dwellers, tea and food on the street).

The problem is that these insights, far from shifting economic paradigms in a progressive way, and enabling greater realism and pluralism in economic thinking, have led to thinly-veiled efforts at behaviourally re-engineering the poor, which have gained traction in global development. The new behavioural paradigm, canonised in the World Bank’s 2015 World Development Report Mind, Society and Behavior, invokes targeted social norm-shifting, subliminal marketing through entertainment, ‘choice architecture’ and ‘nudge’, social pressures, and punitive conditionalities, to change poor people’s behaviours. The idea is to ‘help’ poor people overcome supposedly irrational ‘risk aversion’ in order to be more entrepreneurial, or more ‘time-consistent’ and save for a rainy day. Read more…

Uber and the gender pay gap

December 9, 2019 6 comments

from Lars Syll

Uber has conducted a study of internal pay differentials between men and women, which they describe as “gender blind” … The study found a 7% pay gap in favor of men. They present their findings as proof that there are issues unrelated to gender that impact driver pay. They quantify the reasons for the gap as follows:

Where: 20% is due to where people choose to drive (routes/neighborhoods).

Experience: 30% is due to experience …

Speed: 50% was due to speed; they claim that men drive slightly faster and so complete more trips per hour …

The company’s reputation has been affected by its sexist and unprofessional corporate culture, and its continued lack of gender balance won’t help. Nor, I suspect, will its insistence, with research conducted by its own staff to prove it, that the pay gap is fair. This simply adds insult to obnoxiousness.

But then, why would we have expected any different? The Uber case study’s conclusions may actually be almost the opposite of what they were trying to prove. Rather than showing that the pay gap is a natural consequence of our gendered differences, they have actually shown that systems designed to insistently ignore differences tend to become normed to the preferences of those who create them.

Avivah Wittenberg-Cox

Spending a couple of hours going through a JEL survey of modern research on the gender wage gap, yours truly was struck almost immediately by how little that research really has accomplished in terms of explaining gender wage discrimination. Read more…

Consistency and rationality

December 7, 2019 Leave a comment

from Lars Syll

Axioms of ‘internal consistency’ of choice, such as the weak and the strong axioms of revealed preference … are often used in decision theory, micro-economics, game theory, social choice theory, and in related disciplines …

Can a set of choices really be seen as consistent or inconsistent on purely internal grounds, without bringing in something external to choice, such as the underlying objectives or values that are pursued or acknowledged by choice? …

The presumption of inconsistency may be easily disputed, depending on the context, if we know a bit more about what the person is trying to do. Suppose the person faces a choice at a dinner table between having the last remaining apple in the fruit basket (y) and having nothing instead (x), forgoing the nice-looking apple. She decides to behave decently and picks nothing (x), rather than the one apple (y). If, instead, the basket had contained two apples, and she had encountered the choice between having nothing (x), having one nice apple (y) and having another nice one (z), she could reasonably enough choose one (y), without violating any rule of good behavior. The presence of another apple (z) makes one of the two apples decently choosable, but this combination of choices would violate the standard consistency conditions, including Property α, even though there is nothing particularly “inconsistent” in this pair of choices (given her values and scruples) … We cannot determine whether the person is failing in any way without knowing what he is trying to do, that is, without knowing something external to the choice itself.

Amartya Sen
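
Sen’s apple example can be put as a small formal check. The sketch below (an illustration only; the menu encoding and the function name are made up, not taken from Sen) tests the contraction-consistency condition he calls Property α: if an option is chosen from a menu, it must also be chosen from any sub-menu that still contains it.

```python
# Check "Property alpha" (contraction consistency) against observed choices.
# choices maps each menu (a frozenset of options) to the single option chosen.

def violates_property_alpha(choices):
    violations = []
    for big_menu, big_choice in choices.items():
        for small_menu, small_choice in choices.items():
            # A proper sub-menu that still contains the option chosen from the
            # larger menu: alpha requires the same option to be chosen there.
            if small_menu < big_menu and big_choice in small_menu:
                if small_choice != big_choice:
                    violations.append((set(big_menu), big_choice,
                                       set(small_menu), small_choice))
    return violations

# Sen's dinner-table story: x = take nothing, y = the last apple, z = a second apple.
choices = {
    frozenset({"x", "y"}): "x",        # with one apple left, she takes nothing
    frozenset({"x", "y", "z"}): "y",   # with two apples, taking one is fine
}

print(violates_property_alpha(choices))
# e.g. [({'x', 'y', 'z'}, 'y', {'x', 'y'}, 'x')] -- a formal "inconsistency"
# even though her behaviour is perfectly sensible given her values.
```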

Being able to model a credible world, a world that could somehow be considered ‘similar’ to the real world, is not the same as investigating the real world. Read more…

Transmogrifying Keynes

December 5, 2019 15 comments

from Lars Syll

The other day, on my way home on the train after having attended an economics conference, yours truly tried to beguile the way by listening to a podcast of EconTalk where Garett Jones of George Mason University talked with EconTalk host Russ Roberts about the ideas of Irving Fisher on debt and deflation.

Jones’s thoughts on Fisher were thought-provoking and interesting, but in the middle of the discussion Roberts started to ask questions about the relation between Fisher’s ideas and those of Keynes, saying more or less something like “Keynes generated a lot of interest in his idea that the labour market doesn’t clear … because the price for labour does not adjust, i.e. wages are ‘sticky’ or ‘inflexible’.”

This is of course pure nonsense. Read more…

Maurice Allais on empirics and theory

December 4, 2019 13 comments

from Lars Syll

Submission to observed or experimental data is the golden rule which dominates any scientific discipline. Any theory whatever, if it is not verified by empirical evidence, has no scientific value and should be rejected.

 

Maurice Allais

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science, it ought to be considered of little or no value to simply make claims about the model and lose sight of reality.

Mainstream — neoclassical — economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully, humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability. Read more…

What is (wrong with) mainstream economics?

December 3, 2019 12 comments

from Lars Syll

If you want to know what neoclassical economics — or mainstream economics, as we call it nowadays — is, and turn to Wikipedia, you are told that

neoclassical economics is a term variously used for approaches to economics focusing on the determination of prices, outputs, and income distributions in markets through supply and demand, often mediated through a hypothesized maximization of utility by income-constrained individuals and of profits by cost-constrained firms employing available information and factors of production, in accordance with rational choice theory.

The basic problem with this definition of neoclassical (mainstream) economics — arguing that its differentia specifica is its use of demand and supply, utility maximization and rational choice — is that it doesn’t get things quite right. As we all know, there is an endless list of mainstream models that more or less distance themselves from one or the other of these characteristics. So the heart of mainstream economic theory lies elsewhere. Read more…

Macroeconomic uncertainty

December 1, 2019 37 comments

from Lars Syll

The financial crisis of 2007-08 took most laymen and economists by surprise. What was it that went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?

There are many who have ventured to answer this question. And they have come up with a variety of answers, ranging from the exaggerated mathematization of economics to irrational and corrupt politicians.

But the root of our problem goes much deeper. It ultimately goes back to how we look upon the data we are handling. In ‘modern’ macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and ‘New Keynesian’ — variables are treated as if drawn from a known ‘data-generating process’ that unfolds over time and for which we, therefore, have access to heaps of historical time-series. If we do not assume that we know the ‘data-generating process’ – if we do not have the ‘true’ model – the whole edifice collapses. And of course, it has to. I mean, who really honestly believes that we should have access to this mythical Holy Grail, the data-generating process?

Modern macroeconomics obviously did not anticipate the enormity of the problems that unregulated ‘efficient’ financial markets created. Why? Because it builds on the myth of us knowing the ‘data-generating process’ and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.
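
A toy simulation may make the point concrete (an illustration only; the simple variance-shift set-up and all numbers are assumptions, not taken from the post): a model whose mean and variance are estimated on calm pre-crisis data treats the outcomes of a shifted process as essentially impossible.

```python
# Estimate a supposedly 'known' data-generating process on calm data, then
# watch it fail once the true process shifts. All numbers are illustrative.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(seed=7)

calm = rng.normal(loc=0.0, scale=1.0, size=500)    # the sample we estimate on
crisis = rng.normal(loc=0.0, scale=4.0, size=100)  # the regime nobody modelled

mu_hat, sigma_hat = calm.mean(), calm.std(ddof=1)  # the 'known' mean and variance

worst = crisis.min()
# Probability the fitted normal model assigns to an outcome at least that bad.
p = 0.5 * (1.0 + erf((worst - mu_hat) / (sigma_hat * sqrt(2.0))))

print(f"worst 'crisis' observation: {worst:.1f}")
print(f"probability under the fitted model of anything that bad or worse: {p:.2e}")
# Under the estimated process such an event looks astronomically unlikely,
# yet it happens routinely once the true process has shifted.
```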

Read more…