
Author Archive

We need a ‘Fridays for Keynesianism’ movement!

February 20, 2020 4 comments

from Lars Syll

Basically, the classical model is a model for a corn economy: households decide whether to consume the corn or to save it. If it is saved, it can be supplied to investors who sow the grains and repay the households one period later with the credit amount plus interest.

In the Keynesian model the ‘funds’ exchanged on the capital market are made up of money—‘funds’ are bank deposits. Funds are not created here by a renunciation of consumption but by the banks granting credit …

In the classical model, there is a strict crowding out of private investors on the capital market by government deficits. The all-purpose good, which functions equally as ‘funds’ and as an investment good, can only be used once.

In the Keynesian model, on the other hand, ‘funds’ (money) and investment goods are independent of each other. Therefore there is no crowding out on the capital market, if deficits are financed by banks or the central bank. This is the fundamental insight of Modern Monetary Theory … Read more…
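To make the contrast concrete, here is a minimal sketch of the bookkeeping at stake (my notation, not the quoted author’s):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Classical corn economy: prior saving S is the only source of 'funds'.
\[
  Y = C + S, \qquad S = I + D
  \quad\Longrightarrow\quad \frac{\partial I}{\partial D} = -1 ,
\]
% so each unit of corn absorbed by the deficit $D$ is a unit denied to
% private investors: strict, one-for-one crowding out.

% Keynesian model: banks create the deposits ('funds') when they lend,
% so investment is not financed out of prior saving; ex post the
% national-accounting identity still holds, but the causality is reversed:
\[
  I + D \;\longrightarrow\; Y = C + S \;\longrightarrow\; S = I + D .
\]
\end{document}
```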

What is truth in economics?

February 18, 2020 42 comments

from Lars Syll

In my view, scientific theories are not to be considered ‘true’ or ‘false.’ In constructing such a theory, we are not trying to get at the truth, or even to approximate to it: rather, we are trying to organize our thoughts and observations in a useful manner.

Robert Aumann

What a handy view of science …

How reassuring for all of you who have always thought that believing in the tooth fairy makes you understand what happens to kids’ teeth. Now a ‘Nobel prize’ winning economist tells you that whether or not there are such things as tooth fairies doesn’t really matter. Scientific theories are not about what is true or false, but about whether ‘they enable us to organize and understand our observations’!

What Aumann and other defenders of scientific storytelling ‘forget’ is that potential explanatory power achieved in thought-experimental models is not enough for attaining real explanations. Read more…

Econometrics — a matter of BELIEF and FAITH

February 17, 2020 4 comments

from Lars Syll

Everybody who takes a regression analysis course studies the assumptions of the regression model. But nobody knows why, because after the axioms have been presented they are rarely mentioned again. Yet the assumptions are important, because if any one of them is wrong, the regression is not valid and the interpretations can be completely wrong. In order to have a valid regression model, you must have the right regressors, the right functional form, all the regressors must be exogenous, the regression parameters should not change over time, the regression residuals should be independent and have mean zero, and many other things as well. There are so many assumptions that it is impossible to test all of them. This means that interpreting a regression model is always a matter of FAITH – we must BELIEVE, without having any empirical evidence, that our model is the ONE TRUE VALID model. It is only under this assumption that our interpretations of regression models are valid … Read more…
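How wrong the interpretations can go when even one assumption fails is easy to demonstrate. Here is a minimal simulation (my illustration, not from the post): omitting a single regressor that is correlated with x biases the estimated coefficient on x by roughly fifty per cent:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000

# True data-generating process: y depends on x AND z, and x is
# correlated with z, so x becomes endogenous once z is omitted.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 1.0 * z + rng.normal(size=n)

# Correctly specified model: recovers the true coefficient on x (~1.0)
full = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

# Misspecified model omitting z: the coefficient on x is badly biased (~1.5)
short = sm.OLS(y, sm.add_constant(x)).fit()

print("beta_x, correct model:", round(full.params[1], 2))
print("beta_x, z omitted:    ", round(short.params[1], 2))
```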

Econometrics and the Axiom of Omniscience

February 14, 2020 4 comments

from Lars Syll

Most work in econometrics and regression analysis is — still — done on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief of having a correct specification for an econometric model or a regression, one proceeds as if the only remaining problems have to do with measurement and observation.

When things sound too good to be true, they usually aren’t. And that goes for econometric wet dreams too. The snag is, of course, that there is precious little to support the perfect-specification assumption. Looking around in social science and economics, we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model — and there is precious little that gives us reason to believe things will be different in the future. Read more…

Econometrics — fictions masquerading as science

February 11, 2020 19 comments

from Lars Syll

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we equate randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts.

Read more…

Overcontrolling in econometrics — a wasteful practice ridden with errors

February 7, 2020 6 comments

from Lars Syll

The gender pay gap is a fact that, sad to say, is to a non-negligible extent the result of discrimination. And even though many women are not deliberately discriminated against, but rather self-select into lower-wage jobs, this in no way magically explains away the discrimination gap. As decades of socialization research have shown, women may be ‘structural’ victims of impersonal social mechanisms that in different ways aggrieve them. Wage discrimination is unacceptable. Wage discrimination is a shame.

You see it all the time in studies. “We controlled for…” And then the list starts. The longer the better. Income. Age. Race. Religion. Height. Hair color. Sexual preference. Crossfit attendance. Love of parents. Coke or Pepsi. The more things you can control for, the stronger your study is — or, at least, the stronger your study seems. Controls give the feeling of specificity, of precision. But sometimes, you can control for too much. Sometimes you end up controlling for the thing you’re trying to measure …

An example is research around the gender wage gap, which tries to control for so many things that it ends up controlling for the thing it’s trying to measure. As my colleague Matt Yglesias wrote:

“The commonly cited statistic that American women suffer from a 23 percent wage gap through which they make just 77 cents for every dollar a man earns is much too simplistic. On the other hand, the frequently heard conservative counterargument that we should subject this raw wage gap to a massive list of statistical controls until it nearly vanishes is an enormous oversimplification in the opposite direction. After all, for many purposes gender is itself a standard demographic control to add to studies — and when you control for gender the wage gap disappears entirely!” … Read more…
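A small simulation shows the mechanics of controlling for the thing you are trying to measure. In the toy data below (my construction, not from the post), the entire group effect on wages runs through occupational sorting; conditioning on that mediator makes the measured gap vanish without any of the underlying disadvantage going away:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# 'group' is the variable of interest; sorting into lower-paid occupations
# is itself an outcome of the process under study (a mediator, not a confounder)
group = rng.integers(0, 2, size=n).astype(float)
occupation_pay = -1.0 * group + rng.normal(size=n)
wage = 2.0 * occupation_pay + rng.normal(size=n)

# Raw gap: captures the full effect, including what runs through sorting (~ -2.0)
raw = sm.OLS(wage, sm.add_constant(group)).fit()

# 'Controlled' gap: conditioning on the mediator drives the gap to ~0
controlled = sm.OLS(wage, sm.add_constant(np.column_stack([group, occupation_pay]))).fit()

print("raw gap:       ", round(raw.params[1], 2))
print("controlled gap:", round(controlled.params[1], 2))
```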

Hicks on the limited applicability of probability calculus

February 5, 2020 12 comments

from Lars Syll

When we cannot accept that the observations, along the time-series available to us, are independent, or cannot by some device be divided into groups that can be treated as independent, we get into much deeper water. For we have then, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply. We are left to use our judgement, making sense of what has happened as best we can, in the manner of the historian. Applied economics does then come back to history, after all.

I am bold enough to conclude, from these considerations, that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem at hand. Very often they are not. Thus it is not at all sensible to take a small number of observations (sometimes no more than a dozen observations) and to use the rules of probability to deduce from them a ‘significant’ general law. For we are assuming, if we do so, that the variations from one to another of the observations are random, so that if we had a larger sample (as we do not) they would by some averaging tend to disappear. But what nonsense this is when the observations are derived, as not infrequently happens, from different countries, or localities, or industries — entities about which we may well have relevant information, but which we have deliberately decided, by our procedure, to ignore. By all means let us plot the points on a chart, and try to explain them; but it does not help in explaining them to suppress their names. The probability calculus is no excuse for forgetfulness.
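Hicks’s warning about deducing a ‘significant’ general law from a dozen observations can be made tangible with a toy example (mine, not his). Twelve points drawn from three ‘countries’, each sitting at its own level, yield a strongly ‘significant’ pooled slope even though within every single country the relation is absent:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Three 'countries', four observations each. Within each country x has
# NO effect on y; the countries merely differ in their levels of both.
x_parts, y_parts = [], []
for level in (0.0, 5.0, 10.0):
    x_parts.append(level + rng.normal(scale=0.5, size=4))
    y_parts.append(level + rng.normal(scale=0.5, size=4))
x = np.concatenate(x_parts)
y = np.concatenate(y_parts)

# Pooled regression on the 12 'anonymous' points: slope ~1, tiny p-value.
# Suppressing the country names is what manufactures the 'law'.
pooled = sm.OLS(y, sm.add_constant(x)).fit()
print("pooled slope:", round(pooled.params[1], 2), "p-value:", f"{pooled.pvalues[1]:.2e}")
```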

John Hicks’ Causality in economics ought to be on the reading list of every course in economic methodology.

Chicago economics — where do we unload the garbage?

February 3, 2020 3 comments

from Lars Syll

There is also a practical problem, if economics as a discipline is to survive. There is a huge amount of junk in the peer-reviewed economics literature – the reviewing process is no protection when the reviewers themselves are prejudiced. A comparison that comes to mind is the collapse of “scientific” eugenics. There were vast amounts of that written, and now it is only read as an object lesson in the capture of a social science by prejudice and authoritarianism. For economists, meantime, there is a huge task ahead: the garbage must be taken out, removed from the field’s teaching, textbooks, and policy advice. It will be a generation at least before this is set right, if indeed it can be set right at all.

Advice Unasked

Models and evidence in economics

January 31, 2020 60 comments

from Lars Syll

Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.

Nancy Cartwright

One of the limitations of economics is the restricted possibility of performing experiments, forcing it to rely mainly on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we could, with greater ‘rigour’ and ‘precision’, describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’ Read more…

Uncertainty in economics

January 29, 2020 20 comments

from Lars Syll

Not accounting for uncertainty may result in severe confusion about what we do indeed understand about the economy. In the financial crisis of 2007/2008 the demon lashed out at this ignorance and challenged the credibility of the whole economic community by laying bare economists’ incapability to prevent the crisis …

Economics itself cannot be regarded as a purely analytical science. It has the amazing and exciting property of shaping the object of its own analysis. This feature clearly distinguishes it from physics, chemistry, archaeology and many other sciences. While biologists, chemists, engineers, physicists and many more are very able to transform whole societies by their discoveries and inventions — like penicillin or the internet — the laws of nature they study remain unaffected by these inventions. In economics, this constancy of the object under study just does not exist.

The financial crisis of 2007-2008 hit most laymen and economists with surprise. What was it that went wrong with our macroeconomic models, since they obviously neither foresaw the collapse nor even made it conceivable?

There are many who have ventured to answer that question. And they have come up with a variety of answers, ranging from the exaggerated mathematization of economics, to irrational and corrupt politicians. Read more…

On causality and econometrics

January 27, 2020 19 comments

from Lars Syll

The point is that a superficial analysis, which only looks at the numbers, without attempting to assess the underlying causal structures, cannot lead to a satisfactory data analysis … We must go out into the real world and look at the structural details of how events occur … The idea that the numbers by themselves can provide us with causal information is false. It is also false that a meaningful analysis of data can be done without taking any stand on the real-world causal mechanism … These issues are of extreme importance with reference to Big Data and Machine Learning. Machines cannot expend shoe leather, and enormous amounts of data cannot provide us knowledge of the causal mechanisms in a mechanical way. However, a small amount of knowledge of real-world structures used as causal input can lead to substantial payoffs in terms of meaningful data analysis. The problem with current econometric techniques is that they do not have any scope for input of causal information – the language of econometrics does not have the vocabulary required to talk about causal concepts.

Asad Zaman / WEA Pedagogy

What Asad Zaman tells us in his splendid set of lectures is that causality in social sciences can never solely be a question of statistical inference. Read more…

Experiments in social sciences

January 23, 2020 7 comments

from Lars Syll

How, then, can social scientists best make inferences about causal effects? One option is true experimentation … Random assignment ensures that any differences in outcomes between the groups are due either to chance error or to the causal effect … If the experiment were to be repeated over and over, the groups would not differ, on average, in the values of potential confounders. Thus, the average of the average difference of group outcomes, across these many experiments, would equal the true difference in outcomes … The key point is that randomization is powerful because it obviates confounding …

Thad Dunning’s book is a very useful guide for social scientists interested in research methodology in general and natural experiments in particular. Dunning argues that since random or as-if random assignment in natural experiments obviates the need for controlling potential confounders, this kind of “simple and transparent” design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.
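Dunning’s point, that randomisation balances confounders only on average over many hypothetical repetitions, is easy to check numerically. A rough sketch (my illustration, with one simulated confounder):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 10_000

# A fixed pre-treatment confounder for 50 experimental units
confounder = rng.normal(size=n)

# Re-randomise the treatment many times and record the group imbalance
gaps = []
for _ in range(reps):
    treated = rng.permutation(n) < n // 2          # random assignment
    gaps.append(confounder[treated].mean() - confounder[~treated].mean())
gaps = np.asarray(gaps)

print("average gap over re-randomisations:", round(gaps.mean(), 3))      # ~0.0
print("share of single runs with |gap| > 0.25:", round((np.abs(gaps) > 0.25).mean(), 2))
```

The average gap is essentially zero, but a sizeable share of individual assignments is visibly imbalanced: that is the distance between ‘balanced in expectation’ and ‘balanced in this one experiment.’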

But — there is always a but … Read more…

Why all RCTs are biased

January 21, 2020 1 comment

from Lars Syll

Randomised experiments require much more than just randomising an experiment to identify a treatment’s effectiveness. They involve many decisions and complex steps that bring their own assumptions and degree of bias before, during and after randomisation …

Some researchers may respond, “are RCTs not still more credible than these other methods even if they may have biases?” For most questions we are interested in, RCTs cannot be more credible because they cannot be applied (as outlined above). Other methods (such as observational studies) are needed for many questions not amenable to randomisation, but also at times to help design trials, interpret and validate their results, and provide further insight on the broader conditions under which treatments may work, among other reasons discussed earlier. Different methods are thus complements (not rivals) in improving understanding.

Finally, randomisation does not always even out everything well at the baseline and it cannot control for endline imbalances in background influencers. No researcher should thus just generate a single randomisation schedule and then use it to run an experiment. Instead researchers need to run a set of randomisation iterations before conducting a trial and select the one with the most balanced distribution of background influencers between trial groups, and then also control for changes in those background influencers during the trial by collecting endline data. Though if researchers hold onto the belief that flipping a coin brings us closer to scientific rigour and understanding than for example systematically ensuring participants are distributed well at baseline and endline, then scientific understanding will be undermined in the name of computer-based randomisation.

Alexander Krauss
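The procedure Krauss describes, generating a set of candidate randomisation schedules and keeping the best balanced one, is known in the trials literature as rerandomisation. A minimal sketch (my code, assuming a single numeric baseline covariate):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_candidates = 50, 1_000

covariate = rng.normal(size=n)  # a baseline 'background influencer'

def imbalance(assignment: np.ndarray) -> float:
    """Absolute difference in covariate means between the two arms."""
    return abs(covariate[assignment].mean() - covariate[~assignment].mean())

# Draw many candidate schedules; run the trial with the most balanced one
candidates = (rng.permutation(n) < n // 2 for _ in range(n_candidates))
best = min(candidates, key=imbalance)

print("imbalance of chosen schedule:", round(imbalance(best), 4))
```

In practice one would balance several covariates at once (for instance via a Mahalanobis distance criterion) and take the restricted randomisation into account in the analysis.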

The point of running a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.

The problem with that simplistic view of randomization is that the claims made for it are both exaggerated and false: Read more…

Chicago economics — only for Gods and Idiots

January 20, 2020 11 comments

from Lars Syll

If I ask myself what I could legitimately assume a person to have rational expectations about, the technical answer would be, I think, about the realization of a stationary stochastic process, such as the outcome of the toss of a coin or anything that can be modeled as the outcome of a random process that is stationary. I don’t think that the economic implications of the outbreak of World War II were regarded by most people as the realization of a stationary stochastic process. In that case, the concept of rational expectations does not make any sense. Similarly, the major innovations cannot be thought of as the outcome of a random process. In that case the probability calculus does not apply.

Robert Solow

‘Modern’ macroeconomic theories are as a rule founded on the assumption of rational expectations — where the world evolves in accordance with fully predetermined models in which uncertainty has been reduced to stochastic risk describable by some probability distribution. Read more…
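Solow’s distinction is easy to make concrete (my toy simulation, not his). For a stationary process, historical frequencies are a reliable guide to the future; insert a single structural break, a ‘World War II’ moment, and an expectation formed rationally from past data goes systematically wrong:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1_000

# Stationary coin tosses: the historical mean is a good forecast of the future
coin = rng.integers(0, 2, size=T)
print("stationary: past mean", round(coin[:T // 2].mean(), 2),
      "vs future mean", round(coin[T // 2:].mean(), 2))

# Non-stationary series with one structural break halfway through
series = np.concatenate([rng.normal(0.5, 0.1, T // 2),
                         rng.normal(2.0, 0.1, T // 2)])
forecast = series[:T // 2].mean()   # expectation formed from history alone
realised = series[T // 2:].mean()
print("with break: forecast", round(forecast, 2), "vs realised", round(realised, 2))
```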

Economists saving the world …

January 17, 2020 18 comments

from Lars Syll


Does it — really — take a model to beat a model?

January 16, 2020 58 comments

from Lars Syll

A critique yours truly sometimes encounters is that as long as I cannot come up with an alternative model of my own to replace the failing mainstream models, I shouldn’t expect people to pay attention.

This is, however, to totally and utterly misunderstand the role of philosophy and methodology of economics!

As John Locke wrote in An Essay Concerning Human Understanding:

The Commonwealth of Learning is not at this time without Master-Builders, whose mighty Designs, in advancing the Sciences, will leave lasting Monuments to the Admiration of Posterity; But every one must not hope to be a Boyle, or a Sydenham; and in an Age that produces such Masters, as the Great-Huygenius, and the incomparable Mr. Newton, with some other of that Strain; ’tis Ambition enough to be employed as an Under-Labourer in clearing Ground a little, and removing some of the Rubbish, that lies in the way to Knowledge.

That’s what philosophy and methodology can contribute to economics — clearing obstacles to science by clarifying limits and consequences of choosing specific modelling strategies, assumptions, and ontologies. Read more…

Is economics — really — predictable?

January 13, 2020 11 comments

from Lars Syll

As Oskar Morgenstern already noted back in his 1928 classic Wirtschaftsprognose: Eine Untersuchung ihrer Voraussetzungen und Möglichkeiten, economic predictions and forecasts amount to little more than intelligent guessing.

Making forecasts and predictions obviously isn’t a trivial or costless activity, so why then go on with it?

The problems that economists encounter when trying to predict the future really underline how important it is for the social sciences to incorporate Keynes’s far-reaching and incisive analysis of induction and evidential weight in his seminal A Treatise on Probability (1921). Read more…

How to teach econometrics

January 10, 2020 3 comments

from Lars Syll

Professor Swann (2019) seems implicitly to be endorsing the traditional theorem/proof style for teaching econometrics but with a few more theorems to be memorized. This style of teaching prepares students to join the monks in Asymptopia, a small pristine mountain village, where the monks read the tomes, worship the god of Consistency, and pray all day for the coming of the Revelation, when the estimates with an infinite sample will be revealed. Dirty limited real data sets with unknown properties are not allowed in Asymptopia, only hypothetical data with known properties. Not far away in the mountains is the village of Euphoria where celibate priests compose essays regarding human sexuality. Down on the plains is the very large city of Real Data, where applied economists torture dirty data until the data confess, providing the right signs and big t-values. Although Real Data is infinitely far from Asymptopia, these applied econometricians are fond of supporting the “Scientific” character of their work with quotations from the spiritual essays of the Monks of Asymptopia.

Ed Leamer

What went wrong with economics?

January 9, 2020 54 comments

from Lars Syll

To be ‘analytical’ is something most people find commendable. The word ‘analytical’ has a positive connotation. Scientists think more deeply than most other people because they use ‘analytical’ methods. In dictionaries, ‘analysis’ is usually defined as having to do with “breaking something down.”

But that’s not the whole picture. As used in science, analysis usually means something more specific: to separate a problem into its constituent elements so as to reduce complex — and often complicated — wholes into smaller (simpler) and more manageable parts. You take the whole and break it down (decompose it) into its separate parts. Looking at the parts separately, one at a time, you are supposed to gain a better understanding of how these parts operate and work. Building on that more or less ‘atomistic’ knowledge, you are then supposed to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, that means you take the economic system and divide it into its separate parts, analyse these parts one at a time, and then after analysing the parts separately, you put the pieces together. Read more…

The randomistas revolution

January 8, 2020 Leave a comment

from Lars Syll

In his history of experimental social science — Randomistas: How radical researchers are changing our world (Yale University Press, 2018) — Andrew Leigh gives an introduction to the RCT (randomized controlled trial) method for conducting experiments in medicine, psychology, development economics, and policy evaluation. Although he mentions that critiques can be raised against it, the author does not let them overshadow his overwhelmingly enthusiastic view of RCTs.

Among mainstream economists, this uncritical attitude towards RCTs has become standard. Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments, RCTs — can help us to answer questions concerning the external validity of economic models. In their view, such methods are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’

When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ’empirical turn’ in economics. Read more…