Archive

Author Archive

On the difference between econometrics and data science

January 22, 2021 Leave a comment

from Lars Syll

Causality in the social sciences can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. The analysis of variation can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different possible alternative explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference. Read more…

Leontief’s devastating critique of econom(etr)ics

January 19, 2021 5 comments

from Lars Syll

Much of current academic teaching and research has been criticized for its lack of relevance, that is, of immediate practical impact … I submit that the consistently indifferent performance in practical applications is in fact a symptom of a fundamental imbalance in the present state of our discipline. The weak and all too slowly growing empirical foundation clearly cannot support the proliferating superstructure of pure, or should I say, speculative economic theory …

Uncritical enthusiasm for mathematical formulation tends often to conceal the ephemeral substantive content of the argument behind the formidable front of algebraic signs … In the presentation of a new model, attention nowadays is usually centered on a step-by-step derivation of its formal properties. But if the author — or at least the referee who recommended the manuscript for publication — is technically competent, such mathematical manipulations, however long and intricate, can even without further checking be accepted as correct. Nevertheless, they are usually spelled out at great length. By the time it comes to interpretation of the substantive conclusions, the assumptions on which the model has been based are easily forgotten. But it is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends.

What is really needed, in most cases, is a very difficult and seldom very neat assessment and verification of these assumptions in terms of observed facts. Here mathematics cannot help and because of this, the interest and enthusiasm of the model builder suddenly begins to flag: “If you do not like my set of assumptions, give me another and I will gladly make you another model; have your pick.” …

But shouldn’t this harsh judgment be suspended in the face of the impressive volume of econometric work? The answer is decidedly no. This work can be in general characterized as an attempt to compensate for the glaring weakness of the data base available to us by the widest possible use of more and more sophisticated statistical techniques. Alongside the mounting pile of elaborate theoretical models we see a fast-growing stock of equally intricate statistical tools. These are intended to stretch to the limit the meager supply of facts … Like the economic models they are supposed to implement, the validity of these statistical tools depends itself on the acceptance of certain convenient assumptions pertaining to stochastic properties of the phenomena which the particular models are intended to explain; assumptions that can be seldom verified.

Wassily Leontief

A salient feature of modern mainstream economics is the idea of science advancing through the use of “successive approximations” whereby ‘small-world’ models become more and more relevant and applicable to the ‘large world’ in which we live. Is this really a feasible methodology? Yours truly thinks not. Read more…

Fooled by randomness

January 16, 2021 4 comments

from Lars Syll

A non-trivial part of teaching statistics to social science students consists of teaching them to perform significance testing. A problem yours truly has noticed repeatedly over the years, however, is that no matter how careful you try to be in explicating what the probabilities generated by these statistical tests — p-values — really are, most students still misinterpret them.

A couple of years ago I gave a statistics course for the Swedish National Research School in History, and at the exam I asked the students to explain how one should correctly interpret p-values. Although the correct definition is p(data|null hypothesis), a majority of the students either misinterpreted the p-value as the likelihood of a sampling error (which of course is wrong, since the very computation of the p-value is based on the assumption that sampling errors are what cause the sample statistics not to coincide with the null hypothesis) or took it to be the probability of the null hypothesis being true, given the data (which of course is also wrong, since that is p(null hypothesis|data) rather than the correct p(data|null hypothesis)). Read more…
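Since the distinction between p(data|null hypothesis) and p(null hypothesis|data) trips up even advanced students, here is a minimal simulation sketch (my own illustration in Python, not part of the exam or the post) of what the p-value does and does not measure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate many studies in which the null hypothesis is exactly true:
# both groups are drawn from the same normal distribution.
p_values = []
for _ in range(10_000):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    p_values.append(p)
p_values = np.array(p_values)

# The p-value is p(data at least this extreme | null hypothesis). Under a true
# null it is (approximately) uniformly distributed, so about 5% of studies
# "reach significance" at the 5% level purely through sampling variation.
print("share of p < 0.05 when the null is true:", (p_values < 0.05).mean())

# It is NOT p(null hypothesis | data): that quantity also depends on the prior
# plausibility of the null and of the alternatives, which the p-value
# calculation never uses.
```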

Garbage-can econometrics

January 14, 2021 15 comments

from Lars Syll

When no formal theory is available, as is often the case, then the analyst needs to justify statistical specifications by showing that they fit the data. That means more than just “running things.” It means careful graphical and crosstabular analysis …

When I present this argument … one or more scholars say, “But shouldn’t I control for everything I can? If not, aren’t my regression coefficients biased due to excluded variables?” But this argument is not as persuasive as it may seem initially.

First of all, if what you are doing is mis-specified already, then adding or excluding other variables has no tendency to make things consistently better or worse. The excluded variable argument only works if you are sure your specification is precisely correct with all variables included. But no one can know that with more than a handful of explanatory variables.

Still more importantly, big, mushy regression and probit equations seem to need a great many control variables precisely because they are jamming together all sorts of observations that do not belong together. Countries, wars, religious preferences, education levels, and other variables that change people’s coefficients are “controlled” with dummy variables that are completely inadequate to modeling their effects. The result is a long list of independent variables, a jumbled bag of nearly unrelated observations, and often, a hopelessly bad specification with meaningless (but statistically significant with several asterisks!) results.

Christopher H. Achen

This article is one of my absolute favourites. Why? Because it reaffirms yours truly’s view that since there is no absolutely certain knowledge at hand in social sciences — including economics — explicit argumentation and justification ought to play an extremely important role if purported knowledge claims are to be sustainably warranted. As Achen puts it — without careful supporting arguments, “just dropping variables into SPSS, STATA, S or R programs accomplishes nothing.”
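Achen’s point about mushy control variables can be made concrete with a small simulation, a sketch of my own construction (assuming numpy and statsmodels, not anything taken from his article): two groups in which the regressor works very differently are pooled and “controlled” with a dummy, and the pooled coefficient comes out precisely estimated yet describes neither group.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1_000

# Two groups in which x affects y very differently.
group = rng.integers(0, 2, size=n)               # 0 or 1
x = rng.normal(size=n)
slope = np.where(group == 0, 3.0, 0.0)           # effect of 3 in one group, 0 in the other
y = slope * x + rng.normal(size=n)

# "Control for group" with a dummy and run one big pooled regression.
X = sm.add_constant(np.column_stack([x, group]))
pooled = sm.OLS(y, X).fit()
print("pooled slope on x:", pooled.params[1])    # about 1.5: highly significant,
print("its p-value:", pooled.pvalues[1])         # yet it describes neither group

# Separate, correctly specified regressions recover the real story.
for g in (0, 1):
    m = sm.OLS(y[group == g], sm.add_constant(x[group == g])).fit()
    print(f"group {g} slope:", m.params[1])
```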

Econometrics and the challenge of regression specification

January 11, 2021 Leave a comment

from Lars Syll

Most work in econometrics and regression analysis is — still — done on the assumption that the researcher has a theoretical model that is ‘true.’ Based on this belief of having a correct specification for an econometric model or running a regression, one proceeds as if the only problems remaining to solve have to do with measurement and observation.

When things sound too good to be true, they usually aren’t. And that goes for econometric wet dreams too. The snag is, of course, that there is precious little to support the perfect specification assumption. Looking around in social science and economics we don’t find a single regression or econometric model that lives up to the standards set by the ‘true’ theoretical model — and there is precious little that gives us reason to believe things will be different in the future.

To think that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified is not only a belief without support but a belief impossible to support. Read more…
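A minimal sketch of my own (not from the post) of why the ‘true model’ assumption matters: here the fitted regression omits a relevant variable that is correlated with the included one, so the estimated coefficient is biased no matter how large the sample, and nothing in the regression output signals the problem.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 100_000

x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)     # relevant variable the analyst leaves out
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

misspecified = sm.OLS(y, sm.add_constant(x1)).fit()
print("estimated slope on x1:", misspecified.params[1])   # about 1.8, not the structural 1.0
```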

Overconfident economists

January 9, 2021 3 comments

from Lars Syll

Worst of all, when we feel pumped up with our progress, a tectonic shift can occur, like the Panic of 2008, making it seem as though our long journey has left us disappointingly close to the State of Complete Ignorance whence we began …

It often takes years down the Path, but sooner or later, someone articulates the concerns that gnaw away in each of us and asks if the Assumptions are valid …

It would be much healthier for all of us if we could accept our fate, recognize that perfect knowledge will be forever beyond our reach and find happiness with what we have …

Can we economists agree that it is extremely hard work to squeeze truths from our data sets and what we genuinely understand will remain uncomfortably limited? We need words in our methodological vocabulary to express the limits … Those who think otherwise should be required to wear a scarlet-letter O around their necks, for “overconfidence.”

Ed Leamer

Many economists regularly pretend to know more than they do. Often this is a conscious strategy to promote their authority in politics and among policy makers. When economists present their models it should be mandatory that the models have warning labels to alert readers to the limited real-world relevance of models building on assumptions known to be absurdly unreal. Read more…

NAIRU — closer to religion than science

January 5, 2021 19 comments

from Lars Syll

Once we see how weak the foundations for the natural rate of unemployment are, other arguments for pursuing rates of unemployment economists once thought impossible become more clear. Wages can increase at the expense of corporate profits without causing inflation …

The harder we push on improving output and employment, the more we learn how much we can achieve on those two fronts. That hopeful idea is the polar opposite of a natural, unalterable rate of unemployment. And it’s an idea and attitude that we need to embrace if we’re to have a shot at fully recovering from the wreckage of the Great Recession.

Mike Konczal / Vox

NAIRU does not hold water, for the simple reason that it has not existed for the last 50 years. But still today ‘New Keynesian’ macroeconomists use it — and its cousin the Phillips curve — as a fundamental building block in their models. Why? Because without it ‘New Keynesians’ have to give up their — again and again empirically falsified — neoclassical view of the long-run neutrality of money and the simplistic idea of inflation as an excess-demand phenomenon. Read more…
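For readers who want the rejected building block spelled out, the textbook expectations-augmented Phillips curve behind the NAIRU story runs (my summary of the standard formulation, not an equation taken from the post):

\pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}) + \varepsilon_t, \qquad \beta > 0,

where u^{*} is the NAIRU. With adaptive expectations, \pi_t^{e} = \pi_{t-1}, this becomes \Delta\pi_t = -\beta\,(u_t - u^{*}) + \varepsilon_t, so inflation is supposed to keep accelerating whenever unemployment is held below u^{*}; it is precisely this mechanism that Konczal and Syll find unsupported by the data of the last half-century.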

Mainstream economics finally made it …

January 3, 2021 16 comments

from Lars Syll

out of frame

Wooh! So this is reality!

On logic and science

December 31, 2020 17 comments

from Lars Syll

Suppose you conducted an observational study to identify the effect of heart transplant A on death Y and that you assumed no unmeasured confounding given disease severity L. A critic of your study says “the inferences from this observational study may be incorrect because of potential confounding.” The critic is not making a scientific statement, but a logical one. Since the findings from any observational study may be confounded, it is obviously true that those of your study can be confounded. If the critic’s intent was to provide evidence about the shortcomings of your particular study, he failed. His criticism is noninformative because he simply restated a characteristic of observational research that you and the critic already knew before the study was conducted.

To appropriately criticize your study, the critic needs to engage in a truly scientific conversation. For example, the critic may cite experimental or observational findings that contradict your findings, or he can say something along the lines of “the inferences from this observational study may be incorrect because of potential confounding due to cigarette smoking, a common cause through which a backdoor path may remain open”. This latter option provides you with a testable challenge to your assumption of no unmeasured confounding. The burden of the proof is again yours.
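To see why the smoking version of the criticism is the useful one, here is a small toy simulation of my own (not an example from the book) in which cigarette smoking is a common cause that opens a backdoor path between treatment and outcome:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

smoker = rng.random(n) < 0.3                       # unmeasured common cause
A = rng.random(n) < np.where(smoker, 0.6, 0.2)     # smokers are treated more often
Y = rng.random(n) < np.where(smoker, 0.30, 0.10)   # smoking raises mortality; A itself has no effect

crude = Y[A].mean() - Y[~A].mean()
print("crude risk difference:", round(crude, 3))   # clearly above zero: confounded

# Conditioning on smoking closes the backdoor path; within each stratum the
# risk difference is roughly zero, matching the true (null) causal effect.
for s in (True, False):
    diff = Y[A & (smoker == s)].mean() - Y[~A & (smoker == s)].mean()
    print("risk difference among", "smokers:" if s else "non-smokers:", round(diff, 3))
```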

To be ‘analytical’ and ‘logical’ is something most people find commendable. These words have a positive connotation. Scientists think more deeply than most other people because they use ‘logical’ and ‘analytical’ methods. In dictionaries, logic is often defined as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as having to do with “breaking something down.” Read more…

Econometrics — the art of pulling a rabbit out of a hat

December 26, 2020 8 comments

from Lars Syll

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is — as Joan Robinson once had it — like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics, and as Clint Ballinger highlights, this is a particularly questionable rabbit-pulling assumption:

Inferential statistics are based on taking a random sample from a larger population … and attempting to draw conclusions about a) the larger population from that data and b) the probability that the relations between measured variables are consistent or are artifacts of the sampling procedure.

However, in political science, economics, development studies and related fields the data often represents as complete an amount of data as can be measured from the real world (an ‘apparent population’). Read more…

Why everything we know about modern economics is wrong

December 24, 2020 115 comments

from Lars Syll

The proposition is about as outlandish as it sounds: Everything we know about modern economics is wrong. And the man who says he can prove it doesn’t have a degree in economics. But Ole Peters is no ordinary crank. A physicist by training, his theory draws on research done in close collaboration with the late Nobel laureate Murray Gell-Mann, father of the quark …

His beef is that all too often, economic models assume something called “ergodicity.” That is, the average of all possible outcomes of a given situation informs how any one person might experience it. But that’s often not the case, which Peters says renders much of the field’s predictions irrelevant in real life. In those instances, his solution is to borrow math commonly used in thermodynamics to model outcomes using the correct average …

If Peters is right — and it’s a pretty ginormous if — the consequences are hard to overstate. Simply put, his “fix” would upend three centuries of economic thought, and reshape our understanding of the field as well as everything it touches …

Peters asserts his methods will free economics from thinking in terms of expected values over non-existent parallel universes and focus on how people make decisions in this one. His theory will also eliminate the need for the increasingly elaborate “fudges” economists use to explain away the inconsistencies between their models and reality.

Brandon Kochkodin / BloombergQuint

Ole Peters’ fundamental critique of (mainstream) economics involves arguments about ergodicity and the all-important difference between time averages and ensemble averages. These are difficult concepts that many students of economics have problems understanding. So let me just try to explain the meaning of these concepts by means of a couple of simple examples. Read more…
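Before the fold, here is a minimal simulation sketch of the standard multiplicative-gamble illustration (my own rendering of the familiar example, not necessarily the ones given after “Read more”): each round multiplies your wealth by 1.5 on heads and 0.6 on tails.

```python
import numpy as np

rng = np.random.default_rng(3)
players, rounds = 100_000, 30

# Each player's wealth is multiplied by 1.5 (heads) or 0.6 (tails) every round.
factors = np.where(rng.random((players, rounds)) < 0.5, 1.5, 0.6)
wealth = factors.prod(axis=1)

# Ensemble average: the expected factor per round is 0.5*1.5 + 0.5*0.6 = 1.05,
# so the mean over many parallel players grows like 1.05**rounds.
print("theoretical ensemble mean:", 1.05 ** rounds)
print("simulated ensemble mean:  ", wealth.mean())

# Time average: a single player's long-run growth factor per round is the
# geometric mean sqrt(1.5 * 0.6), about 0.95 < 1, so the typical trajectory
# decays even though the ensemble average grows.
print("per-round time-average growth factor:", np.sqrt(1.5 * 0.6))
print("median individual wealth:", np.median(wealth))
```

The gap between the growing mean and the shrinking median is exactly the gap between ensemble and time averages that Peters says economics has papered over.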

What can RCTs tell us?

December 23, 2020 Leave a comment

from Lars Syll

We seek to promote an approach to RCTs that is tentative in its claims and that avoids simplistic generalisations about causality and replaces these with more nuanced and grounded accounts that acknowledge uncertainty, plausibility and statistical probability …

Whilst promoting the use of RCTs in education we also need to be acutely aware of their limitations … Whilst the strength of an RCT rests on strong internal validity, the Achilles heel of the RCT is external validity … Within education and the social sciences a range of cultural conditions is likely to influence the external validity of trial results across different contexts. It is precisely for this reason that qualitative components of an evaluation, and particularly the development of plausible accounts of generative mechanisms, are so important …

Highly recommended reading.

Nowadays it is widely believed among mainstream economists that Read more…

MMT perspectives on rising interest rates

December 21, 2020 8 comments

from Lars Syll

The Bank of England is today wholly-owned by the UK government, and no other body is allowed to create UK pounds. It can create digital pounds in the payments system that it runs, thus marking up and down the accounts of banks, the government and other public institutions. It also acts as the bank of the government, facilitating its payments. The Bank of England also determines the bank rate, which is the interest rate it pays to commercial banks that hold money (reserves) at the Bank of England …

The interest rate that the UK government pays is a policy variable determined by the Bank of England. Furthermore, it is not the Bank of England’s remit to bankrupt the government that owns it. The institutional setup ensures that the Bank of England supports the liquidity and solvency of the government to the extent that it becomes an issuer of currency itself. Selling government bonds, it can create whatever amount of pounds it deems necessary to fulfil its functions. Given that the Bank of England stands ready to purchase huge amounts of gilts on the secondary market (for “used” gilts), it is clear to investors that gilts are just as good as reserves. There is no risk of default …

The government of the UK cannot “run out of money”. Read more…

Arrow-Debreu obsession

December 17, 2020 14 comments

from Lars Syll

I’ve never yet been able to understand why the economics profession was/is so impressed by the Arrow-Debreu results. They establish that in an extremely abstract model of an economy, there exists a unique equilibrium with certain properties. The assumptions required to obtain the result make this economy utterly unlike anything in the real world. In effect, it tells us nothing at all. So why pay any attention to it? The attention, I suspect, must come from some prior fascination with the idea of competitive equilibrium, and a desire to see the world through that lens, a desire that is more powerful than the desire to understand the real world itself. This fascination really does hold a kind of deranging power over economic theorists, so powerful that they lose the ability to think in even minimally logical terms; they fail to distinguish necessary from sufficient conditions, and manage to overlook the issue of the stability of equilibria.

Mark Buchanan

Almost a century and a half after Léon Walras founded neoclassical general equilibrium theory, economists still have not been able to show that markets move economies to equilibria. Read more…

Why economic models do not explain

December 14, 2020 44 comments

from Lars Syll

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. Mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why then do mainstream economists keep on pursuing this modelling project?

Mainstream ‘as if’ models are based on the logic of idealization and a set of tight axiomatic and ‘structural’ assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the assumptions are true, the conclusions necessarily follow. But it is a poor guide for real-world systems. As Hans Albert has it on this ‘style of thought’:

A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content … Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

The way axioms and theorems are formulated in mainstream economics often leaves their specification with almost no restrictions whatsoever, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and theory without informational content never risks being empirically tested and found falsified. Used in mainstream ‘thought experimental’ activities, it may, of course, be very ‘handy’, but it is totally void of any empirical value. Read more…

Statistics and causation — a critical review

December 12, 2020 1 comment

from Lars Syll

Causal inferences can be drawn from nonexperimental data. However, no mechanical rules can be laid down for the activity. Since Hume, that is almost a truism. Instead, causal inference seems to require an enormous investment of skill, intelligence, and hard work. Many convergent lines of evidence must be developed. Natural variation needs to be identified and exploited. Data must be collected. Confounders need to be considered. Alternative explanations have to be exhaustively tested. Before anything else, the right question needs to be framed. Naturally, there is a desire to substitute intellectual capital for labor. That is why investigators try to base causal inference on statistical models. The technology is relatively easy to use, and promises to open a wide variety of questions to the research effort. However, the appearance of methodological rigor can be deceptive. The models themselves demand critical scrutiny. Mathematical equations are used to adjust for confounding and other sources of bias. These equations may appear formidably precise, but they typically derive from many somewhat arbitrary choices. Which variables to enter in the regression? What functional form to use? What assumptions to make about parameters and error terms? These choices are seldom dictated either by data or prior scientific knowledge. That is why judgment is so critical, the opportunity for error so large, and the number of successful applications so limited.

David Freedman

Causality in social sciences — including economics — can never solely be a question of statistical inference. Read more…

Natural experiments in the social sciences

December 8, 2020 2 comments

from Lars Syll

How, then, can social scientists best make inferences about causal effects? One option is true experimentation … Random assignment ensures that any differences in outcomes between the groups are due either to chance error or to the causal effect … If the experiment were to be repeated over and over, the groups would not differ, on average, in the values of potential confounders. Thus, the average of the average difference of group outcomes, across these many experiments, would equal the true difference in outcomes … The key point is that randomization is powerful because it obviates confounding …
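Dunning’s “obviates confounding” claim can be illustrated with a tiny simulation of my own (not taken from his book): across repeated random assignments a background trait is balanced between groups on average, so the simple difference in means centres on the true effect.

```python
import numpy as np

rng = np.random.default_rng(4)
n, true_effect, reps = 200, 1.0, 5_000

estimates = []
for _ in range(reps):
    confounder = rng.normal(size=n)            # some background trait
    treated = rng.permutation(n) < n // 2      # random assignment
    y = true_effect * treated + 2.0 * confounder + rng.normal(size=n)
    estimates.append(y[treated].mean() - y[~treated].mean())

# Any single experiment can be off by chance, but the average over many
# (hypothetical) repetitions recovers the true effect of 1.0.
print("mean estimated effect over repetitions:", np.mean(estimates))
```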

Thad Dunning’s book is a very useful guide for social scientists interested in research methodology in general and natural experiments in particular. Dunning argues that since random or as-if random assignment in natural experiments obviates the need for controlling potential confounders, this kind of “simple and transparent” design-based research method is preferable to more traditional multivariate regression analysis, where the controlling only comes in ex post via statistical modelling.

But — there is always a but … Read more…

Causality and analysis of variation

December 7, 2020 8 comments

from Lars Syll

Modern econometrics is fundamentally based on assuming — usually without any explicit justification — that we can gain causal knowledge by considering independent variables that may have an impact on the variation of a dependent variable. This is, however, far from self-evident. Often the fundamental causes are constant forces that are not amenable to the kind of analysis econometrics supplies us with. As Stanley Lieberson has it in Making It Count:

One can always say whether, in a given empirical context, a given variable or theory accounts for more variation than another. But it is almost certain that the variation observed is not universal over time and place. Hence the use of such a criterion first requires a conclusion about the variation over time and place in the dependent variable. If such an analysis is not forthcoming, the theoretical conclusion is undermined by the absence of information …

Moreover, it is questionable whether one can draw much of a conclusion about causal forces from simple analysis of the observed variation … To wit, it is vital that one have an understanding, or at least a working hypothesis, about what is causing the event per se; variation in the magnitude of the event will not provide the answer to that question.

Trygve Haavelmo was making a somewhat similar point back in 1941, when criticizing the treatment of the interest variable in Tinbergen’s regression analyses. The regression coefficient of the interest rate variable being zero was, according to Haavelmo, not sufficient for inferring that “variations in the rate of interest play only a minor role, or no role at all, in the changes in investment activity.” Interest rates may very well play a decisive indirect role by influencing other causally effective variables. And: Read more…
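To make Haavelmo’s indirect-role point concrete, here is a toy sketch of my own (not Haavelmo’s): the interest rate acts on investment only through credit conditions, so its own regression coefficient comes out roughly zero even though changing it would change investment.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 50_000

r = rng.normal(size=n)                       # interest rate (standardised)
c = -1.0 * r + 0.1 * rng.normal(size=n)      # credit conditions, driven almost entirely by r
investment = 2.0 * c + rng.normal(size=n)    # r acts on investment only through c

X = sm.add_constant(np.column_stack([r, c]))
print(sm.OLS(investment, X).fit().params)    # coefficient on r is about 0, on c about 2
```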

Testing game theory

December 5, 2020 6 comments

from Lars Syll

The “prisoner’s dilemma” is a familiar concept to just about everyone who took Econ 101 …

Yet no one’s ever actually run the experiment on real prisoners before, until two University of Hamburg economists tried it out in a recent study comparing the behavior of inmates and students.

Surprisingly, for the classic version of the game, prisoners were far more cooperative than expected.

Menusch Khadjavi and Andreas Lange put the famous game to the test for the first time ever, putting a group of prisoners in Lower Saxony’s primary women’s prison, as well as students, through both simultaneous and sequential versions of the game …

They expected, building off of game theory and behavioral economic research that shows humans are more cooperative than the purely rational model that economists traditionally use, that there would be a fair amount of first-mover cooperation, even in the simultaneous simulation where there’s no way to react to the other player’s decisions.

And even in the sequential game, where you get a higher payoff for betraying a cooperative first mover, a fair amount will still reciprocate.

As for the difference between student and prisoner behavior, you’d expect that a prison population might be more jaded and distrustful, and therefore more likely to defect.

The results went exactly the other way …

The paper … demonstrates that prisoners aren’t necessarily as calculating, self-interested, and untrusting as you might expect, and, as behavioral economists have argued for years, as mathematically interesting as Nash equilibria might be, they don’t line up with real behavior all that well.

Business Insider

Many mainstream economists — still — think that game theory is useful and can be applied to real life and give important and interesting results. Read more…
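For readers who have forgotten Econ 101, here is a minimal sketch (illustrative payoffs of my own choosing, not those used by Khadjavi and Lange) of why defection is the game-theoretic prediction that the prisoners confounded:

```python
# Payoffs are (row player, column player); C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action: str) -> str:
    """Row player's payoff-maximising reply to a fixed opponent action."""
    return max(("C", "D"), key=lambda own: payoffs[(own, opponent_action)][0])

for other in ("C", "D"):
    print("best response to", other, "is", best_response(other))   # "D" both times

# Mutual defection (D, D) is therefore the unique Nash equilibrium, even though
# mutual cooperation (C, C) pays both players more, which is why the observed
# cooperation rates among inmates and students sit so badly with the purely
# 'rational' prediction.
```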

Leontief and the sorry state of economics

December 1, 2020 20 comments

from Lars Syll

Page after page of professional economic journals are filled with mathematical formulas leading the reader from sets of more or less plausible but entirely arbitrary assumptions to precisely stated but irrelevant theoretical conclusions …

Year after year economic theorists continue to produce scores of mathematical models and to explore in great detail their formal properties; and the econometricians fit algebraic functions of all possible shapes to essentially the same sets of data without being able to advance, in any perceptible way, a systematic understanding of the structure and the operations of a real economic system.

Wassily Leontief

Mainstream economics has Read more…