Author Archive

Real business cycles — nonsense on stilts

June 16, 2021 5 comments

from Lars Syll

They try to explain business cycles solely as problems of information, such as asymmetries and imperfections in the information agents have. Those assumptions are just as arbitrary as the institutional rigidities and inertia they find objectionable in other theories of business fluctuations … I try to point out how incapable the new equilibrium business cycle models are of explaining the most obvious observed facts of cyclical fluctuations … I don’t think that models so far from realistic description should be taken seriously as a guide to policy … I don’t think that there is a way to write down any model which on the one hand respects the possible diversity of agents in taste, circumstances, and so on, and on the other hand also grounds behavior rigorously in utility maximization and which has any substantive content to it.

James Tobin

Real business cycle theory (RBC) basically says that Read more…

Discrimination and the use of ‘statistical controls’

June 14, 2021 3 comments

from Lars Syll

The gender pay gap is a fact that, sad to say, to a non-negligible extent is the result of discrimination. And even though many women are not deliberately discriminated against, but rather self-select into lower-wage jobs, this in no way magically explains away the discrimination gap. As decades of socialization research has shown, women may be ‘structural’ victims of impersonal social mechanisms that in different ways aggrieve them. Wage discrimination is unacceptable. Wage discrimination is a shame.

You see it all the time in studies. “We controlled for…” And then the list starts … The more things you can control for, the stronger your study is — or, at least, the stronger your study seems. Controls give the feeling of specificity, of precision. But sometimes, you can control for too much. Sometimes you end up controlling for the thing you’re trying to measure …

An example is research around the gender wage gap, which tries to control for so many things that it ends up controlling for the thing it’s trying to measure …

Take hours worked, which is a standard control in some of the more sophisticated wage gap studies. Women tend to work fewer hours than men. If you control for hours worked, then some of the gender wage gap vanishes. As Yglesias wrote, it’s “silly to act like this is just some crazy coincidence. Women work shorter hours because as a society we hold women to a higher standard of housekeeping, and because they tend to be assigned the bulk of childcare responsibilities.”

Controlling for hours worked, in other words, is at least partly controlling for how gender works in our society. It’s controlling for the thing that you’re trying to isolate.

Ezra Klein

Read more…
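To make Klein’s point concrete, here is a minimal simulation sketch — not from the post, with all numbers invented for illustration: gender affects hours worked, hours affect pay, and there is also a direct discrimination penalty. “Controlling for” hours then strips out the part of the gap that operates through hours — the part the excerpt argues is itself gendered.

```python
# A minimal sketch (invented numbers): controlling for a gendered mediator (hours)
# absorbs part of the very gap one is trying to measure.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
female = rng.integers(0, 2, n)               # 1 = woman, 0 = man

# Hours are themselves affected by gender (the mediator): women work fewer paid
# hours because of how care work is allocated, plus random variation.
hours = 40 - 4 * female + rng.normal(0, 5, n)

# Weekly pay depends on hours worked, plus a direct discrimination penalty of 3 units.
wage = 10 + 0.5 * hours - 3 * female + rng.normal(0, 5, n)

def ols(y, X):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    Xm = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    return beta

raw = ols(wage, [female])            # total gender gap (direct + via hours)
ctrl = ols(wage, [female, hours])    # gap "controlling for" hours (direct only)

print(f"raw gender gap:                  {raw[1]:+.2f}")   # roughly -5
print(f"gap after controlling for hours: {ctrl[1]:+.2f}")  # roughly -3
```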

On the limits of ‘mediation analysis’ and ‘statistical causality’

June 11, 2021 1 comment

from Lars Syll

“Mediation analysis” is this thing where you have a treatment and an outcome and you’re trying to model how the treatment works: how much does it directly affect the outcome, and how much is the effect “mediated” through intermediate variables …

In the real world, it’s my impression that almost all the mediation analyses that people actually fit in the social and medical sciences are misguided: lots of examples where the assumptions aren’t clear and where, in any case, coefficient estimates are hopelessly noisy and where confused people will over-interpret statistical significance …

More and more I’ve been coming to the conclusion that the standard causal inference paradigm is broken … So how to do it? I don’t think traditional path analysis or other multivariate methods of the throw-all-the-data-in-the-blender-and-let-God-sort-em-out variety will do the job. Instead we need some structure and some prior information.

Andrew Gelman

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Read more…
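As an illustration of how fragile such exercises can be, here is a minimal sketch of the textbook product-of-coefficients mediation decomposition — not Gelman’s own method, and with all parameter values invented. Repeating a small study many times shows how noisy the “indirect” and “direct” estimates are, which is exactly the kind of noise that invites over-interpretation of statistical significance.

```python
# A minimal sketch of a textbook mediation decomposition (invented parameters):
# treatment T affects mediator M; T and M affect outcome Y.  The "indirect" effect
# is estimated as a*b (T->M times M->Y given T) and the "direct" effect as c'.
import numpy as np

rng = np.random.default_rng(1)

def one_study(n=100, a=0.3, b=0.4, c_direct=0.2):
    T = rng.integers(0, 2, n).astype(float)
    M = a * T + rng.normal(0, 1, n)
    Y = c_direct * T + b * M + rng.normal(0, 1, n)

    def ols(y, X):
        Xm = np.column_stack([np.ones(n)] + list(X))
        return np.linalg.lstsq(Xm, y, rcond=None)[0]

    a_hat = ols(M, [T])[1]
    coefs = ols(Y, [T, M])
    c_hat, b_hat = coefs[1], coefs[2]
    return a_hat * b_hat, c_hat          # (indirect, direct) estimates

indirect, direct = np.array([one_study() for _ in range(2000)]).T
print(f"true indirect = {0.3 * 0.4:.2f}; estimates: mean {indirect.mean():.2f}, sd {indirect.std():.2f}")
print(f"true direct   = 0.20; estimates: mean {direct.mean():.2f}, sd {direct.std():.2f}")
```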

How statistics can be misleading

June 5, 2021 2 comments

from Lars Syll

From a theoretical perspective, Simpson’s paradox importantly shows that causality can never be reduced to a question of statistics or probabilities.

To understand causality we always have to relate it to a specific causal structure. Statistical correlations are never enough. No structure, no causality. Read more…
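A worked numerical sketch of Simpson’s paradox makes the point; the figures below are adapted from the well-known kidney-stone example and are for illustration only. The treatment looks better within each severity group but worse in the pooled table — and nothing in the frequencies themselves tells you which comparison answers the causal question.

```python
# Simpson's paradox in a small recovery table (illustrative figures adapted from the
# classic kidney-stone example): the treatment wins in both subgroups yet loses overall,
# because severe cases were disproportionately given the treatment.
recoveries = {
    #                      (recovered, total)
    ("treated",   "mild"):   ( 81,  87),
    ("untreated", "mild"):   (234, 270),
    ("treated",   "severe"): (192, 263),
    ("untreated", "severe"): ( 55,  80),
}

def rate(group, severity=None):
    keys = [k for k in recoveries if k[0] == group and (severity is None or k[1] == severity)]
    rec = sum(recoveries[k][0] for k in keys)
    tot = sum(recoveries[k][1] for k in keys)
    return rec / tot

for sev in ("mild", "severe"):
    print(f"{sev:>6}: treated {rate('treated', sev):.0%} vs untreated {rate('untreated', sev):.0%}")
print(f"pooled: treated {rate('treated'):.0%} vs untreated {rate('untreated'):.0%}")
```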

Causal inference from observational data

June 4, 2021 2 comments

from Lars Syll

Researchers often determine the individual’s contemporary IQ or IQ earlier in life, socioeconomic status of the family of origin, living circumstances when the individual was a child, number of siblings, whether the family had a library card, educational attainment of the individual, and other variables, and put all of them into a multiple-regression equation predicting adult socioeconomic status or income or social pathology or whatever. Researchers then report the magnitude of the contribution of each of the variables in the regression equation, net of all the others (that is, holding constant all the others). It always turns out that IQ, net of all the other variables, is important to outcomes. But … the independent variables pose a tangle of causality – with some causing others in goodness-knows-what ways and some being caused by unknown variables that have not even been measured. Higher socioeconomic status of parents is related to educational attainment of the child, but higher-socioeconomic-status parents have higher IQs, and this affects both the genes that the child has and the emphasis that the parents are likely to place on education and the quality of the parenting with respect to encouragement of intellectual skills and so on. So statements such as “IQ accounts for X percent of the variation in occupational attainment” are built on the shakiest of statistical foundations. What nature hath joined together, multiple regressions cannot put asunder.

Now, I think this is right as far as it goes, although it would certainly have strengthened Nisbett’s argumentation if he had elaborated more on the methodological question around causality, or at least had given some mathematical-statistical-econometric references. Unfortunately, his alternative approach is not more convincing than regression analysis. Read more…
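To see the “tangle of causality” Nisbett describes, here is a minimal simulation sketch with invented structural coefficients: parental SES raises both IQ and education, and education itself depends on IQ. The estimated “effect of IQ, net of the others” then shifts with every choice of control set, so no single regression coefficient answers a well-posed causal question without the structural story behind it.

```python
# A minimal sketch (invented coefficients): the coefficient on IQ depends entirely on
# which of the causally entangled variables happen to be included in the equation.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
ses_parents = rng.normal(0, 1, n)
iq          = 0.6 * ses_parents + rng.normal(0, 1, n)
education   = 0.5 * ses_parents + 0.5 * iq + rng.normal(0, 1, n)
income      = 0.3 * education + 0.2 * ses_parents + 0.1 * iq + rng.normal(0, 1, n)

def ols(y, *X):
    Xm = np.column_stack([np.ones(n), *X])
    return np.linalg.lstsq(Xm, y, rcond=None)[0]

print("estimated IQ coefficient when controlling for ...")
print(f"  nothing:                  {ols(income, iq)[1]:.2f}")
print(f"  education:                {ols(income, iq, education)[1]:.2f}")
print(f"  education + parental SES: {ols(income, iq, education, ses_parents)[1]:.2f}")
# Three different numbers, none of which by itself answers "what does IQ do?"
```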

Is causality only in the mind?

June 2, 2021 2 comments

from Lars Syll

I make two main points that are firmly anchored in the econometric tradition. The first is that causality is a property of a model of hypotheticals. A fully articulated model of the phenomena being studied precisely defines hypothetical or counterfactual states. A definition of causality drops out of a fully articulated model as an automatic by-product. A model is a set of possible counterfactual worlds constructed under some rules. The rules may be the laws of physics, the consequences of utility maximization, or the rules governing social interactions, to take only three of many possible examples. A model is in the mind. As a consequence, causality is in the mind.

James Heckman

So, according to this ‘Nobel prize’ winning econometrician, “causality is in the mind.” But is that a tenable view? Yours truly thinks not. If economists and social scientists subscribed to that view, there would be precious little reason to be interested in questions of causality at all. And it sure doesn’t suffice just to say that all science is predicated on assumptions. To most of us, models are seen as ‘vehicles’ or ‘instruments’ by which we represent causal processes and structures that exist and operate in the real world. As we all know, models often do not succeed in representing or explaining these processes and structures, but if we considered them nothing but figments of our minds, then maybe we ought to reconsider why we are in the science business at all … Read more…

Why econometric models by necessity are endlessly misspecified

May 31, 2021 2 comments

from Lars Syll

The impossibility of proper specification is true generally in regression analyses across the social sciences, whether we are looking at the factors affecting occupational status, voting behavior, etc. The problem is that as implied by the three conditions for regression analyses to yield accurate, unbiased estimates, you need to investigate a phenomenon that has underlying mathematical regularities – and, moreover, you need to know what they are. Neither seems true. I have no reason to believe that the way in which multiple factors affect earnings, student achievement, and GNP have some underlying mathematical regularity across individuals or countries. More likely, each individual or country has a different function, and one that changes over time. Even if there was some constancy, the processes are so complex that we have no idea of what the function looks like.

Researchers recognize that they do not know the true function and seem to treat, usually implicitly, their results as a good-enough approximation. But there is no basis for the belief that the results of what is run in practice is anything close to the underlying phenomenon, even if there is an underlying phenomenon. This just seems to be wishful thinking. Most regression analysis research doesn’t even pay lip service to theoretical regularities. But you can’t just regress anything you want and expect the results to approximate reality. And even when researchers take somewhat seriously the need to have an underlying theoretical framework – as they have, at least to some extent, in the examples of studies of earnings, educational achievement, and GNP that I have used to illustrate my argument – they are so far from the conditions necessary for proper specification that one can have no confidence in the validity of the results.

Steven J. Klees

Most work in econometrics and regression analysis rests on the assumption that the researcher has a theoretical model that is ‘true.’ Read more…
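Klees’s worry can be illustrated with a small simulation sketch (all functional forms invented): every unit has its own, slightly nonlinear “earnings function,” yet a single pooled linear regression still returns one tidy coefficient and a decent-looking fit — a number that corresponds to no unit’s actual function.

```python
# A minimal sketch (invented functional forms): heterogeneous, nonlinear unit-level
# relationships fitted with one pooled linear equation.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_obs = 50, 40
rows = []
for _ in range(n_units):
    slope = rng.uniform(-0.5, 2.0)           # each unit has its own slope ...
    curve = rng.uniform(0.0, 0.3)            # ... and its own curvature
    x = rng.normal(0, 1, n_obs)
    y = slope * x + curve * x**2 + rng.normal(0, 0.5, n_obs)
    rows.append(np.column_stack([x, y]))
x, y = np.vstack(rows).T

X = np.column_stack([np.ones_like(x), x])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - res[0] / ((y - y.mean()) ** 2).sum()
print(f"pooled linear 'effect' of x: {beta[1]:.2f}   (true unit slopes ranged over [-0.5, 2.0])")
print(f"pooled R^2: {r2:.2f}")
```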

Weekend read: Why economic models do not explain

May 28, 2021 5 comments

from Lars Syll

Analogue-economy models may picture Galilean thought experiments or they may describe credible worlds. In either case we have a problem in taking lessons from the model to the world. The problem is the venerable one of unrealistic assumptions, exacerbated in economics by the fact that the paucity of economic principles with serious empirical content makes it difficult to do without detailed structural assumptions. But the worry is not just that the assumptions are unrealistic; rather, they are unrealistic in just the wrong way.

Nancy Cartwright

One of the limitations of economics is the restricted possibility of performing experiments, forcing it to rely mainly on observational studies for knowledge of real-world economies. Read more…

Testing causal explanations in economics

May 27, 2021 2 comments

from Lars Syll

Third, explanations fail by question (3.1) [“are the factors cited as possible causes of an event in fact aspects of the situation in which that event occurred?”] where the factors invoked as possible causes are idealisations. No doubt this claim will be considered contentious by some economists, accustomed as they are to explanations based on such dramatic assumptions as rational expectations, single-agent ‘economies’, and two-commodity ‘worlds’. The issue here turns on the distinction between abstraction (passing over or omitting to mention aspects of the causal history) and idealisation (invoking entities that exist only in the realm of ideas, such as most limit types and what Mäki (1992) calls ‘theoretical isolations’). This distinction cannot be pursued here, but the general idea is that although every explanation involves abstraction insofar as we can never provide a complete list of the causes of any event, no genuine attempt at causal explanation can invoke as causes theoretical entities that have no existence other than in the minds and discourse of scientific investigators. For such entities cannot be aspects of real economic situations and are therefore ineligible as candidate causes. Explanations that invoke such entities therefore either fail, if offered as causal explanations in the sense I have described, or should be thought of as something other than causal explanations.

Jochen Runde

When it comes to modelling, yours truly does see the point about simplicity so often emphatically made by economists and econometricians — but only as long as it doesn’t impinge on our truth-seeking. Read more…

Functional finance and Ricardian equivalence

May 25, 2021 3 comments

from Lars Syll

According to Abba Lerner, the purpose of public debt is “to achieve a rate of interest which results in the most desirable level of investment.” He also maintained that an application of Functional finance will have a tendency to balance the budget in the long run:

Finally, there is no reason for assuming that, as a result of the continued application of Functional Finance to maintain full employment, the government must always be borrowing more money and increasing the national debt. There are a number of reasons for this.

First, full employment can be maintained by printing the money needed for it, and this does not increase the debt at all. It is probably advisable, however, to allow debt and money to increase together in a certain balance, as long as one or the other has to increase.

Second, since one of the greatest deterrents to private investment is the fear that the depression will come before the investment has paid for itself, the guarantee of permanent full employment will make private investment much more attractive, once investors have gotten over their suspicion of the new procedure. The greater private investment will diminish the need for deficit spending.

Third, as the national debt increases, and with it the sum of private wealth, there will be an increasing yield from taxes on higher incomes and inheritances, even if the tax rates are unchanged. These higher tax payments do not represent reductions of spending by the taxpayers. Therefore the government does not have to use these proceeds to maintain the requisite rate of spending, and can devote them to paying the interest on the national debt.

Fourth, as the national debt increases it acts as a self-equilibrating force, gradually diminishing the further need for its growth and finally reaching an equilibrium level where its tendency to grow comes completely to an end. The greater the national debt the greater is the quantity of private wealth. The reason for this is simply that for every dollar of debt owed by the government there is a private creditor who owns the government obligations (possibly through a corporation in which he has shares), and who regards these obligations as part of his private fortune. The greater the private fortunes the less is the incentive to add to them by saving out of current income …

Fifth, if for any reason the government does not wish to see private property grow too much … it can check this by taxing the rich instead of borrowing from them, in its program of financing government spending to maintain full employment. The rich will not reduce their spending significantly, and thus the effects on the economy, apart from the smaller debt, will be the same as if Money had been borrowed from them.

Abba Lerner

Even if most of today’s mainstream economists do not understand Lerner, there once was one who certainly did: Read more…
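Lerner’s fourth point — that the debt tends toward an equilibrium level rather than growing without limit — can be illustrated with a deliberately stylised sketch. All behavioural parameters below are invented: desired private saving out of full-employment income is assumed to fall as private wealth (here identified with the stock of government debt) rises, and the deficit each period is whatever is needed to absorb that desired saving.

```python
# A stylised sketch of Lerner's "self-equilibrating" debt (invented parameters):
# as private wealth (= government debt here) rises, desired saving falls, so the
# deficit needed to sustain full employment shrinks and the debt converges.
income = 100.0          # full-employment income, held constant
s0, w = 0.10, 0.02      # baseline saving rate; how strongly wealth depresses saving
debt = 0.0

for year in range(1, 61):
    saving_rate = max(s0 - w * debt / income, 0.0)   # wealthier households save less
    deficit = saving_rate * income                   # deficit that absorbs desired saving
    debt += deficit
    if year in (1, 5, 10, 20, 40, 60):
        print(f"year {year:>2}: deficit = {deficit:5.2f}, debt = {debt:6.1f}")
# The deficit falls toward zero and the debt approaches s0/w * income = 500.
```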

Does size matter?

May 21, 2021 4 comments

from Lars Syll

Economic growth has long interested economists. Not least, the question of which factors lie behind high growth rates has been in focus. The factors usually pointed at are mainly economic, social and political variables. In an interesting study from the University of Helsinki, Tatu Westling has expanded the potential causal variables to also include biological and sexual variables. In the report Male Organ and Economic Growth: Does Size Matter, he has — based on the “cross-country” data of Mankiw et al (1992), Summers and Heston (1988), Polity IV Project data of political regime types and a new data set on average penis size in 76 non-oil producing countries (www.everyoneweb.com/worldpenissize) — been able to show that the level and growth of GDP per capita between 1960 and 1985 varies with penis size. Replicating Westling’s study — yours truly used his favourite program Gretl — we obtain the following two charts: Read more…
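The charts were produced in Gretl; a rough Python equivalent of the same kind of cross-country exercise is sketched below. The file name and column names are hypothetical stand-ins — the actual dataset is the one described in Westling’s paper.

```python
# A rough sketch of the kind of cross-country plots involved (hypothetical file and
# column names; the real data are described in Westling's paper).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("westling_crosscountry.csv")   # hypothetical local copy of the data

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(df["organ_size_cm"], np.log(df["gdp_pc_1985"]))
axes[0].set(xlabel="average organ size (cm)", ylabel="log GDP per capita, 1985")
axes[1].scatter(df["organ_size_cm"], df["gdp_growth_1960_85"])
axes[1].set(xlabel="average organ size (cm)", ylabel="GDP per capita growth, 1960-85")
plt.tight_layout()
plt.show()
```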

Ayn Rand — one of history’s biggest psychopaths

May 19, 2021 3 comments

from Lars Syll

Now, I don’t care to discuss the alleged complaints American Indians have against this country. I believe, with good reason, the most unsympathetic Hollywood portrayal of Indians and what they did to the white man. They had no right to a country merely because they were born here and then acted like savages. The white man did not conquer this country …

Since the Indians did not have the concept of property or property rights—they didn’t have a settled society, they had predominantly nomadic tribal “cultures”—they didn’t have rights to the land, and there was no reason for anyone to grant them rights that they had not conceived of and were not using …

What were they fighting for, in opposing the white man on this continent? For their wish to continue a primitive existence; for their “right” to keep part of the earth untouched—to keep everybody out so they could live like animals or cavemen. Any European who brought with him an element of civilization had the right to take over this continent, and it’s great that some of them did. The racist Indians today—those who condemn America—do not respect individual rights.

Ayn Rand, Address To The Graduating Class Of The United States Military Academy at West Point, 1974

It’s sickening to read this gobsmacking trash. But it’s perhaps even more sickening that people like Alan Greenspan consider Rand some kind of intellectual hero. Read more…

The gender wage gap

May 18, 2021 2 comments

from Lars Syll

Uber has conducted a study of internal pay differentials between men and women, which they describe as “gender blind” … The study found a 7% pay gap in favor of men. They present their findings as proof that there are issues unrelated to gender that impact driver pay. They quantify the reasons for the gap as follows:

Where: 20% is due to where people choose to drive (routes/neighborhoods).

Experience: 30% is due to experience …

Speed: 50% is due to speed; they claim that men drive slightly faster, so they complete more trips per hour …

The company’s reputation has been affected by its sexist and unprofessional corporate culture, and its continued lack of gender balance won’t help. Nor, I suspect, will its insistence, with research conducted by its own staff to prove it, that the pay gap is fair. This simply adds insult to obnoxiousness.

But then, why would we have expected any different? The Uber case study’s conclusions may actually be almost the opposite of what they were trying to prove. Rather than showing that the pay gap is a natural consequence of our gendered differences, they have actually shown that systems designed to insistently ignore differences tend to become normed to the preferences of those who create them.

Avivah Wittenberg-Cox

Spending a couple of hours going through a JEL survey of modern research on the gender wage gap, yours truly was struck almost immediately by how little that research really has accomplished in terms of explaining gender wage discrimination. With all the heavy regression and econometric alchemy used, wage discrimination is somehow more or less conjured away … Read more…
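For reference, converting the shares Uber reports into percentage-point contributions to the 7% gap is simple arithmetic — a small worked sketch, not taken from the study itself:

```python
# Worked arithmetic on the stated decomposition of the 7% gap (shares as quoted above).
total_gap_pct = 7.0
shares = {
    "route/neighbourhood choice": 0.20,
    "experience on the platform": 0.30,
    "driving speed":              0.50,
}

for factor, share in shares.items():
    print(f"{factor:<28} {share:>4.0%} of gap = {share * total_gap_pct:.1f} percentage points")
print(f"{'total':<28} 100% of gap = {total_gap_pct:.1f} percentage points")
```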

The RCT controversy

May 15, 2021 3 comments

from Lars Syll

In Social Science and Medicine (December 2017), Angus Deaton & Nancy Cartwright argue that RCTs do not have any warranted special status. They are, simply, far from being the ‘gold standard’ they are usually portrayed as:

Randomized Controlled Trials (RCTs) are increasingly popular in the social sciences, not only in medicine. We argue that the lay public, and sometimes researchers, put too much trust in RCTs over other methods of investigation. Contrary to frequent claims in the applied literature, randomization does not equalize everything other than the treatment in the treatment and control groups, it does not automatically deliver a precise estimate of the average treatment effect (ATE), and it does not relieve us of the need to think about (observed or unobserved) covariates. Finding out whether an estimate was generated by chance is more difficult than commonly believed. At best, an RCT yields an unbiased estimate, but this property is of limited practical value. Even then, estimates apply only to the sample selected for the trial, often no more than a convenience sample, and justification is required to extend the results to other groups, including any population to which the trial sample belongs, or to any individual, including an individual in the trial. Demanding ‘external validity’ is unhelpful because it expects too much of an RCT while undervaluing its potential contribution. RCTs do indeed require minimal assumptions and can operate with little prior knowledge. This is an advantage when persuading distrustful audiences, but it is a disadvantage for cumulative scientific progress, where prior knowledge should be built upon, not discarded. RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative program, combining with other methods, including conceptual and theoretical development, to discover not ‘what works’, but ‘why things work’.

Read more…
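One of Deaton and Cartwright’s points — that randomisation delivers unbiasedness only on average over hypothetical repetitions, not balance in the one trial you actually run — can be shown with a minimal simulation sketch (all numbers invented):

```python
# A minimal sketch (invented numbers): unbiased on average, but any single small trial
# can be badly unbalanced on a prognostic covariate, and its estimate far from the truth.
import numpy as np

rng = np.random.default_rng(4)
n, true_ate = 50, 1.0

def one_trial():
    prognostic = rng.normal(0, 1, n)                 # covariate that drives the outcome
    treat = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)
    outcome = true_ate * treat + 2.0 * prognostic + rng.normal(0, 1, n)
    imbalance = prognostic[treat].mean() - prognostic[~treat].mean()
    estimate = outcome[treat].mean() - outcome[~treat].mean()
    return imbalance, estimate

imb, est = np.array([one_trial() for _ in range(5000)]).T
print(f"mean estimate over 5000 trials: {est.mean():.2f}  (unbiased for {true_ate})")
print(f"but a single trial's estimate has sd {est.std():.2f},")
print(f"and covariate imbalance accounts for most of that spread "
      f"(corr = {np.corrcoef(imb, est)[0, 1]:.2f})")
```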

Some methodological perspectives on causal modeling in economics

May 9, 2021 12 comments

from Lars Syll

Causal modeling attempts to maintain this deductive focus within imperfect research by deriving models for observed associations from more elaborate causal (‘structural’) models with randomized inputs … But in the world of risk assessment … the causal-inference process cannot rely solely on deductions from models or other purely algorithmic approaches. Instead, when randomization is doubtful or simply false (as in typical applications), an honest analysis must consider sources of variation from uncontrolled causes with unknown, nonrandom interdependencies. Causal identification then requires nonstatistical information in addition to information encoded as data or their probability distributions …

This need raises questions of to what extent can inference be codified or automated (which is to say, formalized) in ways that do more good than harm. In this setting, formal models – whether labeled ‘‘causal’’ or ‘‘statistical’’ – serve a crucial but limited role in providing hypothetical scenarios that establish what would be the case if the assumptions made were true and the input data were both trustworthy and the only data available. Those input assumptions include all the model features and prior distributions used in the scenario, and supposedly encode all information being used beyond the raw data file (including information about the embedding context as well as the study design and execution). Read more…

Hunting for causes (wonkish)

May 4, 2021 3 comments

from Lars Syll

There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control. This difference is especially accentuated in Bayesian analysis. Though the priors that Bayesians commonly assign to statistical parameters are untested quantities, the sensitivity to these priors tends to diminish with increasing sample size. In contrast, sensitivity to priors of causal parameters … remains non-zero regardless of (non-experimental) sample size. Read more…

Hunting for causes (wonkish)

May 1, 2021 2 comments

from Lars Syll

There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control. This difference is especially accentuated in Bayesian analysis. Though the priors that Bayesians commonly assign to statistical parameters are untested quantities, the sensitivity to these priors tends to diminish with increasing sample size. In contrast, sensitivity to priors of causal parameters … remains non-zero regardless of (non-experimental) sample size.

Second, statistical assumptions can be expressed in the familiar language of probability calculus, and thus assume an aura of scholarship and scientific respectability. Causal assumptions, as we have seen before, are deprived of that honor, and thus become immediate suspect of informal, anecdotal or metaphysical thinking. Read more…
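Pearl’s contrast can be made concrete with a small simulation sketch (all numbers invented): with an unobserved confounder, the data pin down the regression slope ever more tightly as the sample grows, but the causal estimate still depends on what each analyst assumes about the confounding — and that dependence never shrinks, however large the sample.

```python
# A minimal sketch (invented numbers): the statistical slope is identified by the data,
# the causal effect is not -- it inherits each analyst's assumption about the unobserved
# confounder U no matter how large n becomes.
import numpy as np

rng = np.random.default_rng(5)
true_effect, true_confounding = 1.0, 0.8

for n in (1_000, 1_000_000):
    U = rng.normal(0, 1, n)                      # unobserved confounder
    X = U + rng.normal(0, 1, n)
    Y = true_effect * X + true_confounding * U + rng.normal(0, 1, n)

    slope = np.cov(X, Y)[0, 1] / np.var(X)       # the statistical parameter
    bias_per_unit = 0.5                          # cov(U, X)/var(X) in this setup; known here
                                                 # only because we simulated the data ourselves
    for assumed_confounding in (0.0, 0.8):       # two analysts' beliefs about U
        causal = slope - assumed_confounding * bias_per_unit
        print(f"n={n:>9,}  observed slope={slope:.3f}  "
              f"assumed confounding={assumed_confounding}  causal estimate={causal:.3f}")
```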

The tools economists use

April 25, 2021 3 comments

from Lars Syll

In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.

Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time …

Economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists … Read more…

Econometrics — science based on whimsical assumptions

April 23, 2021 4 comments

from Lars Syll

It is often said that the error term in a regression equation represents the effect of the variables that were omitted from the equation. This is unsatisfactory …

There is no easy way out of the difficulty. The conventional interpretation for error terms needs to be reconsidered. At a minimum, something like this would need to be said:

The error term represents the combined effect of the omitted variables, assuming that

(i) the combined effect of the omitted variables is independent of each variable included in the equation,
(ii) the combined effect of the omitted variables is independent across subjects,
(iii) the combined effect of the omitted variables has expectation 0.

This is distinctly harder to swallow.

David Freedman

Yes, indeed, that is harder to swallow.

Those conditions on the error term actually mean that we are able to construct a model in which all relevant variables are included and the functional relationships between them are correctly specified. Read more…
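Why Freedman’s condition (i) matters can be shown with a minimal simulation sketch (all numbers invented): when the omitted variable folded into the error term is correlated with an included regressor, the estimated coefficient drifts away from the coefficient in the equation that actually generated the data.

```python
# A minimal sketch (invented numbers) of omitted-variable bias: the omitted z is
# correlated with the included x, so the error term violates Freedman's condition (i).
import numpy as np

rng = np.random.default_rng(6)
n, true_beta = 100_000, 1.0

z = rng.normal(0, 1, n)                       # the omitted variable
x = 0.7 * z + rng.normal(0, 1, n)             # ... correlated with the included regressor
y = true_beta * x + 1.5 * z + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]
print(f"true coefficient on x: {true_beta:.2f},  estimated: {beta_hat:.2f}")
# The gap is the omitted-variable bias: roughly 1.5 * cov(z, x) / var(x).
```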

Who’s afraid of MMT?

April 20, 2021 31 comments

from Lars Syll

As anyone who has ever been responsible for legislative oversight of central bankers knows, they do not like to have their authority challenged. Most of all, they will defend their mystique – that magical aura that hovers over their words, shrouding a slushy mix of banality and baloney in a mist of power and jargon …

In our day, the voices of Modern Monetary Theory perturb the sleep not only of present central bankers, but even of those retired from the role. They prowl the corridors like Lady Macbeth, shouting “Out damn spot!”

Two fresh cases are Raghuram G. Rajan, a former governor of the Reserve Bank of India, and Mervyn King, a former governor of the BOE. In recently published commentaries, each combines bluster and condescension (in roughly equal measure) in a statement of trite truths with which one can, for the most part, hardly disagree.

But Rajan and King each confront MMT only in the abstract. Neither cites or quotes from a single source, and neither names a single person associated with MMT …

What, then, is MMT? Contrary to the claims of King and Rajan, it is not a policy slogan. Rather, it is a body of theory in Keynes’s monetary tradition, which includes such eminent thinkers as the American economist Hyman Minsky and Wynne Godley of the UK Treasury and the University of Cambridge. MMT describes how “modern” governments and central banks actually work, and how changes in their balance sheets are mirrored by changes in the balance sheets of the public – an application of double-entry bookkeeping to economic thought. Thus, as Kelton writes in the plainest English, the deficit of the government is the surplus of the private sector, and vice versa. Read more…
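The bookkeeping point in the last sentence can be written down in a few lines — a deliberately stylised two-sector sketch, not a model of any actual economy: with only a government and a private sector, the two balances sum to zero by construction, so a government deficit is a private-sector surplus of the same size.

```python
# A stylised closed-economy accounting sketch (invented numbers): double-entry
# bookkeeping forces the government and private balances to sum to zero.
government_spending = 120.0
taxes               = 100.0

government_balance = taxes - government_spending            # -20: a deficit
private_balance    = government_spending - taxes            # +20: net financial saving

assert abs(government_balance + private_balance) < 1e-9     # they offset by construction
print(f"government balance: {government_balance:+.1f}")
print(f"private balance:    {private_balance:+.1f}")
```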