Archive

Archive for the ‘econometrics’ Category

Yes, economics has a problem with women

October 8, 2017 7 comments

from Julie Nelson

Yes, economics has a problem with women. In the news recently we’ve heard about the study of the Economics Job Market Rumors (EJMR) online forum. Student researcher Alice H. Wu found that posts about women were far more likely to contain words about their personal and physical issues (including “hot,” “lesbian,” “cute,” and “raped”) than posts about men, which tended to focus more on academic and professional topics. As a woman who has been in the profession for over three decades, however, I find this hardly news.

Dismissive treatment of women, and of issues that impact women more than men, comes not only from the sorts of immature cowards who vent anonymously on EJMR, but even from men who probably don’t think of themselves as sexist. And because going along with professional fashion may be necessary for advancement, women economists also sometimes play along with the dominant view.

Consider a few other cases I’ve noticed during my thirty years in the profession:   Read more…

What can econometrics achieve?

June 10, 2015 Leave a comment

from Lars Syll

A popular idea in quantitative social sciences is to think of a cause (C) as something that increases the probability of its effect or outcome (O). That is:

P(O|C) > P(O|¬C)

However, as is also well-known, a correlation between two variables, say A and B, does not necessarily imply that one is a cause of the other, or the other way around, since they may both be an effect of a common cause, C.

In statistics and econometrics we usually solve this “confounder” problem by “controlling for” C, i.e. by holding C fixed. This means that we actually look at different “populations” – those in which C occurs in every case, and those in which C doesn’t occur at all. Within each such population, knowing the value of A does not influence the probability of C [P(C|A) = P(C)]. So if there still exists a correlation between A and B in either of these populations, there has to be some other cause operating. But if all other possible causes have been “controlled for” too, and there is still a correlation between A and B, we may safely conclude that A is a cause of B, since by “controlling for” all other possible causes, the correlation between the putative cause A and all the other possible causes (D, E, F, …) is broken.  Read more…
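To make the logic concrete, here is a minimal simulation sketch (an illustration under toy assumptions, not from the post; the variables and numbers are arbitrary): A and B are both effects of a common cause C, so they correlate overall, but “controlling for” C by holding it fixed breaks the correlation:

```python
# A minimal sketch (illustration, not from the post) of the "confounder" problem.
# A and B share a common cause C; holding C fixed should make the
# spurious A-B correlation disappear.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

C = rng.binomial(1, 0.5, n)             # common cause
A = C + rng.normal(0, 1, n)             # A is an effect of C
B = C + rng.normal(0, 1, n)             # B is an effect of C (not of A)

print(np.corrcoef(A, B)[0, 1])          # ~0.2: spurious correlation via C
print(np.corrcoef(A[C == 1], B[C == 1])[0, 1])  # ~0: within the C = 1 population
print(np.corrcoef(A[C == 0], B[C == 0])[0, 1])  # ~0: within the C = 0 population
```

Of course, the sketch only works because C is the sole confounder by construction; in the real world, the post’s caveat applies – all other possible causes would have to be “controlled for” too.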

In search of causality

May 13, 2015 Leave a comment

from Lars Syll


One of the few statisticians I have on my blogroll is Andrew Gelman. Although not sharing his Bayesian leanings, yours truly finds his open-minded, thought-provoking and non-dogmatic statistical thinking highly commendable. The plea below for “reverse causal questioning” is typically Gelmanian:

When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type “What if?”, rather than “Why?” Following the work by Rubin (1977), causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions …  Read more…
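The “forward causal question” in Gelman’s quote can be made concrete with a small sketch (an illustration under simple assumptions, not Gelman’s own code): when x is randomly assigned, a regression slope answers “if x were changed by one unit, how much would y be expected to change?”:

```python
# A minimal sketch (illustration, not Gelman's) of a forward causal question
# in the manipulation framing: with x randomly assigned, the regression
# slope estimates the expected change in y per unit change of x.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

x = rng.normal(0, 1, n)                 # "treatment", randomly assigned
y = 2.0 * x + rng.normal(0, 1, n)       # true effect: +2 in y per unit of x

slope, intercept = np.polyfit(x, y, 1)
print(slope)                            # ~2.0: the answer to "what if x rose by 1?"
```

The slope only carries this reading because x was assigned independently of the noise; the sketch says nothing about the reverse question “Why does y vary?”, which is Gelman’s point.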

Econom(etr)ic fictions masquerading as rigorous science

February 28, 2015 5 comments

from Lars Syll

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and, strictly speaking, do not exist at all – without specifying such system-contexts.

Accepting a domain of probability theory and a sample space of “infinite populations” — which is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable. Read more…
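The point about judgments resting on observations never made can be illustrated with a small sketch (an illustration, not from the post): the standard 95% coverage claim for a confidence interval is a statement about hypothetical repeated samplings that the real world never supplies:

```python
# A minimal sketch (illustration): a 95% confidence interval's guarantee
# is a claim about imagined repeated sampling -- here we simulate those
# repetitions, something the real world never provides.
import numpy as np

rng = np.random.default_rng(2)
true_mean, n, reps = 5.0, 50, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, 2.0, n)
    m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        covered += 1

print(covered / reps)   # ~0.95 -- but only across hypothetical re-runs
```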

Bayesianism — a ‘patently absurd’ approach to science

December 4, 2014 1 comment

from Lars Syll

Back in 1991, when I earned my first Ph.D. — with a dissertation on decision making and rationality in social choice theory and game theory — yours truly concluded that “repeatedly it seems as though mathematical tractability and elegance — rather than realism and relevance — have been the most applied guidelines for the behavioural assumptions being made. On a political and social level it is doubtful if the methodological individualism, ahistoricity and formalism they are advocating are especially valid.”

This, of course, was like swearing in church. My mainstream neoclassical colleagues were — to say the least — not exactly überjoyed.

The decision-theoretical approach I was perhaps most critical of was the one building on the then reawakened Bayesian subjectivist interpretation of probability.

One of my inspirations when working on the dissertation was Henry E. Kyburg, and I still think his critique is the ultimate take-down of Bayesian hubris (emphasis added): Read more…
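For readers who want the target of the critique spelled out, here is a minimal sketch (an illustration, not from the dissertation or from Kyburg) of the Bayesian subjectivist machinery: Bayes’ rule coherently updates any subjective prior, however arbitrary, and the coherence of the update says nothing about the warrant of the prior:

```python
# A minimal sketch (illustration) of Bayesian subjectivist updating:
# whatever subjective prior you start from, Bayes' rule grinds out a
# posterior -- the coherence of the update says nothing about the
# reasonableness of the prior, which is the nub of Kyburg-style critiques.
def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) from a subjective prior P(H) and the two likelihoods."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Two agents with different subjective priors see the same evidence:
print(posterior(0.5, 0.8, 0.1))   # ~0.89
print(posterior(0.01, 0.8, 0.1))  # ~0.07 -- same data, different belief
```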

Calibration and ‘deep parameters’ — a severe case of econometric self-deception

November 23, 2014 2 comments

from Lars Syll

One may wonder how much calibration adds to the knowledge of economic structures and the deep parameters involved … Micro estimates are imputed in general equilibrium models which are confronted with new data, not used for the construction of the imputed parameters … However this procedure to impute parameter values into calibrated models has serious weaknesses …

First, few ‘deep parameters’ have been established at all …

Second, even where estimates are available from micro-econometric investigations, they cannot be automatically imported into aggregated general equilibrium models …

Third, calibration hardly contributes to growth of knowledge about ‘deep parameters’. These deep parameters are confronted with a novel context (aggregate time-series), but this is not used for inference on their behalf. Rather, the new context is used to fit the model to presumed ‘laws of motion’ of the economy …  Read more…
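The procedure being criticized can be sketched in a few lines (an illustration under toy assumptions, not the quoted author’s code): a ‘deep parameter’ is imputed from a micro estimate and never re-tested, while the aggregate data serve only to pin down the remaining free parameter:

```python
# A minimal sketch (illustration) of calibration as moment matching:
# the "deep parameter" beta comes from a micro estimate and is never
# re-examined; a free parameter is tuned so the model's simulated
# moment reproduces an aggregate target.
import numpy as np

beta = 0.96                     # imputed "deep parameter" (micro estimate)
target_moment = 2.5             # aggregate moment from time-series data

def model_moment(free_param, beta):
    """Stand-in for a simulated model moment (hypothetical toy model)."""
    return beta * free_param

# Grid search for the free parameter that matches the target:
grid = np.linspace(0.0, 5.0, 50_001)
best = grid[np.argmin(np.abs(model_moment(grid, beta) - target_moment))]
print(best)                     # ~2.60: fits the data, tests nothing about beta
```

Nothing in the exercise feeds back on beta, which is exactly the third weakness the quote points to.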

Macroeconomic aspirations

October 30, 2014 7 comments

from Lars Syll

Oxford macroeconomist Simon Wren-Lewis has a post up on his blog on the use of labels in macroeconomics:

Labels are fun, and get attention. They can be a useful shorthand to capture an idea, or related set of ideas … Here are a couple of bold assertions, which I think I believe, and which I will try to justify. First, in academic research terms there is only one meaningful division, between mainstream and heterodox … Second, in macroeconomic policy terms I think there is only one meaningful division, between mainstream and anti-Keynesians …

So what do I mean by a meaningful division in academic research terms? I mean speaking a different language. Thanks to the microfoundations revolution in macro, mainstream macroeconomists speak the same language. I can go to a seminar that involves an RBC model with flexible prices and no involuntary unemployment and still contribute and possibly learn something.

Wren-Lewis seems to be überjoyed by the fact that, using the same language as real business cycle macroeconomists, he can “possibly learn something” from them.

Hmm …

Wonder what …

I’m not sure Wren-Lewis uses the same “language” as James Tobin, but Tobin is definitely worth listening to: Read more…

“Methodological Mistakes and Econometric Consequences”

February 17, 2014 8 comments

from Edward Fullbrook

Asad Zaman has just published an illuminating paper related both to Bryant Chen and Judea Pearl’s recent RWER paper “Regression and causation: a critical examination of econometrics textbooks” and to topics frequently discussed on this blog. Titled “Methodological Mistakes and Econometric Consequences”, Zaman’s paper appears in the current issue of the International Econometric Review. Here is an open-access link to the paper and below is its introduction.

1. INTRODUCTION 

The rise and fall of logical positivism is the most spectacular philosophical story of the twentieth century. Rising to prominence in the second quarter of the twentieth century, it swept away all contenders and became widely accepted throughout academia. Logical positivism provided a particular understanding of the nature of knowledge, as well as that of science and of scientific methodology. The foundations of the social sciences were re-formulated in the light of this new understanding of what science is. Later on, it became clear that the central tenets of the positivist philosophy were wrong. Logical positivism had a “spectacular crash,” and there was some dispute about who had “killed” logical positivism. As a logical consequence, it became necessary to re-examine the foundations of the social sciences, and to find new bases on which to re-construct them. This has occurred to differing degrees in different disciplines. One of the most recalcitrant has been economics. As discussed in Zaman (2011), the foundations of economics continue to be based on erroneous logical positivist ideas, and hence require radical revisions.  Read more…