Archive
How evidence is treated in macroeconomics
from Lars Syll
“New Keynesian” macroeconomist Simon Wren-Lewis has a post up on his blog, discussing how evidence is treated in modern macroeconomics (emphasis added):
It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.
Being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Read more…
Noah Smith thinks p-values work. Read my lips — they don’t!
from Lars Syll
Noah Smith has a post up trying to defend p-values and traditional statistical significance testing against the increasing attacks launched against it:
Suddenly, everyone is getting really upset about p-values and statistical significance testing. The backlash has reached such a frenzy that some psych journals are starting to ban significance testing. Though there are some well-known problems with p-values and significance testing, this backlash doesn’t pass the smell test. When a technique has been in wide use for decades, it’s certain that LOTS of smart scientists have had a chance to think carefully about it. The fact that we’re only now getting the backlash means that the cause is something other than the inherent uselessness of the methodology.
Hmm …
That doesn’t sound very convincing.
Maybe we should apply yet another smell test … Read more…
Why Real Business Cycle models can’t be taken seriously
from Lars Syll
They try to explain business cycles solely as problems of information, such as asymmetries and imperfections in the information agents have. Those assumptions are just as arbitrary as the institutional rigidities and inertia they find objectionable in other theories of business fluctuations … I try to point out how incapable the new equilibrium business cycles models are of explaining the most obvious observed facts of cyclical fluctuations … I don’t think that models so far from realistic description should be taken seriously as a guide to policy … I don’t think that there is a way to write down any model which on the one hand respects the possible diversity of agents in taste, circumstances, and so on, and on the other hand also grounds behavior rigorously in utility maximization and which has any substantive content to it.
Real Business Cycle theory basically says that economic cycles are caused by technology-induced changes in productivity. It says that employment goes up or down because people choose to work more when productivity is high and less when it’s low. This is of course nothing but pure nonsense — and how on earth those guys who promoted this theory (Thomas Sargent et consortes) could be awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is really beyond comprehension. Read more…
The Keynes-Ramsey-Savage debate on probability
from Lars Syll
Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules, axiomatized by Ramsey (1931) and Savage (1954) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.
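Spelled out, the apparatus this refers to is compact. In standard textbook notation (used here only as an illustration, not as Ramsey’s or Savage’s own formulation), beliefs are updated by Bayes’ theorem and choices maximize subjective expected utility over acts a and states s:

\[
p(H \mid E) \;=\; \frac{p(E \mid H)\,p(H)}{p(E)},
\qquad
a^{*} \;=\; \arg\max_{a \in A} \sum_{s \in S} p(s)\,u(a,s).
\]

On the Ramsey–Savage view, an agent whose betting odds cannot be represented by some such pair of p and u is the one exposed to the “Dutch book” or “money pump” mentioned above.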
Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here, here and here), there is no strong warrant for believing so.
In many of the situations that are relevant to economics one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure. Read more…
Noah Smith is wrong on the experimental turn in empirical economics
from Lars Syll
The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led Noah Smith — on his blog Noahpinion today — to triumphantly declare it a major step on economics’ recent path toward empirics, where instead of being a “deductive, philosophical field,” economics is now increasingly becoming an “inductive, scientific field.”
Smith is especially apostrophizing the work of Joshua Angrist and Jörn-Steffen Pischke, so let’s start with one of their later books and see if there is any real reason to share Smith’s optimism about this ‘empirical turn’ in economics.
In their new book, Mastering ‘Metrics: The Path from Cause to Effect, Angrist and Pischke write: Read more…
Rational expectations — only for Gods and idiots
from Lars Syll
In a laboratory experiment run by James Andreoni and Tymofiy Mylovanov — presented here — the researchers induced common probability priors and then told all participants about the actions taken by the others. Their findings are very interesting, and say something rather profound about the value of the rational expectations hypothesis in standard neoclassical economic models: Read more…
Ditch ‘ceteris paribus’!
from Lars Syll
When applying deductivist thinking to economics, neoclassical economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work — as in, e.g., IS-LM and DSGE models — simply don’t hold.
If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. Read more…
Consistency and validity are not enough!
from Lars Syll
Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.
Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality. As Julian Reiss writes: Read more…
In search of causality
from Lars Syll
One of the few statisticians that I have on my blogroll is Andrew Gelman. Although not sharing his Bayesian leanings, yours truly finds his open-minded, thought-provoking and non-dogmatic statistical thinking highly recommendable. The plaidoyer below for “reverse causal questioning” is typically Gelmanian:
When statistical and econometric methodologists write about causal inference, they generally focus on forward causal questions. We are taught to answer questions of the type “What if?”, rather than “Why?” Following the work by Rubin (1977), causal questions are typically framed in terms of manipulations: if x were changed by one unit, how much would y be expected to change? But reverse causal questions are important too … In many ways, it is the reverse causal questions that motivate the research, including experiments and observational studies, that we use to answer the forward questions … Read more…
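In potential-outcomes notation (a standard rendering of the Rubin-style manipulation framing in the quote, not a formula of Gelman’s own), the forward question about a one-unit change in x is the estimand

\[
\tau \;=\; \mathbb{E}\bigl[\,Y_i(x+1) - Y_i(x)\,\bigr],
\]

where Y_i(x) denotes unit i’s outcome when x is set to the given value. The reverse causal question has no comparably tidy estimand; one way to read Gelman’s point is that asking “why?” is what tells us which manipulable x’s are worth putting into such a model in the first place.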
Validating assumptions
from Lars Syll
Piketty uses the terms “capital” and “wealth” interchangeably to denote the total monetary value of shares, housing and other assets. “Income” is measured in money terms. We shall reserve the term “capital” for the totality of productive assets evaluated at constant prices. The term “output” is used to denote the totality of net output (value-added) measured at constant prices. Piketty uses the symbol β to denote the ratio of “wealth” to “income” and he denotes the share of wealth-owners in total income by α. In his theoretical analysis this share is equated to the share of profits in total output. Piketty documents how α and β have both risen by a considerable amount in recent decades. He argues that this is not mere correlation, but reflects a causal link. It is the rise in β which is responsible for the rise in α. To reach this conclusion, he first assumes that β is equal to the capital-output ratio K/Y, as conventionally understood. From his empirical finding that β has risen, he concludes that K/Y has also risen by a similar amount. According to the neoclassical theory of factor shares, an increase in K/Y will only lead to an increase in α when the elasticity of substitution between capital and labour σ is greater than unity. Piketty asserts that this is the case. Indeed, based on movements in α and β, he estimates that σ is between 1.3 and 1.6 (page 221). Read more…
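The accounting behind this chain of reasoning can be made explicit. Writing r for the average rate of return on capital (a symbol not used in the excerpt), the share of wealth-owners in income satisfies, as a minimal sketch,

\[
\alpha \;=\; r\,\beta, \qquad \beta \;=\; \frac{K}{Y},
\]

so a higher K/Y raises α only if r does not fall proportionally as much. Under a CES production function the marginal product of capital declines as K/Y rises, and the capital share increases exactly when the elasticity of substitution σ exceeds one; that is the condition Piketty asserts and that his σ estimate of 1.3 to 1.6 is meant to support.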
Macroeconomic ad hocery
from Lars Syll
Robert Lucas is well-known for condemning everything that isn’t microfounded rational expectations macroeconomics as “ad hoc” theorizing.
But instead of rather unsubstantiated recapitulations, it would be refreshing and helpful if the Chicago übereconomist — for a change — endeavoured to clarify just what he means by “ad hoc.”
The standard meaning — OED — of the term is “for this particular purpose.” But in the hands of New Classical–RBC–New Keynesians it seems to be used more to convey the view that modeling with realist and relevant assumptions is somehow equivalent to basing models on “specifics” rather than the “fundamentals” of individual intertemporal optimization and rational expectations.
This is of course pure nonsense, simply because there is no — as yours truly has argued at length e.g. here — macro behaviour that consistently follows from the RBC–New Keynesian microfoundations. The only ones that succumb to ad hoc assumptions here are macroeconomists like Lucas et consortes, who believe that macroeconomic behaviour can be adequately analyzed with a fictitious rational-expectations-optimizing-robot-imitation-representative-agent.
Bayesianism — a dangerous scientific cul-de-sac
from Lars Syll
The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance, the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.
Gödel’s theorems & the limits of reason
from Asad Zaman
My article on the limits of reason was published in the Express Tribune recently (Monday, April 13, 2015). This essay shows that logic is limited in its ability to arrive at a definite conclusion even in the heartland of mathematics. Pluralism is required to cater for the possibility that both Euclidean and non-Euclidean geometries represent valid ways of looking at the world. The world of human affairs is far more complex. In order to study and understand societies, one must learn to deal with a multiplicity of truths. This second argument, which is related to the first, was made in my article “Tolerance and Multiple Narratives”, which was published in the Express Tribune earlier (March 29, 2015). These ideas form part of the background for supporting the drive for pluralism in our approaches to economic problems.
Model validation and significance testing
from Lars Syll
In its standard form, a significance test is not the kind of “severe test” that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis when it cannot be rejected at the standard 5% significance level. In their standard form, significance tests thus bias against new hypotheses by making the null hypothesis hard to disconfirm. Read more…
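A small simulation makes the “accept the null” tendency concrete. The sketch below is purely illustrative (the sample size and effect size are invented for the example): a real but modest effect is tested with an underpowered sample, and we count how often the standard 5% test fails to reject the null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 20              # hypothetical sample size per group (deliberately small)
true_effect = 0.3   # a real, but modest, standardized effect
n_sims = 10_000

failures_to_reject = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    result = stats.ttest_ind(treated, control)
    if result.pvalue >= 0.05:       # the "not significant" verdict
        failures_to_reject += 1

print(f"Share of runs in which a real effect comes out 'not significant': "
      f"{failures_to_reject / n_sims:.0%}")
```

In the large majority of these runs the real effect goes undetected, and if the non-rejection is then read as support for the null, the test has done exactly what the passage complains about: it has made the null hypothesis hard to disconfirm and biased the verdict against the new hypothesis.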
On dogmatism in economics
from Lars Syll
Abstraction is the most valuable ladder of any science. In the social sciences, as Marx forcefully argued, it is all the more indispensable since there ‘the force of abstraction’ must compensate for the impossibility of using microscopes or chemical reactions. However, the task of science is not to climb up the easiest ladder and remain there forever distilling and redistilling the same pure stuff. Standard economics, by opposing any suggestions that the economic process may consist of something more than a jigsaw puzzle with all its elements given, has identified itself with dogmatism. And this is a privilegium odiosum that has dwarfed the understanding of the economic process wherever it has been exercised.
Mastering ‘metrics
from Lars Syll
In their new book, Mastering ‘Metrics: The Path from Cause to Effect, Joshua D. Angrist and Jörn-Steffen Pischke write:
Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest … for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we want to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain … ‘on average’ is usually good enough.
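What “other things equal on average” means can be shown with a toy simulation; the sketch below is purely illustrative, and the variable names and numbers are invented. A coin-toss assignment never literally holds a background characteristic fixed in any single draw, but across many hypothetical assignments the treated-control difference in that characteristic averages out to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200           # hypothetical number of participants
n_trials = 5_000  # many hypothetical randomized assignments

gaps = []
for _ in range(n_trials):
    ability = rng.normal(0.0, 1.0, n)           # an unobserved background factor
    coin = rng.integers(0, 2, n).astype(bool)   # random assignment ("coin toss")
    # treated-control difference in the background factor for this assignment
    gaps.append(ability[coin].mean() - ability[~coin].mean())

gaps = np.array(gaps)
print(f"Average treated-control gap in 'ability': {gaps.mean():+.4f}")
print(f"Typical gap in any single assignment:     {gaps.std():.3f}")
```

The average gap is essentially zero, which is the sense in which randomization makes other things equal “on average”; any single assignment still shows some imbalance, which is why the quoted “usually good enough” is doing real work.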
Angrist and Pischke may “dream of the trials we’d like to do” and consider “the notion of an ideal experiment” something that “disciplines our approach to econometric research,” but to maintain that ‘on average’ is “usually good enough” is a claim that in my view is rather unwarranted, and for many reasons.
First of all, it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities. Read more…
“Is there anything worth keeping in standard microeconomics?”
from Edward Fullbrook
For me three economists stand out historically as having been the most effective at building resistance to the dominance of scientism in economics. Keynes of course is one, and the other two are Bernard Guerrien and Tony Lawson: Guerrien because he was the intellectual and moral force behind Autisme Economie, which, among other things, gave rise to the RWER; and Lawson because his papers, books and seminars have inspired, joined and intellectually fortified thousands.
It is notable that all three of these economists were or were on their way to becoming professional mathematicians before switching to economics. While still in his twenties, Keynes was already publicly celebrated for his mathematical genius, most notably by Whitehead and Russell, and he had already published what was to become a classic work for his first discipline. Guerrien’s first PhD was in mathematics, and Lawson was doing a PhD in mathematics at Cambridge when its economics department lured him over in an attempt to boost its mathematical competence.
The significance for me of Keynes, Guerrien and Lawson being mathematicians first and economists second is that they were not even for an hour taken in or intimidated by the aggressive scientism of neoclassical economists, and this has enabled them to write analytically about the dominant scientism with a quiet straightforwardness that is beyond the reach of most of us.
An example of this kind of writing that I am talking about is the short essay below that in 2002 Guerrien published in what is now the Real-World Economics Review. Read more…
On the consistency of microfounded macromodels
from Lars Syll
“New Keynesian” macroeconomist Simon Wren-Lewis has a post up on his blog, trying to answer a question posed by Brad DeLong, on why microfounded models dominate modern macro:
Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies …
Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency …
I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there … That makes discussion difficult, but I’m not sure it makes it impossible.