## Economic forecasting — why it matters and why it is so often wrong

from **Lars Syll**

As Oskar Morgenstern noted in his 1928 classic *Wirtschaftsprognose: Eine Untersuchung ihrer Voraussetzungen und Möglichkeiten*, economic predictions and forecasts amount to little more than intelligent guessing.

Making forecasts and predictions obviously isn’t a trivial or costless activity, so why then go on with it?

The problems that economists encounter when trying to predict the future really underline how important it is for social sciences to incorporate Keynes’s far-reaching and incisive analysis of induction and evidential weight in his seminal *A Treatise on Probability* (1921).

According to Keynes, we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by ‘modern’ social sciences. And often we “simply do not know.” Read more…

## Hyman Minsky and the IS-LM obfuscation

from **Lars Syll**

As a young research stipendiate in the U.S. yours truly had the pleasure and privilege of having Hyman Minsky as a teacher. He was a great inspiration at the time. He still is.

The concepts which it is usual to ignore or deemphasize in interpreting Keynes — the cyclical perspective, the relations between investment and finance, and uncertainty — are the keys to an understanding of the full significance of his contribution …

The glib assumption made by Professor Hicks in his exposition of Keynes’s contribution that there is a simple, negatively sloped function, reflecting the productivity of increments to the stock of capital, that relates investment to the interest rate is a caricature of Keynes’s theory of investment … which relates the pace of investment not only to prospective yields but also to ongoing financial behavior …

The conclusion to our argument is that the missing step in the standard Keynesian theory was the explicit consideration of capitalist finance within a cyclical and speculative context. Once capitalist finance is introduced and the development of cash flows … during the various states of the economy is explicitly examined, then the full power of the revolutionary insights and the alternative frame of analysis that Keynes developed becomes evident …

The greatness of *The General Theory* was that Keynes visualized [the imperfections of the monetary-financial system] as systematic rather than accidental or perhaps incidental attributes of capitalism … Only a theory that was explicitly cyclical and overtly financial was capable of being useful …

If we are to believe Minsky — and I certainly think we should — then when people like Paul Krugman and other ‘New Keynesian’ critics of MMT and Post-Keynesian economics think of themselves as defending “the whole enterprise of Keynes/Hicks macroeconomic theory,” they are simply wrong since there is no such thing as a Keynes-Hicks macroeconomic theory!

There is nothing in the post-*General Theory* writings of Keynes that suggests that he considered Hicks’s IS-LM anywhere near a faithful rendering of his thoughts. Read more…

## Econometric testing

from **Lars Syll**

Debating econometrics and its shortcomings yours truly often gets the response from econometricians that “ok, maybe econometrics isn’t perfect, but you have to admit that it is a great technique for empirical testing of economic hypotheses.”

But is econometrics — really — such a great testing instrument?

Econometrics is supposed to be able to test economic theories. But to serve as a testing device you have to make many assumptions, many of which themselves cannot be tested or verified. To make things worse, there are only rarely strong and reliable ways of telling us which set of assumptions is to be preferred. Trying to test and infer causality from (non-experimental) data you have to rely on assumptions such as disturbance terms being ‘independent and identically distributed’; functions being additive, linear, and with constant coefficients; parameters being ‘invariant under intervention’; variables being ‘exogenous’, ‘identifiable’, ‘structural’, and so on. Unfortunately, we are seldom or never informed of where that kind of ‘knowledge’ comes from, beyond referring to the economic theory that one is supposed to test. Performing technical tests is of course needed, but perhaps even more important is to know — as David Colander put it — “how to deal with situations where the assumptions of the tests do not fit the data.”

That leaves us in the awkward position of having to admit that if the assumptions made do not hold, the inferences, conclusions, and testing outcomes econometricians come up with simply do not follow from the data and statistics they use.
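The point can be illustrated with a minimal simulation (my own sketch, not from the text): fit the standard additive, linear specification to data generated by a nonlinear mechanism, and the ‘test’ quietly answers the wrong question.

```python
import numpy as np

rng = np.random.default_rng(0)

# True (nonlinear) data-generating process: y = x^2 + small noise.
x = rng.uniform(-2, 2, 1000)
y = x**2 + rng.normal(0, 0.1, 1000)

# Maintained econometric assumption: y = a + b*x (additive, linear).
X = np.column_stack([np.ones_like(x), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]

# The fitted slope b is near zero: under the maintained linear model
# we would 'test' and find no effect of x on y, although x fully
# determines y in the true mechanism. The test never probed the
# untested linearity assumption itself.
```

Nothing in the regression output flags the misspecification; only outside knowledge of the data-generating process does.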

The central question is “how do we learn from empirical data?” Read more…

## Economic modeling — a constructive critique

from **Lars Syll**

If we have independent reasons to believe that the phenomena under investigation are mechanical in Mill’s sense, well and good: mathematical modeling will prove an apt mode of representation … But if we have independent reasons to believe that there is more going on in the phenomena under investigation than a mathematical model can suggest – that is, that the phenomena in question are not in fact mechanical in the required sense – then mathematical modeling will prove misleading … Moreover, as will be discussed, the empirical assessment of such models using econometric methods will not be sufficient to reveal that mismatch.

These problems cannot themselves be addressed through reforms to mathematical methods. That would simply be to produce a more refined version of the wrong tool for the job, like sharpening one’s knife when what is needed is a spoon. Rather than striving to improve the quality of mathematical models given the assumption that the subject matter under investigation is mechanical in Mill’s sense and therefore susceptible of mathematical analysis, we need to ask a prior question, which is whether there is sufficient reason to feel confident that the subject matter under investigation is mechanical in the first place. That means scrutinizing the subject matter in the first instance in non-mathematical ways … We as scientists must remain sensitive to information about the phenomena in which we are interested that lies outside our models’ conceptual maps. In the case of economics, what this requires is a new field dedicated to qualitative empirical methods that would play a similar role to that played by econometrics in the matter of quantitative empirical methods.

Highly recommended reading!

Using formal mathematical modeling, mainstream economists sure can guarantee that the conclusions hold given the assumptions. However, the validity we get in abstract model worlds does not automatically transfer to real-world economies. Read more…

## My philosophy of economics

from **Lars Syll**

A critique yours truly sometimes encounters is that as long as I cannot come up with an alternative of my own to the failing mainstream theory, I shouldn’t expect people to pay attention.

This is however to misunderstand the role of philosophy and methodology of economics!

As John Locke wrote in *An Essay Concerning Human Understanding*:

The Commonwealth of Learning is not at this time without Master-Builders, whose mighty Designs, in advancing the Sciences, will leave lasting Monuments to the Admiration of Posterity; But every one must not hope to be a Boyle, or a Sydenham; and in an Age that produces such Masters, as the Great-Huygenius, and the incomparable Mr. Newton, with some other of that Strain; ’tis Ambition enough to be employed as an Under-Labourer in clearing Ground a little, and removing some of the Rubbish, that lies in the way to Knowledge.

That’s what philosophy and methodology can contribute to economics — clear obstacles to science. Read more…

## How to ensure that models serve society

from **Lars Syll**

• Mind the assumptions — assess uncertainty and sensitivity.

• Mind the hubris — complexity can be the enemy of relevance.

• Mind the framing — match purpose and context.

• Mind the consequences — quantification may backfire.

• Mind the unknowns — acknowledge ignorance.

Andrea Saltelli, John Kay, Deborah Mayo, Philip B. Stark, et al.

Five principles that today’s ‘the model is the message’ economists would benefit much from pondering, especially the last one.

More than a hundred years after John Maynard Keynes wrote his seminal *A Treatise on Probability* (1921), it is still very difficult to find economics and statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight. Read more…

## Economics beyond Krugman, Mankiw, and Rodrik

from **Lars Syll**

Economics students today are complaining more and more about the way economics is taught. The lack of fundamental diversity — as opposed to mere path-dependent elaborations of the mainstream canon — and the narrowing of the curriculum dissatisfy economics students all over the world. The frustrating lack of real-world relevance has led many of them to demand that the discipline develop a more open and pluralistic theoretical and methodological attitude.

Dani Rodrik — among economics journalists and commentators often described as a heterodox economist — has little understanding of these views, finding it hard to ‘understand these complaints in the light of the patent multiplicity of models within economics.’ Rodrik shares the view of his colleagues Paul Krugman and Greg Mankiw — both of whom he approvingly cites in his book *Economics Rules* — that there is nothing basically wrong with ‘standard theory’ and ‘economics textbooks.’ As long as policymakers and economists stick to ‘standard economic analysis’ everything is fine. Economics is just a method that makes us ‘think straight’ and ‘reach correct answers.’

Writes Rodrik in *Economics Rules*:

Pluralism with respect to conclusions is one thing; pluralism with respect to methods is something else … An aspiring economist has to formulate clear models … These models can incorporate a wide range of assumptions … but not all assumptions are equally acceptable. In economics, this means that the greater the departure from benchmark assumptions, the greater the burden of justifying and motivating why those departures are needed …

Some methods are better than others … For some these constraints represent a kind of methodological straitjacket that crowds out new thinking. But it is easy to exaggerate the rigidity of the rules within which the profession operates.

Young economics students who want to see a real change in economics and the way it’s taught have to look beyond Rodrik, Mankiw, Krugman & Co. Read more…

## Freedman’s Rabbit Theorem

from **Lars Syll**

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great, but as renowned statistician David Freedman had it, first you must put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics, and as Clint Ballinger has highlighted, this is a particularly questionable rabbit-pulling assumption:

Inferential statistics are based on taking a random sample from a larger population … and attempting to draw conclusions about a) the larger population from that data and b) the probability that the relations between measured variables are consistent or are artifacts of the sampling procedure.

However, in political science, economics, development studies and related fields the data often represents as complete an amount of data as can be measured from the real world (an ‘apparent population’). It is not the result of a random sampling from a larger population. Nevertheless, social scientists treat such data as the result of random sampling.
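A small numerical sketch (illustrative values, my own) of why this matters: the textbook standard error quantifies variability under hypothetical resampling from a ‘superpopulation’, a thought experiment that never took place when the data already exhaust the apparent population.

```python
import numpy as np

# Suppose we observe growth rates for ALL eight countries in a region,
# an 'apparent population' (the numbers here are made up), not a
# random sample from anything larger.
growth = np.array([1.2, 2.5, 0.8, 3.1, 1.9, 2.2, 0.5, 2.8])

mean = growth.mean()

# The textbook standard error presumes these eight values were drawn
# at random from some larger superpopulation of possible countries:
se = growth.std(ddof=1) / np.sqrt(len(growth))

# Within the observed world the mean is simply a fact; there is no
# sampling variability left to quantify. The SE only acquires meaning
# under the untestable superpopulation assumption.
```

The arithmetic is impeccable either way; what the rabbit-in-the-hat assumption buys is the license to interpret `se` as saying something about an unobserved population.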

## Model uncertainty and ergodicity

from **Lars Syll**

Post Keynesian authors have offered various classifications of uncertainty … A common distinction is that of *epistemological* versus *ontological* uncertainty, with the former depending on the limitations of human reasoning and the latter on the actual nature of social systems … Models of ontological uncertainty tend to hinge on the existence of information that is critical to the decision-making task. Fundamental uncertainty occurs in “situations in which at least some essential information about future events cannot be known at the moment of decision because this information does not exist and cannot be inferred from any existing data set” (Dequech 1999, 415-416). For Davidson (1991, 131), “true” uncertainty arises when “the decision maker believes that no information regarding future prospects exists today and therefore the future is not calculable.”

In the model-based view of uncertainty, by contrast, it is not the existence of information that determines uncertainty, but the credibility of the model(s) used to encode available information. By focusing on the existence of information, or its completeness, these Post Keynesian accounts of ontological uncertainty implicitly accept the possibility that if economic agents had sufficient information they could apply that information to a model without uncertainty. Yet a suitably complex deterministic system … can prompt model uncertainty even if future outcomes are in principle knowable … Model uncertainty is thus epistemological rather than ontological in nature. It occurs even in environments with stable data generating processes.
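The passage’s claim about suitably complex deterministic systems can be illustrated with a standard toy example (my choice, not the quoted authors’): the logistic map, a stable and fully deterministic data-generating process whose trajectories nevertheless defeat calculation.

```python
# Logistic map: x_{t+1} = r * x_t * (1 - x_t), chaotic at r = 4.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 for `steps` steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two 'models' of the same system, differing only in the 10th decimal
# of the initial condition:
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

# After 50 steps the trajectories are macroscopically different:
# the future is 'in principle knowable' (the law is exact and stable)
# yet practically incalculable. The uncertainty is epistemological.
divergence = max(abs(p - q) for p, q in zip(a[-10:], b[-10:]))
```

No information is ‘missing’ here in the Post Keynesian sense; the model uncertainty arises entirely from the credibility limits of any finitely specified model.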

An interesting paper that merits a couple of comments. Read more…

## The misuse of mathematics in economics

from **Lars Syll**

Many American undergraduates in Economics interested in doing a Ph.D. are surprised to learn that the first year of an Econ Ph.D. feels much more like entering a Ph.D. in solving mathematical models by hand than it does with learning economics. Typically, there is very little reading or writing involved, but loads and loads of fast algebra is required. Why is it like this? …

One reason to use math is that it is easy to use math to trick people. Often, if you make your assumptions in plain English, they will sound ridiculous. But if you couch them in terms of equations, integrals, and matrices, they will appear more sophisticated, and the unrealism of the assumptions may not be obvious, even to people with Ph.D.’s from places like Harvard and Stanford, or to editors at top theory journals such as Econometrica …

Given the importance of signaling in all walks of life, and given the power of math, not just to illuminate and to signal, but also to trick, confuse, and bewilder, it thus makes perfect sense that roughly 99% of the core training in an economics Ph.D. is in fact in math rather than economics.

Indeed.

No, there is nothing wrong with mathematics *per se*. Read more…

## DSGE models — a macroeconomic dead end

from **Lars Syll**

Both approaches to DSGE macroeconometrics (VAR and Bayesian) have evident vulnerabilities, which substantially derive from how parameters are handled in the technique. In brief, parameters from formally elegant models are calibrated in order to obtain simulated values that reproduce some stylized fact and/or some empirical data distribution, thus relating the underlying theoretical model and the observational data. But there are at least three main respects in which this practice fails.

First of all, DSGE models have substantial difficulties in taking account of many important mechanisms that actually govern real economies, for example, institutional constraints like the tax system, thereby reducing DSGE power in policy analysis … In the attempt to deal with this serious problem, various parameter constraints on the model policy block are provided. They derive from institutional analysis and reflect policymakers’ operational procedures. However such model extensions, which are intended to reshape its predictions to reality and to deal with the underlying optimization problem, prove to be highly unflexible, turning DSGE into a “straitjacket tool” … In particular, the structure imposed on DSGE parameters entails various identification problems, such as observational equivalence, underidentification, and partial and weak identification.

These problems affect both empirical DSGE approaches. Fundamentally, they are ascribable to the likelihoods to estimate. In fact, the range of structural parameters that generate impulse response functions and data distributions fitting very close to the true ones does include model specifications that show very different features and welfare properties. So which is the right model specification (i.e., parameter set) to choose? As a consequence, reasonable estimates do not derive from the informative contents of models and data, but rather from the ancillary restrictions that are necessary to make the likelihoods informative, which are often arbitrary. Thus, after the Lucas’s super-exogeneity critique has been thrown out the door, it comes back through the window.
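Observational equivalence is easy to exhibit in miniature. In this sketch (a toy model of my own, not an actual DSGE), two ‘structural’ parameter sets imply exactly the same likelihood, so no amount of data can choose between them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 'structural' model: y = alpha * beta * x + noise. Only the
# product alpha*beta is identified, so (alpha, beta) = (2, 3) and
# (3, 2) -- parameter sets with very different structural stories --
# imply exactly the same distribution for the observables.
x = rng.normal(size=500)
y = 2.0 * 3.0 * x + rng.normal(size=500)

def gaussian_loglik(alpha, beta, x, y):
    """Log-likelihood of y given x under y ~ N(alpha*beta*x, 1)."""
    resid = y - alpha * beta * x
    return -0.5 * np.sum(resid**2) - 0.5 * len(y) * np.log(2 * np.pi)

ll_a = gaussian_loglik(2.0, 3.0, x, y)
ll_b = gaussian_loglik(3.0, 2.0, x, y)

# Identical fit, different 'structure': only an ancillary restriction
# (e.g. pinning beta a priori) makes the likelihood informative.
```

Any estimate that nevertheless picks one pair over the other is driven by the restriction, not by the data, which is precisely the point about ancillary and often arbitrary restrictions above.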

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude toward probabilistic inferences in economic contexts. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavor since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities, or causes. Read more…

## Is economics nothing but a library of models?

from **Lars Syll**

Chameleons arise and are often nurtured by the following dynamic. First a bookshelf model is constructed that involves terms and elements that seem to have some relation to the real world and assumptions that are not so unrealistic that they would be dismissed out of hand. The intention of the author, let’s call him or her “Q,” in developing the model may be to say something about the real world or the goal may simply be to explore the implications of making a certain set of assumptions … If someone skeptical about X challenges the assumptions made by Q, some will say that a model shouldn’t be judged by the realism of its assumptions, since all models have assumptions that are unrealistic …

Chameleons are models that are offered up as saying something significant about the real world even though they do not pass through the filter. When the assumptions of a chameleon are challenged, various defenses are made (e.g., one shouldn’t judge a model by its assumptions, any model has equal standing with all other models until the proper empirical tests have been run, etc.). In many cases the chameleon will change colors as necessary, taking on the colors of a bookshelf model when challenged, but reverting back to the colors of a model that claims to apply to the real world when not challenged.

As we all know, economics has become a model-based science. And in many of the methodology and philosophy of economics books published during the last two decades, this is seen as something positive.

In Dani Rodrik’s *Economics Rules* (OUP 2015) — just to take one illustrative example — economics is looked upon as nothing but a smorgasbord of ‘thought experimental’ models. For every purpose you may have, there is always an appropriate model to pick. The proliferation of economic models is unproblematically presented as a sign of great diversity and abundance of new ideas: Read more…

## The empirical turn in economics

from **Lars Syll**

What unifies the discipline is rather causal identification, that is, a set of statistical methods for estimating cause-and-effect relations between some factor and economic outcomes. In this perspective, the scientific approach aims to reproduce *in vivo* the laboratory experiment, in which one can easily distinguish the difference in outcome between a group receiving a treatment and another, similar group that is left unaffected. Statistical tools would allow economists to apply this method outside the laboratory, including to history and to any other subject. Here again, this picture would need considerable qualification. But it does not seem absurd to say that whereas, to grasp the canons of the discipline, every economist previously had to master at least the basics of rational-choice calculation, today it is above all a matter of mastering the basics of econometric identification (instrumental variables and the difference-in-differences method in particular).

Even if the canons of the discipline have changed, the relations of mainstream economics to other disciplines have not evolved. Some economists used to consider themselves superior because they believed that only formal models of the rational individual could explain behaviour scientifically. Other explanations, in their view, amounted to non-rigorous subjective assessment.

Although discounting empirical evidence cannot be the right way to solve economic issues, there are still, as Monnet argues, several weighty reasons why we perhaps shouldn’t be too excited about the so-called ’empirical revolution’ in economics. Read more…

## On models and simplicity

from **Lars Syll**

When it comes to modelling, yours truly does see the point emphatically made time after time by, e.g., Paul Krugman about simplicity — at least as long as it doesn’t impinge on our truth-seeking. ‘Simple’ macroeconomic models may of course be an informative heuristic tool for research. But if practitioners of modern macroeconomics do not investigate and justify the credibility of the simplicity assumptions on which they erect their buildings, those models will not fulfil their tasks. Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a sceptic of the pretences and aspirations of ‘simple’ macroeconomic models and theories. So far, I can’t really see that, e.g., ‘simple’ microfounded models have yielded very much in terms of *realistic* and *relevant* economic knowledge.

All empirical sciences use simplifying or unrealistic assumptions in their modelling activities. That is not the issue – *as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons*.

Being able to model a ‘credible world,’ a world that somehow could be considered real or *similar* to the real world, is not the same as investigating the real world. Even though all theories are false since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in *any* way. The falsehood or unrealism has to be *qualified*. Read more…

## Macroeconomics and the Friedman-Savage ‘as if’ logic

from **Lars Syll**

An objection to the hypothesis just presented that is likely to be raised by many … is that it conflicts with the way human beings actually behave and choose. … Is it not patently unrealistic to suppose that individuals … base their decision on the size of the *expected utility*? While entirely natural and understandable, this objection is not strictly relevant … The hypothesis asserts rather that, in making a particular class of decisions, individuals behave *as if* they calculated and compared expected utility and *as if* they knew the odds. The validity of this assertion … depends solely on whether it yields sufficiently accurate predictions about the class of decisions with which the hypothesis deals.

‘Modern’ macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and New ‘Keynesian’ — still follows the Friedman-Savage ‘as if’ logic of denying the existence of genuine uncertainty and treats variables as if drawn from a known ‘data-generating process’ with a known probability distribution that unfolds over time and of which we therefore have access to heaps of historical time-series. If we do not assume that we know the ‘data-generating process’ — if we do not have the ‘true’ model — the whole edifice collapses. And of course, it has to. Who really honestly believes that we have access to this mythical Holy Grail, the data-generating process? Read more…

## The dangers of using unproved assumptions

from **Lars Syll**

The unpopularity of the principle of organic unities shows very clearly how great is the danger of the assumption of unproved additive formulas. The fallacy, of which ignorance of organic unity is a particular instance, may perhaps be mathematically represented thus: suppose f(x) is the goodness of x and f(y) is the goodness of y. It is then assumed that the goodness of x and y together is f(x) + f(y) when it is clearly f(x + y) and only in special cases will it be true that f(x + y) = f(x) + f(y). It is plain that it is never legitimate to assume this property in the case of any given function without proof.

J. M. Keynes “Ethics in Relation to Conduct” (1903)

Since econometrics doesn’t content itself with only making optimal *predictions*, but also aspires to *explain* things in terms of causes and effects, econometricians need loads of assumptions — the most important of these being *additivity* and *linearity*. Important, simply because if they are not true, your model is invalid and descriptively incorrect. It’s like calling your house a bicycle. No matter how you try, it won’t move you an inch. When the model is wrong — well, then it’s wrong.
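Keynes’s formula is easy to check numerically. A minimal sketch with an illustrative non-additive function, f(x) = x²:

```python
def f(x):
    """An illustrative non-additive valuation function: f(x) = x**2."""
    return x * x

x, y = 1.0, 1.0

together = f(x + y)       # the goodness of x and y together: f(x + y)
separately = f(x) + f(y)  # the assumed additive formula: f(x) + f(y)

# Additivity fails: f(x + y) = 4.0 while f(x) + f(y) = 2.0.
assert together != separately

# Keynes's 'special cases' are the linear functions, for which
# additivity does hold: g(x + y) = g(x) + g(y).
def g(x):
    return 3.0 * x

assert g(x + y) == g(x) + g(y)
```

As Keynes says, the additive property can never be assumed of a given function without proof; here the assumption would be off by a factor of two.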

## Leontief and the sorry state of economics

from **Lars Syll**

Page after page of professional economic journals are filled with mathematical formulas leading the reader from sets of more or less plausible but entirely arbitrary assumptions to precisely stated but irrelevant theoretical conclusions …

Year after year economic theorists continue to produce scores of mathematical models and to explore in great detail their formal properties; and the econometricians fit algebraic functions of all possible shapes to essentially the same sets of data without being able to advance, in any perceptible way, a systematic understanding of the structure and the operations of a real economic system.

Mainstream economics is, as noted by Leontief, hopelessly irrelevant to the understanding of the real world, and the main reason for this irrelevance is the failure of economists to match their methods with their subject of study. The fixation on constructing models showing the certainty of logical entailment has been detrimental to the development of relevant and realist economics. Insisting on formalistic-mathematical modeling forces the economist to give up on realism and real-world relevance.

It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity next to nothing. Why anyone should be interested in those kinds of theories and models — as long as no one comes up with export licenses for the theories and models to the real world in which we live — is beyond comprehension.

## Macroeconomic aspirations

from **Lars Syll**

Some economists seem to be overjoyed by the fact that they are using the same ‘language’ as real business cycle macroeconomists and that they can therefore somehow learn something from them.

James Tobin obviously did not find any need to speak the RBC ‘language’:

They try to explain business cycles solely as problems of information, such as asymmetries and imperfections in the information agents have. Those assumptions are just as arbitrary as the institutional rigidities and inertia they find objectionable in other theories of business fluctuations … I try to point out how incapable the new equilibrium business cycles models are of explaining the most obvious observed facts of cyclical fluctuations … I don’t think that models so far from realistic description should be taken seriously as a guide to policy … I don’t think that there is a way to write down any model which at one hand respects the possible diversity of agents in taste, circumstances, and so on, and at the other hand also grounds behavior rigorously in utility maximization and which has any substantive content to it.

Arjo Klamer, *The New Classical Macroeconomics: Conversations with the New Classical Economists and their Opponents*, Wheatsheaf Books, 1984

Using the same microfoundational ‘language’ as mainstream macroeconomists doesn’t take us very far. Far better than having a common ‘language’ is to have a well-founded, realist, and relevant theory: Read more…

## The Nobel prize in economics — awarding popular misconceptions

from **Lars Syll**

This year’s Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel honours Ben Bernanke, Douglas Diamond and Philip Dybvig. In the view of the Royal Swedish Academy of Sciences, the laureates ‘have significantly improved our understanding of the role of banks in the economy’.

But what is the role of banks in the economy? The academy describes it this way: ‘To understand why a banking crisis can have such enormous consequences for society, we need to know what banks actually do: they receive money from people making deposits and channel it to borrowers.’ According to this view, banks are thus pure intermediaries or dealers of savings between saving households and investing companies. It is a view widespread in economics today but there has long been a completely different theory of the function of banks.

This was formulated, among others, by Joseph Schumpeter. In his *Theory of Economic Development* Schumpeter wrote: ‘The banker, therefore, is not so much primarily a middleman in the commodity “purchasing power” as a producer of this commodity.’

In this vein, in 2014 the Bank of England affirmed: ‘Money creation in practice differs from some popular misconceptions—banks do not act simply as intermediaries, lending out deposits that savers place with them.’ Three years later, the Deutsche Bundesbank similarly spoke of the ‘popular misconception that banks act simply as intermediaries at the time of lending—ie that banks can only grant loans using funds placed with them previously as deposits by other customers’ …

It is hard to understand how the Swedish academy could decide to honour a theory which—due to its ‘real analysis’—is unsuitable to represent monetary processes in reality. In terms of economic policy, it makes a fundamental difference whether banks are merely intermediaries of savings or insurance companies or whether they are producers of purchasing power. The ‘real analysis’ was an important factor in the inability of the economics profession to anticipate the Great Financial Crisis in time.

After that painful experience, to extol with the Nobel prize for economics the intermediation approach to banking is akin to posthumously offering Ptolemy the prize for physics—because he discovered that the sun revolved around the earth.

Yes indeed — money doesn’t matter in mainstream macroeconomic models. That’s true. According to the ‘classical dichotomy,’ real variables — output and employment — are independent of monetary variables, and so enable mainstream economics to depict the economy as basically a barter system.

But in the real world in which we happen to live, money certainly does matter. Money is not neutral and money matters in both the *short* run and the *long* run: Read more…

## On the validity of econometric inferences

from **Lars Syll**

The impossibility of proper specification is true generally in regression analyses across the social sciences, whether we are looking at the factors affecting occupational status, voting behavior, etc. The problem is that as implied by the three conditions for regression analyses to yield accurate, unbiased estimates, you need to investigate a phenomenon that has underlying mathematical regularities – and, moreover, you need to know what they are. Neither seems true. I have no reason to believe that the way in which multiple factors affect earnings, student achievement, and GNP have some underlying mathematical regularity across individuals or countries. More likely, each individual or country has a different function, and one that changes over time. Even if there was some constancy, the processes are so complex that we have no idea of what the function looks like.

Researchers recognize that they do not know the true function and seem to treat, usually implicitly, their results as a good-enough approximation. But there is no basis for the belief that the results of what is run in practice is anything close to the underlying phenomenon, even if there is an underlying phenomenon. This just seems to be wishful thinking. Most regression analysis research doesn’t even pay lip service to theoretical regularities. But you can’t just regress anything you want and expect the results to approximate reality. And even when researchers take somewhat seriously the need to have an underlying theoretical framework – as they have, at least to some extent, in the examples of studies of earnings, educational achievement, and GNP that I have used to illustrate my argument – they are so far from the conditions necessary for proper specification that one can have no confidence in the validity of the results.
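The ‘different function for each individual’ problem can be made concrete with a toy simulation (my own, with made-up groups): when two groups follow opposite true relationships, the pooled regression’s ‘good-enough approximation’ describes neither.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical illustration: two groups of 1,000 individuals whose
# outcome depends on x with OPPOSITE true slopes (+1 and -1).
n = 1000
x = rng.uniform(0.0, 1.0, 2 * n)
group = np.repeat([1.0, -1.0], n)
y = group * x + rng.normal(0.0, 0.05, 2 * n)

def ols_slope(x, y):
    """Slope from a least-squares fit of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

slope_group1 = ols_slope(x[:n], y[:n])  # close to +1
slope_group2 = ols_slope(x[n:], y[n:])  # close to -1
pooled_slope = ols_slope(x, y)          # close to 0

# The pooled regression assumes one common function for everyone and
# returns a slope near zero, a result that is not 'anything close to
# the underlying phenomenon' for any actual individual.
```

The pooled estimate is perfectly well-defined arithmetically; it just answers a question about an aggregate function that nobody in the data actually has.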

Most work in econometrics and regression analysis is done on the assumption that the researcher has a theoretical model that is ‘true.’ Read more…
