
The limits of extrapolation in economics

from Lars Syll

There are two basic challenges that confront any account of extrapolation that seeks to resolve the shortcomings of simple induction. One challenge, which I call extrapolator’s circle, arises from the fact that extrapolation is worthwhile only when there are important limitations on what one can learn about the target by studying it directly. The challenge, then, is to explain how the suitability of the model as a basis for extrapolation can be established given only limited, partial information about the target … The second challenge is a direct consequence of the heterogeneity of populations studied in biology and social sciences. Because of this heterogeneity, it is inevitable there will be causally relevant differences between the model and the target population.

In economics — as a rule — we can’t experiment on the real-world target directly. To experiment, economists therefore standardly construct ‘surrogate’ models and perform ‘experiments’ on them. To be of interest to us, these surrogate models have to be shown to be relevantly ‘similar’ to the real-world target, so that knowledge from the model can be exported to it. The fundamental problem highlighted by Steel is that this ‘bridging’ is deeply problematic — to show that what is true of the model is also true of the real-world target, we have to know what is true of the target; but to know what is true of the target, we have to know that we have a good model …

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if no parts or aspects of the model have relevant and important counterparts in the real-world target system? The burden of proof rests on theoretical economists who think they have contributed something of scientific relevance without even hinting at a bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built tractability assumptions — e.g. invariance, additivity, faithfulness, modularity, common knowledge, etc. — made solely to secure deductively validated results in mathematical models, are of little value if they cannot be validated outside the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers who argue for a less demanding view of modeling and theorizing in economics. To some theoretical economists it is deemed quite enough to treat economics as a mere “conceptual activity,” where the model is seen not so much as an abstraction from reality as a kind of “parallel reality.” By considering models as such constructions, the economist distances the model from the intended target, demanding only that the models be credible, thereby enabling him to make inductive inferences to the target systems.

But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions) deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of surrogate models.

Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that could somehow be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealism has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se warrant exporting the claims to real-world target systems.

Questions of external validity — the claims the extrapolation inference is supposed to deliver — are important. It can never be enough that models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

  1. Helge Nome
    October 15, 2019 at 5:17 pm

    “Questions of external validity — the claims the extrapolation inference is supposed to deliver — are important. It can never be enough that models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.”

    Extrapolation is a product of a lazy mind in many fields of human activity, including so-called “climate change” : “The sky scrapers of New York City will disappear under the ocean as water levels rise”.

    Fortunately, Nature has a way of bringing us back to our senses.
    The predictions of so-called “economists”, be they reassuring or alarmist, are regularly composted in Nature’s garden.

  2. Frank Salter
    October 15, 2019 at 5:28 pm

    Extrapolation is the significant word! Extrapolation is only required if the underlying theoretical relationship is NOT known. In my paper, Transient Development RWER-81, Figure 4 on page 156 shows how all Solow’s equations, which fit the data well, are nonsensical when extrapolated. Only the theoretically justifiable equation transient relationship provides sensible value across the whole range of possible values.
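    The general phenomenon Salter points to can be sketched in a toy example (not his actual equations, and not Solow’s — the curve and numbers below are invented for illustration): a flexible functional form can fit the observed range well yet produce nonsense far outside it.

    ```python
    # Hypothetical illustration: a polynomial fitted to data drawn from a
    # diminishing-returns curve on [1, 2] fits well in-sample but gives
    # nonsensical values when extrapolated to x = 10.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 2.0, 20)                  # observed range only
    y = np.sqrt(x) + rng.normal(0, 0.01, x.size)   # noisy 'true' curve

    coeffs = np.polyfit(x, y, 6)                   # flexible in-sample fit
    poly = np.poly1d(coeffs)

    in_sample_err = np.max(np.abs(poly(x) - y))    # small: the model 'fits the data well'
    extrapolated = poly(10.0)                      # far outside the fitted range
    truth = np.sqrt(10.0)

    print(in_sample_err)
    print(extrapolated, truth)
    ```

    The fit is indistinguishable from the truth inside the sample, yet the extrapolated value bears no relation to it — exactly the failure mode a theoretically grounded functional form is supposed to rule out.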

    • Frank Salter
      October 15, 2019 at 5:31 pm

      Correction! Delete the word equation from the penultimate line above.

  3. Ken Zimmerman
    October 21, 2019 at 12:12 pm

    Scientists, philosophers, and the like may be challenged in justifying induction and extrapolation. Many ordinary people are not. Police and private detectives do it all the time, as do many members of the general public. They often call the process “connecting the dots.” The phrase “connect the dots” can be used as a metaphor for the ability (or inability) to associate one idea with another, to find the “big picture,” or salient feature, in a mass of data.

    Reuven Feuerstein features the connection of dots as the first tool in his cognitive development program. Feuerstein was an Israeli clinical, developmental, and cognitive psychologist, known for his theory of intelligence which states “it is not ‘fixed’, but rather modifiable”. Feuerstein is recognized for his work in developing the theories and applied systems of structural cognitive modifiability, mediated learning experience, cognitive map, deficient cognitive functions, learning propensity assessment device, instrumental enrichment programs, and shaping modifying environments. These interlocked practices provide educators with the skills and tools to systematically develop students’ cognitive functions and operations to build meta-cognition.

    Another version of the connect the dots struggle is “the travelling salesman problem.” The travelling salesman problem (TSP) asks the following question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?” It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP.
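    The question in the TSP statement above can be made concrete with a minimal brute-force solver (the four cities and distances below are made up for illustration):

    ```python
    # Brute-force TSP: try every ordering of the remaining cities.
    # This is O(n!) work, which is exactly why the problem is hard at scale.
    from itertools import permutations

    # Symmetric pairwise distances for four hypothetical cities
    dist = {
        ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 4,
        ("C", "D"): 8,
    }

    def d(u, v):
        """Distance lookup, treating (u, v) and (v, u) the same."""
        return dist.get((u, v)) or dist.get((v, u))

    def shortest_tour(cities, start="A"):
        """Return (length, tour) for the shortest round trip from start."""
        rest = [c for c in cities if c != start]
        best = None
        for perm in permutations(rest):
            tour = (start,) + perm + (start,)
            length = sum(d(a, b) for a, b in zip(tour, tour[1:]))
            if best is None or length < best[0]:
                best = (length, tour)
        return best

    length, tour = shortest_tour(["A", "B", "C", "D"])
    print(length, tour)
    ```

    With n cities the loop examines (n−1)! tours, so this exhaustive approach stops being feasible almost immediately — the NP-hardness the comment mentions.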

    Steve Jobs writes about connecting the dots. “You can’t connect the dots looking forward, you can only connect them looking backwards.” To illustrate he tells the story of fonts on computers. Do you know why you have so many fonts on your computer? Steve Jobs said it is because of a series of events in his life that to most would appear unrelated and insignificant. He said that because he dropped out of college, he was able to sit in on a calligraphy class at the leading calligraphy school in the country – the very college he briefly attended solely because of an adoption agreement between his birthmother and adoptive parents. Later, it was Jobs who insisted on providing fonts on Apple computers, and the rest is history.

    Connect the dots is a puzzle, a mystery. Historians work with such puzzles frequently. They also teach with them. One advertisement for such teaching exclaims, “Kick start your student’s mental focus with the amazing Extreme Dot to Dot: U.S. History. Unlike conventional dot-to-dot puzzles that reveal too much of the picture and ruin the mystery, Extreme Dot to Dot: U.S. History’s puzzles are intricate, and so challenging that when your student looks at a page, he will have no idea what the end result will be.”

    So, you see extrapolation is a mystery, a human constructed mystery, but one that often allows humans to see their own actions more clearly.

  4. ghholtham
    October 21, 2019 at 11:54 pm

    Mr Salter is correct. Extrapolation is frequently necessary in economics because the underlying relationships are not known. The best you can do then is to extrapolate as intelligently as possible and be aware that the result is highly uncertain. There has been a regrettable tendency in economics to try to improve on extrapolation by inventing theoretical models based on choice-theoretic constructs that have no sound empirical basis. Since we do not usually have enough data to identify all the parameters in a realistic model, the habit grew up of imposing the results of the choice-theoretic models to fix the long-run properties of the empirical models and using the degrees of freedom in statistical estimation to determine short-run dynamic adjustment properties. This meant short-run properties were data-driven to some extent but longer-run properties had no empirical basis. It was an illegitimate approach in my view and open to Lars Syll’s criticisms. Recent events are only slowly abolishing this error in method since they demonstrate that real-world systems do not necessarily tend to a stable equilibrium. The choice-theoretic models also made a fundamental category error in treating an aggregate like a population as if it could be modelled as two or three purposeful “rational” agents – although we know from aggregation theory that this is just not so.

    In fact most practical models in economics are a mixture of extrapolation and low-brow theory. We know, for example, that consumer spending tends to rise when incomes do, and we know from personal experience that if our income goes up we will tend to spend more – at least in Western societies. It is a short step from there to positing an aggregate “consumption function” which promiscuously mixes individual behaviour and aggregation elements. It won’t be entirely stable but may be stable enough to make predictions. We are not simply extrapolating because we have a general sense as to what is going on. Attempts to improve this homely approach by positing consumers who maximise their lifetime utility and optimise inter-temporally with their consumption/saving decision lead to pretty maths but violate what we know from simple observation and introspection and – crucially – do not improve the predictive performance of our model.
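    The “low-brow theory” move can be sketched in a few lines: posit a linear consumption function C = a + bY and fit it by least squares. All numbers below are made up for illustration, not real national-accounts data:

    ```python
    # Toy aggregate consumption function: C = a + b*Y, fitted to
    # invented aggregate income/spending figures.
    import numpy as np

    income = np.array([100.0, 110.0, 120.0, 130.0, 140.0])
    consumption = np.array([82.0, 89.5, 97.0, 104.0, 111.5])

    # b is the estimated marginal propensity to consume,
    # a is autonomous consumption.
    b, a = np.polyfit(income, consumption, 1)
    print(round(a, 2), round(b, 3))

    # The line predicts out of sample, but nothing guarantees the
    # relation stays stable there -- the extrapolation caveat again.
    predicted = a + b * 150.0
    print(round(predicted, 2))
    ```

    The fitted slope is exactly the “stable enough to make predictions” claim in quantitative form; whether it survives outside the sample is an empirical bet, not a theorem.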

    But no sane economist expects to provide “sensible value across the whole range of possible values.” That kind of theory just does not exist in economics and never will. We are dealing with a complex, evolving system with some self-referentiality. And you want a general theory? Partial models, used eclectically and judiciously when appropriate, and then with awareness of fallibility, are the best we can do.
