
Some methodological perspectives on causal modeling in economics

from Lars Syll

Causal modeling attempts to maintain this deductive focus within imperfect research by deriving models for observed associations from more elaborate causal (‘structural’) models with randomized inputs … But in the world of risk assessment … the causal-inference process cannot rely solely on deductions from models or other purely algorithmic approaches. Instead, when randomization is doubtful or simply false (as in typical applications), an honest analysis must consider sources of variation from uncontrolled causes with unknown, nonrandom interdependencies. Causal identification then requires nonstatistical information in addition to information encoded as data or their probability distributions …

This need raises questions of to what extent inference can be codified or automated (which is to say, formalized) in ways that do more good than harm. In this setting, formal models – whether labeled 'causal' or 'statistical' – serve a crucial but limited role in providing hypothetical scenarios that establish what would be the case if the assumptions made were true and the input data were both trustworthy and the only data available. Those input assumptions include all the model features and prior distributions used in the scenario, and supposedly encode all information being used beyond the raw data file (including information about the embedding context as well as the study design and execution).

Overconfident inferences follow when the hypothetical nature of these inputs is forgotten and the resulting outputs are touted as unconditionally sound scientific inferences instead of the tentative suggestions that they are (however well informed) …

The practical limits of formal models become especially apparent when attempting to integrate diverse information sources. Neither statistics nor medical science begins to capture the uncertainty attendant on this process, and in fact both encourage pernicious overconfidence by failing to make adequate allowance for unmodeled uncertainty sources. Instead of emphasizing the uncertainties attending field research, statistics and other quantitative methodologies tend to focus on mathematics, and often fall prey to the satisfying – and false – sense of logical certainty that it brings to population inferences. Meanwhile, medicine focuses on biochemistry and physiology, and the satisfying – and false – sense of mechanistic certainty that those bring to individual events.

Sander Greenland

Wise words from a renowned epidemiologist.

As long as economists and statisticians cannot identify their statistical theories with real-world phenomena, there is no real warrant for taking their statistical inferences seriously.

Just as there is no such thing as a 'free lunch,' there is no such thing as a 'free probability.' To be able at all to talk about probabilities, you have to specify a model. In statistics, any process in which you observe or measure something is referred to as an experiment (rolling a die), and the results obtained are the outcomes or events of the experiment (the number of points rolled with the die, e.g. 3 or 5). If there is no chance set-up or model that generates the probabilistic outcomes or events, then, strictly speaking, there is no event at all.

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then, to be of any empirical scientific value, it has to be shown to coincide with (or at least converge to) real data generating processes or structures, something that is seldom or never done!
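
To make this concrete, here is a minimal sketch in Python (numpy assumed available; the fair die is the chance set-up from the example above). Probabilities are defined only relative to the model we specify, and the observed frequencies converge to them only because the data were generated by that very set-up:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # The chance set-up, specified by us: a fair six-sided die.
    model_probs = np.full(6, 1 / 6)

    # The 'experiment': 100,000 rolls generated under the assumed model.
    rolls = rng.integers(1, 7, size=100_000)
    empirical = np.bincount(rolls, minlength=7)[1:] / rolls.size

    # The frequencies converge to the model probabilities only because we
    # built the data-generating process to match the model. For economic
    # field data, no such warrant comes for free; it has to be argued for.
    print(np.round(model_probs, 3))
    print(np.round(empirical, 3))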

And this is the basic problem with economic data. If you have a fair roulette wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of analogous 'nomological machines' for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!

The tool of statistical inference becomes available as the result of a self-imposed limitation of the universe of discourse. It is assumed that the available observations have been generated by a probability law or stochastic process about which some incomplete knowledge is available a priori …

It should be kept in mind that the sharpness and power of these remarkable tools of inductive reasoning are bought by willingness to adopt a specification of the universe in a form suitable for mathematical analysis.

Tjalling Koopmans

Yes indeed — to make inferences using statistics and econometrics, you have to make lots of (mathematical) tractability assumptions. And especially since econometrics aspires to explain things in terms of causes and effects, it needs loads of assumptions, such as invariance, additivity and linearity.

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable, in the sense that they do not change when we 'export' them to our 'target systems,' we have to be able to show that they do not only hold under ceteris paribus conditions. If not, they are of limited value to our explanations and predictions of real economic systems.

Unfortunately, real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of 'laws' and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms that are invariant, atomistic and additive. But when causal mechanisms operate in the real world, they mostly do so in ever-changing and unstable ways. If economic regularities obtain, they do so as a rule only because we engineered them for that purpose. Outside man-made 'nomological machines' they are rare, or even non-existent.
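
A minimal sketch of this invariance problem (hypothetical numbers, Python with numpy): the mechanism linking x to y changes halfway through the sample, yet a pooled linear regression, which presupposes one invariant, additive, linear coefficient, returns a single stable-looking number that describes neither regime:

    import numpy as np

    rng = np.random.default_rng(seed=42)
    n = 1000
    x = rng.normal(size=2 * n)

    # A mechanism that is NOT invariant: the effect of x on y shifts midway.
    beta = np.concatenate([np.full(n, 0.5), np.full(n, 2.0)])
    y = beta * x + rng.normal(scale=0.5, size=2 * n)

    # Pooled OLS imposes a single invariant, additive, linear coefficient.
    X = np.column_stack([np.ones(2 * n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Prints roughly 1.25: an artefact of the model's invariance assumption,
    # matching neither the 0.5 regime nor the 2.0 regime.
    print(round(coef[1], 2))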

So — if we want to explain and understand real-world economies we should perhaps be a little bit more cautious with using universe specifications “suitable for mathematical analysis.”


As emphasised by Greenland, causality in the social sciences, and in economics, can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation, the foundation of all econometrics, can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different possible explanations, but we want to find the best of all contrastive explanations (since all real explanation takes place relative to a set of alternatives). So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another is that it identifies powerful, deep causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning, especially the variety based on a Bayesian epistemology, generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis.
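
To illustrate that last point with a toy calculation (hypothetical numbers, Python): two rival hypotheses that assign the same probability to the evidence receive exactly the same support from the formalism, however much they differ as explanations:

    from math import comb

    # Evidence: 7 'successes' in 10 trials.
    k, n = 7, 10

    def likelihood(p):
        # Binomial probability of the evidence given success probability p.
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # H1: a deep structural mechanism that implies p = 0.7.
    # H2: an ad hoc curve-fit that also happens to imply p = 0.7.
    bayes_factor = likelihood(0.7) / likelihood(0.7)

    # Prints 1.0: the probabilistic relation between evidence and hypothesis
    # is identical, so the formalism cannot prefer the deeper explanation.
    print(bayes_factor)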

Some statisticians and data scientists think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming 'convenient' things like faithfulness or stability is not to give proofs; it is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived theorems and the real world, well, then we haven't really obtained the causation we are looking for.
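
A minimal sketch of why (Python with numpy, illustrative parameters): the two causally opposite models below, X causes Y and Y causes X, imply exactly the same joint distribution. No statistic computed from observational data alone can orient the arrow; that orientation comes from assumptions such as faithfulness or stability, which the data cannot themselves certify:

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 200_000

    # Model A: X -> Y, with X ~ N(0, 1) and Y = 0.8 X + noise.
    x_a = rng.normal(size=n)
    y_a = 0.8 * x_a + rng.normal(scale=0.6, size=n)

    # Model B: Y -> X, with parameters chosen so the implied joint Gaussian
    # distribution is identical to Model A's.
    var_y = 0.8**2 + 0.6**2                      # Var(Y) in Model A (= 1.0)
    y_b = rng.normal(scale=np.sqrt(var_y), size=n)
    x_b = 0.8 / var_y * y_b + rng.normal(scale=np.sqrt(1 - 0.8**2 / var_y), size=n)

    # Same covariance matrix, hence the same observational distribution:
    # the causal direction is underdetermined by the data alone.
    print(np.cov(x_a, y_a).round(2))
    print(np.cov(x_b, y_b).round(2))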

  1. Ken Zimmerman
    May 9, 2021 at 10:58 am

    From an anthropological perspective, it seems peculiar that physical and mental causalities are often contrasted in the psychological literature (Shultz 1982). While anthropologists would not be uncomfortable with the notion that social and non-social causality are distinct, most would be surprised to identify social causality with psychological (or mental) causality. It is a truism of the ethnological literature that social identity shapes behaviour, but ‘social identity’ is understood in the sense of social category identity (for a psychological model of identity more in accord with anthropological theory, see Tajfel (1981)). Across a range of otherwise conflicting anthropological theories runs a common concern with the way in which social life promotes corporate (or collective) identities that constrain and compel behaviour.

    Euro-American folk theory tends to obscure the societal and corporate aspects of identity and behaviour that anthropological accounts (of traditional and modern societies) emphasize, preferring instead to conceptualize identity and the causes of behaviour in terms of a radical individuality (Geertz 1973; Bellah et al. 1985; Cushman 1990). Central to most Western psychological accounts is a notion of the masterful, bounded, and individuated self, characterized by a core personality that transcends situations and contexts. While this view can be interpreted as a scientific theory about mentation, it is also possible to interpret such descriptions as part of Euro-American folk theory. Perhaps paradoxically, such an ethnotheory would be a theory of society (or sociality) that ignores situational constraints, contextual variables, and other aspects of the societal and cultural environment. None the less it is a theory of society. Accordingly, the evocation of a mentalistic logic in the psychologist's discourse on trait-disposition reasoning might be described as a particular cultural construction of category identity (one in which category identity is largely effaced and individualized). Again, considerable literature on intellectual history supports the notion that the modern Euro-American emphasis on individualism is a relatively recent development (Mauss 1938; Dumont 1985).

    • Robert Locke
      May 10, 2021 at 10:02 am

      Please quit quoting secondary sources. Go live and work in Europe; what contrasts you'll find.

      • Ken Zimmerman
        May 10, 2021 at 10:56 pm

        Primary source for anthropologists.

  2. Gerald Holtham
    May 10, 2021 at 8:23 pm

    Lars says: "…we have to be able to show that they do not only hold under ceteris paribus conditions. If not, they are of limited value to our explanations and predictions of real economic systems."

    That is an impossible standard and aspiration in social studies. Any social theory makes ceteris paribus assumptions unless it is trying to explain everything in every social situation – itself a mad aspiration. What a model leaves out it cannot, by definition, explain or predict. Therefore its expectations are inevitably thwarted from time to time by changes in the ceteris paribus conditions. To expect anything else is crying for the moon.
    Any model is therefore of "limited value" in explaining economic systems. What is one supposed to expect – unlimited value? The interesting question is: is the model of any value at all in helping us understand a system or predict its development?
    Lars sets an impossible standard and then points out that the standard is not met! He is right to criticise anyone making excessive claims (which certainly happens) but wrong to denigrate work because it does not guarantee impossible degrees of certainty. If you will accept only perfection you are certain to be disappointed.
    In practical applications every sensible analyst considers whether extraneous information should qualify or disqualify purely statistical inferences. Bayesian procedures explicitly allow for “mechanisms that we have warranted and justified reasons to believe in” (whatever they are) since they are part of the prior view essential to the Bayesian approach. Of course the Bayesian approach will then confront the prior with the quantitative data and tell you whether your prior should be modified. Is that so mistaken? What might the “warranted and justified” reasons be that are so powerful we must ignore other evidence? The Bayesian approach is more capacious than classical statistics in accommodating different sorts of information.
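
    (A minimal sketch of the updating described above, with hypothetical numbers and a conjugate beta-binomial set-up, in Python: the prior view is confronted with data and revised accordingly.)

        # Prior view over an unknown proportion, encoded as Beta(a, b)
        # (illustrative numbers: a confident belief that the proportion is high).
        a, b = 8, 2

        # Quantitative data confronting the prior: 3 successes in 20 trials.
        k, n = 3, 20

        # Conjugate update: the posterior is Beta(a + k, b + n - k).
        a_post, b_post = a + k, b + n - k

        prior_mean = a / (a + b)                      # 0.80
        posterior_mean = a_post / (a_post + b_post)   # 11/30, about 0.37
        print(prior_mean, posterior_mean)  # the data say: revise the prior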

    • Ken Zimmerman
      May 14, 2021 at 8:13 am

      Gerald, set aside the social sciences (including economics) and just consider everyday life. We recognize there are many events, ideas, beliefs, agents, etc. of which we are not aware that affect the actions we take and the justifications we hold for those actions. We also recognize that almost never can any of us identify with absolute certainty or precision how those effects operate. Some social scientists claim they can do what everyday persons cannot: they can lay out which cause goes with which effect and what the level of that relationship is. Can they? The available evidence seems to indicate sometimes, but not often. And their guesses are inconsistent at best. Ceteris paribus is just the 'formal' recognition that uncertainty and imprecision are endemic in our lives.

  3. Edward Ross
    May 14, 2021 at 1:30 am

    In reply to Robert Locke, May 10:
    I agree with your inference that practical experience and observation should take place before economic theory. Otherwise, to put it bluntly, economic theory becomes irrelevant. Ted

  4. Gerald Holtham
    May 15, 2021 at 1:11 am

    The world is complicated and we cannot address, far less answer, every possible question at once so ceteris paribus assumptions are inevitable. Fortunately reality seems to be structured so that it is possible to compartmentalise and address some issues while ignoring others. If it were not so, science would indeed be impossible. The aim of social science should surely be to do “what everyday persons cannot” and elucidate causes and effects that are not obvious. If you cannot advance knowledge beyond the commonplace, what’s the point? Social sciences have been conspicuously less successful than some physical sciences, we know. There are several reasons and the most important is the mutability of the subject matter. Mere opinion cannot change the gravitational constant but it can change social phenomena, like interest rates for example.
    Studies of specific episodes and cultures with all their particularities as done by historians and anthropologists can always turn up findings that are interesting and illuminating. Looking for valid generalisations or tendencies in social or economic behaviour that exist across different times and cultures is a much more hazardous undertaking. Success is not assured. There have been a lot of false starts and blind alleys and some going backwards but a little knowledge gained too. I don’t foresee the hit rate changing much but we might advance a bit quicker if people were more serious about testing theories and their limits empirically before investing in them.

    • Meta Capitalism
      May 15, 2021 at 4:28 am

      Looking for valid generalisations or tendencies in social or economic behaviour that exist across different times and cultures is a much more hazardous undertaking. ~ Gerald Holtham

      Reasonable generalizations, Gerald, but I find you undervalue what critical historical knowledge can teach us. If one examines critically what is happening in the US and other places, we see many patterns repeating themselves. The attack on the US Capitol and Trump's attempt to overturn a democratic election, and the tactics he used, are not new in light of critical historical evidence, to wit:

      “How Fascism Works: The Politics of Us and Them” by Jason Stanley.

      Start reading it for free: https://a.co/aRb70X9

      And there is also Madeleine Albright’s Fascism: A Warning:


      A personal and urgent examination of Fascism in the twentieth century and how its legacy shapes today’s world, written by one of America’s most admired public servants, the first woman to serve as U.S. secretary of state

      A Fascist, observes Madeleine Albright, “is someone who claims to speak for a whole nation or group, is utterly unconcerned with the rights of others, and is willing to use violence and whatever other means are necessary to achieve the goals he or she might have.”

      The twentieth century was defined by the clash between democracy and Fascism, a struggle that created uncertainty about the survival of human freedom and left millions dead. Given the horrors of that experience, one might expect the world to reject the spiritual successors to Hitler and Mussolini should they arise in our era. In Fascism: A Warning, Madeleine Albright draws on her experiences as a child in war-torn Europe and her distinguished career as a diplomat to question that assumption.

      Fascism, as she shows, not only endured through the twentieth century but now presents a more virulent threat to peace and justice than at any time since the end of World War II. The momentum toward democracy that swept the world when the Berlin Wall fell has gone into reverse. The United States, which historically championed the free world, is led by a president who exacerbates division and heaps scorn on democratic institutions. In many countries, economic, technological, and cultural factors are weakening the political center and empowering the extremes of right and left. Contemporary leaders such as Vladimir Putin and Kim Jong-un are employing many of the tactics used by Fascists in the 1920s and 30s.

      Fascism: A Warning is a book for our times that is relevant to all times. Written by someone who has not only studied history but helped to shape it, this call to arms teaches us the lessons we must understand and the questions we must answer if we are to save ourselves from repeating the tragic errors of the past.

  5. Ken Zimmerman
    May 15, 2021 at 11:57 am

    Everything has a history. A common statement by historians. Even Heraclitus' famous statement, "You cannot step into the same river twice, for other waters are continually flowing on," has one. But it is because everything has a history that situations and agents which, as Heraclitus points out, cannot be the same (change is continuous) are connected. History is the medium by which connections are made and generalizations are inspired. In everyday terms, history is the 'thing' humans create to accomplish this task.

  6. Robert Locke
    May 15, 2021 at 12:05 pm

    But the rule in history is to use original sources; secondary sources are suspect.

  7. Gerald Holtham
    May 15, 2021 at 5:57 pm

    Evidently lessons can be drawn from history, but historians are very cautious about generalization. They are all too aware of the many elements at work in an historical situation and the many interwoven strands in the causal chain. It is one thing to classify a set of political beliefs and practices as fascism and warn against it as always having had negative consequences. It is another to produce a causal model of fascism and to say that it arises when a particular set of factors exists or combines in a particular way. It would be even harder to assign relative weights to the causal factors or give probabilistic forecasts of the outcome were the factors to occur. It would be very difficult to test any such theory conclusively. And historians are right not to attempt any such thing. There are too many possible variables and not enough closely observed and recorded data to make such a thing possible. The importance of studying history is clear, but for the most part it cannot be reduced to a science as generally understood.
    Can any social study be scientific in the sense of producing fairly detailed causal models of particular phenomena and testing them successfully on data? Opinions seem to differ on that question, on this blog and in general. Economics certainly produces causal models but the empirical status of most of them is contested to say the least.

  8. Ken Zimmerman
    May 16, 2021 at 7:50 am

    Textbooks on historiography say things such as this about primary sources:
    “Primary sources refer to documents or other items that provide first-hand, eyewitness accounts of events. For example, if you are studying the civil rights movement, a newspaper article published the day after the 1965 Selma to Montgomery march and a memoir written by someone who participated in the march would both be considered primary sources.

    Historians use primary sources as the raw evidence to analyze and interpret the past. They publish secondary sources – often scholarly articles or books – that explain their interpretation. When you write a historical research paper, you are creating a secondary source based on your own analysis of primary source material.

    Examples of primary sources include diaries, journals, speeches, interviews, letters, memos, photographs, videos, public opinion polls, and government records, among many other things.”

    But it's more complex than it seems. All the sources listed have been through at least one round of assessment and re-interpretation. Some through several. So while many are firsthand accounts, as they are written by someone who experienced the event and may include opinions, they often are not 'uninterpreted' accounts. Uninterpreted accounts by humans are not, as I see it, even possible. Also, a large portion of the accounts are not by people actually involved in what is described in the account. They are descriptions of descriptions. Sometimes even descriptions of rumors and gossip.

    As for general models, particularly causative ones, these are no strangers to historians. See, for example, Arnold Toynbee, Immanuel Wallerstein, Niall Ferguson, etc. In fact, it's difficult to find an historian who doesn't want to explain the causes of history. But you are correct that most historians do not believe models of history can be tested in the sense that term is used in physics, chemistry, etc. After all, while humans are composed of atoms, molecules, and similar, their actions, thoughts, beliefs, etc. 'emerge' from these and, as the term emerge makes clear, are not completely determined by them. Humans are not wholly materially determined, but neither are they wholly free from materiality. Thus, causal modeling of human beliefs, actions, thoughts, and similar has less grip, since human existence lies between material determinacy and full autonomy. That's what gives historians pause in pushing abstractions and generalizations too far in explaining and predicting human actions and beliefs.
