
Econometrics — a second-best explanatory practice

from Lars Syll

Consider two elections, A and B. For each of them, identify the events that cause a given percentage of voters to turn out. Once we have thus explained the turnout in election A and the turnout in election B, the explanation of the difference (if any) follows automatically, as a by-product. As a bonus, we might be able to explain whether identical turnouts in A and B are accidental, that is, due to differences that exactly offset each other, or not. In practice, this procedure might be too demanding. The data or the available theories might not allow us to explain the phenomena “in and of themselves.” We should be aware, however, that if we do resort to explanation of variation, we are engaging in a second-best explanatory practice.

Modern econometrics is fundamentally based on assuming — usually without any explicit justification — that we can gain causal knowledge by considering independent variables that may have an impact on the variation of a dependent variable. This is, however, far from self-evident. Often the fundamental causes are constant forces that are not amenable to the kind of analysis econometrics supplies us with. As Stanley Lieberson has it in Making It Count:

One can always say whether, in a given empirical context, a given variable or theory accounts for more variation than another. But it is almost certain that the variation observed is not universal over time and place. Hence the use of such a criterion first requires a conclusion about the variation over time and place in the dependent variable. If such an analysis is not forthcoming, the theoretical conclusion is undermined by the absence of information …

Moreover, it is questionable whether one can draw much of a conclusion about causal forces from simple analysis of the observed variation … To wit, it is vital that one have an understanding, or at least a working hypothesis, about what is causing the event per se; variation in the magnitude of the event will not provide the answer to that question.

Trygve Haavelmo was making a somewhat similar point back in 1941, when criticizing the treatment of the interest variable in Tinbergen’s regression analyses. The regression coefficient of the interest rate variable being zero was, according to Haavelmo, not sufficient for inferring that “variations in the rate of interest play only a minor role, or no role at all, in the changes in investment activity.” Interest rates may very well play a decisive indirect role by influencing other causally effective variables. And:

the rate of interest may not have varied much during the statistical testing period, and for this reason the rate of interest would not “explain” very much of the variation in net profit (and thereby the variation in investment) which has actually taken place during this period. But one cannot conclude that the rate of interest would be inefficient as an autonomous regulator, which is, after all, the important point.

This problem of ‘nonexcitation’ — when there is too little variation in a variable to say anything about its potential importance, and we cannot identify the reason why the factual influence of the variable is ‘negligible’ — strongly confirms that causality in economics and other social sciences can never be solely a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory.
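Haavelmo’s ‘nonexcitation’ problem can be illustrated with a small simulation (the numbers and setup here are hypothetical illustrations of mine, not Haavelmo’s, and Python with numpy is assumed). A regressor with a genuine causal effect but almost no sample variation yields a coefficient estimate whose standard error is so large as to be worthless, while the very same data-generating process with an ‘excited’ regressor recovers the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
beta = -0.5  # assumed true effect of the interest rate (in %) on investment

def ols(r):
    """Simulate investment given rates r, regress it on r by OLS,
    and return (slope estimate, standard error of the slope)."""
    y = 10.0 + beta * r + rng.normal(0.0, 1.0, n)   # investment with noise
    X = np.column_stack([np.ones(n), r])            # intercept + rate
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)                    # residual variance
    se = np.sqrt(s2 / np.sum((r - r.mean()) ** 2))  # slope standard error
    return coef[1], se

# A sample period in which the rate hardly moves ('nonexcitation') ...
slope_flat, se_flat = ols(3.0 + rng.normal(0.0, 0.05, n))
# ... versus one in which the rate varies substantially.
slope_varied, se_varied = ols(3.0 + rng.normal(0.0, 2.0, n))

print(f"flat rate:   slope = {slope_flat:+.2f}  (se = {se_flat:.2f})")
print(f"varied rate: slope = {slope_varied:+.2f}  (se = {se_varied:.2f})")
```

The point is not that the flat-sample estimate is wrong on average, but that its standard error is so large that the data cannot distinguish a decisive causal role from no role at all — exactly the inference Haavelmo warned Tinbergen against drawing.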

Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation. Too much in love with axiomatic-deductive modeling, neoclassical economists especially tend to forget that accounting for causation — how causes bring about their effects — demands deep subject-matter knowledge and acquaintance with the intricate fabrics and contexts in which causes operate. As Keynes already argued in his A Treatise on Probability, statistics (and econometrics) should primarily be seen as means of describing patterns of associations and correlations, which we may use as suggestions of possible causal relations. Forgetting that, economists will continue to be stuck with a second-best explanatory practice.

  1. Gerald Holtham
    September 20, 2021 at 4:44 pm

    “To wit, it is vital that one have an understanding, or at least a working hypothesis, about what is causing the event per se; variation in the magnitude of the event will not provide the answer to that question.”
    Indeed. You have to have a theory and some understanding of the domain of the theory. A particular covariation cannot simply be assumed to be general.
    It does not follow, though, that:
    “statistics (and econometrics) should primarily be seen as means to describe patterns of associations and correlations, means that we may use as suggestions of possible causal relations.”
    Well, you can try to extract theory from data patterns if you like, though theory can come from other sources, like introspection, imagination or derivation or analogy with other theory. If the theory is any good, however, it will imply covariations in data. Those can be tested by econometrics. The purpose of econometrics is not to generate theory but to test it. In social studies tests are rarely conclusive because most hypotheses are joint hypotheses; it is always possible to change or tweak an auxiliary hypothesis to protect a theory. Knockouts are rare but an accumulation of failures should eventually undermine a fallacious theory. For that to happen statistical testing has to be done scrupulously and taken seriously.

  2. Ken Zimmerman
    September 22, 2021 at 2:01 am

    We should take statistical testing as seriously as we take experiments. And experiments are not what science education teaches they are.

    In the scientist’s account, experiment is seen as the supreme arbiter of theory. Experimental facts dictate which theories are to be accepted and which rejected. Experimental data on scaling, neutral currents and charmed particles, for example, dictated that the quark-gauge theory picture was to be preferred over alternative descriptions of the world. There are, though, two well-known and forceful philosophical objections to this view, each of which implies that experiment cannot oblige scientists to make a particular choice of theories. First, even if one were to accept that experiment produces unequivocal fact, it would remain the case that choice of a theory is underdetermined by any finite set of data. It is always possible to invent an unlimited set of theories, each one capable of explaining a given set of facts. Of course, many of these theories may seem implausible, but to speak of plausibility is to point to a role for scientific judgment: the relative plausibility of competing theories cannot be seen as residing in data which are equally well explained by all of them. Such judgments are intrinsic to theory choice, and clearly entail something more than a straightforward comparison of predictions with data. Furthermore, whilst one could in principle imagine that a given theory might be in perfect agreement with all of the relevant facts, historically this seems never to be the case. There are always misfits between theoretical predictions and contemporary experimental data. Again judgments are inevitable: which theories merit elaboration in the face of apparent empirical falsification, and which do not?

    The second objection to the scientist’s version is that the idea that experiment produces unequivocal fact is deeply problematic. At the heart of the scientist’s version is the image of experimental apparatus as a ‘closed’, perfectly well understood system. Just because the apparatus is closed in this sense, whatever data it produces must command universal assent; if everyone agrees upon how an experiment works and that it has been competently performed, there is no way in which its findings can be disputed. However, it appears that this is not an adequate image of actual experiments. They are better regarded as being performed upon ‘open’, imperfectly understood systems, and therefore experimental reports are fallible. This fallibility arises in two ways. First, scientists’ understanding of any experiment is dependent upon theories of how the apparatus performs, and if these theories change then so will the data produced. More far-reaching than this, though, is the observation that experimental reports necessarily rest upon incomplete foundations. To give a relevant example, one can note that much of the effort which goes into the performance and interpretation of HEP (High Energy Physics) experiments is devoted to minimising ‘background’: physical processes which are uninteresting in themselves, but which can mimic the phenomenon under investigation. Experimenters do their best, of course, to eliminate all possible sources of background, but it is a commonplace of experimental science that this process has to stop somewhere if results are ever to be presented. Again a judgment is required, that enough has been done by the experimenters to make it probable that background effects cannot explain the reported signal, and such judgments can always, in principle, be called into question. The determined critic can always concoct some possible, if improbable, source of error which has not been ruled out by the experimenters.

    Missing from the scientist’s account, then, is any apparent reference to the judgments entailed in the production of scientific knowledge: judgments relating to the acceptability of experimental data as facts about natural phenomena, and judgments relating to the plausibility of theories. But this lack is only apparent. The scientist’s account avoids any explicit reference to judgments by retrospectively adjudicating upon their validity. By this I mean the following. Theoretical entities like quarks, and conceptualisations of natural phenomena like the weak neutral current, are in the first instance theoretical constructs: they appear as terms in theories elaborated by scientists. However, scientists typically make the realist identification of these constructs with the contents of nature, and then use this identification retrospectively to legitimate and make unproblematic existing scientific judgments. Thus, for example, the experiments which discovered the weak neutral current are now represented in the scientist’s account as closed systems just because the neutral current is seen to be real. Conversely, other observation reports which were once taken to imply the non-existence of the neutral current are now represented as being erroneous: clearly, if one accepts the reality of the neutral current, this must be the case. Similarly, by interpreting quarks and so on as real entities, the choice of quark models and gauge theories is made to seem unproblematic: if quarks really are the fundamental building blocks of the world, why should anyone want to explore alternative theories?

    Most scientists think of it as their purpose to explore the underlying structure of material reality, and it therefore seems quite reasonable for them to view their history in this way. But from the perspective of the historian the realist idiom is considerably less attractive. Its most serious shortcoming is that it is retrospective. One can only appeal to the reality of theoretical constructs to legitimate scientific judgments when one has already decided which constructs are real. And consensus over the reality of particular constructs is the outcome of a historical process. Thus, if one is interested in the nature of the process itself rather than in simply its conclusion, recourse to the reality of natural phenomena and theoretical entities is self-defeating.

    How is one to escape from retrospection in analysing the history of science? To answer this question, it is useful to reformulate the objection to the scientist’s account in terms of the location of agency in science. In the scientist’s account, scientists do not appear as genuine agents. Scientists are represented rather as passive observers of nature: the facts of natural reality are revealed through experiment; the experimenter’s duty is simply to report what he sees; the theorist accepts such reports and supplies apparently unproblematic explanations of them. One gets little feeling that scientists actually do anything in their day-to-day practice. Inasmuch as agency appears anywhere in the scientist’s account it is ascribed to natural phenomena which, by manifesting themselves through the medium of experiment, somehow direct the evolution of science. Seen in this light, there is something odd about the scientist’s account. The attribution of agency to inanimate matter rather than to human actors is not a routinely acceptable notion. In this book, the view will be that agency belongs to actors not phenomena: scientists make their own history, they are not the passive mouthpieces of nature. This perspective has two advantages for the historian. First, while it may be the scientist’s job to discover the structure of nature, it is certainly not the historian’s. The historian deals in texts, which give him access not to natural reality but to the actions of scientists: scientific practice. The historian’s methods are appropriate to the exploration of what scientists were doing at a given time, but will never lead him to a quark or a neutral current. And, by paying attention to texts as indicators of contemporary scientific practice, the historian can escape from the retrospective idiom of the scientist. He can, in this way, attempt to understand the process of scientific development, and the judgments entailed in it, in contemporary rather than retrospective terms, but only, of course, if he distances himself from the realist identification of theoretical constructs with the contents of nature.

    This is where the mirror symmetry arises between the scientist’s account and that offered here. The scientist legitimates scientific judgments by reference to the state of nature; I attempt to understand them by reference to the cultural context in which they are made. I put scientific practice, which is accessible to the historian’s methods, at the centre of my account, rather than the putative but inaccessible reality of theoretical constructs. My goal is to interpret the historical development of particle physics, including the pattern of scientific judgments entailed in it, in terms of the dynamics of research practice.

    This mistaken view of science has been taken up by some social scientists, including economists. Unfortunately, economists are not reflecting on either their history or their future.

    With thanks to Andrew Pickering.

  3. Gerald Holtham
    September 22, 2021 at 9:50 pm

    I really cannot see that there is any dispute here. Of course there is agency in science. Theories are invented by scientists and persisted with as long as they give sufficiently adequate accounts of the data as to generate useful technologies. They remain provisional and fallible. Falsification is never in practice achieved by a single anomaly but by an accumulation of them and ultimately by another theory that gives a better account of the data with fewer anomalies. Simplified accounts of this process may overstate the degree of certainty or assurance achieved but it is not right to say scientists share this over-simple view. Could there be other worlds with different theories of physical reality that account for it as well as our own? It is conceivable. The shoe will pinch in different places because all theories are schemata approximating a complex reality. So yes, scientific theory may well be historically or culturally contingent. Reality imposes severe limits on what successful theories we can entertain – you cannot reasonably maintain the phlogiston theory of combustion. But we cannot assert that the constraints are sufficient to ensure a unique correct theory.
    But what are we to infer from this? That there is something wrong with science? Perfection is unattainable and you can’t do better than the best you can do. Humanity has not found a better method of acquiring knowledge than the scientific method: collect data, organise it on various taxonomies, theorise about it, test the theory on more data. Accept open discussion and ultimately discriminate among theories on the basis of the least bad approximation to the collected data. You will not escape error or abolish uncertainty but you cannot do better by retreating into metaphysics or looking for divine revelation. It might be fun for Ken Zimmerman to tease scientists with the thought that their theories are culturally dependent but when he turns the light on he is indebted to Faraday, Volta and Ampere. When he uses his cell phone he is indebted to Maxwell and his GPS depends on theories developed by Einstein. Are they the last word? No but there is no electric light, electronic device or satellite-based navigation system without them.
    Applying the same method to the artificial systems of human society is unquestionably more difficult. The generation of valid generalisations is harder and the domain of their validity in space and time is smaller and more contestable. Of course we can rest on the particularities of history but the human desire to find patterns and understand at a more fundamental level will lead to the basic process of conjecture and refutation. We agree that much economics has taken a rather sterile turn but it has been able to do so because conjecture has not been sufficiently disciplined by attempted refutation. The fact that experiment can be subjected to philosophical questions does not mean you can abandon experiment. And where you cannot experiment you must resort to statistical analysis and testing.
    Warnings against scientific hubris are fine but they should not be pushed to the point of obscurantism.

    • Ken Zimmerman
      September 24, 2021 at 8:07 am

      This is from my friend Andrew Pickering, a physicist specializing in high-energy physics.

      The scientist legitimates scientific judgments by reference to the state of nature; I attempt to understand them by reference to the cultural context in which they are made. I put scientific practice, which is accessible to the historian’s methods, at the centre of my account, rather than the putative but inaccessible reality of theoretical constructs. My goal is to interpret the historical development of particle physics, including the pattern of scientific judgments entailed in it, in terms of the dynamics of research practice.

      A simplified overview.

      Suppose that a group of experimenters sets out to investigate some facet of a phenomenon whose existence is taken by the scientific community to be well established. Suppose, further, that when the experimenters analyse their data they find that their results do not conform to prior expectations. They are then faced with one of the problems of scientific judgment noted above, that of the potential fallibility of all experiments. Have they discovered something new about the world or is something amiss with their performance or interpretation of the experiment? From an examination of the details of the experiment alone, it is impossible to answer this question. However thorough the experimenters have been, the possibility of undetected error remains. Now suppose that a theorist enters the scene. He declares that the experimenters’ findings are not unexpected to him: they are the manifestation of some novel phenomenon which has a central position in his latest theory. This creates a new set of options for research practice. First, by identifying the unexpected findings with an attribute of nature rather than with the possible inadequacy of a particular experiment, it points the way forward for further experimental investigation. And secondly, since the new phenomenon is conceptualised within a theoretical framework, the field is open for theorists to elaborate further the original proposal.

      One can imagine a variety of sequels to this episode, but it is sufficient to outline two extreme cases. Suppose that a second generation of experiments is performed, aimed at further exploration of the new phenomenon, and that they find no trace of it. In this case, one would expect that suspicion will again fall upon the performance of the so-called discovery experiment, and that the theorist’s conjectures will once more be seen as pure theory with little or no empirical support. Conversely, suppose that the second-generation experiments do find traces which conform in some degree with expectations deriving from the new theory. In this case, one would expect scientific realism to begin to take over. The new phenomenon would be seen as a real attribute of nature, the original experiment would be regarded as a genuine discovery, and the initial theoretical conjecture would be seen as a genuine basis for the explanation of what had been observed. Furthermore, one would expect further generations of experimentation and theorising to take place, elaborating the founding experimental and theoretical achievements into what I am calling research traditions. This image of the founding and growth of research traditions is very schematic. It typifies, nevertheless, many of the historical developments which we will be examining, and I will therefore discuss it in some detail. In particular, I want to enquire into the conditions of growth of such traditions. I have so far spoken as though traditions are self-moving; as though succeeding generations of research come into being of their own volition. Clearly this image is inadequate as it stands: research traditions prosper only to the extent that scientists decide to work within them. What is lacking is a framework for understanding the dynamics of practice, the structuring of such decisions. I want, therefore, to present a simple model of this dynamics which will inform my historical account. 
The model can be encapsulated in the slogan ‘opportunism in context’.

      To explain what context entails, let me continue with the example of two research traditions, one experimental, the other theoretical, devoted to the exploration and explanation of some natural phenomenon. It is, I think, clear that each generation of practice within one tradition provides a context wherein the succeeding generation of practice in the other can find both its justification and subject matter. Consider the theoretical tradition: to justify his choice to work within it, a theorist has only to cite the existence of a body of experimental data in need of explanation. And fresh data, from succeeding generations of experiment, constitute the subject matter for further elaborations of theory. Conversely, for the experimenter, his decision to investigate the phenomenon in question rather than some other process is justified by its theoretical interest, as manifested by the existence of the theoretical tradition. And each generation of theorising serves to mark out fresh problem areas to be investigated by the next generation of experiment. Thus, through their reference to the same natural phenomenon, theoretical and experimental traditions constitute mutually reinforcing contexts. Without in any way committing oneself to the reality of the phenomenon, then, one can observe that through the medium of the phenomenon the two traditions maintain a symbiotic relationship.

      This idea of the symbiosis of research practice, wherein the practice of each group of physicists constitutes both justification and subject matter for that of the others, will be central to my analysis of the history of HEP. I have explained it for the simple case of experimental and theoretical traditions structured around a particular phenomenon, because this is the archetype of many of the developments to be discussed. But it will also underlie my treatment of more complex situations, where, for example, many traditions interact with one another, or when the traditions at issue are purely theoretical ones. In itself, though, reference to context is insufficient to explain the cultural dynamics of research traditions. The question remains of why particular scientists contribute to particular traditions in particular ways. Here I find it very useful to refer to ‘opportunism’. The point is this. Each scientist has at his disposal a distinctive set of resources for constructive research. These may be material (the experimenter, say, may have access to a particular piece of apparatus) or they may be intangible (expertise in particular branches of experiment or theory acquired in the course of a professional career, for example). The key to my analysis of the dynamics of research traditions will lie in the observation that these resources may be well or ill matched to particular contexts. Research strategies, therefore, are structured in terms of the relative opportunities presented by different contexts for the constructive exploitation of the resources available to individual scientists. Opportunism in context is the theme which runs through my historical account.

      I seek to explain the dynamics of practice in terms of the contexts within which researchers find themselves, and the resources which they have available for the exploitation of those contexts. It is, of course, impossible to discuss the entire practice of the HEP community at the micro-level of the individual, and when discussing routine developments within already established traditions I confine myself to an aggregated, macro-level analysis in terms of shared resources and contexts. As far as experimental traditions are concerned this creates no special problems. Resources for HEP experiment are limited by virtue of their expense; major items of equipment are located at a few centralised laboratories. Those facilities constitute the shared resources of HEP experimenters. The interest of theorists in particular phenomena likewise constitutes a shared context. If the facilities available are adequate to the investigation of questions of theoretical interest, one can readily understand that an experimental programme devoted to that phenomenon should flourish. Similarly, the data generated within experimental traditions constitute a shared context for theorists. But a problem arises when one comes to discuss the nature of shared theoretical resources: what is the theoretical equivalent of shared experimental facilities? One might look here to the shared material resources of theorists: the computing facilities, for example, which have played an increasingly significant role in the history of modern theoretical (as well as experimental) physics. But this would be insufficient to explain why quarks and gauge theory triumphed at the expense of other theoretical orientations within HEP. Instead I focus primarily upon the intangible resource of theoretical expertise. To explain how I use this concept let me briefly preview the main features of theory development which emerge from the historical account. 
The most striking feature of the conceptual development of HEP is that it proceeded through a process of modelling or analogy. Two key analogies were crucial to the establishment of the quark-gauge theory picture. As far as quarks themselves were concerned, the trick was for theorists to learn to see hadrons as quark composites, just as they had already learned to see nuclei as composites of neutrons and protons, and to see atoms as composites of nuclei and electrons. As far as the gauge theories of quark and lepton interactions were concerned, these were explicitly modelled upon the already established theory of electromagnetic interactions known as quantum electrodynamics. The point to note here is that the analysis of composite systems was, and is, part of the training and research experience of all theoretical physicists. Similarly, in the period we will be considering, the methods and techniques of quantum electrodynamics were part of the common theoretical culture of HEP. Thus expertise in the analysis of composite systems and, albeit to a lesser extent, quantum electrodynamics constituted a set of shared resources for particle physicists. And, as we shall see, the establishment of the quark and gauge-theory traditions of theoretical research depended crucially upon the analogical recycling of those resources into the analysis of various experimentally accessible phenomena. In discussing the development of established traditions, then, my primary explanatory variables are the shared material resources of experimenters and the shared expertise of theorists. However, there remains the problem of accounting for the founding of new traditions and the first steps towards the establishment of new phenomena. Such episodes are not to be understood in terms of the gross distribution of shared material resources or expertise, but they pose no special problems because of that. 
I will try to show that these episodes are just as comprehensible as routine developments within established traditions, and are similarly to be understood. The only difference between my accounts of the development of traditions and of their founding is that to discuss the latter I move, perforce, from the macro-level of the group to the micro-level of the individual.
