In praise of pluralism

from Lars Syll

Recognition of the speculative value of counterfactualizing provides the grounding for a defense of theoretical pluralism in economics. The existence of multiple contending theories in economics is inconvenient, of course. It casts doubt on the truth content of the counterfactual scenarios generated by the predominant approach and challenges the predominant causal claims … But that is precisely the virtue of contending theoretical perspectives in economics. They serve to generate alternative possible causal linkages that are missed when a profession assembles within one particular church and professes the truth of its sacred texts. Convergence around one theoretical approach generates unwarranted confidence in theoretical propositions and empirical inferences, suppresses recognition of alternative worlds, and restricts the proliferation of alternative scenarios that just might prepare us for unwelcome futures. The consequence of groupthink is repeated surprise when the world takes an unexpected turn for which it is grossly unprepared. The consequence is preventable human suffering …

George F. DeMartino

When mainstream economists today try to give a picture of modern economics as a pluralist enterprise, they silently ‘forget’ to mention that the change and diversity they approve of only take place within the analytic-formalistic modelling strategy that makes up the core of mainstream economics. You’re free to take your analytical formalist models and apply them to whatever you want — as long as you do it with a modelling methodology that is acceptable to the mainstream. If you do not follow this particular mathematical-deductive analytical formalism, you’re not even considered to be doing economics. If you haven’t modelled your thoughts, you’re not in the economics business. But this isn’t pluralism. It’s a methodological reductionist straitjacket.

To most mainstream economists you only have knowledge of something when you can prove it, and so ‘proving’ theories with their models via deductions is considered the only certain way to acquire new knowledge. This is, however, a view for which there is no warranted epistemological foundation. Outside mathematics and logic, all human knowledge is conjectural and fallible.

Validly deducing things in closed analytical-formalist-mathematical models — built on atomistic-reductionist assumptions — doesn’t much help us understand or explain what is taking place in the real world we happen to live in. Validly deducing things from patently unreal assumptions — that we all know are purely fictional — makes most of the modelling exercises pursued by mainstream macroeconomists rather pointless. It’s simply not the stuff that real understanding and explanation in science is made of. Had mainstream economists not been so in love with their smorgasbord of models, they would have perceived this too. Telling us that the plethora of models that make up modern macroeconomics ‘are not right or wrong,’ but ‘just more or less applicable to different situations,’ is nothing short of hand waving.

Take macroeconomics as an example. Yes, there is a proliferation of macro models nowadays — but it almost exclusively takes place as a kind of axiomatic variation within the standard DSGE modelling framework. And — no matter how many thousands of models mainstream economists come up with, as long as they are just axiomatic variations of the same old mathematical-deductive ilk, they will not take us one single inch closer to giving us relevant and usable means to further our understanding and explanation of real economies.

Most mainstream economists seem to have no problem with this lack of fundamental diversity — diversity beyond path-dependent elaborations of the mainstream canon — or with the vanishingly small real-world relevance that characterizes modern macroeconomics. To these economists, there is nothing basically wrong with ‘standard theory.’ As long as policymakers and economists stick to ‘standard economic analysis’ — DSGE — everything is fine. Economics is just a common language and method that makes us think straight and reach correct answers.

Most mainstream neoclassical economists are not for pluralism. They are fanatics insisting on using an axiomatic-deductive economic modelling strategy. To yours truly, this attitude is nothing but a late confirmation of Alfred North Whitehead’s complaint that “the self-confidence of learned people is the comic tragedy of civilisation.”

Mainstream economists today seem to maintain that new imaginative empirical methods — such as natural experiments, field experiments, lab experiments, RCTs — help us to answer questions concerning the validity of economic theories and models.

Yours truly begs to differ. There are few real reasons to share this optimism about the alleged pluralist and empirical revolution in economics.

I am basically — though not without reservations — in favour of the increased use of experiments and field studies within economics, not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe we can achieve with our mediational epistemological tools and methods in the social sciences.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led several prominent economists to triumphantly declare it a major step on a path toward empirics, where, instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

In randomized trials the researchers try to find out the causal effects that different variables of interest may have by changing circumstances randomly — a procedure somewhat (‘on average’) equivalent to the usual ceteris paribus assumption.

Besides the fact that ‘on average’ is not always ‘good enough,’ it amounts to nothing but hand waving to simpliciter assume, without argumentation, that it is tenable to treat social agents and relations as homogeneous and interchangeable entities.

Randomization is used basically to allow the econometrician to treat the population as consisting of interchangeable and homogeneous groups (‘treatment’ and ‘control’). The regression models one arrives at by using randomized trials tell us the average effect that variations in variable X have on the outcome variable Y, without having to explicitly control for the effects of other explanatory variables R, S, T, etc. Everything is assumed to be essentially equal except the values taken by variable X.
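
To make this ‘on average’ logic concrete, here is a minimal sketch in Python (simulated data and hypothetical variable names, not any particular study): under random assignment, the estimated ‘average effect’ of X on Y is simply the difference in group means, which coincides with the OLS coefficient on a treatment dummy, while R, S, T and the rest are relegated to the error term.

```python
# A minimal sketch of the 'on average' logic of a randomized trial.
# Simulated data; the variable names (X, Y, R) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Other influences (R, S, T, ...) are never controlled for explicitly;
# randomization is only meant to balance them 'on average' across groups.
R = rng.normal(size=n)

# Random assignment of the treatment X (0 = control, 1 = treatment).
X = rng.integers(0, 2, size=n)

# Hypothetical outcome: a true average effect of 2.0, plus everything else.
Y = 2.0 * X + 1.5 * R + rng.normal(size=n)

# The 'average treatment effect' estimate is the difference in group means ...
ate_diff_means = Y[X == 1].mean() - Y[X == 0].mean()

# ... which coincides with the OLS slope from regressing Y on the dummy X.
ate_ols = np.polyfit(X, Y, 1)[0]

print(f"difference in means: {ate_diff_means:.3f}")
print(f"OLS coefficient:     {ate_ols:.3f}")
# Both hover around 2.0 'on average'; what X does for any particular agent,
# or how R, S, T matter in a given setting, is left entirely unexamined.
```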

Just like econometrics, randomization promises more than it can deliver, basically because it requires assumptions that are not possible to maintain in practice.

Like econometrics, randomization is basically a deductive method. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

So, no, I find it hard to share mainstream economists’ enthusiasm and optimism about the value of the latest ‘empirical’ trends in mainstream economics. I would argue that although different ‘empirical’ approaches have been — more or less — integrated into mainstream economics, there is still a long way to go before economics has become a truly empirical science.

Heterodox critics are not ill-informed about the development of mainstream economics. Its methodology is still the same basic neoclassical one. It’s still non-pluralist. Although more and more economists work within the field of ‘empirical’ economics, the foundation and ‘self-evident’ benchmark is still of the neoclassical deductive-axiomatic ilk.

Sad to say, but we still have to wait for the revolution that will make economics an empirical and truly pluralist and relevant science. Until then, why not read DeMartino’s book The Tragic Science and get a glimpse of the future to come? Mainstream economics belongs to the past.

  1. June 10, 2024 at 7:27 pm

    Macroeconomists have variously been described as engineers, plumbers and dentists. A more accurate label for them given their ever-shifting subject is, as Professor Syll seems to agree, ‘novelists’. They are most certainly not scientists, practicing a rigorous science on a par with physics or chemistry, much as they would like to be. See my recent book ‘Shamanomics: a Short Guide to the Failure, Fallacies and Future of Macroeconomics’. See http://www.shamanomics.info for a snappy summary.

  2. Steven Klees
    June 13, 2024 at 6:05 pm

    I’ve been working in the economics of education field since grad school at Stanford in the early 1970s.  I’ve been a skeptic of quantitative methods since doing my regression analysis dissertation where I was trying to find the impact of an educational television secondary school program in Mexico.  I had a data set where the dependent variables were student test scores and I had literally dozens of theoretically relevant independent variables (including detailed classroom observation data).  There was really no conceptual or empirical way to reduce their numbers and I found, what I have found in subsequent regression studies, that alternative reasonable specifications gave VERY different regression coefficient results on variables of interest.  See my http://www.paecon.net/PAEReview/issue74/Klees74.pdf
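
    A minimal simulated sketch of the kind of specification sensitivity described above (hypothetical data, not the actual Mexico study): with even a couple of correlated controls, equally reasonable specifications can give very different coefficients on the variable of interest.

    ```python
    # Hypothetical data illustrating specification sensitivity: the coefficient
    # on the variable of interest shifts sharply with the choice of controls.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300

    # Two plausible controls, both correlated with the variable of interest x.
    z1 = rng.normal(size=n)
    z2 = rng.normal(size=n)
    x = 0.8 * z1 + 0.6 * z2 + rng.normal(scale=0.5, size=n)

    # Outcome driven mostly by the controls, only weakly by x itself.
    y = 0.1 * x + 1.0 * z1 - 1.0 * z2 + rng.normal(size=n)

    def coef_on_x(*controls):
        """OLS of y on x plus the given controls; return the coefficient on x."""
        design = np.column_stack([np.ones(n), x, *controls])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        return beta[1]

    for label, controls in [("x alone", ()),
                            ("x + z1", (z1,)),
                            ("x + z2", (z2,)),
                            ("x + z1 + z2", (z1, z2))]:
        print(f"{label:12s} -> coefficient on x: {coef_on_x(*controls): .3f}")
    # Four 'reasonable' specifications, four markedly different answers.
    ```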

    Are RCTs a way out of this causal analysis dilemma?  Many economists and others think so, but I don’t.  One of my dissertation advisors used to say trust experiments if the results “knock your socks off.”  But they usually don’t.  And, as Lars points out, even if an intervention “works there, there is no evidence [provided by an RCT that] it will work” elsewhere.  Plus, I think there are many reasons to question whether the results of most RCTs are even valid in the locale of the experiment.  My view is that even with a tight experiment, there are too many potentially intervening variables to trust that randomization “controls” for all their potential impact on the results.  RCTs basically only check a few differences between treatment and control groups when there are potentially dozens – like in my education regressions.  Sometimes I want to argue there are more potentially relevant causal variables than people in our samples for most processes that we study!  See a wonderful study by a former student of mine on the fundamental problems with RCTs.  https://www.tandfonline.com/doi/full/10.1080/17508487.2024.2314118

    And going back to the issue of generalizability, I believe that RCT results are so fragile that, when there are many of them on the same topic, they rarely agree which makes them useless for policy.  In my international economics of education field two issues have gotten LOTS of RCT studies and show no consistent results – the impact of conditional cash transfers and the impact of educational inputs on outcomes.  For the latter see https://openknowledge.worldbank.org/server/api/core/bitstreams/afc8088e-0ad2-5983-bd4d-efee541cbd61/content

    • David Harold Chester
      June 17, 2024 at 8:42 am

      The trouble with pluralism is that it makes the subject more complicated due to having at least two opposing points of view. This deserves some thought when it is being applied to macroeconomics. Even the definition of economics is pluralistic, because it is about how we seek to satisfy our unlimited needs and ambitions by applying the least effort to each.

      It seems to me that in this topic we have to bring into a state of balance two opposing effects, so that a logical combination of them both is true, and that much of our past confusion has been due to the failure to appreciate the need for this state of balance to be made more equal.

      When applied more directly to macroeconomics it can be seen in demand and supply having the same exchange values as well as when the effects of competition fight against monopolism. When we try to break down the situation into too much detail, this dualism effect vanishes but it also stops our topic from being properly macro- in nature, and the failure to see this has led to much past confusion with the pluralistic claims.
